Search Results

Search found 4483 results on 180 pages for 'embedded fonts'.


  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference
    San Diego, CA, http://www.toorcon.org/
    Dan Anderson, October 2012

    It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed the talks below, which I attended and which interested me.

    - MalAndroid—the Crux of Android Infections, Aditya K. Sood
    - Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
    - Privacy at the Handset: New FCC Rules?, Valkyrie
    - Hacking Measured Boot and UEFI, Dan Griffin
    - You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
    - What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
    - Accessibility and Security, Anna Shubina
    - Stop Patching, for Stronger PCI Compliance, Adam Brand
    - McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

    MalAndroid—the Crux of Android Infections
    Aditya K. Sood, IOActive, Michigan State PhD candidate

    Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android also includes a static analysis app checker (Bouncer) and a vulnerability checker.

    What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility. Android had no "proper" encryption before Android 3.0, it has no built-in protection against social engineering and web tricks, and alternative Android app markets are unsafe—simply visiting some markets can infect Android.

    Aditya classified Android malware types as:
    - Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
    - Type K—Kernel. Exploits underlying Linux libraries or the kernel.
    - Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

    What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes, for example promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app along with malware. To avoid detection, the hidden malware is not unwrapped until runtime.
    The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework, altering system programs and daemons, then delete themselves. For example, the DKF Bootkit (China).

    Android app problems: no code signing (all self-signed), native code execution, an all-or-none permission sandbox, alternate marketplaces, no robust Android malware detection at the network level, and a delayed patch process.

    Programming Weird Machines with ELF Metadata
    Rebecca "bx" Shapiro, Dartmouth College, NH
    https://github.com/bx/elf-bf-tools
    @bxsays on twitter

    Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

    Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

    For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted to execute viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:
    - instructions: add, move, jump if not zero (jnz)
    - symbol table entries act as "registers"; a symbol table value is the register's "contents"
    - immediate values are constants; direct values are addresses (e.g., 0xdeadbeef)
    - move instruction: a relocation table entry
    - add instruction: a relocation table "addend" entry
    - jnz instruction: takes multiple relocation table entries

    The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at the "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and 3 registers.

    Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so ping remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

    $ whoami
    bx
    $ ping localhost -t backdoor.sh   # executes backdoor
    $ whoami
    root

    The modified code grew from 285948 bytes to 290209 bytes.
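    If you want to poke at the structures this technique abuses, standard binutils can dump the relevant tables of any ELF binary. These are generic commands, not the talk's tooling (that lives at the GitHub link above):

    $ readelf -r ./ping          # dump the relocation entries (.rela.dyn and friends)
    $ readelf --dyn-syms ./ping  # dump the dynamic symbol table (.dynsym)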
    A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

    Privacy at the Handset: New FCC Rules?
    "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

    Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, to "recollate" data with other databases to identify customers indirectly.

    The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not the FCC's). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result, iPhones may be more regulated.

    Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant here.

    What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding carriers liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit.

    Hacking Measured Boot and UEFI
    Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

    Dan talked about hacking measured UEFI boot. First some terms:
    - UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting).
    - UEFI protects devices against rootkits.
    - TPM: a hardware security device that stores hashes and hardware-protected keys.
    - "Secure boot" can control at the firmware level which boot images can boot.
    - "Measured boot" is an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers).
    - "Remote attestation" allows remote validation and control based on policy on a remote attestation server.

    Microsoft is pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source implementation for UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

    UEFI Weaknesses
    UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
    - it assumes the user is an ally
    - it trusts the TPM implicitly, and assumes the TPM is attached to the computer
    - the hibernate file is unprotected (disk encryption protects against this)
    - protection is migrating from hardware to firmware
    - delays in patching and whitelist updates
    - will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

    You Can't Buy Security: Building the Open Source InfoSec Program
    Boris Sverdlik, ISDPodcast.com co-host

    Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box") and no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—you need soft skills.

    Step 1: Research. The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team); meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV × EF = SLE). Learn the different business units and the legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn what information is sensitive (IP, internal use only), and start with the low-hanging fruit (customer service reps and social engineering).

    Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

    Step 3: Implementation. Keep it simple, stupid. Open source, although useful, is not free (there's an implementation cost).

    Access controls: authentication and authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.

    Application security: everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps with different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.

    Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (Snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."

    Operational controls: have change, patch, asset, and vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they contain everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (Snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.

    Security awareness: the reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
    Pen test by crowdsourcing—test with logging. COSSP, http://www.cossp.org/ — the Comprehensive Open Source Security Project.

    What Journalists Want: The Investigative Reporters' Perspective on Hacking
    Dave Maas, San Diego CityBeat
    Jason Leopold, Truthout.org

    The difference between hackers and investigative journalists: for hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

    J-School in 60 Seconds. Generic formula: a person or issue of public interest, new info, or a new angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

    Media awareness of hackers and trends: journalists are becoming extremely aware of hackers, with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

    Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act (CPRA) is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. You often need to sue (especially the FBI). CPRA is faster, and requests can be vague. Then there are dumps and leaks (a la Wikileaks).

    Journalists want: leads, protection for themselves and their sources, and tools adapted for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

    Accessibility and Security, or: How I Learned to Stop Worrying and Love the Halting Problem
    Anna Shubina, Dartmouth College

    Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers to blind users and screen readers, for our purposes. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—and is more dangerous than PDF or HTML. Accessibility is often the first, and maybe only, sanity check on parsing. They have no choice, because someone may want to read what you write. Google, for example, is very particular about which web browser you use, is bad at supporting other browsers, and uses JavaScript instead of links, often requiring mouseover to display content.

    PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

    The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse, and the problem cannot be solved, due to the Halting Problem.
    W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help, though, except for "Robust", but here are some gems:
    - all information should be parsable (paraphrasing)
    - if not parsable, it cannot be converted to alternate formats
    - maximize compatibility in new document formats

    Executable webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing at user input.

    Solutions: accessibility and security problems come from the same source. Expose the "better user experience" myth. Keep your corner of the Internet parsable. Remember the Halting Problem—recognize false solutions (checking and verifying tools).

    Stop Patching, for Stronger PCI Compliance
    Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

    Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Applying some patches often makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

    Patching myths:
    - Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
    - Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
    - Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.

    Adam says good recommendations come from NIST 800-40. Instead, use sane patching and focus on what's really important. From NIST 800-40:
    - Proactive: use a proactive vulnerability management process: use change control and configuration management, and monitor file integrity.
    - Monitor: start with the NVD and other vulnerability alerts, not scanner results.
    - Evaluate: public-facing system? workstation? internal server? (risk rank)
    - Decide: on an action and timeline.
    - Test: pre-test patches (stability, functionality, rollback) for change control.
    - Install: notify, change control, tickets.

    McAfee Secure & Trustmarks — a Hacker's Best Friend
    Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

    "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that removal of a trustmark acts as a flag that you're vulnerable. It's easy to view a status change by viewing the McAfee list on its website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

    Contents
    - Why this is the most important part...
    - Understanding the classification and standard rights model
    - Identifying business use cases
    - Creating an effective IRM classification model
      - One single classification across the entire business
      - A context for each and every possible granular use case
      - What makes a good context?
    - Deciding on the use of roles in the context
    - Reviewing the features and security for context roles
    - Summary

    Why this is the most important part...

    Now the real work begins; installing and getting an IRM system running is as simple as following instructions. However, to actually have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information.

    Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:
    - Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
    - K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    - Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services at SealedMedia, the company that created Oracle IRM and that Oracle acquired. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple-to-use IRM deployments.
    No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers; he can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but that Oracle and their partners have the most experience in delivering successful document security solutions.

    Understanding the classification and standard rights model

    The goal of any successful IRM deployment is to balance the increase in security the technology brings against over-complicating the way people use secured content, while avoiding a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different from an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application, or try to print.

    Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:
    - a group of related documents
    - the people that use the documents
    - the roles that these people perform
    - the rights that these people need to perform their role

    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places in the document information about the location of the IRM server that sealed it, the context applied to the document, and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all of those documents.

    If you have followed all the previous articles in this guide, you will be ready to start defining contexts against which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

    Identifying business use cases

    Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed, and how the recipients should interact with them.
    A successful IRM deployment should start with one well-identified use case (we go through some examples towards the end of this article). Then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked, and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

    Creating an effective IRM classification model

    Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model.

    Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

    The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

    One single classification across the entire business

    Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:
    - Document security classification decisions are simple. You only have one context to choose from!
    - User provisioning is simple: just make sure everyone has a role in the only context in the business.
    - Administration is very low: if you assign rights to groups from the business user repository, you probably never have to touch IRM administration again.

    There are however some obvious downsides to this model:
    - All users have access to all IRM-secured content. So potentially a salesperson could access sensitive mergers and acquisitions documents—if they can get their hands on a copy, that is.
    - You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
    - Changing a user's role affects every single document ever secured.

    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
    Once secured, IRM-protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today; imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case

    Now let's think about the total opposite of a single-context design. What if you created a context for each and every single defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:

    - Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Restricted External - Engineering Group 1 - Research
    - Q1FY2010 Restricted External - Engineering Group 1 - Design
    - Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential External - Engineering Group 1 - Research
    - Q1FY2010 Confidential External - Engineering Group 1 - Design
    - Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret External - Engineering Group 1 - Research
    - Q1FY2010 Top Secret External - Engineering Group 1 - Design
    - Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

    That is 18 contexts for a single engineering group (multiply by 6 for all the groups). You are then creating/reviewing another 18 every 3 months, so after a year you've got 72 contexts for that one group alone. What would be the advantages of such a complex classification model?
    - You can satisfy very granular rights requirements: for example, only an authorized engineering group 1 researcher can create a top secret report for internal access, and his role will be reviewed on a very frequent basis.
    - Your business may have very complex rights requirements, and mapping these directly to IRM may seem an obvious exercise.

    The disadvantages of such a classification model are significant:
    - Huge administrative overhead. Someone in the business must manage, review and administer each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    - From an end user's perspective, life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
    They may be able to print content from one but not another, and be able to edit content in 2 contexts but not the other 4. Such confusion at the end-user level causes frustration and resistance to the use of the technology.
    - Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all their offline rights.
    - Hard to understand who can do what with what. Imagine being the VP of engineering, and as part of an internal security audit you are asked, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable, and too few contexts do not give fine enough granularity.

    What makes a good context?

    Good context design derives mainly from how well you understand your business requirements for securing access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:
    - What is the topic?
    - Who are the legitimate contributors on this topic?
    - Who is the authorized readership?

    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such cannot leak to your competitors; therefore it is better to seal to a broad context than not to seal at all.

    Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine it as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case; refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

    Deciding on the use of roles in the context

    Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
    For each context, identify the following.

    Administrative roles:
    - Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
    - Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
    - Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

    Document-related roles:
    - Contributors: the people who create and edit documents in this context.
    - Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
    - Readers: the people who read documents from this context.

    Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM.

    Reviewing the features and security for context roles

    At this point we have decided on a classification of information, and understand what roles people in the business will play when administering this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent authorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals.

    Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity.

    Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
    These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

    The big print issue...

    Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights:
    - Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access, and the classification of the document. This information resides in the printed copy, making it easier to trace who printed it.
    - Printed documents are slower to distribute than their digital counterparts, so time-sensitive information in printed format may present a lower risk.
    - Print activity is audited, so you can monitor and react to users abusing print rights.

    Summary

    In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires that all information in the group be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introduce the complexity needed to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side-by-side scenarios, there is no way around MSI. You can create MSI files with a Visual Studio setup project, which is severely limited, or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning in German. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you to author MSI files, but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI APIs and create your own packaging solution. If I were you, I would use either a commercial visual tool for easy deployments or the free Windows Installer Toolset. Once you know the WIX schema you can create well-formed WIX XML files easily with any editor. Then you can "compile" your MSI package from the wxs files.

    Recently I had the "pleasure" to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should be an easier way to install and update files than exists today. I am not alone with this statement, as John Robbins (creator of the cool tool Paraffin) states: ".. It's a brittle and scary API in Windows …". To help other people struggling with installation issues I present the advice I (and others) found useful, and what will happen if you ignore this advice.

    What is an MSI file?

    An MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install, and MSI controls how to get your stuff onto or off your machine. Your "stuff" usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The key to mastering MSI is that every resource (file, registry key, …) is associated with an MSI component. The actual payload consists of compressed files in the CAB format, which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it.

    To examine MSI files you need Orca, a free MSI editor provided by MS. There is also another free editor called Super Orca, which supports diffs between MSIs and does not lock the MSI files. But since Orca comes with a shell extension I tend to use only Orca, because it is so easy to right-click on an MSI file and open it with this tool.

    How Do I Install It?

    Double-click it. This works for fresh installations as well as major upgrades. Updates need to be installed via the command line:

    msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus

    This tells the installer to reinstall all already installed features (new features will NOT be installed). The REINSTALLMODE letters force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things did go really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus.

    How To Enable MSI Logs?

    You can download an MSI from Microsoft which installs some registry keys to enable full MSI logging.
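    If you would rather set that policy by hand, the documented Windows Installer logging policy is a single registry value; the following is a sketch from memory, so verify the exact letters against Microsoft's current documentation:

    rem Installer logging policy (assumed value string; check the MS docs)
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer" /v Logging /t REG_SZ /d voicewarmupx

    The letter string selects which log categories are written; delete the value again afterwards, since verbose logging slows every installation down.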
    The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option:

    msiexec …. /l*vx <LogFileName>

    Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x, which is documented in the msiexec help, but I still find this behavior unintuitive.

    What are MSI components?

    The whole MSI logic is bound to the concept of MSI components. Nearly every MSI table has a Component column which binds an installable resource to a component. (The original article shows screenshots of the FeatureComponents and Component tables of an example MSI.) The Feature table basically defines the feature hierarchy. To find out what belongs to a feature you need to look at the FeatureComponents table, which lists for each feature the components that will be installed when that feature is installed. The MSI components are defined in the Component table. This table has as its first column the component name and as its second column the component id, which is a GUID. All resources you want to install belong to an MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look e.g. at the File table, you see that every file belongs to a component, which is true for all other tables which install resources. The Component table is the glue between all other tables which contain the resources you want to install.

    So far, so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you violated an MSI component rule in one way or another. When you install a feature, the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature its refcount will drop by one. Interesting things happen if the component reference count reaches zero: then all associated resources will be deleted. That looks like a reasonable thing, and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer.

    Rule 16: Follow Component Rules

    Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package:
    - Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies.
    - Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder.
    - Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies.
    - Do not create components containing resources that will need to be installed into more than one directory on the user's system.
    The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories.
    - Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component.
    - Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut.

    And these rules do not even talk about component ids, update packages and upgrades, which you need to understand as well. Let's suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both install the same file. What will happen when you uninstall MSI2? Hm, the file should stay there. But the component names are different. Yes and yes. But MSI does not use the component name as the key for the refcount. Instead, the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs the component with the id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one, which drops to zero just as expected when we uninstall the last MSI. Then the file, which was the same for both MSIs, is deleted.

    You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed. The resources associated with a component are deleted then, and only then, when the refcount of the component reaches zero.

    The dependencies between features, components and resources can be described as relations, where m and k are numbers >= 1 and n can be 0. Inside an MSI the following relations are valid:

    Feature    1 –> n Components
    Component  1 –> m Features
    Component  1 –> k Resources

    These relations express that one feature can install several components, and that features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (its primary key, to stay in database speak) occurs as a value in the Component column of some other table which installs a resource. Let's make it clear with an example. We want to install, with the feature MainFeature, some files, a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables:

    Feature      Component  Registry      File       Shortcuts
    MainFeature  Comp1      RegistryKey1
    MainFeature  Comp2                    File.txt
    MainFeature  Comp3                    File2.txt  Shortcut to File2.txt

    It is illegal for the same resource to be part of more than one component, since this would break the refcount mechanism. Let's illustrate this:

    Feature   ComponentId  Resource   Reference Count
    Feature1  {1000-…}     File1.txt  1
    Feature2  {2000-…}     File1.txt  1

    The installation part works well, but what happens when you uninstall Feature2? Component {2000-…} gets a refcount of zero, whereupon MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component {1000-…} with a refcount of one, which means that the file was deleted too early. You have just ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts.

    The vigilant reader might have noticed that there is more in the Component table.
    Besides its name and GUID, it also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect whether the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed, MSI checks whether the registry key or file referenced by the KeyPath property exists. When it does not exist, MSI assumes the component was already uninstalled (which can lead to problems during uninstall); when it does exist, MSI assumes the component is already installed and all is fine.

    Why is this detail so important? Let's put all files into one component. The KeyPath would then be one of the files of your component, used to check whether the component is installed or not. When your installation becomes corrupt because a file was deleted, you cannot repair it with the Repair button under Add/Remove Programs, because MSI checks the component's integrity via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file, MSI thinks all resources of your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super-component which contains all files. The only way out, and therefore best practice, is to assign every resource you want to install its own component. This ensures painless updates and repairs, and you have much less effort removing specific files during an upgrade. In effect you get this best-practice relation:

    Feature    1 –> n Components
    Component  1 –> 1 Resource

    MSI Component Rules

    Rule 1 – One component per resource

    Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component, which never changes between versions as long as the install location is the same.

    Penalty: If you add more than one resource to a component you will break the repair capability of MSI, because the KeyPath is used to check whether the component needs repair.

    MSI      ComponentId  Files
    MSI 1.0  {1000}       File1-5
    MSI 2.0  {2000}       File2-5

    Say you want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files, you create a new component and add them there. MSI will delete all files if the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. OK, that does work if your upgrade uninstalls the old MSI first. This will cause the refcount of all previously installed components to reach zero, which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade: first installing your new MSI and then removing the old one. If you choose this upgrade path, then you will lose File1-5 after your upgrade, and not only File1 as intended by your new component design.

    Rule 2 – Only add, never remove, resources from a component

    If you follow Rule 1 you will not need Rule 2. You can add more resources to one component in a patch. That is OK. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design.

    Penalty: Let's assume you have 2 MSI files which each install one file under the same component:

    MSI1                  MSI2
    {1000} – ComponentId  {1000} – ComponentId
    File1.txt             File2.txt

    When you install and uninstall both MSIs you will end up with an installation where either File1 or File2 will be left. Why?
It seems that MSI does not store the resources associated with each component in its internal database. Instead, Windows will simply query the MSI that is currently being uninstalled for all resources belonging to this component. Since it will find only one file and not two, it will only uninstall one file. That is the main reason why you can never remove resources from a component!
Rule 3 – Never remove a component from an update MSI
This is the same as accidentally changing the GUID of a component in your new update package. The resulting update package will not contain all components from the previously installed package.
Penalty: When you remove a component from a feature, MSI sets the feature state during the update to Advertised and logs a warning message into its log file if you have enabled MSI logging.
SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported!
MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported
Advertised means that MSI treats all components of this feature as not installed. As a consequence, during uninstall nothing will be removed, since it is not installed! This is bad not only because uninstall no longer works, but also because this feature will not get the required patches. All other features which have followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard-to-find bugs where an update was only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which will abort the installation if you have violated this rule.
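To make the refcount rules above more concrete, here is a small C# sketch that models, in a deliberately simplified way and not via the real Windows Installer APIs, how components are refcounted by their ComponentId GUID rather than by component name. All type and method names here are my own illustrations:

using System;
using System.Collections.Generic;

// Toy model of the MSI component refcount, for illustration only.
// Components are tracked by ComponentId GUID; the component *name* plays no role.
class ComponentStore
{
    private sealed class Entry
    {
        public int RefCount;
        public HashSet<string> Resources;
    }

    private readonly Dictionary<Guid, Entry> components = new Dictionary<Guid, Entry>();

    public void Install(Guid componentId, params string[] resources)
    {
        Entry entry;
        if (components.TryGetValue(componentId, out entry))
        {
            entry.RefCount++;                     // already known: only the refcount goes up
        }
        else
        {
            components[componentId] = new Entry   // first install: resources are recorded
            {
                RefCount = 1,
                Resources = new HashSet<string>(resources)
            };
        }
    }

    public void Uninstall(Guid componentId)
    {
        Entry entry;
        if (!components.TryGetValue(componentId, out entry)) return;

        if (--entry.RefCount > 0) return;         // another MSI still references the component

        foreach (string resource in entry.Resources)
            Console.WriteLine("deleting " + resource); // refcount reached zero: resources go away
        components.Remove(componentId);
    }
}

static class Demo
{
    static void Main()
    {
        var store = new ComponentStore();
        var sharedId = new Guid("10000000-0000-0000-0000-000000000000");

        store.Install(sharedId, "File1.txt");  // MSI1 installs component "Comp1"
        store.Install(sharedId, "File1.txt");  // MSI2 installs component "Comp2", same GUID

        store.Uninstall(sharedId);             // refcount 2 -> 1, File1.txt stays
        store.Uninstall(sharedId);             // refcount 1 -> 0, prints "deleting File1.txt"
    }
}

Note how the resource list is effectively fixed by whichever package registers the GUID first; that is the same behavior that makes Rules 1 and 2 above necessary.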

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while using those features inside of .NET applications hasn't been as straight forward as could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom project like SharpMap or a hack using the Sql Server specific Geo types found in the Microsoft.SqlTypes assembly that ships with SQL server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5 the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single location and distance calculations which is probably the most common usage scenario. SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity framework, lets take a look at how SQL Server allows you to use spatial data to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:CREATE TABLE [dbo].[Geo]( [id] [int] IDENTITY(1,1) NOT NULL, [Location] [geography] NULL, [Long] [float] NOT NULL, [Lat] [float] NOT NULL ) Now using plain SQL you can insert data into the table using geography::STGeoFromText SQL CLR function:insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825) The STGeomFromText function accepts a string that points to a geometric item (a point here but can also be a line or path or polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier) which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326 as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format which looks something like this: Once the location data is in the database you can query the data and do simple distance computations very easily. For example to calculate the distance of each of the values in the database to another spatial point is very easy to calculate. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database. 
Using the Location field the SQL looks like this:-- create a source point DECLARE @s geography SET @s = geography:: STGeomFromText('POINT(-121.527200 45.712113)' , 4326); --- return the ids select ID, Location as Geo , Location .ToString() as Point , @s.STDistance( Location) as distance from Geo order by distance The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference for which you can use the geography::STDistance function. This query produces the following output: The STDistance function returns the straight line distance between the passed in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field but produces the same result as the previous query.--- using float fields select ID, geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326), geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326). ToString(), @s.STDistance( geography::STGeomFromText ('POINT(' + STR(long ,15, 7) + ' ' + Str(lat ,15, 7) + ')' , 4326)) as distance from geo order by distance Spatial Data in the Entity Framework Prior to Entity Framework 5.0 on .NET 4.5 consuming of the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5 however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions which in turn can be used to do common spatial queries like I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude. Creating a Model with Spatial Data Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography: public class GeoLocationContext : DbContext { public DbSet<GeoLocation> Locations { get; set; } } public class GeoLocation { public int Id { get; set; } public DbGeography Location { get; set; } public string Address { get; set; } } That's all there's to it. When you run this now against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier. Adding Spatial Data to the Database Next let's add some data to the table that includes some latitude and longitude data. 
An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:[TestMethod] public void AddLocationsToDataBase() { var context = new GeoLocationContext(); // remove all context.Locations.ToList().ForEach( loc => context.Locations.Remove(loc)); context.SaveChanges(); var location = new GeoLocation() { // Create a point using native DbGeography Factory method Location = DbGeography.PointFromText( string.Format("POINT({0} {1})", -121.527200,45.712113) ,4326), Address = "301 15th Street, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.714240, -121.517265), Address = "The Hatchery, Bingen" }; context.Locations.Add(location); location = new GeoLocation() { // Create a point using a helper function (lat/long) Location = CreatePoint(45.708457, -121.514432), Address = "Kaze Sushi, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.722780, -120.209227), Address = "Arlington, OR" }; context.Locations.Add(location); context.SaveChanges(); } As promised, a DbGeography object has to be created with one of the static factory methods provided on the type as the Location.Longitude and Location.Latitude properties are read only. Here I'm using PointFromText() which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of Points easier to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:public static DbGeography CreatePoint(double latitude, double longitude) { var text = string.Format(CultureInfo.InvariantCulture.NumberFormat, "POINT({0} {1})", longitude, latitude); // 4326 is most common coordinate system used by GPS/Maps return DbGeography.PointFromText(text, 4326); } Using the helper the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around because Latitude and Longitude is the common format I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is changed the data is written into the database using the SQL Geography type which looks the same as in the earlier SQL examples shown. Querying Once you have some location data in the database it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location and order it by distance. 
Using LINQ to Entities a query like this is easy to construct:[TestMethod] public void QueryLocationsTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 kilometers ordered by distance var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) < 5000) .OrderBy( loc=> loc.Location.Distance(sourcePoint) ) .Select( loc=> new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance); } } This example produces: 301 15th Street, Hood River (0 meters)The Hatchery, Bingen (809 meters)Kaze Sushi, Hood River (1,074 meters)   The first point in the database is the same as my source point I'm comparing against so the distance is 0. The other two are within the 5 mile radius, while the Arlington location which is 65 miles or so out is not returned. The result is ordered by distance from closest to furthest away. In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say when you have to points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this. Meters vs. Miles As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (can be found in GeoUtils.cs of the sample project):/// <summary> /// Convert meters to miles /// </summary> /// <param name="meters"></param> /// <returns></returns> public static double MetersToMiles(double? meters) { if (meters == null) return 0F; return meters.Value * 0.000621371192; } /// <summary> /// Convert miles to meters /// </summary> /// <param name="miles"></param> /// <returns></returns> public static double MilesToMeters(double? miles) { if (miles == null) return 0; return miles.Value * 1609.344; } Using these two helpers you can query on miles like this:[TestMethod] public void QueryLocationsMilesTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 miles ordered by distance var fiveMiles = GeoUtils.MilesToMeters(5); var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles) .OrderBy(loc => loc.Location.Distance(sourcePoint)) .Select(loc => new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n1} miles)", location.Address, GeoUtils.MetersToMiles(location.Distance)); } } which produces: 301 15th Street, Hood River (0.0 miles)The Hatchery, Bingen (0.5 miles)Kaze Sushi, Hood River (0.7 miles) Nice 'n simple. 
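Since you will typically issue this nearest-first query from more than one place, it can be worth wrapping it in a helper. The following is a minimal sketch against the GeoLocation model from this article; the NearbyLocation result type, the GeoQueries class and the radiusMeters parameter are my own naming, not part of the sample project:

using System.Data.Spatial;   // DbGeography in EF 5.0 on .NET 4.5 (adjust if your version differs)
using System.Linq;

public class NearbyLocation
{
    public string Address { get; set; }
    public double? DistanceMeters { get; set; }   // Distance() returns a nullable double
}

public static class GeoQueries
{
    // Returns all locations within radiusMeters of sourcePoint, closest first.
    // Distance() is translated to STDistance() in TSQL, so both the filter
    // and the ordering run inside SQL Server rather than in memory.
    public static IQueryable<NearbyLocation> FindNearby(GeoLocationContext context,
                                                        DbGeography sourcePoint,
                                                        double radiusMeters)
    {
        return context.Locations
            .Where(loc => loc.Location.Distance(sourcePoint) <= radiusMeters)
            .OrderBy(loc => loc.Location.Distance(sourcePoint))
            .Select(loc => new NearbyLocation
            {
                Address = loc.Address,
                DistanceMeters = loc.Location.Distance(sourcePoint)
            });
    }
}

With the CreatePoint and GeoUtils helpers from the article, the earlier test then reduces to something like: var matches = GeoQueries.FindNearby(context, CreatePoint(45.712113, -121.527200), GeoUtils.MilesToMeters(5));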
.NET 4.5 Only
Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package or installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated by major versions of .NET. Why the decision was made to add these types to System.Data.Entity, rather than to the frequently updated EntityFramework assembly which would have possibly made this work in .NET 4.0, is beyond me, especially given that there are no native .NET framework spatial types to begin with. I find it also odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in those assemblies. They will also work for general-purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF-specific code the Entity Framework-specific types are easy to use and work well.
Working with pre-.NET 4.5 Entity Framework and Spatial Data
If you can't go to .NET 4.5 just yet, you can also still use spatial features in Entity Framework, but it's a lot more work, as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same TSQL syntax I showed earlier using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:
[TestMethod]
public void RawSqlEfAddTest()
{
    string sqlFormat = @"insert into GeoLocations( Location, Address) values
        ( geography::STGeomFromText('POINT({0} {1})', 4326),@p0 )";
    var sql = string.Format(sqlFormat,-121.527200, 45.712113);
    Console.WriteLine(sql);
    var context = new GeoLocationContext();
    Assert.IsTrue(context.Database.ExecuteSqlCommand(sql,"301 N. 15th Street") > 0);
}
Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice, but it is required here. I was unable to use ExecuteSqlCommand() and its named parameter syntax, as the longitude and latitude parameters are embedded into a string. Rest assured it's required, as the following does not work:
string sqlFormat = @"insert into GeoLocations( Location, Address) values
    ( geography::STGeomFromText('POINT(@p0 @p1)', 4326),@p2 )";
context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street")
Explicitly assigning the point value with string.Format works, however. There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get Latitude and Longitude) and you can return calculated values like distance.
Here's an example of how to retrieve some geo data into a result set using EF's SqlQuery method:
[TestMethod]
public void RawSqlEfQueryTest()
{
    var sqlFormat = @"
        DECLARE @s geography
        SET @s = geography:: STGeomFromText('POINT({0} {1})' , 4326);
        SELECT Address, Location.ToString() as GeoString, @s.STDistance( Location) as Distance
        FROM GeoLocations
        ORDER BY Distance";
    var sql = string.Format(sqlFormat, -121.527200, 45.712113);
    var context = new GeoLocationContext();
    var locations = context.Database.SqlQuery<ResultData>(sql);
    Assert.IsTrue(locations.Count() > 0);
    foreach (var location in locations)
    {
        Console.WriteLine(location.Address + " " + location.GeoString + " " + location.Distance);
    }
}
public class ResultData
{
    public string GeoString { get; set; }
    public double Distance { get; set; }
    public string Address { get; set; }
}
Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the PKs and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious, but it did work. Summary For the current project I'm working on we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app heavily relies on spatial queries and it was worth taking a chance with pre-release code to get this ease of integration as opposed to manually falling back to stored procedures or raw SQL string queries to return spatial-specific queries. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-) Resources Download Sample Project © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ADO.NET, SQL Server, .NET

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
This year I embarked on a journey to migrate a group of ASP.NET web applications, developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0, to instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the approaches that led to those lessons. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there. For the Curious From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy. A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post) Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time, with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:
*****************************************
Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.
*****************************************
The advantage of creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes are required to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator in order to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which is deployed as the root. Differences between .NET Accelerator and WSRP Producer Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The basic terminology used to describe the participating applications in a WSRP environment is the same whether applied to the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.
Highly Recommended
Enabling Logging
Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs of your .NET application by adding log4net.Config.XmlConfigurator.Configure(); IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.
Things You Must Do
Merge Web.Config
Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention :-) Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge the required settings from the existing Web.Config into the one in the WSRP Producer.
Use the WSRP Producer Master Page
The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>
with
<asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">
and
</HEAD>
<body>
<form id="theForm" method="post" runat="server">
with
</asp:Content>
<asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">
and finally
</form>
</body>
</HTML>
with
</asp:Content>
In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.
It Happened to Me, It Might Happen to You…Or Not
Watch for Use of Session or Request in OnInit
In the event the .NET application being modified has pages developed on the assumption that the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, then check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) is fully available. I find doing this at the start of the Page_Load method to be the simplest solution.
Case Sensitivity
Markup emitted by .NET applications is not consistent in case, and the Java-based rewriter rules are case sensitive. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:
<RewriterRule>
<LookFor>(href=\"([^\"]+)</LookFor>
<ChangeToAbsolute>true</ChangeToAbsolute>
<ApplyTo>.axd,.css</ApplyTo>
<MakeResource>true</MakeResource>
</RewriterRule>
may need to be duplicated as:
<RewriterRule>
<LookFor>(HREF=\"([^\"]+)</LookFor>
<ChangeToAbsolute>true</ChangeToAbsolute>
<ApplyTo>.axd,.css</ApplyTo>
<MakeResource>true</MakeResource>
</RewriterRule>
While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.
Is it Still Relative?
Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.
I Can See You But I Can't Find You
This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP, but it seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be to simply update the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:
Error 155 The name 'form1' does not exist in the current context C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs 51 52 legacywebsite.module
Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced: the controls become embedded in other controls, and finding them requires a recursion that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:
ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));
becomes this:
ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));
Second, the form id is generally referenced in most ASP.NET applications as a path to a control on the form. Reaching the control once a Master Page has been added requires an additional method that recurses through the control collections within the form to find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:
public static Control FindControlRecursive(Control Root, string Id)
{
    if (Root.ID == Id)
        return Root;
    foreach (Control Ctl in Root.Controls)
    {
        Control FoundCtl = FindControlRecursive(Ctl, Id);
        if (FoundCtl != null)
            return FoundCtl;
    }
    return null;
}
Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this: Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg" To this: Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg" The Master That Won’t Step Aside In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below: … <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager> This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page. Moving On In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it. Migration Planning Providing that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cut over should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise. Location, Location, Location Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then remove the old site. You can also change just the name in IIS. Manually Creating a WSRP Producer Site Instructions (NOTE: all executables used are the same ones used by the installer and “wsrpdev” will be the name of the new instance): 1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev. 2. Bring up a command window as an administrator 3. 
Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727 4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1 5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1 6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev 7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev Tests: 1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6. 2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx 3. Register the producer in WLP and test the portlet. Changing the Port used by WSRP Producer The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl. Do You Need to Migrate? Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.

    Read the article

  • WLS MBeans

    - by Jani Rautiainen
WLS provides a set of Managed Beans (MBeans) to configure, monitor and manage WLS resources. We can use the WLS MBeans to automate some of the tasks related to the configuration and maintenance of the WLS instance. The MBeans can be accessed in a number of ways: using various UIs, and programmatically using Java or WLST Python scripts. For customization development we can use these features to e.g. manage the deployed customization in MDS, control logging levels, automate deployment of dependent libraries etc. This article is an introduction on how to access and use the WLS MBeans. The goal is to illustrate the various access methods in a single article; the details of the features are left to the linked documentation. This article covers a Windows-based environment; the steps for Linux would be similar, however there would be some differences, e.g. in how the file paths are defined.
MBeans
The WLS MBeans can be categorized into runtime and configuration MBeans. The Runtime MBeans can be used to access runtime information about the server and its resources. The data from runtime beans is only available while the server is running. The runtime beans can be used to e.g. check the state of the server or a deployment. The Configuration MBeans contain information about the configuration of servers and resources. The configuration of the domain is stored in the config.xml file, and the configuration MBeans can be used to access and modify the configuration data. For more information on the WLS MBeans refer to: Understanding WebLogic Server MBeans WLS MBean reference
Java Management Extensions (JMX)
We can use the JMX APIs to access the WLS MBeans. This allows us to create Java programs to configure, monitor, and manage WLS resources. In order to use the WLS MBeans we need to add the following library to the class-path: WL_HOME\lib\wljmxclient.jar
Connecting to a WLS MBean server
The WLS MBeans are contained in an MBean server; depending on the requirement we can connect to (MBean Server / JNDI Name):
Domain Runtime MBean Server | weblogic.management.mbeanservers.domainruntime
Runtime MBean Server | weblogic.management.mbeanservers.runtime
Edit MBean Server | weblogic.management.mbeanservers.edit
To connect to the WLS MBean server, first we need to create a map containing the credentials:
Hashtable<String, String> param = new Hashtable<String, String>();
param.put(Context.SECURITY_PRINCIPAL, "weblogic");
param.put(Context.SECURITY_CREDENTIALS, "weblogic1");
param.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
These define the user, the password and the package containing the protocol. Next we create the connection:
JMXServiceURL serviceURL =
    new JMXServiceURL("t3", "127.0.0.1", 7101,
    "/jndi/weblogic.management.mbeanservers.domainruntime");
JMXConnector connector = JMXConnectorFactory.connect(serviceURL, param);
MBeanServerConnection connection = connector.getMBeanServerConnection();
With the connection we can now access the MBeans for the WLS instance. For a complete example see Appendix A of this post. For more details refer to Accessing WebLogic Server MBeans with JMX
Accessing WLS MBeans
The WLS MBeans are structured hierarchically; in order to access content we need to know the path to the MBean we are interested in. An MBean is accessed using the "MBeanServerConnection.getAttribute" API.
WLS provides entry points to the hierarchy allowing us to navigate all the WLS MBeans in the hierarchy (MBean Server / JMX object name): Domain Runtime MBean Server com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean Runtime MBean Servers com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean Edit MBean Server com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean For example we can access the Domain Runtime MBean using: ObjectName service = new ObjectName( "com.bea:Name=DomainRuntimeService," + "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean"); Same syntax works for any “child” WLS MBeans e.g. to find out all application deployments we can: ObjectName domainConfig = (ObjectName)connection.getAttribute(service,"DomainConfiguration"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); Alternatively we could access the same MBean using the full syntax: ObjectName domainConfig = new ObjectName("com.bea:Location=DefaultDomain,Name=DefaultDomain,Type=Domain"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); For more details refer to Accessing WebLogic Server MBeans with JMX Invoking operations on WLS MBeans The WLS MBean operations can be invoked with MBeanServerConnection. invoke API; in the following example we query the state of “AppsLoggerService” application: ObjectName appRuntimeStateRuntime = new ObjectName("com.bea:Name=AppRuntimeStateRuntime,Type=AppRuntimeStateRuntime"); Object[] parameters = { "AppsLoggerService", "DefaultServer" }; String[] signature = { "java.lang.String", "java.lang.String" }; String result = (String)connection.invoke(appRuntimeStateRuntime,"getCurrentState",parameters, signature); The result returned should be "STATE_ACTIVE" assuming the "AppsLoggerService" application is up and running. WebLogic Scripting Tool (WLST) The WebLogic Scripting Tool (WLST) is a command-line scripting environment that we can access the same WLS MBeans. The tool is located under: $MW_HOME\oracle_common\common\bin\wlst.bat Do note that there are several instances of the wlst script under the $MW_HOME, each of them works, however the commands available vary, so we want to use the one under “oracle_common”. The tool is started in offline mode. In offline mode we can access and manipulate the domain configuration. In online mode we can access the runtime information. We connect to the Administration Server : connect("weblogic","weblogic1", "t3://127.0.0.1:7101") In both online and offline modes we can navigate the WLS MBean using commands like "ls" to print content and "cd" to navigate between objects, for example: All the commands available can be obtained with: help('all') For details of the tool refer to WebLogic Scripting Tool and for the commands available WLST Command and Variable Reference. Also do note that the WLST tool can be invoked from Java code in Embedded Mode. Running Scripts The WLST tool allows us to automate tasks using Python scripts in Script Mode. The script can be manually created or recorded by the WLST tool. 
Example commands of recording a script: startRecording("c:/temp/recording.py") <commands that we want to record> stopRecording() We can run the script from WLST: execfile("c:/temp/recording.py") We can also run the script from the command line: C:\apps\Oracle\Middleware\oracle_common\common\bin\wlst.cmd c:/temp/recording.py There are various sample scripts are provided with the WLS instance. UI to Access the WLS MBeans There are various UIs through which we can access the WLS MBeans. Oracle Enterprise Manager Fusion Middleware Control Oracle WebLogic Server Administration Console Fusion Middleware Control MBean Browser In the integrated JDeveloper environment only the Oracle WebLogic Server Administration Console is available to us. For more information refer to the documentation, one noteworthy feature in the console is the ability to record WLST scripts based on the navigation. In addition to the UIs above the JConsole included in the JDK can be used to access the WLS MBeans. The JConsole needs to be started with specific parameter to force WLS objects to be used and jar files in the classpath: "C:\apps\Oracle\Middleware\jdk160_24\bin\jconsole" -J-Djava.class.path=C:\apps\Oracle\Middleware\jdk160_24\lib\jconsole.jar;C:\apps\Oracle\Middleware\jdk160_24\lib\tools.jar;C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote For more details refer to the Accessing Custom MBeans from JConsole. Summary In this article we have covered various ways we can access and use the WLS MBeans in context of integrated WLS in JDeveloper to be used for Fusion Application customization development. References Developing Custom Management Utilities With JMX for Oracle WebLogic Server Accessing WebLogic Server MBeans with JMX WebLogic Server MBean Reference WebLogic Scripting Tool WLST Command and Variable Reference Appendix A package oracle.apps.test; import java.io.IOException;import java.net.MalformedURLException;import java.util.Hashtable;import javax.management.MBeanServerConnection;import javax.management.MalformedObjectNameException;import javax.management.ObjectName;import javax.management.remote.JMXConnector;import javax.management.remote.JMXConnectorFactory;import javax.management.remote.JMXServiceURL;import javax.naming.Context;/** * This class contains simple examples on how to access WLS MBeans using JMX. */public class BlogExample {    /**     * Connection to the WLS MBeans     */    private MBeanServerConnection connection;    /**     * Constructor that takes in the connection information for the      * domain and obtains the resources from WLS MBeans using JMX.     * @param hostName host name to connect to for the WLS server     * @param port port to connect to for the WLS server     * @param userName user name to connect to for the WLS server     * @param password password to connect to for the WLS server     */    public BlogExample(String hostName, String port, String userName,                       String password) {        super();        try {            initConnection(hostName, port, userName, password);        } catch (Exception e) {            throw new RuntimeException("Unable to connect to the domain " +                                       hostName + ":" + port);        }    }    /**     * Default constructor.     * Tries to create connection with default values. Runtime exception will be     * thrown if the default values are not used in the local instance.     
*/    public BlogExample() {        this("127.0.0.1", "7101", "weblogic", "weblogic1");    }    /**     * Initializes the JMX connection to the WLS Beans     * @param hostName host name to connect to for the WLS server     * @param port port to connect to for the WLS server     * @param userName user name to connect to for the WLS server     * @param password password to connect to for the WLS server     * @throws IOException error connecting to the WLS MBeans     * @throws MalformedURLException error connecting to the WLS MBeans     * @throws MalformedObjectNameException error connecting to the WLS MBeans     */    private void initConnection(String hostName, String port, String userName,                                String password)                                 throws IOException, MalformedURLException,                                        MalformedObjectNameException {        String protocol = "t3";        String jndiroot = "/jndi/";        String mserver = "weblogic.management.mbeanservers.domainruntime";        JMXServiceURL serviceURL =            new JMXServiceURL(protocol, hostName, Integer.valueOf(port),                              jndiroot + mserver);        Hashtable<String, String> h = new Hashtable<String, String>();        h.put(Context.SECURITY_PRINCIPAL, userName);        h.put(Context.SECURITY_CREDENTIALS, password);        h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,              "weblogic.management.remote");        JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);        connection = connector.getMBeanServerConnection();    }    /**     * Main method used to invoke the logic for testing     * @param args arguments passed to the program     */    public static void main(String[] args) {        BlogExample blogExample = new BlogExample();        blogExample.testEntryPoint();        blogExample.testDirectAccess();        blogExample.testInvokeOperation();    }    /**     * Example of using an entry point to navigate the WLS MBean hierarchy.     */    public void testEntryPoint() {        try {            System.out.println("testEntryPoint");            ObjectName service =             new ObjectName("com.bea:Name=DomainRuntimeService,Type=" +"weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");            ObjectName domainConfig =                (ObjectName)connection.getAttribute(service,                                                    "DomainConfiguration");            ObjectName[] appDeployments =                (ObjectName[])connection.getAttribute(domainConfig,                                                      "AppDeployments");            for (ObjectName appDeployment : appDeployments) {                String resourceIdentifier =                    (String)connection.getAttribute(appDeployment,                                                    "SourcePath");                System.out.println(resourceIdentifier);            }        } catch (Exception e) {            throw new RuntimeException(e);        }    }    /**     * Example of accessing WLS MBean directly with a full reference.     * This does the same thing as testEntryPoint in slightly difference way.     
*/    public void testDirectAccess() {        try {            System.out.println("testDirectAccess");            ObjectName appDeployment =                new ObjectName("com.bea:Location=DefaultDomain,"+                               "Name=AppsLoggerService,Type=AppDeployment");            String resourceIdentifier =                (String)connection.getAttribute(appDeployment, "SourcePath");            System.out.println(resourceIdentifier);        } catch (Exception e) {            throw new RuntimeException(e);        }    }    /**     * Example of invoking operation on a WLS MBean.     */    public void testInvokeOperation() {        try {            System.out.println("testInvokeOperation");            ObjectName appRuntimeStateRuntime =                new ObjectName("com.bea:Name=AppRuntimeStateRuntime,"+                               "Type=AppRuntimeStateRuntime");            String identifier = "AppsLoggerService";            String serverName = "DefaultServer";            Object[] parameters = { identifier, serverName };            String[] signature = { "java.lang.String", "java.lang.String" };            String result =                (String)connection.invoke(appRuntimeStateRuntime, "getCurrentState",                                          parameters, signature);            System.out.println("State of " + identifier + " = " + result);        } catch (Exception e) {            throw new RuntimeException(e);        }    }}

    Read the article

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue
Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were:
JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue.
1. Recap and Prerequisites
In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them, please see the previous samples, which also include instructions on how to verify that the objects are set up correctly.

WebLogic Server Objects

Object Name            Type                 JNDI Name
TestConnectionFactory  Connection Factory   jms/TestConnectionFactory
TestJMSQueue           JMS Queue            jms/TestJMSQueue
eis/wls/TestQueue      Connection Pool      eis/wls/TestQueue

Schema XSD File
The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process.

stringPayload.xsd
<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns="http://www.example.org"
            targetNamespace="http://www.example.org"
            elementFormDefault="qualified">
  <xsd:element name="exampleElement" type="xsd:string">
  </xsd:element>
</xsd:schema>

JMS Message
After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue:

<?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement>

JDeveloper Connection
You will need a valid Application Server Connection in JDeveloper pointing to the SOA server to which the process will be deployed.

2. Create a BPEL Composite with a JMS Adapter Partner Link
In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema. There are many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is to design the process in such a way that the adapter polls for new messages and, when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the simplest to set up and demonstrate. But it has one disadvantage as a demonstrative model: when a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor the messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution would be to pause consumption for the queue and resume it when needed. This can be done in the WLS Console under JMS Modules > queue > Control > Consumption > Pause/Resume.) We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or an external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won't complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded Java code which prints the message to standard output, so we can view it in the SOA server's log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message there.
The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples, or create a new one.

Create a SOA Project
Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite.

Create a JMS Adapter Partner Link
In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries:

Service Name: JmsAdapterRead
Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS
AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located.
Adapter Interface > Interface: Define from operation and schema (specified later)
Operation Type: Consume Message
Operation Name: Consume_message

Consume Operation Parameters
Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created in a previous example.
JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. It is the JNDI name of the JMS adapter's connection pool created in the WebLogic Server, which points to the connection factory. JDeveloper does not verify the value entered here; if you enter a wrong value, the JMS adapter won't find the queue and you will get an error message at runtime which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.)
Messages/Message Schema URL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project, to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the "Copy to Project" checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project's schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string.

Press Next and Finish, which will complete the JMS Adapter configuration. Save the project.

Create a BPEL Component
Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema, select Template: Define Service Later and press OK.

Wire the JMS Adapter to the BPEL Component
Now wire the JMS adapter to the BPEL process by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist.

This completes the steps at the composite level.
3. Complete the BPEL Process Design

Invoke the BPEL Flow via the JMS Adapter
Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green "+" button and check the "Create Instance" checkbox. This will result in a BPEL instance being created when a new JMS message is received.

At this point it would actually be OK to compile and deploy the composite, and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance's flow in the Enterprise Manager. There are various other possibilities: we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter, etc. But these would all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server's log file, which is easy to monitor.

Add a Java Embedding Activity
First get the full name of the process's input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement", and note the variable's name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit it. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary:

System.out.println("JmsAdapterReadSchema process picked up a message");
oracle.xml.parser.v2.XMLElement inputPayload =
    (oracle.xml.parser.v2.XMLElement)getVariableData(
        "Receive1_Consume_Message_InputVariable",
        "body",
        "/ns2:exampleElement");
String inputString = inputPayload.getFirstChild().getNodeValue();
System.out.println("Input String is " + inputString);

Tip: if you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer.

This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server.

4. Test the Composite

Shut Down the JmsAdapterReadSchema Composite
After the JmsAdapterReadSchema composite is deployed to the SOA server, it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the process first. Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0].
Press the Shut Down button to disable the composite and confirm the following popup.

Monitor Messages in the JMS Queue
In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This ensures that the queue is empty and doesn't contain any messages in the wrong format, which would cause the JmsAdapterReadSchema process to fail.

Send a Test Message
In the Enterprise Manager, navigate to the JmsAdapterWriteSchema composite created earlier, press Test and send a test message, for example "Message from JmsAdapterWriteSchema". Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console.

Monitor the SOA Server's Output
A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started; otherwise open and monitor the file to which it was redirected.

Re-Enable the JmsAdapterReadSchema Composite
In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message, and the following output should be written to the server's standard output:

JmsAdapterReadSchema process picked up a message
Input String is Message from JmsAdapterWriteSchema

Note that you can also monitor the payload received by the process by navigating to the JmsAdapterReadSchema's Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too.

5. Troubleshooting
This sample demonstrates how to dequeue an XML JMS message using a BPEL process and contains no additional functionality. For example, it doesn't contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as

Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc.

check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in the previous section. If this doesn't help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager. Or use a different method, such as writing it to a file via a file adapter.

This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database.

Best regards
John-Brown Evans
Oracle Technology Proactive Support Delivery

    Read the article

  • CodePlex Daily Summary for Saturday, October 29, 2011

CodePlex Daily Summary for Saturday, October 29, 2011

Popular Releases

patterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following: Common extensions: TypeConfigurationElement<T> - A Polymorphic Configuration Element without having to be part of a PolymorphicConfigurationElementCollection. AnonymousConfigurationElement - A Configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensions: MySql Provider - ...

Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetworkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...

Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.

Folder Bookmarks: Folder Bookmarks 2.2.0.1: In this version: Custom Icons - now you can change the icons of the bookmarks. By default, whenever an image is added, the icon is automatically changed to a thumbnail of the picture. This can be turned off in the settings (Options... > Settings). Ability to remove items from the 'Recent' category. Bugfixes - 'Choose' button in 'Edit Bookmark' now works. Another bug fix: another problem in the 'Edit Bookmark' window.

Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movies Fixed: Fanart and poster scraping issues. TV Shows (Re)Added: Rebuild single show. Fixed: Issue when shows are moved from original location. Ability to handle " for actor nicknames. Crash when episode name contains "<" (does not scrape yet). Clears fanart when switch...

patterns & practices - Unity: Unity 3.0 for .NET4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5. Dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit. Converted reflection to use the new TypeInfo for reflection. Projects updated to work with the Microsoft Visual Studio 2011 Preview. Notes/Known Issues: The Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...

Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.

Image Converter: Image Converter 0.3: New Features: English and German support. Technical Improvements: Microsoft All Rules using Code Analysis. Planned Features for future release: 1. Unit testing 2. Command line interface 3. Automatic Updates

AcDown (Anime & Comic Downloader): AcDown v3.6: AcDown is a downloader and player supporting Acfun, Bilibili, Tucao.cc, SF and around 80 other sites. It is written in C# and requires only the .NET Framework 2.0. It runs on 32- and 64-bit Windows XP/Vista/7; on Windows XP, install .NET Framework 2.0 (x86) or (x64) first. Changes in v3.6: ...

DotNetNuke® Events: 05.02.01: This release fixes any known bugs from any previous version. Events 05.02.01 will work for any DNN version 5.5.0 and up. Full details on the changes can be found at http://dnnevents.codeplex.com/workitem/list/basic Please review and rate this release... (stars are welcome) BUG FIXES: Added validation around category cookie. RSS feed was missing an explicit close of the file when writing. Fixed. Added extra security into detail view. .ICS files did not include correct line folding. Fixed. Cha...

Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.33: Add JSParser.ParseExpression method to parse JavaScript expressions rather than source-elements. Add -strict switch (CodeSettings.StrictMode) to force input code to ECMA5 Strict-mode (extra error-checking, "use strict" at top). Fixed bug when MinifyCode setting was set to false but RemoveUnneededCode was left at its default value of true.

Path Copy Copy: 8.0: New version that mostly adds lots of requested features: 11340 11339 11338 11337 This version also features a more elaborate Settings UI that has several tabs. I tried to add some notes to better explain the use and purpose of the various options. The Path Copy Copy documentation is also on the way, both to explain how to develop custom plugins and to explain how to pre-configure options if you're a network admin. Stay tuned.

MVC Controls Toolkit: Mvc Controls Toolkit 1.5.0: Added: The new Client Blocks feature of Views. A new "move" js method for the TreeViews. The NewHtmlCreated js event to the DataGrid. Improved the ChoiceList structure that now allows also the selection list of a dropdown to be chosen with a lambda expression. Improved the AcceptViewHintAttribute controller filter. Now a client can specify not only the name of a View or Partial View it prefers, but also to receive just the rough data in Json format. Fixed: Issue with partial thrust Cl...

Free SharePoint Master Pages: Buried Alive (Halloween) Theme: Release Notes: Created for Halloween; you will find the theme file, custom CSS file and images. Created by Al Roome (@AlstarRoome). Features: custom styling for web part, custom background. Screenshot: https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.png

DevForce Application Framework: DevForce AF 2.0.3 RTW: Prerequisites: WPF 4.0, Silverlight 4.0, DevForce 2010 6.1.3.1. Download Contents: Debug and Release Assemblies, API Documentation, Source code, License.txt, Requirements.txt. Release Highlights: New: EventAggregator event forwarding. New: EntityManagerInterceptor<T> to intercept EntityManager events. New: IHarnessAware to allow for ViewModel setup when executed inside of the Development Harness. New: Improved design time stability. New: Support for add-in development. New: CoroutineFns.To...

NicAudio: NicAudio 2.0.5: Minor change to accept special DTS stereo modes (LtRt, AB, ...)

NDepend TFS 2010 integration: version 0.5.0 beta 1: Only the activity and the VS plugin are available right now. They basically work. Data types that are logged into TFS reports are subject to change.
This is no big deal since data is not yet sent into the warehouse.

Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 (September 2011). Upgraded the tools to support the Windows Phone Developer Tools RTW. Updated SQL Azure only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package). Changed Shared Access Signature service interface to support more operations. Refactored Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...

DotNetNuke® FAQ: 05.00.00: FAQ (Frequently Asked Questions) 05.00.00 will work for any DNN version 5.6.1 and up. It is the first version which is rewritten in C#. The scope of this update is to fix all known issues and improve the user interface. Please review and rate this release... (stars are welcome) BUG FIXES: Manage Categories button text was not localized. Edit/Add FAQ Entry: button text was not localized. ENHANCEMENTS: Added an option to select the control for category display: Listbox with checkboxes (flat category ...

SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.921.340): Added CodePlex and PayPal links. New icon.

New Projects

Asynk: Asynk is a framework/application that allows existing applications to easily be extended with an offloaded asynchronous worker layer. Asynk is developed using C#.

Blob Tower Defense: 3D tower defense game for Windows Phone 7. School project for Brno University of Technology, computer graphics class.

Booz: Booz is an extended version of the boo shell (booish2 to be precise). It offers additional commands like cd, md, ls etc. I hope this shell can be used to take the position of/surpass the native Windows shell in the near future.

CIMS: A sanction information system for science and technology of HUST.

CrystalDot - Icon Collection / Pack (LGPL): A .NET/Mono-friendly variant of the Crystal icons by Everaldo. Icon collection/pack for .NET and Mono designed by Everaldo, KDE style. http://www.everaldo.com/crystal/

dotetes: dotetes is a crossword puzzle tool developed in C#.

Emoe': This project is a Windows Phone 7.1 application.

Equation Inversion: Visual Studio 2008 add-in for equation inversions.

Exploring VMR Features on WEC7: This sample application helps you do alpha blending of a bitmap on camera streaming in Windows Embedded Compact 7 using the DirectShow Video Mixing Renderer (VMR). It is a VS2008-based smart device project developed in C++. The sample application is explained at the following blog link: http://www.e-consystems.com/blog/windowsce/?p=759

EzValidation: Custom validation extensions for ASP.NET MVC 3. Includes server and client side model based validation attributes for: Equal To, Not Equal To, Greater Than, Greater Than or Equal To, Less Than, Less Than or Equal To. Supports validating against: Another Model Field, A Specific Value, Current Date/Yesterday/Tomorrow (for Dates and Strings). Download & install via NuGet "package-install ezvalidation"

Flu.net: Flu.net is a tool that helps you create your own fluent syntax for .NET Framework applications in a declarative fashion. It is aimed at infrastructure and other open-source project use.

For Chess Endgames: King vs. King Opposition Calculator: You must input the locations of 2 kings on a chessboard, and whose turn it is to move.
The calculator will display which king has the opposition, and how it can be used or maintained.

GameTrakXNA: This project aims to create a simple library to use the unique GameTrak controller within XNA and Flash.

Google Speech Recognition Example: Google Speech Recognition contains a working example of an application that uses the Google speech recognition API. The app contains all necessary dlls to record, decode and send your voice request to the Google service and receive a text representation of what you've said. It's developed in C#.

Interval Mandelbrot Explorer: Explore the Mandelbrot set using interval arithmetic.

ISD training tasks: ISD training examples and tasks.

iTunesControlBar: The iTunesControlBar helps users control their iTunes application while it is minimized. iTunesControlBar resides at the top of the screen, invisible when not used, and allows playback and volume control, library searches and media information without the need to bring up iTunes.

iTurtle: A bunch of PowerShell scripts to automate server management in an AD environment.

M26WC - Mono 2.6 Wizard Control: A wizard which runs under Mono 2.6. A fork of http://aerowizard.codeplex.com/

Microsoft Help Viewer 2: Help Viewer 2 is the help runtime for both Visual Studio 11 help and Windows 8 help. The code in this project will help you use and understand the HV2 runtime API.

MONTRASEC: Monitoring Trafficking in human beings and Sexual Exploitation of Children: benchmarking for member state and EU reporting, turning the SIAMSECT templates into a user-friendly interface and reporting tool.

MTF.NET Runtime: Managed Task Framework .NET Runtime. The MTF.NET runtime software and resulting assemblies are required to run applications built using the Managed Task Framework.NET Professional (Visual Studio 2010 extension) software design editor. The MTF.NET team are committed to continuously improving the core MTF.NET runtime and ensuring it is always available free and fully transparent.

Pandoras Box: A greenfield inversion of control project utilising the power and flexibility of expressions and preferring convention over configuration.

Pass the Puzzle: Pass the Puzzle is a frantic word-guessing party game. The game displays a few letters, and the players must come up with words containing those letters. But beware: if the timer goes off, you lose! It is based on the folk party game Pass the Parcel and is written in C#.

PerCiGal: Percigal is a project for the development of applications for managing your personal media library. It consists of: a Windows application to use at home to catalog movies, TV series, cast and books, with the support of the Internet for information retrieval; a web interface for viewing and cataloging your media anywhere; and an application for smartphones.

Project Flying Carpet: This game is a project for the Game Design: Game Engines course in the Digital Games program at Unisinos.

proxy browser: sed leo Latin's Butterfly....

Python Multiple Dispatch: Multiple dispatch (AKA multimethods) for Python 3 via a metaclass and type annotations.

reDune: A game in the style of Dune 2000 from Westwood and Electronic Arts.

Rereadable: Keep pages from the internet to read later.

ServStop: ServStop is a .NET application that makes it easy to stop several system services at once. Now you don't have to change startup types or stop them one at a time. It has a simple list-based interface with the ability to save and load lists of user services to stop.
Written in C#.

SharePoint 2010 Audience Membership Workflow Activity (Full Trust): A simple SharePoint 2010 workflow activity / workflow condition to check whether the user initiating the workflow is a member of a specified audience. Farm-level .wsp solution, written in C#. Once installed, the workflow activity can be used in SharePoint Designer 2010 declarative workflows.

SQL Server® to Firebird DB converter: Converts a Microsoft SQL Server® database into a Firebird database, including the entire structure and data.

stegitest: test project

System.Threading.Joins: The Joins project provides asynchronous concurrency semantics based on join calculus and modeled after the Microsoft Research Cω (C Omega) project.

TestAndroidGame: An attempt to develop a test Android game.

tetribricks: block game

Topographic Explorer: A project to import, convert, explore, manipulate, and save topographical maps. Looking to use C# and WPF.

Trading: Under construction!!!

Trombone: Trombone makes it easier for Windows Mobile Professional users to automate status reply through SMS. It's developed in Visual C# 2008.

Tulsa SharePoint Interest Group: Repository for source code for the Tulsa SharePoint Interest Group's web site. The Tulsa SharePoint Interest Group is using the Community Kit for SharePoint. This project will house any modifications that are specific to our user group.

World of Tanks RU tiny stats collection utility: Tiny utility to load player stats for the World of Tanks RU server. Results are saved to a comma-separated file.

WS-Discovery Proxy: Attempt at creating a general purpose WS-Discovery Proxy.

Yamaha Tu?n Tr?c: This application is used to manage information for Yamaha Tu?n Tr?c

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
originally published in CoDe Magazine Editorial

Microsoft recently released the first CTP of a new development environment called WebMatrix, which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

IIS Developer Express
IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (things like PHP or classic ASP, for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

SQL Server Compact 4.0
Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new; it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft's attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or load it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

ASP.NET Razor View Engine
What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

<!DOCTYPE html>
<html>
    <head>
        <title></title>
    </head>
    <body>
    <h1>Razor Test</h1>

    <!-- simple expressions -->
    @DateTime.Now
    <hr />

    <!-- method expressions -->
    @DateTime.Now.ToString("T")

    <!-- code blocks -->
    @{
        List<string> names = new List<string>();
        names.Add("Rick");
        names.Add("Markus");
        names.Add("Claudio");
        names.Add("Kevin");
    }

    <!-- structured block statements -->
    <ul>
    @foreach(string name in names){
        <li>@name</li>
    }
    </ul>

    <!-- Conditional code -->
    @if(true) {
        <!-- Literal Text embedding in code -->
        <text>
        true
        </text>;
    }
    else
    {
        <!-- Literal Text embedding in code -->
        <text>
        false
        </text>;
    }
    </body>
</html>

Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
The syntax is easier to read and grok, and much shorter to type than ASP.NET alligator tags (<% %>), and it's also easier to understand aesthetically what's happening in the markup code. Razor also is a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well: it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators to simple form letters to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine. Razor also better fits the "view-based" approach, where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future; in fact, there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.
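To make the compilation model described above a little more concrete, here is a rough sketch of the shape a Razor page might compile into. This is illustrative only: the class name is invented, and the Execute/Write/WriteLiteral member names are assumptions based on the Microsoft.WebPages.WebPage base class mentioned earlier, not actual generated source.

using System;
using System.Collections.Generic;

// Illustrative sketch: roughly what a Razor page becomes after parsing.
// Markup turns into literal writes; @-expressions turn into Write() calls.
// The class name and member names here are assumptions, not real generated code.
public class CompiledRazorPageSketch : Microsoft.WebPages.WebPage
{
    public override void Execute()
    {
        WriteLiteral("<h1>Razor Test</h1>\r\n");  // plain markup: literal text
        Write(DateTime.Now);                       // @DateTime.Now: expression written to the response
        WriteLiteral("<hr />\r\n");
        Write(DateTime.Now.ToString("T"));         // @DateTime.Now.ToString("T")
    }
}

Viewed this way, it's easier to see why the operational model stays so close to the Web Forms engine minus the Page object model: in the end it's still just compiled code writing text to the response stream.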
WebMatrix Development Environment
To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed. WebMatrix leverages the Web Platform Installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

Simplified Database Access
The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

@{
    var db = Database.OpenFile("FirstApp.sdf");
    string sql = "select * from customers where Id > @0";
}
<ul>
@foreach(var row in db.Query(sql,1)){
    <li>@row.FirstName @row.LastName</li>
}
</ul>

The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters passed as simple values. The result is returned as a collection of rows, which in turn have a row object with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this wasn't included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files; I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies, as required (see the ADO.NET sketch at the end of this section).

The good, the bad, the obnoxious - it's still .NET
The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
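As a concrete illustration of that point, here is a minimal sketch of querying the same data with the standard ADO.NET provider for SQL Server Compact (the System.Data.SqlServerCe namespace) instead of the Database helper. The FirstApp.sdf file name and the customers table are carried over from the helper example above; the file path and schema are assumptions for illustration, so adjust them for your own application.

using System;
using System.Data.SqlServerCe; // SQL Server Compact ADO.NET provider; drop the assembly in bin

class SqlCeSketch
{
    static void Main()
    {
        // FirstApp.sdf and the customers table come from the Database helper
        // example above; adjust the path and schema for your application.
        using (var conn = new SqlCeConnection("Data Source=FirstApp.sdf"))
        {
            conn.Open();
            var sql = "SELECT FirstName, LastName FROM customers WHERE Id > @id";
            using (var cmd = new SqlCeCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@id", 1);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
                }
            }
        }
    }
}

The Database helper is essentially a thin, dynamically typed wrapper over this kind of code, which is why it feels immediately familiar to anyone coming from the DataTable style of data access.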
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy; there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

Who needs WebMatrix? Uhm… good question
Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other, simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks (and other hardcore PHP developers) back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you are still going to have to write some C# or VB or other .NET code. If the target is a hobbyist, amateur, or non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a bit further along in your development endeavor, to the point when you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting, and actually I've heard a lot of discussion from various developers who've been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development, rather than the bread-and-butter scenarios that many developers are interested in to get their work accomplished.
And that in the end may be WebMatrix's main raison d'être: to remind Microsoft that simpler, more high-level solutions are actually needed to appeal to non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing: especially debugging and IntelliSense, but also general editor support. It's not clear whether that is because the product is still in an early alpha release or whether it's simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, it is WebMatrix's technology pieces (which are really independent of the WebMatrix product) that are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had, and have complained about for some time, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't inconsistent due to the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. For an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want. The tools provided are minimal and provide nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step (or several steps) back and take another look and realize just how far we've come.
WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
PARTNER FOCUS: Oracle Exadata Marketing Campaign
Steve McNickle, VP Europe, cVidya

Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telekom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade.

Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations?

"cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized for DRMap, our Data Retention solution."

What unique capabilities and customer benefits does Oracle Exastack add to your applications?

"Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from, say, the six or seven cabinets that they needed in the past, down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads. The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention."

How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that the architecture is the same in order to keep cost and time overheads to a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success."

How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need?

"We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems, including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready or Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies, which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized', for partners whose applications run best on the Oracle stack and who have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme."

How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly?

"One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build into our products to help them fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented in an on-premise environment.
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry."

What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships?

"My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single-vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be based on Oracle."

What opportunities are immediately opened to new ISV partners joining the OPN?

"As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation."

Finally, are there any other messages that you would like to share with the Oracle ISV community?

"The crucial message that I always like to reinforce is architecture, architecture and architecture!
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "Will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud?" The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them."

--

Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel.

"The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands."

"Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations—end users need to see the power and benefits for themselves."

"Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."


  • Silverlight 2.0 - Can't get the text wrapping behaviour that I want

    - by Anthony
    I am having trouble getting Silverlight 2.0 to lay out text exactly how I want. I want text with line breaks and embedded links, with wrapping, like HTML text in a web page. Here's the closest that I have come: <UserControl x:Class="FlowPanelTest.Page" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Controls="clr-namespace:Microsoft.Windows.Controls;assembly=Microsoft.Windows.Controls" Width="250" Height="300"> <Border BorderBrush="Black" BorderThickness="2" > <Controls:WrapPanel> <TextBlock x:Name="tb1" TextWrapping="Wrap">Short text. </TextBlock> <TextBlock x:Name="tb2" TextWrapping="Wrap">A bit of text. </TextBlock> <TextBlock x:Name="tb3" TextWrapping="Wrap">About half of a line of text.</TextBlock> <TextBlock x:Name="tb4" TextWrapping="Wrap">More than half a line of longer text.</TextBlock> <TextBlock x:Name="tb5" TextWrapping="Wrap">More than one line of text, so it will wrap onto the following line.</TextBlock> </Controls:WrapPanel> </Border> </UserControl> But the issue is that although the text blocks tb1 and tb2 will go onto the same line because there is room enough for them completely, tb3 onwards will not start on the same line as the previous block, even though it will wrap onto following lines. I want each text block to start where the previous one ends, on the same line. I want to put click event handlers on some of the text. I also want paragraph breaks. Essentially I'm trying to work around the lack of FlowDocument and Hyperlink controls in Silverlight 2.0's subset of XAML. To answer the questions posed in the answers:

Why not use runs for the non-clickable text? If I just use individual TextBlocks only on the clickable text, then those bits of text will still suffer from the wrapping problem illustrated above. And so will the TextBlock just before the link, and the TextBlock just after. Essentially all of it. It doesn't look like I have many opportunities for putting multiple runs in the same TextBlock. Dividing the links from the other text with RegExs and loops is not the issue at all, the issue is display layout.

Why not put each word in an individual TextBlock in a WrapPanel? Aside from being an ugly hack, this does not play at all well with linebreaks - the layout is incorrect. It would also make the underline style of linked text into a broken line. Here's an example with each word in its own TextBlock. Try running it, and note that the linebreak isn't shown in the right place at all. <UserControl x:Class="SilverlightApplication2.Page" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Controls="clr-namespace:Microsoft.Windows.Controls;assembly=Microsoft.Windows.Controls" Width="300" Height="300"> <Controls:WrapPanel> <TextBlock TextWrapping="Wrap">Short1 </TextBlock> <TextBlock TextWrapping="Wrap">Longer1 </TextBlock> <TextBlock TextWrapping="Wrap">Longerest1 </TextBlock> <TextBlock TextWrapping="Wrap"> <Run>Break</Run> <LineBreak></LineBreak> </TextBlock> <TextBlock TextWrapping="Wrap">Short2</TextBlock> <TextBlock TextWrapping="Wrap">Longer2</TextBlock> <TextBlock TextWrapping="Wrap">Longerest2</TextBlock> <TextBlock TextWrapping="Wrap">Short3</TextBlock> <TextBlock TextWrapping="Wrap">Longer3</TextBlock> <TextBlock TextWrapping="Wrap">Longerest3</TextBlock> </Controls:WrapPanel> </UserControl>

What about the LinkLabelControl, as here and here? It has the same problems as the approach above, since it's much the same.
Try running the sample, and make the link text longer and longer until it wraps. Note that the link starts on a new line, which it shouldn't. Make the link text even longer, so that the link text is longer than a line. Note that it doesn't wrap at all, it cuts off. This control doesn't handle line breaks and paragraph breaks either.

Why not put the text all in runs, detect clicks on the containing TextBlock and work out which run was clicked? Runs do not have mouse events, but the containing TextBlock does. I can't find a way to check if the run is under the mouse (IsMouseOver is not present in Silverlight) or to find the bounding geometry of the run (no clip property).

There is VisualTreeHelper.FindElementsInHostCoordinates(). The code below uses VisualTreeHelper.FindElementsInHostCoordinates to get the controls under the click. The output lists the TextBlock but not the Run, since a Run is not a UIElement. private void theText_MouseLeftButtonDown(object sender, System.Windows.Input.MouseButtonEventArgs e) { // get the elements under the click UIElement uiElementSender = sender as UIElement; Point clickPos = e.GetPosition(uiElementSender); var UiElementsUnderClick = VisualTreeHelper.FindElementsInHostCoordinates(clickPos, uiElementSender); // show the controls string outputText = ""; foreach (var uiElement in UiElementsUnderClick) { outputText += uiElement.GetType().ToString() + "\n"; } this.outText.Text = outputText; }

Use an empty text block with a margin to space following content onto a following line? I'm still thinking about this one. How do you calculate the right width for a line-breaking block to force following content onto the following line? Too short and the following content will still be on the same line, at the right. Too long and the "linebreak" will be on the following line, with content after it. You would have to resize the breaks when the control is resized. Some of the code for this is: TextBlock lineBreak = new TextBlock(); lineBreak.TextWrapping = TextWrapping.Wrap; lineBreak.Text = " "; // need adaptive width lineBreak.Margin = new Thickness(0, 0, 200, 0);
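One way to make that last idea workable is to size the line-breaking block at runtime instead of guessing a fixed margin. Below is a minimal sketch of that approach; AdjustBreakWidth is a hypothetical helper (not a framework API), and it assumes the Silverlight Toolkit WrapPanel plus UIElement.TransformToVisual, which Silverlight 2 does provide:

    // Stretch the "break" block from its own left edge to the panel's right
    // edge, so whatever follows it is forced onto the next line.
    private void AdjustBreakWidth(WrapPanel panel, TextBlock breakBlock)
    {
        GeneralTransform t = breakBlock.TransformToVisual(panel);
        Point topLeft = t.Transform(new Point(0, 0));
        breakBlock.Width = Math.Max(0, panel.ActualWidth - topLeft.X);
    }

    // Re-measure whenever the panel is resized, e.g.:
    // panel.SizeChanged += (s, e) => AdjustBreakWidth(panel, lineBreak);

Setting Width triggers another layout pass, so a real implementation would need a re-entrancy guard, but it avoids hard-coding the 200-pixel margin shown above.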


  • Problems Mapping a List of Serializable Objects with JDO

    - by Sergio del Amo
    I have two classes, Invoice and InvoiceItem. I would like Invoice to have a List of InvoiceItem objects. I have read that the list must be of primitive or serializable objects, so I have made InvoiceItem Serializable. Invoice.java looks like import java.util.ArrayList; import java.util.Date; import java.util.List; import javax.jdo.annotations.Column; import javax.jdo.annotations.Embedded; import javax.jdo.annotations.EmbeddedOnly; import javax.jdo.annotations.IdGeneratorStrategy; import javax.jdo.annotations.IdentityType; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; import javax.jdo.annotations.Element; import javax.jdo.annotations.PrimaryKey; import com.google.appengine.api.datastore.Key; import com.softamo.pelicamo.shared.InvoiceCompanyDTO; @PersistenceCapable(identityType = IdentityType.APPLICATION) public class Invoice { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private Long id; @Persistent private String number; @Persistent private Date date; @Persistent private List<InvoiceItem> items = new ArrayList<InvoiceItem>(); public Invoice() {} public Long getId() { return id; } public void setId(Long id) {this.id = id;} public String getNumber() { return number;} public void setNumber(String invoiceNumber) { this.number = invoiceNumber;} public Date getDate() { return date;} public void setDate(Date invoiceDate) { this.date = invoiceDate;} public List<InvoiceItem> getItems() { return items;} public void setItems(List<InvoiceItem> items) { this.items = items;} } and InvoiceItem.java looks like import java.io.Serializable; import java.math.BigDecimal; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; @PersistenceCapable public class InvoiceItem implements Serializable { @Persistent private BigDecimal amount; @Persistent private float quantity; public InvoiceItem() {} public BigDecimal getAmount() { return amount;} public void setAmount(BigDecimal amount) { this.amount = amount;} public float getQuantity() { return quantity;} public void setQuantity(float quantity) { this.quantity = quantity;} } I get the following error while running a JUnit test.
javax.jdo.JDOUserException: Attempt to handle persistence for object using datastore-identity yet StoreManager for this datastore doesn't support that identity type at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:375) at org.datanucleus.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:674) at org.datanucleus.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:694) at com.softamo.pelicamo.server.InvoiceStore.add(InvoiceStore.java:23) at com.softamo.pelicamo.server.PopulateStorage.storeInvoices(PopulateStorage.java:58) at com.softamo.pelicamo.server.PopulateStorage.run(PopulateStorage.java:46) at com.softamo.pelicamo.server.InvoiceStoreTest.setUp(InvoiceStoreTest.java:44) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) NestedThrowablesStackTrace: Attempt to handle persistence for object using datastore-identity yet StoreManager for this datastore doesn't support that identity type org.datanucleus.exceptions.NucleusUserException: Attempt to handle persistence for object using datastore-identity yet StoreManager for this datastore doesn't support that identity type at org.datanucleus.state.AbstractStateManager.<init>(AbstractStateManager.java:128) at org.datanucleus.state.JDOStateManagerImpl.<init>(JDOStateManagerImpl.java:215) at org.datanucleus.jdo.JDOAdapter.newStateManager(JDOAdapter.java:119) at org.datanucleus.state.StateManagerFactory.newStateManagerForPersistentNew(StateManagerFactory.java:150) at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1297) at org.datanucleus.sco.SCOUtils.validateObjectForWriting(SCOUtils.java:1476) at 
org.datanucleus.store.mapped.scostore.ElementContainerStore.validateElementForWriting(ElementContainerStore.java:380) at org.datanucleus.store.mapped.scostore.FKListStore.validateElementForWriting(FKListStore.java:609) at org.datanucleus.store.mapped.scostore.FKListStore.internalAdd(FKListStore.java:344) at org.datanucleus.store.appengine.DatastoreFKListStore.internalAdd(DatastoreFKListStore.java:146) at org.datanucleus.store.mapped.scostore.AbstractListStore.addAll(AbstractListStore.java:128) at org.datanucleus.store.mapped.mapping.CollectionMapping.postInsert(CollectionMapping.java:157) at org.datanucleus.store.appengine.DatastoreRelationFieldManager.runPostInsertMappingCallbacks(DatastoreRelationFieldManager.java:216) at org.datanucleus.store.appengine.DatastoreRelationFieldManager.access$200(DatastoreRelationFieldManager.java:47) at org.datanucleus.store.appengine.DatastoreRelationFieldManager$1.apply(DatastoreRelationFieldManager.java:115) at org.datanucleus.store.appengine.DatastoreRelationFieldManager.storeRelations(DatastoreRelationFieldManager.java:80) at org.datanucleus.store.appengine.DatastoreFieldManager.storeRelations(DatastoreFieldManager.java:955) at org.datanucleus.store.appengine.DatastorePersistenceHandler.storeRelations(DatastorePersistenceHandler.java:527) at org.datanucleus.store.appengine.DatastorePersistenceHandler.insertPostProcess(DatastorePersistenceHandler.java:299) at org.datanucleus.store.appengine.DatastorePersistenceHandler.insertObjects(DatastorePersistenceHandler.java:251) at org.datanucleus.store.appengine.DatastorePersistenceHandler.insertObject(DatastorePersistenceHandler.java:235) at org.datanucleus.state.JDOStateManagerImpl.internalMakePersistent(JDOStateManagerImpl.java:3185) at org.datanucleus.state.JDOStateManagerImpl.makePersistent(JDOStateManagerImpl.java:3161) at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1298) at org.datanucleus.ObjectManagerImpl.persistObject(ObjectManagerImpl.java:1175) at org.datanucleus.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:669) at org.datanucleus.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:694) at com.softamo.pelicamo.server.InvoiceStore.add(InvoiceStore.java:23) at com.softamo.pelicamo.server.PopulateStorage.storeInvoices(PopulateStorage.java:58) at com.softamo.pelicamo.server.PopulateStorage.run(PopulateStorage.java:46) at com.softamo.pelicamo.server.InvoiceStoreTest.setUp(InvoiceStoreTest.java:44) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

Moreover, when I try to store an invoice with a list of items through my app, in the development console I can see that the items are not persisted to any field, while the rest of the Invoice class properties are stored properly. Does anyone know what I am doing wrong?

Solution: As pointed out in the answers, the error says that the InvoiceItem class was missing a primary key. I tried with: @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private Long id; But I was getting javax.jdo.JDOFatalUserException: Error in meta-data for InvoiceItem.id: Cannot have a java.lang.Long primary key and be a child object (owning field is Invoice.items). In 'persist list of objects', @aldrin pointed out that for child classes the primary key has to be a com.google.appengine.api.datastore.Key value (or a Key encoded as a string). So, I tried with Key. It worked. @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private Key id;
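For reference, here is a sketch of the corrected child class implied by that solution; on App Engine, child objects in an owned relationship must use a com.google.appengine.api.datastore.Key (or a Key encoded as a String) as their primary key so they can live in the parent's entity group:

    import java.io.Serializable;
    import java.math.BigDecimal;
    import javax.jdo.annotations.IdGeneratorStrategy;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.Persistent;
    import javax.jdo.annotations.PrimaryKey;
    import com.google.appengine.api.datastore.Key;

    @PersistenceCapable
    public class InvoiceItem implements Serializable {
        // A Long id is not allowed here because InvoiceItem is owned by
        // Invoice (the owning field is Invoice.items); a Key is required.
        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key id;

        @Persistent
        private BigDecimal amount;

        @Persistent
        private float quantity;

        public InvoiceItem() {}

        public Key getId() { return id; }
        public BigDecimal getAmount() { return amount; }
        public void setAmount(BigDecimal amount) { this.amount = amount; }
        public float getQuantity() { return quantity; }
        public void setQuantity(float quantity) { this.quantity = quantity; }
    }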


  • Does red5 read tomcat-users.xml?

    - by baba
    Hi, I have been busy creating an app for Red5. Imagine my surprise when I tried to configure basic/digest authentication and couldn't. What struck me as strange is that I have a running Tomcat instance that works and authenticates correctly with the following XMLs: web.xml (part of) <security-constraint> <web-resource-collection> <web-resource-name>A Protected Page</web-resource-name> <url-pattern>/stats.jsp</url-pattern> </web-resource-collection> <auth-constraint> <description/> <role-name>tomcat</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>DIGEST</auth-method> <realm-name>BLAAAAAAAAAAAAAAAAA</realm-name> </login-config> <security-role> <description/> <role-name>tomcat</role-name> </security-role> and a tomcat-users.xml in /conf that looks kinda like this: <?xml version="1.0" encoding="UTF-8"?> <tomcat-users> <role rolename="tomcat"/> <user username="ide" password="bogus" roles="tomcat"/> </tomcat-users> The annoying thing is that this configuration authenticates correctly on Tomcat's servlet container, but on Red5's modified one it just keeps asking for authentication. Am I going mad, or should it work like a charm? Red5 is version 0_9_1. The stats.jsp is accessible in both servlet containers; the only difference is that when you input the correct password and username in Tomcat you are logged in, while in Red5 you are not, it just keeps asking you for the password. Any pointers? Am I missing something? Here is a stack trace of the error I receive at the moment I try the login: Caused by: java.io.IOException: Unable to locate a login configuration at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:250) [na:1.6.0_22] at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:91) [na:1.6.0_22] ...
27 common frames omitted [ERROR] [http-127.0.0.1-5080-1] org.apache.catalina.realm.JAASRealm - Cannot find message associated with key jaasRealm.unexpectedError java.lang.SecurityException: Unable to locate a login configuration at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:93) [na:1.6.0_22] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [na:1.6.0_22] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) [na:1.6.0_22] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) [na:1.6.0_22] at java.lang.reflect.Constructor.newInstance(Constructor.java:513) [na:1.6.0_22] at java.lang.Class.newInstance0(Class.java:355) [na:1.6.0_22] at java.lang.Class.newInstance(Class.java:308) [na:1.6.0_22] at javax.security.auth.login.Configuration$3.run(Configuration.java:247) [na:1.6.0_22] at java.security.AccessController.doPrivileged(Native Method) [na:1.6.0_22] at javax.security.auth.login.Configuration.getConfiguration(Configuration.java:242) [na:1.6.0_22] at javax.security.auth.login.LoginContext$1.run(LoginContext.java:237) [na:1.6.0_22] at java.security.AccessController.doPrivileged(Native Method) [na:1.6.0_22] at javax.security.auth.login.LoginContext.init(LoginContext.java:234) [na:1.6.0_22] at javax.security.auth.login.LoginContext.<init>(LoginContext.java:403) [na:1.6.0_22] at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:394) [catalina-6.0.24.jar:na] at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:357) [catalina-6.0.24.jar:na] at org.apache.catalina.authenticator.DigestAuthenticator.findPrincipal(DigestAuthenticator.java:283) [catalina-6.0.24.jar:na] at org.apache.catalina.authenticator.DigestAuthenticator.authenticate(DigestAuthenticator.java:176) [catalina-6.0.24.jar:na] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:523) [catalina-6.0.24.jar:na] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) [catalina-6.0.24.jar:na] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) [catalina-6.0.24.jar:na] at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555) [catalina-6.0.24.jar:na] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) [catalina-6.0.24.jar:na] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) [catalina-6.0.24.jar:na] at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) [tomcat-coyote-6.0.24.jar:na] at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) [tomcat-coyote-6.0.24.jar:na] at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) [tomcat-coyote-6.0.24.jar:na] at java.lang.Thread.run(Thread.java:662) [na:1.6.0_22] Caused by: java.io.IOException: Unable to locate a login configuration at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:250) [na:1.6.0_22] at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:91) [na:1.6.0_22] ... 
27 common frames omitted

In addition, here is the configuration of red5-web.properties: webapp.contextPath=/project

Even further information: it seems to me like it is using the right realm (MemoryRealm): [INFO] [main] org.red5.server.tomcat.TomcatLoader - Setting connector: org.apache.catalina.connector.Connector [INFO] [main] org.red5.server.tomcat.TomcatLoader - Address to bind: /127.0.0.1:5080 [INFO] [main] org.red5.server.tomcat.TomcatLoader - Setting realm: org.apache.catalina.realm.MemoryRealm [INFO] [main] org.red5.server.tomcat.TomcatLoader - Loading tomcat context [INFO] [main] org.red5.server.tomcat.TomcatLoader - Server root: C:/Program Files/Red5 [INFO] [main] org.red5.server.tomcat.TomcatLoader - Config root: C:/Program Files/Red5/conf [INFO] [main] org.red5.server.tomcat.TomcatLoader - Application root: C:/Program Files/Red5/webapps [INFO] [main] org.red5.server.tomcat.TomcatLoader - Starting Tomcat servlet engine [INFO] [main] org.apache.catalina.startup.Embedded - Starting tomcat server [INFO] [main] org.apache.catalina.core.StandardEngine - Starting Servlet Engine: Apache Tomcat/6.0.26

However, immediately after bootstrapping Tomcat, I am presented with the following error: Exception in thread "Launcher:/administration" org.springframework.beans.factory.BeanDefinitionStoreException: Could not resolve bean definition resource pattern [/WEB-INF/red5-*.xml]; nested exception is java.io.FileNotFoundException: ServletContext resource [/WEB-INF/] cannot be resolved to URL because it does not exist at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:190) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:458) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:388) at org.red5.server.tomcat.TomcatLoader$1.run(TomcatLoader.java:594) Caused by: java.io.FileNotFoundException: ServletContext resource [/WEB-INF/] cannot be resolved to URL because it does not exist at org.springframework.web.context.support.ServletContextResource.getURL(ServletContextResource.java:132) at org.springframework.core.io.support.PathMatchingResourcePatternResolver.isJarResource(PathMatchingResourcePatternResolver.java:414) at org.springframework.core.io.support.PathMatchingResourcePatternResolver.findPathMatchingResources(PathMatchingResourcePatternResolver.java:343) at org.springframework.core.io.support.PathMatchingResourcePatternResolver.getResources(PathMatchingResourcePatternResolver.java:282) at org.springframework.context.support.AbstractApplicationContext.getResources(AbstractApplicationContext.java:1156) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:177) ...
7 more

This error is kinda strange, because after this it seems that /WEB-INF/ is found by the rest of the program, judging by the following output: [INFO] [Launcher:/SOSample] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] [INFO] [Launcher:/installer] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] [INFO] [Launcher:/] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] [INFO] [Launcher:/LiveMedia] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties]

What really annoys me is that, as you can see in the output, when I try to log in I get a JAASRealm-related exception, but from the debug output when Tomcat is loading it is clear to me that it expects a MemoryRealm. I was wondering where and how in red5.xml I should specify bean properties so that I force Red5 to use the MemoryRealm backed by /conf/tomcat-users.xml, because it certainly doesn't do so now. It seems like the biggest question I have posted so far, but I tried to explain it as fully as possible so as to avoid confusion.
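For what it's worth, the "Unable to locate a login configuration" exception comes from com.sun.security.auth.login.ConfigFile: the JVM was never pointed at a JAAS login configuration file, so Tomcat's JAASRealm cannot authenticate anything. If the JAASRealm route were kept, the missing wiring would look roughly like the sketch below, where the entry name must match the appName configured on the realm and the LoginModule class is purely hypothetical (Red5 does not ship one by this name):

    Red5Realm {
        com.example.red5.FileLoginModule required;
    };

with the JVM started as: -Djava.security.auth.login.config=/path/to/conf/jaas.config

That said, since the startup log above shows org.apache.catalina.realm.MemoryRealm being set, the simpler direction is the one the question already points at: making sure the realm bean that red5.xml hands to the TomcatLoader is a MemoryRealm whose pathname property points at conf/tomcat-users.xml.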


  • My sample app crashes while registering for the FILECHANGEINFO notification

    - by Solitaire
    public partial class Form1 : Form { [DllImport("coredll.dll")] static extern int SetWindowLong(IntPtr hWnd, int nIndex, IntPtr dwNewLong); [DllImport("coredll.dll")] static extern IntPtr CallWindowProc(IntPtr lpPrevWndFunc, IntPtr hWnd, int Msg, IntPtr wParam, IntPtr lParam); [DllImport("coredll.dll")] public static extern IntPtr GetWindowLong(IntPtr hWnd, int nIndex); //public struct tagSHCHANGENOTIFYENTRY //{ // [MarshalAs(UnmanagedType.SysUInt)] // public ulong dwEventMask; // [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 4096)] // public string WatchDir; // [MarshalAs(UnmanagedType.Bool)] // public bool fRecursive; //} //tagSHCHANGENOTIFYENTRY test; //[DllImport("aygshell.dll")] //static extern bool SHChangeNotifyRegister(IntPtr hwnd, ref tagSHCHANGENOTIFYENTRY test); const int GWL_WNDPROC = -4; public delegate int WindProc(IntPtr hWnd, int msg, IntPtr Wparam, IntPtr lparam); static private WindProc SampleProc; IntPtr OldDefProc = IntPtr.Zero; public enum SHCNE : uint { SHCNE_RENAMEITEM = 0x00000001, SHCNE_CREATE = 0x00000002, SHCNE_DELETE = 0x00000004, SHCNE_MKDIR = 0x00000008, SHCNE_RMDIR = 0x00000010, SHCNE_MEDIAINSERTED = 0x00000020, SHCNE_MEDIAREMOVED = 0x00000040, SHCNE_DRIVEREMOVED = 0x00000080, SHCNE_DRIVEADD = 0x00000100, SHCNE_NETSHARE = 0x00000200, SHCNE_NETUNSHARE = 0x00000400, SHCNE_ATTRIBUTES = 0x00000800, SHCNE_UPDATEDIR = 0x00001000, SHCNE_UPDATEITEM = 0x00002000, SHCNE_SERVERDISCONNECT = 0x00004000, SHCNE_UPDATEIMAGE = 0x00008000, SHCNE_DRIVEADDGUI = 0x00010000, SHCNE_RENAMEFOLDER = 0x00020000, SHCNE_FREESPACE = 0x00040000, SHCNE_EXTENDED_EVENT = 0x04000000, SHCNE_ASSOCCHANGED = 0x08000000, SHCNE_DISKEVENTS = 0x0002381F, SHCNE_GLOBALEVENTS = 0x0C0581E0, SHCNE_ALLEVENTS = 0x7FFFFFFF, SHCNE_INTERRUPT = 0x80000000, } public enum SHCNF { SHCNF_IDLIST = 0x0000, SHCNF_PATHA = 0x0001, SHCNF_PRINTERA = 0x0002, SHCNF_DWORD = 0x0003, SHCNF_PATHW = 0x0005, SHCNF_PRINTERW = 0x0006, SHCNF_TYPE = 0x00FF, SHCNF_FLUSH = 0x1000, SHCNF_FLUSHNOWAIT = 0x2000 } public const uint WM_SHNOTIFY = 0x0401; private const int WM_FILECHANGEINFO = (0x8000 + 0x101); public struct SHChangeNotifyEntry { public IntPtr pIdl; [MarshalAs(UnmanagedType.Bool)] public Boolean Recursively; } [DllImport("coredll.dll", EntryPoint = "#2", CharSet = CharSet.Auto)] private static extern uint SHChangeNotifyRegister( IntPtr hWnd, SHCNF fSources, SHCNE fEvents, uint wMsg, int cEntries, ref SHChangeNotifyEntry pFsne); [DllImport("Ceshell.dll", CharSet = CharSet.Auto)] private static extern uint SHGetSpecialFolderLocation( IntPtr hWnd, CSIDL nFolder, out IntPtr Pidl); public enum CSIDL { /// <summary> /// Desktop /// </summary> CSIDL_DESKTOP = 0x0000, /// <summary> /// Internet Explorer (icon on desktop) /// </summary> CSIDL_INTERNET = 0x0001, /// <summary> /// Start Menu\Programs /// </summary> CSIDL_PROGRAMS = 0x0002, /// <summary> /// My Computer\Control Panel /// </summary> CSIDL_CONTROLS = 0x0003, /// <summary> /// My Computer\Printers /// </summary> CSIDL_PRINTERS = 0x0004, /// <summary> /// My Documents /// </summary> CSIDL_PERSONAL = 0x0005, /// <summary> /// user name\Favorites /// </summary> CSIDL_FAVORITES = 0x0006, /// <summary> /// Start Menu\Programs\Startup /// </summary> CSIDL_STARTUP = 0x0007, /// <summary> /// user name\Recent /// </summary> CSIDL_RECENT = 0x0008, /// <summary> /// user name\SendTo /// </summary> CSIDL_SENDTO = 0x0009, /// <summary> /// desktop\Recycle Bin /// </summary> CSIDL_BITBUCKET = 0x000a, /// <summary> /// user name\Start Menu /// </summary> CSIDL_STARTMENU = 0x000b, /// 
<summary> /// logical "My Documents" desktop icon /// </summary> CSIDL_MYDOCUMENTS = 0x000c, /// <summary> /// "My Music" folder /// </summary> CSIDL_MYMUSIC = 0x000d, /// <summary> /// "My Videos" folder /// </summary> CSIDL_MYVIDEO = 0x000e, /// <summary> /// user name\Desktop /// </summary> CSIDL_DESKTOPDIRECTORY = 0x0010, /// <summary> /// My Computer /// </summary> CSIDL_DRIVES = 0x0011, /// <summary> /// Network Neighborhood (My Network Places) /// </summary> CSIDL_NETWORK = 0x0012, /// <summary> /// user name>nethood /// </summary> CSIDL_NETHOOD = 0x0013, /// <summary> /// windows\fonts /// </summary> CSIDL_FONTS = 0x0014, CSIDL_TEMPLATES = 0x0015, /// <summary> /// All Users\Start Menu /// </summary> CSIDL_COMMON_STARTMENU = 0x0016, /// <summary> /// All Users\Start Menu\Programs /// </summary> CSIDL_COMMON_PROGRAMS = 0X0017, /// <summary> /// All Users\Startup /// </summary> CSIDL_COMMON_STARTUP = 0x0018, /// <summary> /// All Users\Desktop /// </summary> CSIDL_COMMON_DESKTOPDIRECTORY = 0x0019, /// <summary> /// user name\Application Data /// </summary> CSIDL_APPDATA = 0x001a, /// <summary> /// user name\PrintHood /// </summary> CSIDL_PRINTHOOD = 0x001b, /// <summary> /// user name\Local Settings\Applicaiton Data (non roaming) /// </summary> CSIDL_LOCAL_APPDATA = 0x001c, /// <summary> /// non localized startup /// </summary> CSIDL_ALTSTARTUP = 0x001d, /// <summary> /// non localized common startup /// </summary> CSIDL_COMMON_ALTSTARTUP = 0x001e, CSIDL_COMMON_FAVORITES = 0x001f, CSIDL_INTERNET_CACHE = 0x0020, CSIDL_COOKIES = 0x0021, CSIDL_HISTORY = 0x0022, /// <summary> /// All Users\Application Data /// </summary> CSIDL_COMMON_APPDATA = 0x0023, /// <summary> /// GetWindowsDirectory() /// </summary> CSIDL_WINDOWS = 0x0024, /// <summary> /// GetSystemDirectory() /// </summary> CSIDL_SYSTEM = 0x0025, /// <summary> /// C:\Program Files /// </summary> CSIDL_PROGRAM_FILES = 0x0026, /// <summary> /// C:\Program Files\My Pictures /// </summary> CSIDL_MYPICTURES = 0x0027, /// <summary> /// USERPROFILE /// </summary> CSIDL_PROFILE = 0x0028, /// <summary> /// x86 system directory on RISC /// </summary> CSIDL_SYSTEMX86 = 0x0029, /// <summary> /// x86 C:\Program Files on RISC /// </summary> CSIDL_PROGRAM_FILESX86 = 0x002a, /// <summary> /// C:\Program Files\Common /// </summary> CSIDL_PROGRAM_FILES_COMMON = 0x002b, /// <summary> /// x86 Program Files\Common on RISC /// </summary> CSIDL_PROGRAM_FILES_COMMONX86 = 0x002c, /// <summary> /// All Users\Templates /// </summary> CSIDL_COMMON_TEMPLATES = 0x002d, /// <summary> /// All Users\Documents /// </summary> CSIDL_COMMON_DOCUMENTS = 0x002e, /// <summary> /// All Users\Start Menu\Programs\Administrative Tools /// </summary> CSIDL_COMMON_ADMINTOOLS = 0x002f, /// <summary> /// user name\Start Menu\Programs\Administrative Tools /// </summary> CSIDL_ADMINTOOLS = 0x0030, /// <summary> /// Network and Dial-up Connections /// </summary> CSIDL_CONNECTIONS = 0x0031, /// <summary> /// All Users\My Music /// </summary> CSIDL_COMMON_MUSIC = 0x0035, /// <summary> /// All Users\My Pictures /// </summary> CSIDL_COMMON_PICTURES = 0x0036, /// <summary> /// All Users\My Video /// </summary> CSIDL_COMMON_VIDEO = 0x0037, /// <summary> /// Resource Direcotry /// </summary> CSIDL_RESOURCES = 0x0038, /// <summary> /// Localized Resource Direcotry /// </summary> CSIDL_RESOURCES_LOCALIZED = 0x0039, /// <summary> /// Links to All Users OEM specific apps /// </summary> CSIDL_COMMON_OEM_LINKS = 0x003a, /// <summary> /// USERPROFILE\Local Settings\Application 
Data\Microsoft\CD Burning /// </summary> CSIDL_CDBURN_AREA = 0x003b, /// <summary> /// Computers Near Me (computered from Workgroup membership) /// </summary> CSIDL_COMPUTERSNEARME = 0x003d, /// <summary> /// combine with CSIDL_ value to force folder creation in SHGetFolderPath() /// </summary> CSIDL_FLAG_CREATE = 0x8000, /// <summary> /// combine with CSIDL_ value to return an unverified folder path /// </summary> CSIDL_FLAG_DONT_VERIFY = 0x4000, /// <summary> /// combine with CSIDL_ value to insure non-alias versions of the pidl /// </summary> CSIDL_FLAG_NO_ALIAS = 0x1000, /// <summary> /// combine with CSIDL_ value to indicate per-user init (eg. upgrade) /// </summary> CSIDL_FLAG_PER_USER_INIT = 0x0800, /// <summary> /// mask for all possible /// </summary> CSIDL_FLAG_MASK = 0xFF00, } public enum SHGetFolderLocationReturnValues : uint { /// <summary> /// Success /// </summary> S_OK = 0x00000000, /// <summary> /// The CSIDL in nFolder is valid but the folder does not exist /// </summary> S_FALSE = 0x00000001, /// <summary> /// The CSIDL in nFolder is not valid /// </summary> E_INVALIDARG = 0x80070057 } public static IntPtr GetPidlFromFolderID(IntPtr hWnd, CSIDL Id) { IntPtr pIdl = IntPtr.Zero; SHGetFolderLocationReturnValues res = (SHGetFolderLocationReturnValues) SHGetSpecialFolderLocation( hWnd, Id, out pIdl); return (pIdl); } public Form1() { InitializeComponent(); SampleProc = new WindProc (SubclassWndProc); OldDefProc = GetWindowLong(this.Handle, GWL_WNDPROC); SetWindowLong(this.Handle, GWL_WNDPROC, Marshal.GetFunctionPointerForDelegate(SampleProc)/*SampleProc.Method.MethodHandle.Value.ToInt32()*/); //tagSHCHANGENOTIFYENTRY changeentry = new tagSHCHANGENOTIFYENTRY(); //changeentry.dwEventMask = (ulong)SHCNE.SHCNE_ALLEVENTS; //changeentry.fRecursive = true; //changeentry.WatchDir = null; //SHChangeNotifyRegister(this.Handle, ref changeentry); SHChangeNotifyEntry changeentry = new SHChangeNotifyEntry(); changeentry.pIdl = GetPidlFromFolderID(this.Handle, CSIDL.CSIDL_DESKTOP); changeentry.Recursively = true; try { uint notifyid = SHChangeNotifyRegister( this.Handle, SHCNF.SHCNF_TYPE | SHCNF.SHCNF_IDLIST, SHCNE.SHCNE_ALLEVENTS, WM_FILECHANGEINFO, 1, ref changeentry); } catch (Exception ee) { } I am failing in SHChangeNotifyRegister. Please help me and tell me why I am crashing; the same code works fine on the desktop. Thanks.
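The commented-out code at the top of the question is close to the likely fix. On Windows Mobile the supported change-notification entry point is SHChangeNotifyRegister in aygshell.dll, which takes a pointer to a SHCHANGENOTIFYENTRY structure; the desktop-style ordinal-2 import from coredll.dll used above is a different function. A minimal sketch of the aygshell route follows; the field marshalling is an assumption from the native declaration (DWORD dwEventMask; LPTSTR pszWatchDir; BOOL fRecursive) and is untested on a device:

    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential)]
    struct SHCHANGENOTIFYENTRY
    {
        public uint dwEventMask;   // combination of SHCNE_* flags
        public IntPtr pszWatchDir; // IntPtr.Zero = watch the whole file system
        public int fRecursive;     // BOOL
    }

    [DllImport("aygshell.dll")]
    static extern bool SHChangeNotifyRegister(IntPtr hwnd, ref SHCHANGENOTIFYENTRY pshcne);

    // Usage sketch: notifications then arrive in the subclassed WndProc as
    // the WM_FILECHANGEINFO message the form already listens for.
    SHCHANGENOTIFYENTRY entry = new SHCHANGENOTIFYENTRY();
    entry.dwEventMask = (uint)SHCNE.SHCNE_ALLEVENTS;
    entry.pszWatchDir = IntPtr.Zero;
    entry.fRecursive = 1;
    bool registered = SHChangeNotifyRegister(this.Handle, ref entry);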


  • Ideas for a multiplatform encrypted Java mobile storage system

    - by Fernando Miguélez
    Objective: I am currently designing the API for a multiplatform storage system that would offer the same interface and capabilities across the following supported mobile Java platforms: J2ME. Minimum configuration/profile CLDC 1.1/MIDP 2.0 with support for some necessary JSRs (JSR-75 for file storage). Android. No minimum platform version decided yet, but it could quite likely be API level 7. Blackberry. It would use the same base source as J2ME but take advantage of some advanced capabilities of the platform. No minimum configuration decided yet (maybe 4.6 because of the 64 KB limitation for RMS on 4.5). Basically the API would sport three kinds of stores: Files. These would allow standard directory/file manipulation (read/write through streams, create, mkdir, etc.). Preferences. This is a special store that handles properties accessed through keys (similar to a plain old Java properties file, but supporting some improvements such as different value data types, like SharedPreferences on the Android platform). Local Message Queues. This store would offer basic message queue functionality.

Considerations: Inspired by JSR-75, all types of stores would be accessed in a uniform way by means of a URL following RFC 1738 conventions, but with custom-defined prefixes (i.e. "file://" for files, "prefs://" for preferences or "queue://" for message queues). The address would refer to a virtual location that would be mapped to a physical storage object by each mobile platform implementation. Only files would allow hierarchical storage (folders) and access to external storage memory cards (by means of a unit name, the same way as in JSR-75, but one that would not change regardless of the underlying platform). The other types would only support flat storage. The system should also support a secure version of all basic types. The user would indicate it by prefixing "s" to the URL (i.e. "sfile://" instead of "file://"). The API would only require one PIN (introduced only once) to access any kind of secure object type.

Implementation issues: For the implementation of both plaintext and encrypted stores, I would use the functionality available on the underlying platforms: Files. These are available on all platforms (on J2ME only with JSR-75, but it is mandatory for our needs). The abstract File to actual File mapping is straightforward except for addressing issues. RMS. This type of store, available on the J2ME (and Blackberry) platforms, is convenient for Preferences and maybe Message Queues (though depending on performance or size requirements these could be implemented by means of normal files). SharedPreferences. This type of storage, only available on Android, would match the Preferences needs. SQLite databases. These could be used for message queues on Android (and maybe Blackberry). When it comes to encryption, some requirements should be met: To ease the implementation, it will be carried out on a read/write-operation basis on streams (for files), RMS records, SharedPreferences key-value pairs, and SQLite database columns. Every underlying storage object should use the same encryption key. Handling of encrypted stores should be the same as for the unencrypted counterpart. The only difference (from the user's point of view) when accessing an encrypted store would be the addressing. The user PIN provides access to any secure storage object, but changing it would not require decrypting/re-encrypting all the encrypted data.
The cryptographic capabilities of the underlying platform should be used whenever possible, so we would use: J2ME: SATSA-CRYPTO if it is available (it is not mandatory), or the lightweight BouncyCastle cryptographic framework for J2ME. Blackberry: the RIM Cryptographic API or BouncyCastle. Android: JCE with the integrated cryptographic provider (BouncyCastle?).

Doubts: Having reached this point I was struck by some doubts about which solution would be more convenient, taking into account the limitations of the platforms. These are some of my doubts: Encryption algorithm for data. Would AES-128 be strong and fast enough? What alternatives for such a scenario would you suggest? Encryption mode. I have read about the weakness of ECB encryption versus CBC, but in this case the former would have the advantage of random access to blocks, which is interesting for seek functionality on files. What type of encryption mode would you choose instead? Is stream encryption suitable for this case? Key generation. There could be one key generated for each storage object (file, RMS RecordStore, etc.), or just one used for all the objects of the same type. The first seems "safer", though it would require some extra space on the device. In your opinion, what would be the trade-offs of each? Key storage. For this case a standard JKS (or PKCS#12) KeyStore file could be suited to storing encryption keys, but I could also define a smaller structure (encryption transformation / key data / checksum) that could be attached to each store (i.e. using additional files with the same name and a special extension for plain files, or embedded inside other types of objects such as RMS Record Stores). What approach would you prefer? And when it comes to using a standard KeyStore with multiple-key generation (given this is your preference), would it be better to use a record store per storage object or just a global KeyStore keeping all the keys (i.e. using the URL identifier of the abstract storage object as the alias)? Master key. The use of a master key seems obvious. This key should be protected by the user PIN (introduced only once) and would allow access to the rest of the encryption keys (they would be encrypted by means of this master key). Changing the PIN would only require re-encrypting this key and not all the encrypted data. Where would you keep it, taking into account that if it got lost all data would no longer be accessible? What further considerations should I take into account? Platform cryptography support. Do SATSA-CRYPTO-enabled J2ME phones really take advantage of some dedicated hardware acceleration (or another advantage I have not foreseen), and would this approach be preferred (whenever possible) over a plain BouncyCastle implementation? For the same reason, is the RIM Cryptographic API worth the license cost over BouncyCastle? Any comments, critiques, further considerations or different approaches are welcome.
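On the master-key doubt in particular, a common pattern is to derive a key-encryption key from the PIN with a salted, iterated KDF and use it only to wrap a randomly generated master key; changing the PIN then means re-wrapping one small blob rather than re-encrypting any data. A minimal JCE sketch follows (JDK/Android-flavoured; on J2ME the same flow would map onto BouncyCastle's lightweight classes, e.g. PKCS5S2ParametersGenerator), with all names illustrative:

    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;

    public final class MasterKeyWrapper {

        // Derive an AES-128 key-encryption key from the user's PIN.
        static SecretKey deriveKek(char[] pin, byte[] salt) throws Exception {
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            byte[] kek = f.generateSecret(new PBEKeySpec(pin, salt, 10000, 128)).getEncoded();
            return new SecretKeySpec(kek, "AES");
        }

        // Wrap (encrypt) the random master key under the PIN-derived key.
        static byte[] wrap(SecretKey kek, byte[] masterKey, byte[] iv) throws Exception {
            Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, kek, new IvParameterSpec(iv));
            return c.doFinal(masterKey);
        }

        // Unwrap it again after the user re-enters the PIN.
        static byte[] unwrap(SecretKey kek, byte[] wrapped, byte[] iv) throws Exception {
            Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, kek, new IvParameterSpec(iv));
            return c.doFinal(wrapped);
        }
    }

As for ECB versus CBC on seekable files, CTR mode is worth a look: it gives random access like ECB (each block's keystream depends only on its offset) without ECB's pattern-leaking weakness.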


  • Background image not showing in HTML

    - by Registered User
    I am having following css <!DOCTYPE html > <html> <head> <meta charset="utf-8"> <title>Black Goose Bistro Summer Menu</title> <link href='http://fonts.googleapis.com/css?family=Marko+One' rel='stylesheet' type='text/css'> <style> body { font-family: Georgia, serif; font-size: 100%; line-height: 175%; margin: 0 15% 0; background-image:url(images/bullseye.png); } #header { margin-top: 0; padding: 3em 1em 2em 1em; text-align: center; } a { text-decoration: none; } h1 { font: bold 1.5em Georgia, serif; text-shadow: .1em .1em .2em gray; } h2 { font-size: 1em; text-transform: uppercase; letter-spacing: .5em; text-align: center; } dt { font-weight: bold; } strong { font-style: italic; } ul { list-style-type: none; margin: 0; padding: 0; } #info p { font-style: italic; } .price { font-family: Georgia, serif; font-style: italic; } p.warning, sup { font-size: small; } .label { font-weight: bold; font-variant: small-caps; font-style: normal; } h2 + p { text-align: center; font-style: italic; } ); </style> </head> <body> <div id="header"> <h1>Black Goose Bistro &bull; Summer Menu</h1> <div id="info"> <p>Baker's Corner, Seekonk, Massachusetts<br> <span class="label">Hours: Monday through Thursday:</span> 11 to 9, <span class="label">Friday and Saturday;</span> 11 to midnight</p> <ul> <li><a href="#appetizers">Appetizers</a></li> <li><a href="#entrees">Main Courses</a></li> <li><a href="#toast">Traditional Toasts</a></li> <li><a href="#dessert">Dessert Selection</a></li> </ul> </div> </div> <div id="appetizers"> <h2>Appetizers</h2> <p>This season, we explore the spicy flavors of the southwest in our appetizer collection.</p> <dl> <dt>Black bean purses</dt> <dd>Spicy black bean and a blend of mexican cheeses wrapped in sheets of phyllo and baked until golden. <span class="price">$3.95</span></dd> <dt class="newitem">Southwestern napoleons with lump crab &mdash; <strong>new item!</strong></dt> <dd>Layers of light lump crab meat, bean and corn salsa, and our handmade flour tortillas. <span class="price">$7.95</span></dd> </dl> </div> <div id="entrees"> <h2>Main courses</h2> <p>Big, bold flavors are the name of the game this summer. Allow us to assist you with finding the perfect wine.</p> <dl> <dt class="newitem">Jerk rotisserie chicken with fried plantains &mdash; <strong>new item!</strong></dt> <dd>Tender chicken slow-roasted on the rotisserie, flavored with spicy and fragrant jerk sauce and served with fried plantains and fresh mango. <strong>Very spicy.</strong> <span class="price">$12.95</span></dd> <dt>Shrimp sate kebabs with peanut sauce</dt> <dd>Skewers of shrimp marinated in lemongrass, garlic, and fish sauce then grilled to perfection. Served with spicy peanut sauce and jasmine rice. <span class="price">$12.95</span></dd> <dt>Grilled skirt steak with mushroom fricasee</dt> <dd>Flavorful skirt steak marinated in asian flavors grilled as you like it<sup>*</sup>. Served over a blend of sauteed wild mushrooms with a side of blue cheese mashed potatoes. <span class="price">$16.95</span></dd> </dl> </div> <div id="toast"> <h2>Traditional Toasts</h2> <p>The ultimate comfort food, our traditional toast recipes are adapted from <a href="http://www.gutenberg.org/files/13923/13923-h/13923-h.htm"><cite>The Whitehouse Cookbook</cite></a> published in 1887.</p> <dl> <dt>Cream toast</dt> <dd>Simple cream sauce over highest quality toasted bread, baked daily. 
<span class="price">$3.95</span></dd> <dt>Mushroom toast</dt> <dd>Layers of light lump crab meat, bean and corn salsa, and our handmade flour tortillas. <span class="price">$6.95</span></dd> <dt>Nun's toast</dt> <dd>Onions and hard-boiled eggs in a cream sauce over buttered hot toast. <span class="price">$6.95</span></dd> <dt>Apple toast</dt> <dd>Sweet, cinnamon stewed apples over delicious buttery grilled bread. <span class="price">$6.95</span></dd> </dl> </div> <div id="dessert"> <h2>Dessert Selection</h2> <p>Be sure to save room for our desserts, made daily by our own <a href="http://www.jwu.edu/college.aspx?id=19510">Johnson & Wales</a> trained pastry chef.</p> <dl> <dt class="newitem">Lemon chiffon cake &mdash; <strong>new item!</strong></dt> <dd>Light and citrus flavored sponge cake with buttercream frosting as light as a cloud. <span class="price">$2.95</span></dd> <dt class="newitem">Molten chocolate cake</dt> <dd>Bubba's special dark chocolate cake with a warm, molten center. Served with or without a splash of almond liqueur. <span class="price">$3.95</span></dd> </dl> </div> <p class="warning"><sup>*</sup> We are required to warn you that undercooked food is a health risk.</p> </body> </html> but the background image does not appear in body tag you can see background-image:url(images/bullseye.png); this html page is bistro.html and the directory in which it is contained there is a folder images and inside images folder I have a file bullseye.png .I expect the png to appear in background.But that does not happen. For sake of question I am posting the image here also Let me know if the syntax of css wrong? following is image http://i.stack.imgur.com/YUKgg.png

    Read the article

  • Add UIView and UILabel to UICollectionViewCell. Then Segue based on clicked cell index

    - by JetSet
    I am new to collection views in Objective-C. Can anyone tell me why I can't see my UILabel embedded in the transparent UIView and the best way to resolve. I want to also segue from the cell to several various UIViewControllers based on the selected index cell. I am using GitHub project https://github.com/mayuur/MJParallaxCollectionView Overall, in MJRootViewController.m I wanted to add a UIView with a transparency and a UILabel with details of the cell from a array. MJCollectionViewCell.h // MJCollectionViewCell.h // RCCPeakableImageSample // // Created by Mayur on 4/1/14. // Copyright (c) 2014 RCCBox. All rights reserved. // #import <UIKit/UIKit.h> #define IMAGE_HEIGHT 200 #define IMAGE_OFFSET_SPEED 25 @interface MJCollectionViewCell : UICollectionViewCell /* image used in the cell which will be having the parallax effect */ @property (nonatomic, strong, readwrite) UIImage *image; /* Image will always animate according to the imageOffset provided. Higher the value means higher offset for the image */ @property (nonatomic, assign, readwrite) CGPoint imageOffset; //@property (nonatomic,readwrite) UILabel *textLabel; @property (weak, nonatomic) IBOutlet UILabel *textLabel; @property (nonatomic,readwrite) NSString *text; @property(nonatomic,readwrite) CGFloat x,y,width,height; @property (nonatomic,readwrite) NSInteger lineSpacing; @property (nonatomic, strong) IBOutlet UIView* overlayView; @end MJCollectionViewCell.m // // MJCollectionViewCell.m // RCCPeakableImageSample // // Created by Mayur on 4/1/14. // Copyright (c) 2014 RCCBox. All rights reserved. // #import "MJCollectionViewCell.h" @interface MJCollectionViewCell() @property (nonatomic, strong, readwrite) UIImageView *MJImageView; @end @implementation MJCollectionViewCell - (instancetype)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) [self setupImageView]; return self; } - (id)initWithCoder:(NSCoder *)aDecoder { self = [super initWithCoder:aDecoder]; if (self) [self setupImageView]; return self; } /* // Only override drawRect: if you perform custom drawing. // An empty implementation adversely affects performance during animation. 
- (void)drawRect:(CGRect)rect { // Drawing code } */ #pragma mark - Setup Method - (void)setupImageView { // Clip subviews self.clipsToBounds = YES; // Add image subview self.MJImageView = [[UIImageView alloc] initWithFrame:CGRectMake(self.bounds.origin.x, self.bounds.origin.y, self.bounds.size.width, IMAGE_HEIGHT)]; self.MJImageView.backgroundColor = [UIColor redColor]; self.MJImageView.contentMode = UIViewContentModeScaleAspectFill; self.MJImageView.clipsToBounds = NO; [self addSubview:self.MJImageView]; } # pragma mark - Setters - (void)setImage:(UIImage *)image { // Store image self.MJImageView.image = image; // Update padding [self setImageOffset:self.imageOffset]; } - (void)setImageOffset:(CGPoint)imageOffset { // Store padding value _imageOffset = imageOffset; // Grow image view CGRect frame = self.MJImageView.bounds; CGRect offsetFrame = CGRectOffset(frame, _imageOffset.x, _imageOffset.y); self.MJImageView.frame = offsetFrame; } - (void)setText:(NSString *)text{ _text=text; if (!self.textLabel) { CGFloat realH=self.height*2/3-self.lineSpacing; CGFloat latoA=realH/3; // self.textLabel=[[UILabel alloc] initWithFrame:CGRectMake(10,latoA/2, self.width-20, realH)]; self.textLabel.layer.anchorPoint=CGPointMake(.5, .5); self.textLabel.font=[UIFont fontWithName:@"HelveticaNeue-ultralight" size:38]; self.textLabel.numberOfLines=3; self.textLabel.textColor=[UIColor whiteColor]; self.textLabel.shadowColor=[UIColor blackColor]; self.textLabel.shadowOffset=CGSizeMake(1, 1); self.textLabel.transform=CGAffineTransformMakeRotation(-(asin(latoA/(sqrt(self.width*self.width+latoA*latoA))))); [self addSubview:self.textLabel]; } self.textLabel.text=text; } @end MJViewController.h // // MJViewController.h // ParallaxImages // // Created by Mayur on 4/1/14. // Copyright (c) 2014 sky. All rights reserved. // #import <UIKit/UIKit.h> @interface MJRootViewController : UIViewController{ NSInteger choosed; } @end MJViewController.m // // MJViewController.m // ParallaxImages // // Created by Mayur on 4/1/14. // Copyright (c) 2014 sky. All rights reserved. // #import "MJRootViewController.h" #import "MJCollectionViewCell.h" @interface MJRootViewController () <UICollectionViewDataSource, UICollectionViewDelegate, UIScrollViewDelegate> @property (weak, nonatomic) IBOutlet UICollectionView *parallaxCollectionView; @property (nonatomic, strong) NSMutableArray* images; @end @implementation MJRootViewController - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. //self.navigationController.navigationBarHidden=YES; // Fill image array with images NSUInteger index; for (index = 0; index < 14; ++index) { // Setup image name NSString *name = [NSString stringWithFormat:@"image%03ld.jpg", (unsigned long)index]; if(!self.images) self.images = [NSMutableArray arrayWithCapacity:0]; [self.images addObject:name]; } [self.parallaxCollectionView reloadData]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. 
} #pragma mark - UICollectionViewDatasource Methods - (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section { return self.images.count; } - (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath { MJCollectionViewCell* cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"MJCell" forIndexPath:indexPath]; //get image name and assign NSString* imageName = [self.images objectAtIndex:indexPath.item]; cell.image = [UIImage imageNamed:imageName]; //set offset accordingly CGFloat yOffset = ((self.parallaxCollectionView.contentOffset.y - cell.frame.origin.y) / IMAGE_HEIGHT) * IMAGE_OFFSET_SPEED; cell.imageOffset = CGPointMake(0.0f, yOffset); NSString *text; NSInteger index=choosed>=0 ? choosed : indexPath.row%5; switch (index) { case 0: text=@"I am the home cell..."; break; case 1: text=@"I am next..."; break; case 2: text=@"Cell 3..."; break; case 3: text=@"Cell 4..."; break; case 4: text=@"The last cell"; break; default: break; } cell.text=text; cell.overlayView.backgroundColor = [UIColor colorWithWhite:0.0f alpha:0.4f]; //cell.textLabel.text = @"Label showing"; cell.textLabel.font = [UIFont boldSystemFontOfSize:22.0f]; cell.textLabel.textColor = [UIColor whiteColor]; //This is another attempt to display the label by using tags. //UILabel* label = (UILabel*)[cell viewWithTag:1]; //label.text = @"Label works"; return cell; } #pragma mark - UIScrollViewdelegate methods - (void)scrollViewDidScroll:(UIScrollView *)scrollView { for(MJCollectionViewCell *view in self.parallaxCollectionView.visibleCells) { CGFloat yOffset = ((self.parallaxCollectionView.contentOffset.y - view.frame.origin.y) / IMAGE_HEIGHT) * IMAGE_OFFSET_SPEED; view.imageOffset = CGPointMake(0.0f, yOffset); } } @end
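For what it's worth, the most likely reason the label never shows: textLabel and overlayView are declared as IBOutlets, and if they are not actually wired up in the storyboard prototype cell they stay nil, so every property assignment on them in cellForItemAtIndexPath: is a silent no-op. A sketch of building them in code inside setupImageView instead (frames, colors and fonts are placeholder choices):

    // Sketch: create the overlay and label in code, stacked above MJImageView.
    - (void)setupImageView
    {
        self.clipsToBounds = YES;

        // Image at the back.
        self.MJImageView = [[UIImageView alloc] initWithFrame:
            CGRectMake(0, 0, self.bounds.size.width, IMAGE_HEIGHT)];
        self.MJImageView.contentMode = UIViewContentModeScaleAspectFill;
        [self addSubview:self.MJImageView];

        // Transparent dark overlay above the image.
        self.overlayView = [[UIView alloc] initWithFrame:self.bounds];
        self.overlayView.backgroundColor = [UIColor colorWithWhite:0.0f alpha:0.4f];
        [self addSubview:self.overlayView];

        // textLabel is a weak outlet, so add the label to the hierarchy
        // (which retains it) before assigning the property, or the freshly
        // created label would be deallocated immediately.
        UILabel *label = [[UILabel alloc] initWithFrame:CGRectInset(self.bounds, 10.0f, 10.0f)];
        label.numberOfLines = 3;
        label.textColor = [UIColor whiteColor];
        label.backgroundColor = [UIColor clearColor];
        [self addSubview:label];
        self.textLabel = label;
    }

For the segue half of the question, implementing collectionView:didSelectItemAtIndexPath: and calling performSegueWithIdentifier:sender: with an identifier chosen from indexPath.item (one named segue per destination view controller) is the conventional route.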

    Read the article

  • Visualising a 'Smarties' lid using XAML (WPF/Silverlight, Visual Studio/Blend)

    - by Mr. Disappointment
    Hi folks, First off, to clarify something in the title which could well be ambiguous/misleading, I'd like to inform you of my definition of 'Smarties', as I know often products are available all over - only under a different alias. Smarties are a candy product in the UK, little chocolate drops covered in a crispy shell which are distributed in a card tube, this tube used to have a plastic lid/top with an individual letter on the underside (they've taken a more economical approach as of late), the lid/top of the old-style tube is the main element of this question. Familiarisation Link Lid View Link Okay, now with the seller-type pitch out of the way (no, I don't work for Nestlé ;)), hopefully the question is becoming rather clear. Essentially, I'd like to recreate one of these lids using XAML, ultimately to be utilised in a Silverlight web application. That is, I'd like to result in a reusable control, of which the following is true: It looks like a Smarties lid. The colour can be specified. The letter can be specified. The control can be rotated to display either side. The second two seem trivial, but we must bare in mind that the background colour specified will almost, if not always, be the same as the foreground, leaving a visibility issue where the character content is concerned; as for the rotation, I'm hoping this kind of functionality is reasonably available, and acceptable to implement. So, to put this out there, consider a control named SmartiesLid which derives from ToggleButton (appropriate?) and further plotted out using a style in a resource dictionary which applies to it, as follows: <Style TargetType="local:SmartiesLid"> <Setter Property="Background" Value="Red"/> <Setter Property="Foreground" Value="Red"/> <Setter Property="VerticalContentAlignment" Value="Center"/> <Setter Property="HorizontalContentAlignment" Value="Center"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="local:SmartiesLid"> <Grid x:Name="LayoutRoot"> <Grid.ColumnDefinitions> <ColumnDefinition Width=".05*"/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition Width=".05*"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height=".05*"/> <RowDefinition/> <RowDefinition/> <RowDefinition Height=".05*"/> <RowDefinition Height=".1*"/> </Grid.RowDefinitions> <Ellipse Grid.RowSpan="4" Grid.ColumnSpan="4" Fill="{TemplateBinding Background}" Stroke="Transparent"/> <Ellipse Grid.RowSpan="2" Grid.ColumnSpan="2" Grid.Column="1" Grid.Row="1" Fill="{TemplateBinding Background}" Stroke="Transparent"> <Ellipse.Effect> <DropShadowEffect Direction="280" ShadowDepth="6" BlurRadius="6"/> </Ellipse.Effect> </Ellipse> <TextBlock Grid.RowSpan="2" Grid.ColumnSpan="2" Grid.Column="1" Grid.Row="1" Name="LetterTextBlock" Text="{TemplateBinding Content}" Foreground="{TemplateBinding Foreground}" FontSize="190" HorizontalAlignment="Center" VerticalAlignment="Center"> </TextBlock> <!-- <Path Stretch="Fill" Grid.Row="3" Grid.RowSpan="2" Grid.Column="1" Grid.ColumnSpan="2" Fill="Black" Data="..."> How to craw the lid 'tab'? 
</Path> --> </Grid> <ControlTemplate.Resources> <TranslateTransform x:Key="IndentTransform" X="10" /> <RotateTransform x:Key="RotateTransform" Angle="0" /> <Storyboard x:Key="MouseOver"> </Storyboard> <Storyboard x:Key="MouseLeave"> </Storyboard> </ControlTemplate.Resources> <ControlTemplate.Triggers> <Trigger Property="IsMouseOver" Value="true"> <Trigger.EnterActions> <BeginStoryboard Storyboard="{StaticResource MouseOver}"/> </Trigger.EnterActions> <Trigger.ExitActions> <BeginStoryboard Storyboard="{StaticResource MouseLeave}"/> </Trigger.ExitActions> </Trigger> <Trigger Property="IsPressed" Value="true"> <Setter TargetName="LayoutRoot" Property="RenderTransform" Value="{StaticResource IndentTransform}"/> </Trigger> <Trigger Property="IsChecked" Value="true"> <Setter TargetName="LayoutRoot" Property="RenderTransform" Value="{StaticResource RotateTransform}"/> </Trigger> <Trigger Property="IsEnabled" Value="False"> <Setter Property="Foreground" Value="Gray"/> <Setter Property="Opacity" Value="0.5"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> With this in mind, can anyone give input on, in decreasing order of my incompetence in an area: Designing the overall look and feel of the damn thing (I'm no designer, and while I could hack away at this single control for days and potentially get something relatively useful, it's always a gamble). The particular barrier for me here is 'pathing' the tab of the lid, as you will see in the XAML as an element commented out. Should Path be used, or would it be more appropriate to transform a rectangle with rounded corners, or any specific suggestions? Bevelling the individually displayed letter; as detailed above, when the colour of both the foreground and background are the same then this will be invisible if no effects are applied, also for a decent level of realism I'd like to be able to apply such an effect/s. So far use of DropShadow and Balder3DEngine have fulfilled my requirements for graphics in XAML, how achievable is a bevel effect? Rotating the control on mouse-click, that is, showing the opposing face. Is this going to be possible using a style and XAML only for the design? Or is it that ugliness may rear it's head in the form of code-behind to show/hide embedded controls? Should the faces be separate controls and later somehow combined? Allowing the control to size dynamically. I'm supposing I will be able to convert a solid, absolute layout to a nice generic one when I actually have the former in place. Obviously this entails sizing the centralised letter and the lid 'tab', but that's it really, other than keeping the aspect ratio equal (since the ellipses grow nicely with the grid). Any suggestions to approaching this would be greatly appreciated, particularly with a dynamically growing font - I've done that before in a web-imaging scenario using code and System.Drawing, and wouldn't like to approach it in even a similar way. By the way, the reason I specify both WPF and Silverlight is that, from my current knowledge, the inputs being written targeting either of these will be fairly transferable for similar output by the other, albeit not without alterations in either scenario. The resulting application is in fact destined to be written in Silverlight, however, so I don't fancy inviting anything from WPF which will guarantee my only being able to convert 90% of it. I'll go give this little project a start, maybe in Blend(?), hopefully can catch up with some advice shortly. Thanks, Mr. 
D

EDIT: Next question, ought this to be broken up into separate questions? :/
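On the tab question alone, a Path using the geometry mini-language is probably the lightest option; Stretch="Fill" lets the figure be sketched on an arbitrary small coordinate grid and stretched into the grid cell. The numbers below are placeholder guesses meant to be tuned in Blend, not measurements from a real lid:

    <!-- A rounded tab: quadratic curves round off the two top corners. -->
    <Path Grid.Row="3" Grid.RowSpan="2" Grid.Column="1" Grid.ColumnSpan="2"
          Stretch="Fill"
          Fill="{TemplateBinding Background}"
          Data="M 0,8 Q 0,0 8,0 L 52,0 Q 60,0 60,8 L 60,20 L 0,20 Z"/>

As for showing the opposing face, a RotateTransform only spins the lid in the plane; a flip is usually faked with a ScaleTransform of ScaleX="-1" (swapping the face content at the midpoint of the animation) or, in Silverlight 3 and later, done properly with a PlaneProjection and its RotationY property.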

    Read the article

  • how to count checked checkboxes in different divs

    - by KMKMAHESH
    <head><title>STUDENT WISE EXAM BACKLOGS DISPLAY FOR EXAM REGISTRATION</title> <style type="text/css"> th { font-family:Arial; color:black; border:1px solid #000; } thead { display:table-header-group; } tbody { display:table-row-group; } td { border:1px solid #000; } </style> <script type="text/javascript" > function check_value(year,sem){ ysem="ys"+year+sem; var reg=document.registration.regulation.value; subjectsys="subjects"+year+sem; amountsys="amount"+year+sem; if(year==1){ if(sem==1){ var value_list = document.getElementById("ys11").getElementsByTagName('input'); } if(sem==2){ var value_list = document.getElementById("ys12").getElementsByTagName('input'); } }elseif(year==2){ if(sem==1){ var value_list = document.getElementById("ys21").getElementsByTagName('input'); } if(sem==2){ var value_list = document.getElementById("ys22").getElementsByTagName('input'); } }elseif(year==3){ if(sem==1){ var value_list = document.getElementById("ys31").getElementsByTagName('input'); } if(sem==2){ var value_list = document.getElementById("ys32").getElementsByTagName('input'); } }elseif(year==4){ if(sem==1){ var value_list = document.getElementById("ys41").getElementsByTagName('input'); } if(sem==2){ var value_list = document.getElementById("ys42").getElementsByTagName('input'); } } values = 0; for (var i=0; i<value_list.length; i++){ if (value_list[i].checked) { values=values+1; } } document.getElementById(subjectsys).value=values; if (values=="0") { document.getElementById(amountsys).innerHTML=""; return; } if (window.XMLHttpRequest) {// code for IE7+, Firefox, Chrome, Opera, Safari xmlhttp=new XMLHttpRequest(); } else {// code for IE6, IE5 xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.onreadystatechange=function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { document.getElementById(amountsys).innerHTML=xmlhttp.responseText; } } xmlhttp.open("GET","fee.php?year="+year+"&reg="+reg+"&sem="+sem+"&sub="+values,true); xmlhttp.send(); } </script> </head> <form id="registration" name="registration" action=subverify.php method=POST></br></br> <center> Backlog Subjects for <b>08KN1A1219</b> </br></br> <table border='1'><tr> <th width='40'>&nbsp;</th><th width='90'>Regulation</th><th width='40'>Year</th> <th width='40'>Sem</th><th width='350'>Subname</th> <th width='70'>Internals</th><th width='70'>Externals</th> </tr><div id="ys41"><tr> <td width='40'><center><input type="checkbox" name="sub[]" value="344" onclick="check_value(4,1)"></center></td> <td width='90'><center>R07</center></td><td width='40'><center>4</center></td><td width='40'><center>1</center></td> <td width='350'>EMBEDDED SYSTEMS</td><td width='70'><center>18</center></td> <td width='70'><center>17</center></td></tr><tr><td colspan=5 align=right><b>Subjects: </b><input size=2 type=textbox id=subjects41 name=subjects41 value=0 maxlength=2 readonly=readonly></td> <td align=right><b>Amount :</b></td> <input type='hidden' name='regulation' id=regulationsubjects41 value='R07'> <td><div id="amount41"><input type="textbox" name="amountval41" value="0" size="5" maxlength="5" readonly="readonly"></div></td></tr></div><div id="ys42"><tr> <td width='40'><center><input type="checkbox" name="sub[]" value="527" onclick="check_value(4,2)"></center></td> <td width='90'><center>R07</center></td><td width='40'><center>4</center></td><td width='40'><center>2</center></td> <td width='350'>DESIGN PATTERNS</td><td width='70'><center>12</center></td> <td width='70'><center>14</center></td></tr><tr><td colspan=5 align=right><b>Subjects: 
</b><input size=2 type=textbox id=subjects42 name=subjects42 value=0 maxlength=2 readonly=readonly></td> <td align=right><b>Amount :</b></td> <input type='hidden' name='regulation' id=regulationsubjects42 value='R07'> <td><div id="amount42"><input type="textbox" name="amountval42" value="0" size="5" maxlength="5" readonly="readonly"></div></td></tr></div><tr><td colspan=7><center><b><div id="maintotal"><input type="textbox" name="maintotal" value="0" size="5" maxlength="5" readonly="readonly"></div></center></b></td></tr><tr></tr></table></br></br> <center><input type='hidden' name='htno' value='08KN1A1219'> <input type='submit' value='Register'></center></form></br> this is a output of a php file with using dynamic data in the form i want to count only the checkboxes in the div and it has to display in that subjectsdiv like subjects41 and subjects42 can any one please help me to update this javascript it passes some ajax request for displaying the fee

    Read the article

  • SSL / HTTP / No Response to Curl

    - by Alex McHale
    I am trying to send commands to a SOAP service, and getting nothing in reply. The SOAP service is at a completely separate site from either server I am testing with. I have written a dummy script with the SOAP XML embedded. When I run it at my local site, on any of three machines -- OSX, Ubuntu, or CentOS 5.3 -- it completes successfully with a good response. I then sent the script to our public host at Slicehost, where I fail to get the response back from the SOAP service. It accepts the TCP socket and proceeds with the SSL handshake. I do not however receive any valid HTTP response. This is the case whether I use my script or curl on the command line. I have rewritten the script using SOAP4R, Net::HTTP and Curb. All of which work at my local site, none of which work at the Slicehost site. I have tried to assemble the CentOS box as closely to match my Slicehost server as possible. I rebuilt the Slice to be a stock CentOS 5.3 and stock CentOS 5.4 with the same results. When I look at a tcpdump of the bad sessions on Slicehost, I see my script or curl send the XML to the remote server, and nothing comes back. When I look at the tcpdump at my local site, I see the response just fine. I have entirely disabled iptables on the Slice. Does anyone have any ideas what could be causing these results? Please let me know what additional information I can furnish. Thank you! Below is a wire trace of a sample session. The IP that starts with 173 is my server while the IP that starts with 12 is the SOAP server's. No. Time Source Destination Protocol Info 1 0.000000 173.45.x.x 12.36.x.x TCP 36872 > https [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=137633469 TSER=0 WS=6 Frame 1 (74 bytes on wire, 74 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 0, Len: 0 No. Time Source Destination Protocol Info 2 0.040000 12.36.x.x 173.45.x.x TCP https > 36872 [SYN, ACK] Seq=0 Ack=1 Win=8760 Len=0 MSS=1460 Frame 2 (62 bytes on wire, 62 bytes captured) Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6) Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x) Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 0, Ack: 1, Len: 0 No. Time Source Destination Protocol Info 3 0.040000 173.45.x.x 12.36.x.x TCP 36872 > https [ACK] Seq=1 Ack=1 Win=5840 Len=0 Frame 3 (54 bytes on wire, 54 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 1, Ack: 1, Len: 0 No. Time Source Destination Protocol Info 4 0.050000 173.45.x.x 12.36.x.x SSLv2 Client Hello Frame 4 (156 bytes on wire, 156 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 1, Ack: 1, Len: 102 Secure Socket Layer No. 
Time Source Destination Protocol Info 5 0.130000 12.36.x.x 173.45.x.x TCP [TCP segment of a reassembled PDU] Frame 5 (1434 bytes on wire, 1434 bytes captured) Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6) Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x) Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 1, Ack: 103, Len: 1380 Secure Socket Layer No. Time Source Destination Protocol Info 6 0.130000 173.45.x.x 12.36.x.x TCP 36872 > https [ACK] Seq=103 Ack=1381 Win=8280 Len=0 Frame 6 (54 bytes on wire, 54 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 103, Ack: 1381, Len: 0 No. Time Source Destination Protocol Info 7 0.130000 12.36.x.x 173.45.x.x TLSv1 Server Hello, Certificate, Server Hello Done Frame 7 (1280 bytes on wire, 1280 bytes captured) Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6) Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x) Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 1381, Ack: 103, Len: 1226 [Reassembled TCP Segments (2606 bytes): #5(1380), #7(1226)] Secure Socket Layer No. Time Source Destination Protocol Info 8 0.130000 173.45.x.x 12.36.x.x TCP 36872 > https [ACK] Seq=103 Ack=2607 Win=11040 Len=0 Frame 8 (54 bytes on wire, 54 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 103, Ack: 2607, Len: 0 No. Time Source Destination Protocol Info 9 0.130000 173.45.x.x 12.36.x.x TLSv1 Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message Frame 9 (236 bytes on wire, 236 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 103, Ack: 2607, Len: 182 Secure Socket Layer No. Time Source Destination Protocol Info 10 0.190000 12.36.x.x 173.45.x.x TLSv1 Change Cipher Spec, Encrypted Handshake Message Frame 10 (97 bytes on wire, 97 bytes captured) Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6) Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x) Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 2607, Ack: 285, Len: 43 Secure Socket Layer No. Time Source Destination Protocol Info 11 0.190000 173.45.x.x 12.36.x.x TLSv1 Application Data Frame 11 (347 bytes on wire, 347 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 285, Ack: 2650, Len: 293 Secure Socket Layer No. 
Time Source Destination Protocol Info 12 0.190000 173.45.x.x 12.36.x.x TCP [TCP segment of a reassembled PDU] Frame 12 (1514 bytes on wire, 1514 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460 Secure Socket Layer No. Time Source Destination Protocol Info 13 0.450000 12.36.x.x 173.45.x.x TCP https > 36872 [ACK] Seq=2650 Ack=578 Win=64958 Len=0 Frame 13 (54 bytes on wire, 54 bytes captured) Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6) Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x) Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 2650, Ack: 578, Len: 0 No. Time Source Destination Protocol Info 14 0.450000 173.45.x.x 12.36.x.x TCP [TCP segment of a reassembled PDU] Frame 14 (206 bytes on wire, 206 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 2038, Ack: 2650, Len: 152 No. Time Source Destination Protocol Info 15 0.510000 12.36.x.x 173.45.x.x TCP [TCP Dup ACK 13#1] https > 36872 [ACK] Seq=2650 Ack=578 Win=64958 Len=0 Frame 15 (54 bytes on wire, 54 bytes captured) Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6) Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x) Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 2650, Ack: 578, Len: 0 No. Time Source Destination Protocol Info 16 0.850000 173.45.x.x 12.36.x.x TCP [TCP Retransmission] [TCP segment of a reassembled PDU] Frame 16 (1514 bytes on wire, 1514 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460 Secure Socket Layer No. Time Source Destination Protocol Info 17 1.650000 173.45.x.x 12.36.x.x TCP [TCP Retransmission] [TCP segment of a reassembled PDU] Frame 17 (1514 bytes on wire, 1514 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460 Secure Socket Layer No. Time Source Destination Protocol Info 18 3.250000 173.45.x.x 12.36.x.x TCP [TCP Retransmission] [TCP segment of a reassembled PDU] Frame 18 (1514 bytes on wire, 1514 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460 Secure Socket Layer No. 
Time Source Destination Protocol Info 19 6.450000 173.45.x.x 12.36.x.x TCP [TCP Retransmission] [TCP segment of a reassembled PDU] Frame 19 (1514 bytes on wire, 1514 bytes captured) Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1) Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x) Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460 Secure Socket Layer
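For what it's worth, the trace itself points away from SSL: the handshake completes, small records go through in both directions, and then the one 1460-byte segment (frames 12, 16, 17, 18, 19) is retransmitted over and over without ever being acknowledged. That is the classic signature of a path-MTU black hole between the slice and the SOAP service (full-size packets dropped, ICMP "fragmentation needed" never arriving). A hedged way to test the theory; the host name and eth0 are placeholders for your values:

    # Probe with non-fragmentable pings; 1472 = 1500 minus 28 bytes of
    # IP + ICMP headers. If 1472 fails where smaller sizes work, it's PMTU.
    ping -M do -s 1472 soap.example.com
    ping -M do -s 1400 soap.example.com

    # Temporarily lower the interface MTU and retry the script:
    sudo ifconfig eth0 mtu 1400

    # Or clamp the MSS that outgoing connections advertise:
    sudo iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN \
        -j TCPMSS --clamp-mss-to-pmtu

If the request suddenly succeeds with the lower MTU, the fix belongs in the network configuration, not in SOAP4R, Net::HTTP or Curb.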

    Read the article

  • Linux Mint 10 LXDE computer to act as LTSP Server without luck

    - by Rautamiekka
    So I've tried to make our screen-broken HP laptop to also serve as LTSP Server in addition to various other tasks, without luck, which may be cuz I'm running LM10 LXDE while the instructions are for Ubuntu. Excuse my ignorance. The entire output from Terminal after installing LTSP stuff along with a Server kernel and a load of other packages: administrator@rauta-mint-turion ~ $ sudo lt ltrace ltsp-build-client ltsp-chroot ltspfs ltspfsmounter ltsp-info ltsp-localapps ltsp-update-image ltsp-update-kernels ltsp-update-sshkeys administrator@rauta-mint-turion ~ $ sudo ltsp-update-sshkeys administrator@rauta-mint-turion ~ $ sudo ltsp-update-kernels find: `/opt/ltsp/': No such file or directory administrator@rauta-mint-turion ~ $ sudo ltsp-build-client /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging: line 3: /opt/ltsp/i386/etc/ltsp_chroot: No such file or directory error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ sudo ltsp-update-image Cannot determine assigned port. Assigning to port 2000. mkdir: cannot create directory `/opt/ltsp/i386/etc/ltsp': No such file or directory /usr/sbin/ltsp-update-image: 274: cannot create /opt/ltsp/i386/etc/ltsp/update-kernels.conf: Directory nonexistent /usr/sbin/ltsp-update-image: 274: cannot create /opt/ltsp/i386/etc/ltsp/update-kernels.conf: Directory nonexistent Regenerating kernel... chroot: failed to run command `/usr/share/ltsp/update-kernels': No such file or directory Done. Configuring inetd... Done. Updating pxelinux default configuration...Done. Skipping invalid chroot: /opt/ltsp/i386 chroot: failed to run command `test': No such file or directory administrator@rauta-mint-turion ~ $ sudo ltsp-chroot chroot: failed to run command `/bin/bash': No such file or directory administrator@rauta-mint-turion ~ $ bash administrator@rauta-mint-turion ~ $ exit exit administrator@rauta-mint-turion ~ $ sudo ls /opt/ltsp i386 administrator@rauta-mint-turion ~ $ sudo ls /opt/ltsp/i386/ administrator@rauta-mint-turion ~ $ sudo ltsp-build-client NOTE: Root directory /opt/ltsp/i386 already exists, this will lead to problems, please remove it before trying again. Exiting. 
error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ sudo rm -rv /opt/ltsp/i386 removed directory: `/opt/ltsp/i386' administrator@rauta-mint-turion ~ $ sudo ltsp-build-client /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging: line 3: /opt/ltsp/i386/etc/ltsp_chroot: No such file or directory error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ aptitude search ltsp p fts-ltsp-ldap - LDAP LTSP module for the TFTP/Fuse supplicant p ltsp-client - LTSP client environment p ltsp-client-core - LTSP client environment (core) p ltsp-cluster-accountmanager - Account creation and management daemon for LTSP p ltsp-cluster-control - Web based thin-client configuration management p ltsp-cluster-lbagent - LTSP loadbalancer agent offers variables about the state of the ltsp server p ltsp-cluster-lbserver - LTSP loadbalancer server returns the optimal ltsp server to terminal p ltsp-cluster-nxloadbalancer - Minimal NX loadbalancer for ltsp-cluster p ltsp-cluster-pxeconfig - LTSP-Cluster symlink generator p ltsp-controlaula - Classroom management tool with ltsp clients p ltsp-docs - LTSP Documentation p ltsp-livecd - starts an LTSP live server on an Ubuntu livecd session p ltsp-manager - Ubuntu LTSP server management GUI i A ltsp-server - Basic LTSP server environment i ltsp-server-standalone - Complete LTSP server environment i A ltspfs - Fuse based remote filesystem for LTSP thin clients p ltspfsd - Fuse based remote filesystem hooks for LTSP thin clients p ltspfsd-core - Fuse based remote filesystem daemon for LTSP thin clients p python-ltsp - provides ltsp related functions administrator@rauta-mint-turion ~ $ sudo aptitude purge ltsp-server ltsp-server-standalone ltspfs The following packages will be REMOVED: debconf-utils{u} debootstrap{u} dhcp3-server{u} gstreamer0.10-pulseaudio{u} ldm-server{u} libpulse-browse0{u} ltsp-server{p} ltsp-server-standalone{p} ltspfs{p} nbd-server{u} openbsd-inetd{u} pulseaudio{u} pulseaudio-esound-compat{u} pulseaudio-module-x11{u} pulseaudio-utils{u} squashfs-tools{u} tftpd-hpa{u} 0 packages upgraded, 0 newly installed, 17 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 6,996kB will be freed. Do you want to continue? [Y/n/?] (Reading database ... 158454 files and directories currently installed.) Removing ltsp-server-standalone ... Purging configuration files for ltsp-server-standalone ... Removing ltsp-server ... Purging configuration files for ltsp-server ... dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot/ltsp' not empty so not removed. dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot' not empty so not removed. Processing triggers for man-db ... (Reading database ... 158195 files and directories currently installed.) Removing debconf-utils ... Removing debootstrap ... Removing dhcp3-server ... * Stopping DHCP server dhcpd3 [ OK ] Removing gstreamer0.10-pulseaudio ... Removing ldm-server ... Removing pulseaudio-module-x11 ... Removing pulseaudio-esound-compat ... Removing pulseaudio ... * PulseAudio configured for per-user sessions Removing pulseaudio-utils ... Removing libpulse-browse0 ... Processing triggers for man-db ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for libc-bin ... ldconfig deferred processing now taking place (Reading database ... 157944 files and directories currently installed.) Removing ltspfs ... Processing triggers for man-db ... 
(Reading database ... 157932 files and directories currently installed.) Removing nbd-server ... Stopping Network Block Device server: nbd-server. Removing openbsd-inetd ... * Stopping internet superserver inetd [ OK ] Removing squashfs-tools ... Removing tftpd-hpa ... tftpd-hpa stop/waiting Processing triggers for ureadahead ... Processing triggers for man-db ... administrator@rauta-mint-turion ~ $ sudo aptitude purge ~c The following packages will be REMOVED: dhcp3-server{p} libpulse-browse0{p} nbd-server{p} openbsd-inetd{p} pulseaudio{p} tftpd-hpa{p} 0 packages upgraded, 0 newly installed, 6 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 0B will be used. Do you want to continue? [Y/n/?] (Reading database ... 157881 files and directories currently installed.) Removing dhcp3-server ... Purging configuration files for dhcp3-server ... Removing libpulse-browse0 ... Purging configuration files for libpulse-browse0 ... Removing nbd-server ... Purging configuration files for nbd-server ... Removing openbsd-inetd ... Purging configuration files for openbsd-inetd ... Removing pulseaudio ... Purging configuration files for pulseaudio ... Removing tftpd-hpa ... Purging configuration files for tftpd-hpa ... Processing triggers for ureadahead ... administrator@rauta-mint-turion ~ $ sudo aptitude install ltsp-server-standalone The following NEW packages will be installed: debconf-utils{a} debootstrap{a} ldm-server{a} ltsp-server{a} ltsp-server-standalone ltspfs{a} nbd-server{a} openbsd-inetd{a} squashfs-tools{a} The following packages are RECOMMENDED but will NOT be installed: dhcp3-server pulseaudio-esound-compat tftpd-hpa 0 packages upgraded, 9 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/498kB of archives. After unpacking 2,437kB will be used. Do you want to continue? [Y/n/?] Preconfiguring packages ... Selecting previously deselected package openbsd-inetd. (Reading database ... 157868 files and directories currently installed.) Unpacking openbsd-inetd (from .../openbsd-inetd_0.20080125-4ubuntu2_i386.deb) ... Processing triggers for man-db ... Processing triggers for ureadahead ... Setting up openbsd-inetd (0.20080125-4ubuntu2) ... * Stopping internet superserver inetd [ OK ] * Starting internet superserver inetd [ OK ] Selecting previously deselected package ldm-server. (Reading database ... 157877 files and directories currently installed.) Unpacking ldm-server (from .../ldm-server_2%3a2.1.3-0ubuntu1_all.deb) ... Selecting previously deselected package debconf-utils. Unpacking debconf-utils (from .../debconf-utils_1.5.32ubuntu3_all.deb) ... Selecting previously deselected package debootstrap. Unpacking debootstrap (from .../debootstrap_1.0.23ubuntu1_all.deb) ... Selecting previously deselected package nbd-server. Unpacking nbd-server (from .../nbd-server_1%3a2.9.14-2ubuntu1_i386.deb) ... Selecting previously deselected package squashfs-tools. Unpacking squashfs-tools (from .../squashfs-tools_1%3a4.0-8_i386.deb) ... Selecting previously deselected package ltsp-server. 
GNU nano 2.2.4 File: /etc/ltsp/ltsp-update-image.conf # Configuration file for ltsp-update-image # By default, do not compress the image # as it's reported to make it unstable NO_COMP="-noF -noD -noI -no-exports" [ Switched to /etc/ltsp/ltsp-update-image.conf ] administrator@rauta-mint-turion ~ $ ls /opt/ firefox/ ltsp/ mint-flashplugin/ administrator@rauta-mint-turion ~ $ ls /opt/ltsp/i386/ administrator@rauta-mint-turion ~ $ ls /opt/ltsp/ i386 administrator@rauta-mint-turion ~ $ sudo ltsp ltsp-build-client ltsp-chroot ltspfs ltspfsmounter ltsp-info ltsp-localapps ltsp-update-image ltsp-update-kernels ltsp-update-sshkeys administrator@rauta-mint-turion ~ $ sudo ltsp-build-client NOTE: Root directory /opt/ltsp/i386 already exists, this will lead to problems, please remove it before trying again. Exiting. error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ ^C administrator@rauta-mint-turion ~ $ sudo aptitude purge ltsp ltspfs ltsp-server ltsp-server-standalone administrator@rauta-mint-turion ~ $ sudo aptitude purge ltsp-server The following packages will be REMOVED: ltsp-server{p} 0 packages upgraded, 0 newly installed, 1 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 1,073kB will be freed. The following packages have unmet dependencies: ltsp-server-standalone: Depends: ltsp-server but it is not going to be installed. The following actions will resolve these dependencies: Remove the following packages: 1) ltsp-server-standalone Accept this solution? [Y/n/q/?] The following packages will be REMOVED: debconf-utils{u} debootstrap{u} ldm-server{u} ltsp-server{p} ltsp-server-standalone{a} ltspfs{u} nbd-server{u} openbsd-inetd{u} squashfs-tools{u} 0 packages upgraded, 0 newly installed, 9 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 2,437kB will be freed. Do you want to continue? [Y/n/?] (Reading database ... 158244 files and directories currently installed.) Removing ltsp-server-standalone ... (Reading database ... 158240 files and directories currently installed.) Removing ltsp-server ... Purging configuration files for ltsp-server ... dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot/ltsp' not empty so not removed. dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot' not empty so not removed. Processing triggers for man-db ... (Reading database ... 157987 files and directories currently installed.) Removing debconf-utils ... Removing debootstrap ... Removing ldm-server ... Removing ltspfs ... Removing nbd-server ... Stopping Network Block Device server: nbd-server. Removing openbsd-inetd ... * Stopping internet superserver inetd [ OK ] Removing squashfs-tools ... Processing triggers for man-db ... Processing triggers for ureadahead ... administrator@rauta-mint-turion ~ $ sudo aptitude purge ~c The following packages will be REMOVED: ltsp-server-standalone{p} nbd-server{p} openbsd-inetd{p} 0 packages upgraded, 0 newly installed, 3 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 0B will be used. Do you want to continue? [Y/n/?] (Reading database ... 157871 files and directories currently installed.) Removing ltsp-server-standalone ... Purging configuration files for ltsp-server-standalone ... Removing nbd-server ... Purging configuration files for nbd-server ... Removing openbsd-inetd ... Purging configuration files for openbsd-inetd ... Processing triggers for ureadahead ... 
administrator@rauta-mint-turion ~ $ sudo rm -rv /var/lib/t teamspeak-server/ tftpboot/ transmission-daemon/ administrator@rauta-mint-turion ~ $ sudo rm -rv /var/lib/tftpboot removed `/var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default' removed directory: `/var/lib/tftpboot/ltsp/i386/pxelinux.cfg' removed directory: `/var/lib/tftpboot/ltsp/i386' removed directory: `/var/lib/tftpboot/ltsp' removed directory: `/var/lib/tftpboot' administrator@rauta-mint-turion ~ $ sudo find / -name "ltsp" /opt/ltsp administrator@rauta-mint-turion ~ $ sudo rm -rv /opt/ltsp removed directory: `/opt/ltsp/i386' removed directory: `/opt/ltsp' administrator@rauta-mint-turion ~ $ sudo aptitude install ltsp-server-standalone The following NEW packages will be installed: debconf-utils{a} debootstrap{a} ldm-server{a} ltsp-server{a} ltsp-server-standalone ltspfs{a} nbd-server{a} openbsd-inetd{a} squashfs-tools{a} The following packages are RECOMMENDED but will NOT be installed: dhcp3-server pulseaudio-esound-compat tftpd-hpa 0 packages upgraded, 9 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/498kB of archives. After unpacking 2,437kB will be used. Do you want to continue? [Y/n/?] Preconfiguring packages ... Selecting previously deselected package openbsd-inetd. (Reading database ... 157868 files and directories currently installed.) Unpacking openbsd-inetd (from .../openbsd-inetd_0.20080125-4ubuntu2_i386.deb) ... Processing triggers for man-db ... GNU nano 2.2.4 New Buffer GNU nano 2.2.4 File: /etc/ltsp/dhcpd.conf # # Default LTSP dhcpd.conf config file. # authoritative; subnet 192.168.2.0 netmask 255.255.255.0 { range 192.168.2.70 192.168.2.79; option domain-name "jarvinen"; option domain-name-servers 192.168.2.254; option broadcast-address 192.168.2.255; option routers 192.168.2.254; # next-server 192.168.0.1; # get-lease-hostnames true; option subnet-mask 255.255.255.0; option root-path "/opt/ltsp/i386"; if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" { filename "/ltsp/i386/pxelinux.0"; } else { filename "/ltsp/i386/nbi.img"; } } [ Wrote 22 lines ] administrator@rauta-mint-turion ~ $ sudo service dhcp3-server start * Starting DHCP server dhcpd3 [ OK ] administrator@rauta-mint-turion ~ $ sudo ltsp-build-client /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging: line 3: /opt/ltsp/i386/etc/ltsp_chroot: No such file or directory error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ sudo cat /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging case "$MODE" in after-install) echo LTSP_CHROOT=$ROOT >> $ROOT/etc/ltsp_chroot ;; esac administrator@rauta-mint-turion ~ $ cd $ROOT/etc/ltsp_chroot bash: cd: /etc/ltsp_chroot: No such file or directory administrator@rauta-mint-turion ~ $ cd $ROOT/etc/ administrator@rauta-mint-turion /etc $ ls acpi chatscripts emacs group insserv.conf.d logrotate.conf mysql php5 rc6.d smbnetfs.conf UPower adduser.conf ConsoleKit environment group- iproute2 logrotate.d nanorc phpmyadmin rc.local snmp usb_modeswitch.conf alternatives console-setup esound grub.d issue lsb-base nbd-server pm rcS.d sound usb_modeswitch.d anacrontab cron.d firefox gshadow issue.net lsb-base-logging.sh ndiswrapper pnm2ppa.conf request-key.conf ssh ushare.conf apache2 cron.daily firefox-3.5 gshadow- java lsb-release netscsid.conf polkit-1 resolvconf ssl vga apm cron.hourly firestarter gtk-2.0 java-6-sun ltrace.conf network popularity-contest.conf resolv.conf sudoers vim apparmor cron.monthly fonts gtkmathview kbd ltsp 
NetworkManager ppp rmt sudoers.d vlc apparmor.d crontab foomatic gufw kernel lxdm networks printcap rpc su-to-rootrc vsftpd.conf apport cron.weekly fstab hal kernel-img.conf magic nsswitch.conf profile rsyslog.conf sweeprc w3m apt crypttab ftpusers hdparm.conf kerneloops.conf magic.mime ntp.conf profile.d rsyslog.d sysctl.conf wgetrc at.deny cups fuse.conf host.conf kompozer mailcap obex-data-server protocols samba sysctl.d wildmidi auto-apt dbconfig-common gai.conf hostname ldap mailcap.order ODBCDataSources psiconv sane.d teamspeak-server wodim.conf avahi dbus-1 gamin hosts ld.so.cache manpath.config odbc.ini pulse screenrc terminfo wpa_supplicant bash.bashrc debconf.conf gconf hosts.allow ld.so.conf menu openal purple securetty thunderbird X11 bash_completion debian_version gdb hosts.deny ld.so.conf.d menu-methods openoffice python security timezone xdg bash_completion.d default gdm hp legal mime.types opt python2.6 sensors3.conf transmission-daemon xml bindresvport.blacklist defoma ghostscript ifplugd lftp.conf mke2fs.conf pam.conf python2.7 sensors.d ts.conf xulrunner-1.9.2 blkid.conf deluser.conf gimp inetd.conf libpaper.d modprobe.d pam.d python3.1 services ucf.conf zsh_command_not_found blkid.tab depmod.d gnome init libreoffice modules pango rc0.d sgml udev bluetooth dhcp gnome-system-tools init.d linuxmint motd papersize rc1.d shadow ufw bonobo-activation dhcp3 gnome-vfs-2.0 initramfs-tools locale.alias mplayer passwd rc2.d shadow- updatedb.conf ca-certificates dictionaries-common gnome-vfs-mime-magic inputrc localtime mtab passwd- rc3.d shells update-manager ca-certificates.conf doc-base gre.d insserv logcheck mtab.fuselock pcmcia rc4.d skel update-motd.d calendar dpkg groff insserv.conf login.defs mtools.conf perl rc5.d smb2www update-notifier administrator@rauta-mint-turion /etc $ cd ltsp/ administrator@rauta-mint-turion /etc/ltsp $ ls dhcpd.conf ltsp-update-image.conf administrator@rauta-mint-turion /etc/ltsp $ cat dhcpd.conf # # Default LTSP dhcpd.conf config file. # authoritative; subnet 192.168.2.0 netmask 255.255.255.0 { range 192.168.2.70 192.168.2.79; option domain-name "jarvinen"; option domain-name-servers 192.168.2.254; option broadcast-address 192.168.2.255; option routers 192.168.2.254; # next-server 192.168.0.1; # get-lease-hostnames true; option subnet-mask 255.255.255.0; option root-path "/opt/ltsp/i386"; if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" { filename "/ltsp/i386/pxelinux.0"; } else { filename "/ltsp/i386/nbi.img"; } } administrator@rauta-mint-turion /etc/ltsp $
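For what it's worth, every failure above looks like the same one: /opt/ltsp/i386 exists but is empty, so the 010-chroot-tagging plugin has no /opt/ltsp/i386/etc to write into; debootstrap never actually populated the chroot, which is what happens when ltsp-build-client guesses a distribution name (Mint's) that the configured mirror does not carry. A hedged sketch of forcing a plain Ubuntu base instead; "lucid" and the mirror URL are assumptions matching LM10's Ubuntu 10.04 ancestry:

    # Start clean; ltsp-build-client refuses to run over a leftover chroot.
    sudo rm -rf /opt/ltsp/i386

    # Build the client chroot from a distribution the mirror actually has.
    sudo ltsp-build-client --arch i386 --dist lucid \
        --mirror http://archive.ubuntu.com/ubuntu

    # Then regenerate keys, kernels and the NBD image as before.
    sudo ltsp-update-sshkeys
    sudo ltsp-update-kernels
    sudo ltsp-update-image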

    Read the article

  • Finding out why Dell Controller is Degraded

    - by Kyle Brandt
    I installed OpenManage on a couple of my PE 2950s for SNMP monitoring of the RAID. All the checks seem to come back okay except for controllerState:
    [root@aMachine ~]# snmpwalk -v 2c -c bestNotToPostPasswords myMachine -m +StorageManagement-MIB controllerstate
    StorageManagement-MIB::controllerState.1 = INTEGER: degraded(6)
    Other checks seem to indicate the battery, LD, and physical disks are all good, unless I'm missing something. Can anyone tell if I am missing something or neglecting something important in my RAID monitoring/understanding? I get degraded for both of these servers I have set up. A walk of the entire storage management tree for one of them:
    StorageManagement-MIB::softwareVersion.0 = STRING: "3.2.0" StorageManagement-MIB::globalStatus.0 = INTEGER: warning(2) StorageManagement-MIB::softwareManufacturer.0 = STRING: "Dell Inc." StorageManagement-MIB::softwareProduct.0 = STRING: "Server Administrator (Storage Management)" StorageManagement-MIB::softwareDescription.0 = STRING: "Configuration and monitoring of disk storage devices." StorageManagement-MIB::displayName.0 = STRING: "Server Administrator (Storage Management)" StorageManagement-MIB::description.0 = STRING: "Configuration and monitoring of disk storage devices." StorageManagement-MIB::agentVendor.0 = STRING: "Dell Inc." StorageManagement-MIB::agentTimeStamp.0 = INTEGER: 1273842310 StorageManagement-MIB::agentGetTimeout.0 = INTEGER: 5 StorageManagement-MIB::agentModifiers.0 = INTEGER: 0 StorageManagement-MIB::agentRefreshRate.0 = INTEGER: 300 StorageManagement-MIB::agentMibVersion.0 = STRING: "3.2" StorageManagement-MIB::agentManagementSoftwareURLName.0 = "" StorageManagement-MIB::agentGlobalSystemStatus.0 = INTEGER: nonCritical(4) StorageManagement-MIB::agentLastGlobalSystemStatus.0 = INTEGER: ok(3) StorageManagement-MIB::agentSmartThermalShutdown.0 = INTEGER: notApplicable(3) StorageManagement-MIB::controllerNumber.1 = INTEGER: 1 StorageManagement-MIB::controllerName.1 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::controllerVendor.1 = STRING: "DELL" StorageManagement-MIB::controllerType.1 = INTEGER: sas(6) StorageManagement-MIB::controllerState.1 = INTEGER: degraded(6) StorageManagement-MIB::controllerRebuildRateInPercent.1 = INTEGER: 30 StorageManagement-MIB::controllerFWVersion.1 = STRING: "5.0.2-0003" StorageManagement-MIB::controllerCacheSizeInMB.1 = INTEGER: 256 StorageManagement-MIB::controllerCacheSizeInBytes.1 = INTEGER: 0 StorageManagement-MIB::controllerPhysicalDeviceCount.1 = INTEGER: 5 StorageManagement-MIB::controllerLogicalDeviceCount.1 = INTEGER: 1 StorageManagement-MIB::controllerRollUpStatus.1 = INTEGER: nonCritical(4) StorageManagement-MIB::controllerComponentStatus.1 = INTEGER: nonCritical(4) StorageManagement-MIB::controllerNexusID.1 = STRING: "\\0" StorageManagement-MIB::controllerAlarmState.1 = INTEGER: disabled(2) StorageManagement-MIB::controllerDriverVersion.1 = STRING: "00.00.03.05 " StorageManagement-MIB::controllerPCISlot.1 = STRING: "embedded" StorageManagement-MIB::controllerClusterMode.1 = INTEGER: notApplicable(99) StorageManagement-MIB::controllerMinFWVersion.1 = STRING: "5.2.1-0067" StorageManagement-MIB::controllerMinDriverVersion.1 = STRING: "00.00.03.21" StorageManagement-MIB::controllerChannelCount.1 = INTEGER: 2 StorageManagement-MIB::controllerReconstructRate.1 = INTEGER: 30 StorageManagement-MIB::controllerPatrolReadRate.1 = INTEGER: 30 StorageManagement-MIB::controllerBGIRate.1 = INTEGER: 30 StorageManagement-MIB::controllerCheckConsistencyRate.1 = INTEGER: 30
StorageManagement-MIB::controllerPatrolReadMode.1 = INTEGER: automatic(1) StorageManagement-MIB::controllerPatrolReadState.1 = INTEGER: stopped(1) StorageManagement-MIB::controllerPatrolReadIterations.1 = INTEGER: 162 StorageManagement-MIB::controllerEntry.57.1 = INTEGER: 99 StorageManagement-MIB::controllerEntry.58.1 = INTEGER: 99 StorageManagement-MIB::channelNumber.1 = INTEGER: 1 StorageManagement-MIB::channelNumber.2 = INTEGER: 2 StorageManagement-MIB::channelName.1 = STRING: "Connector 0" StorageManagement-MIB::channelName.2 = STRING: "Connector 1" StorageManagement-MIB::channelState.1 = INTEGER: ready(1) StorageManagement-MIB::channelState.2 = INTEGER: ready(1) StorageManagement-MIB::channelRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::channelRollUpStatus.2 = INTEGER: ok(3) StorageManagement-MIB::channelComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::channelComponentStatus.2 = INTEGER: ok(3) StorageManagement-MIB::channelNexusID.1 = STRING: "\\0\\0" StorageManagement-MIB::channelNexusID.2 = STRING: "\\0\\1" StorageManagement-MIB::channelBusType.1 = INTEGER: sas(8) StorageManagement-MIB::channelBusType.2 = INTEGER: sas(8) StorageManagement-MIB::enclosureNumber.1 = INTEGER: 1 StorageManagement-MIB::enclosureName.1 = STRING: "Backplane" StorageManagement-MIB::enclosureVendor.1 = STRING: "DELL" StorageManagement-MIB::enclosureState.1 = INTEGER: ready(1) StorageManagement-MIB::enclosureProductID.1 = STRING: "BACKPLANE " StorageManagement-MIB::enclosureType.1 = INTEGER: internal(1) StorageManagement-MIB::enclosureChannelNumber.1 = INTEGER: 0 StorageManagement-MIB::enclosureRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::enclosureComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::enclosureNexusID.1 = STRING: "\\0\\0\\0" StorageManagement-MIB::enclosureFirmwareVersion.1 = STRING: "1.00" StorageManagement-MIB::enclosureSASAddress.1 = STRING: "50019090B4C67200" StorageManagement-MIB::arrayDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskName.1 = STRING: "Physical Disk 0:0:0" StorageManagement-MIB::arrayDiskName.2 = STRING: "Physical Disk 0:0:1" StorageManagement-MIB::arrayDiskName.3 = STRING: "Physical Disk 0:0:2" StorageManagement-MIB::arrayDiskName.4 = STRING: "Physical Disk 0:0:3" StorageManagement-MIB::arrayDiskVendor.1 = STRING: "DELL " StorageManagement-MIB::arrayDiskVendor.2 = STRING: "DELL " StorageManagement-MIB::arrayDiskVendor.3 = STRING: "DELL " StorageManagement-MIB::arrayDiskVendor.4 = STRING: "DELL " StorageManagement-MIB::arrayDiskState.1 = INTEGER: online(3) StorageManagement-MIB::arrayDiskState.2 = INTEGER: online(3) StorageManagement-MIB::arrayDiskState.3 = INTEGER: online(3) StorageManagement-MIB::arrayDiskState.4 = INTEGER: online(3) StorageManagement-MIB::arrayDiskProductID.1 = STRING: "ST3146755SS " StorageManagement-MIB::arrayDiskProductID.2 = STRING: "ST3146755SS " StorageManagement-MIB::arrayDiskProductID.3 = STRING: "ST3146755SS " StorageManagement-MIB::arrayDiskProductID.4 = STRING: "ST3146755SS " StorageManagement-MIB::arrayDiskSerialNo.1 = STRING: "3LN0LRL0 " StorageManagement-MIB::arrayDiskSerialNo.2 = STRING: "3LN0JYJS " StorageManagement-MIB::arrayDiskSerialNo.3 = STRING: "3LN0LR0V " StorageManagement-MIB::arrayDiskSerialNo.4 = STRING: "3LN0JH97 " StorageManagement-MIB::arrayDiskRevision.1 = STRING: "T106" StorageManagement-MIB::arrayDiskRevision.2 = STRING: 
"T106" StorageManagement-MIB::arrayDiskRevision.3 = STRING: "T106" StorageManagement-MIB::arrayDiskRevision.4 = STRING: "T106" StorageManagement-MIB::arrayDiskEnclosureID.1 = STRING: "0" StorageManagement-MIB::arrayDiskEnclosureID.2 = STRING: "0" StorageManagement-MIB::arrayDiskEnclosureID.3 = STRING: "0" StorageManagement-MIB::arrayDiskEnclosureID.4 = STRING: "0" StorageManagement-MIB::arrayDiskChannel.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskChannel.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskChannel.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskChannel.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskLengthInMB.1 = INTEGER: 139392 StorageManagement-MIB::arrayDiskLengthInMB.2 = INTEGER: 139392 StorageManagement-MIB::arrayDiskLengthInMB.3 = INTEGER: 139392 StorageManagement-MIB::arrayDiskLengthInMB.4 = INTEGER: 139392 StorageManagement-MIB::arrayDiskLengthInBytes.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLengthInBytes.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskLengthInBytes.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskLengthInBytes.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInMB.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInMB.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInMB.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInMB.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInBytes.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInBytes.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInBytes.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInBytes.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskTargetID.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskTargetID.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskTargetID.3 = INTEGER: 2 StorageManagement-MIB::arrayDiskTargetID.4 = INTEGER: 3 StorageManagement-MIB::arrayDiskLunID.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLunID.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskLunID.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskLunID.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskUsedSpaceInMB.1 = INTEGER: 139392 StorageManagement-MIB::arrayDiskUsedSpaceInMB.2 = INTEGER: 139392 StorageManagement-MIB::arrayDiskUsedSpaceInMB.3 = INTEGER: 139392 StorageManagement-MIB::arrayDiskUsedSpaceInMB.4 = INTEGER: 139392 StorageManagement-MIB::arrayDiskUsedSpaceInBytes.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskUsedSpaceInBytes.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskUsedSpaceInBytes.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskUsedSpaceInBytes.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInMB.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInMB.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInMB.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInMB.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInBytes.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInBytes.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInBytes.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInBytes.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskBusType.1 = INTEGER: sas(8) StorageManagement-MIB::arrayDiskBusType.2 = INTEGER: sas(8) StorageManagement-MIB::arrayDiskBusType.3 = INTEGER: sas(8) StorageManagement-MIB::arrayDiskBusType.4 = INTEGER: sas(8) StorageManagement-MIB::arrayDiskSpareState.1 = INTEGER: notASpare(5) StorageManagement-MIB::arrayDiskSpareState.2 = INTEGER: notASpare(5) 
StorageManagement-MIB::arrayDiskSpareState.3 = INTEGER: notASpare(5) StorageManagement-MIB::arrayDiskSpareState.4 = INTEGER: notASpare(5) StorageManagement-MIB::arrayDiskRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskRollUpStatus.2 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskRollUpStatus.3 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskRollUpStatus.4 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskComponentStatus.2 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskComponentStatus.3 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskComponentStatus.4 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskNexusID.1 = STRING: "\\0\\0\\0\\0" StorageManagement-MIB::arrayDiskNexusID.2 = STRING: "\\0\\0\\0\\1" StorageManagement-MIB::arrayDiskNexusID.3 = STRING: "\\0\\0\\0\\2" StorageManagement-MIB::arrayDiskNexusID.4 = STRING: "\\0\\0\\0\\3" StorageManagement-MIB::arrayDiskPartNumber.1 = STRING: "SG0DR2381253172FLRL0A00 " StorageManagement-MIB::arrayDiskPartNumber.2 = STRING: "SG0DR2381253172FJYJSA00 " StorageManagement-MIB::arrayDiskPartNumber.3 = STRING: "SG0DR2381253172FLR0VA00 " StorageManagement-MIB::arrayDiskPartNumber.4 = STRING: "SG0DR2381253172FJH97A00 " StorageManagement-MIB::arrayDiskSASAddress.1 = STRING: "5000C50002380201" StorageManagement-MIB::arrayDiskSASAddress.2 = STRING: "5000C50002385B89" StorageManagement-MIB::arrayDiskSASAddress.3 = STRING: "5000C50002385AA9" StorageManagement-MIB::arrayDiskSASAddress.4 = STRING: "5000C500023841E1" StorageManagement-MIB::arrayDiskSmartAlertIndication.1 = INTEGER: no(1) StorageManagement-MIB::arrayDiskSmartAlertIndication.2 = INTEGER: no(1) StorageManagement-MIB::arrayDiskSmartAlertIndication.3 = INTEGER: no(1) StorageManagement-MIB::arrayDiskSmartAlertIndication.4 = INTEGER: no(1) StorageManagement-MIB::arrayDiskManufactureDay.1 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureDay.2 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureDay.3 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureDay.4 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureWeek.1 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureWeek.2 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureWeek.3 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureWeek.4 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureYear.1 = STRING: "2005" StorageManagement-MIB::arrayDiskManufactureYear.2 = STRING: "2005" StorageManagement-MIB::arrayDiskManufactureYear.3 = STRING: "2005" StorageManagement-MIB::arrayDiskManufactureYear.4 = STRING: "2005" StorageManagement-MIB::arrayDiskMediaType.1 = INTEGER: hdd(2) StorageManagement-MIB::arrayDiskMediaType.2 = INTEGER: hdd(2) StorageManagement-MIB::arrayDiskMediaType.3 = INTEGER: hdd(2) StorageManagement-MIB::arrayDiskMediaType.4 = INTEGER: hdd(2) StorageManagement-MIB::arrayDiskEntry.36.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEntry.36.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskEntry.36.3 = INTEGER: 1 StorageManagement-MIB::arrayDiskEntry.36.4 = INTEGER: 1 StorageManagement-MIB::arrayDiskEntry.40.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.40.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.40.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.40.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.41.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.41.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.41.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.41.4 = INTEGER: 0 
StorageManagement-MIB::arrayDiskEnclosureConnectionNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskEnclosureConnectionNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskEnclosureConnectionNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskName.1 = STRING: "Physical Disk 0:0:0" StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskName.2 = STRING: "Physical Disk 0:0:1" StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskName.3 = STRING: "Physical Disk 0:0:2" StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskName.4 = STRING: "Physical Disk 0:0:3" StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureName.1 = STRING: "Backplane" StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureName.2 = STRING: "Backplane" StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureName.3 = STRING: "Backplane" StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureName.4 = STRING: "Backplane" StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureNumber.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureNumber.3 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureNumber.4 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionControllerName.1 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::arrayDiskEnclosureConnectionControllerName.2 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::arrayDiskEnclosureConnectionControllerName.3 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::arrayDiskEnclosureConnectionControllerName.4 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::arrayDiskEnclosureConnectionControllerNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionControllerNumber.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionControllerNumber.3 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionControllerNumber.4 = INTEGER: 1 StorageManagement-MIB::batteryNumber.1 = INTEGER: 1 StorageManagement-MIB::batteryName.1 = STRING: "Battery 0" StorageManagement-MIB::batteryVendor.1 = STRING: "DELL" StorageManagement-MIB::batteryState.1 = INTEGER: ready(1) StorageManagement-MIB::batteryRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::batteryComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::batteryNexusID.1 = STRING: "\\0\\0" StorageManagement-MIB::batteryPredictedCapacity.1 = INTEGER: ready(2) StorageManagement-MIB::batteryNextLearnTime.1 = INTEGER: 21 StorageManagement-MIB::batteryLearnState.1 = INTEGER: idle(16) StorageManagement-MIB::batteryEntry.13.1 = INTEGER: 0 StorageManagement-MIB::batteryMaxLearnDelay.1 = INTEGER: 168 StorageManagement-MIB::batteryConnectionNumber.1 = INTEGER: 1 StorageManagement-MIB::batteryConnectionBatteryName.1 = STRING: "Battery 0" StorageManagement-MIB::batteryConnectionBatteryNumber.1 = INTEGER: 1 StorageManagement-MIB::batteryConnectionControllerName.1 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::batteryConnectionControllerNumber.1 = INTEGER: 1 
StorageManagement-MIB::virtualDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::virtualDiskName.1 = STRING: "Virtual Disk 0" StorageManagement-MIB::virtualDiskDeviceName.1 = STRING: "/dev/sda" StorageManagement-MIB::virtualDiskState.1 = INTEGER: ready(1) StorageManagement-MIB::virtualDiskLengthInMB.1 = INTEGER: 278784 StorageManagement-MIB::virtualDiskLengthInBytes.1 = INTEGER: 0 StorageManagement-MIB::virtualDiskWritePolicy.1 = INTEGER: writeBack(3) StorageManagement-MIB::virtualDiskReadPolicy.1 = INTEGER: noReadAhead(5) StorageManagement-MIB::virtualDiskCachePolicy.1 = INTEGER: not-applicable(99) StorageManagement-MIB::virtualDiskLayout.1 = INTEGER: raid-10(10) StorageManagement-MIB::virtualDiskCurStripeSizeInMB.1 = INTEGER: 0 StorageManagement-MIB::virtualDiskCurStripeSizeInBytes.1 = INTEGER: 65536 StorageManagement-MIB::virtualDiskTargetID.1 = INTEGER: 0 StorageManagement-MIB::virtualDiskRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::virtualDiskComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::virtualDiskNexusID.1 = STRING: "\\0\\0" StorageManagement-MIB::virtualDiskArrayDiskType.1 = INTEGER: sas(1) StorageManagement-MIB::virtualDiskEntry.23.1 = INTEGER: 2 StorageManagement-MIB::virtualDiskEntry.24.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLogicalConnectionNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskLogicalConnectionNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskLogicalConnectionNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskName.1 = STRING: "Physical Disk 0:0:0" StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskName.2 = STRING: "Physical Disk 0:0:1" StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskName.3 = STRING: "Physical Disk 0:0:2" StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskName.4 = STRING: "Physical Disk 0:0:3" StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskName.1 = STRING: "Virtual Disk 0" StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskName.2 = STRING: "Virtual Disk 0" StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskName.3 = STRING: "Virtual Disk 0" StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskName.4 = STRING: "Virtual Disk 0" StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskNumber.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskNumber.3 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskNumber.4 = INTEGER: 1


  • Cannot open root device xvda1 or unknown-block(0,0)

    - by svoop
    I'm putting together a Dom0 and three DomU (all Gentoo) with kernel 3.5.7 and Xen 4.1.1. Each Dom has its own md (md0 for Dom0, md1 for Dom1 etc). Dom0 works fine so far; however, I'm stuck trying to create DomUs. It appears the xvda1 device on DomU is not created or accessible:

Parsing config file dom1 domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3", features="(null)" domainbuilder: detail: xc_dom_kernel_mem: called domainbuilder: detail: xc_dom_boot_xen_init: ver 4.1, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 domainbuilder: detail: xc_dom_parse_image: called domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... domainbuilder: detail: loader probe failed domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ... domainbuilder: detail: xc_dom_malloc : 10530 kB domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x2f7a4f -> 0xa48888 domainbuilder: detail: loader probe OK xc: detail: elf_parse_binary: phdr: paddr=0x1000000 memsz=0x558000 xc: detail: elf_parse_binary: phdr: paddr=0x1558000 memsz=0x690e8 xc: detail: elf_parse_binary: phdr: paddr=0x15c2000 memsz=0x127c0 xc: detail: elf_parse_binary: phdr: paddr=0x15d5000 memsz=0x533000 xc: detail: elf_parse_binary: memory: 0x1000000 -> 0x1b08000 xc: detail: elf_xen_parse_note: GUEST_OS = "linux" xc: detail: elf_xen_parse_note: GUEST_VERSION = "2.6" xc: detail: elf_xen_parse_note: XEN_VERSION = "xen-3.0" xc: detail: elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000 xc: detail: elf_xen_parse_note: ENTRY = 0xffffffff815d5210 xc: detail: elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000 xc: detail: elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb" xc: detail: elf_xen_parse_note: PAE_MODE = "yes" xc: detail: elf_xen_parse_note: LOADER = "generic" xc: detail: elf_xen_parse_note: unknown xen elf note (0xd) xc: detail: elf_xen_parse_note: SUSPEND_CANCEL = 0x1 xc: detail: elf_xen_parse_note: HV_START_LOW = 0xffff800000000000 xc: detail: elf_xen_parse_note: PADDR_OFFSET = 0x0 xc: detail: elf_xen_addr_calc_check: addresses: xc: detail: virt_base = 0xffffffff80000000 xc: detail: elf_paddr_offset = 0x0 xc: detail: virt_offset = 0xffffffff80000000 xc: detail: virt_kstart = 0xffffffff81000000 xc: detail: virt_kend = 0xffffffff81b08000 xc: detail: virt_entry = 0xffffffff815d5210 xc: detail: p2m_base = 0xffffffffffffffff domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff81000000 -> 0xffffffff81b08000 domainbuilder: detail: xc_dom_mem_init: mem 5000 MB, pages 0x138800 pages, 4k each domainbuilder: detail: xc_dom_mem_init: 0x138800 pages domainbuilder: detail: xc_dom_boot_mem_init: called domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64 domainbuilder: detail: xc_dom_malloc : 10000 kB domainbuilder: detail: xc_dom_build_image: called domainbuilder: detail: xc_dom_alloc_segment: kernel : 0xffffffff81000000 -> 0xffffffff81b08000 (pfn 0x1000 + 0xb08 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1000+0xb08 at 0x7fdec9b85000 xc: detail: elf_load_binary: phdr 0 at 0x0x7fdec9b85000 -> 0x0x7fdeca0dd000 xc: detail: elf_load_binary: phdr 1 at 0x0x7fdeca0dd000 -> 0x0x7fdeca1460e8 xc: detail: elf_load_binary: phdr 2 at 0x0x7fdeca147000 -> 0x0x7fdeca1597c0 xc: detail: elf_load_binary: phdr 3 at 0x0x7fdeca15a000 -> 0x0x7fdeca1cd000 domainbuilder: detail: xc_dom_alloc_segment: phys2mach : 0xffffffff81b08000 -> 0xffffffff824cc000 (pfn 0x1b08 + 
0x9c4 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1b08+0x9c4 at 0x7fdec91c1000 domainbuilder: detail: xc_dom_alloc_page : start info : 0xffffffff824cc000 (pfn 0x24cc) domainbuilder: detail: xc_dom_alloc_page : xenstore : 0xffffffff824cd000 (pfn 0x24cd) domainbuilder: detail: xc_dom_alloc_page : console : 0xffffffff824ce000 (pfn 0x24ce) domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff827fffff, 20 table(s) domainbuilder: detail: xc_dom_alloc_segment: page tables : 0xffffffff824cf000 -> 0xffffffff824e6000 (pfn 0x24cf + 0x17 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cf+0x17 at 0x7fdece676000 domainbuilder: detail: xc_dom_alloc_page : boot stack : 0xffffffff824e6000 (pfn 0x24e6) domainbuilder: detail: xc_dom_build_image : virt_alloc_end : 0xffffffff824e7000 domainbuilder: detail: xc_dom_build_image : virt_pgtab_end : 0xffffffff82800000 domainbuilder: detail: xc_dom_boot_image: called domainbuilder: detail: arch_setup_bootearly: doing nothing domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= matches domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64 domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x138800 domainbuilder: detail: clear_page: pfn 0x24ce, mfn 0x37ddee domainbuilder: detail: clear_page: pfn 0x24cd, mfn 0x37ddef domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cc+0x1 at 0x7fdece675000 domainbuilder: detail: start_info_x86_64: called domainbuilder: detail: setup_hypercall_page: vaddr=0xffffffff81001000 pfn=0x1001 domainbuilder: detail: domain builder memory footprint domainbuilder: detail: allocated domainbuilder: detail: malloc : 20658 kB domainbuilder: detail: anon mmap : 0 bytes domainbuilder: detail: mapped domainbuilder: detail: file mmap : 0 bytes domainbuilder: detail: domU mmap : 21392 kB domainbuilder: detail: arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xbaa6f domainbuilder: detail: shared_info_x86_64: called domainbuilder: detail: vcpu_x86_64: called domainbuilder: detail: vcpu_x86_64: cr3: pfn 0x24cf mfn 0x37dded domainbuilder: detail: launch_vm: called, ctxt=0x7fff224e4ea0 domainbuilder: detail: xc_dom_release: called Daemon running with PID 4639 [ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Linux version 3.5.7-gentoo (root@majordomo) (gcc version 4.5.4 (Gentoo 4.5.4 p1.0, pie-0.4.7) ) #1 SMP Tue Nov 20 10:49:51 CET 2012 [ 0.000000] Command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3 [ 0.000000] ACPI in unprivileged domain disabled [ 0.000000] e820: BIOS-provided physical RAM map: [ 0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable [ 0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved [ 0.000000] Xen: [mem 0x0000000000100000-0x0000000138ffffff] usable [ 0.000000] NX (Execute Disable) protection: active 
[ 0.000000] MPS support code is not built-in. [ 0.000000] Using acpi=off or acpi=noirq or pci=noacpi may have problem [ 0.000000] DMI not present or invalid. [ 0.000000] No AGP bridge found [ 0.000000] e820: last_pfn = 0x139000 max_arch_pfn = 0x400000000 [ 0.000000] e820: last_pfn = 0x100000 max_arch_pfn = 0x400000000 [ 0.000000] init_memory_mapping: [mem 0x00000000-0xffffffff] [ 0.000000] init_memory_mapping: [mem 0x100000000-0x138ffffff] [ 0.000000] NUMA turned off [ 0.000000] Faking a node at [mem 0x0000000000000000-0x0000000138ffffff] [ 0.000000] Initmem setup node 0 [mem 0x00000000-0x138ffffff] [ 0.000000] NODE_DATA [mem 0x1387fc000-0x1387fffff] [ 0.000000] Zone ranges: [ 0.000000] DMA [mem 0x00010000-0x00ffffff] [ 0.000000] DMA32 [mem 0x01000000-0xffffffff] [ 0.000000] Normal [mem 0x100000000-0x138ffffff] [ 0.000000] Movable zone start for each node [ 0.000000] Early memory node ranges [ 0.000000] node 0: [mem 0x00010000-0x0009ffff] [ 0.000000] node 0: [mem 0x00100000-0x138ffffff] [ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs [ 0.000000] No local APIC present [ 0.000000] APIC: disable apic facility [ 0.000000] APIC: switched to apic NOOP [ 0.000000] e820: cannot find a gap in the 32bit address range [ 0.000000] e820: PCI devices with unassigned 32bit BARs may break! [ 0.000000] e820: [mem 0x139100000-0x1394fffff] available for PCI devices [ 0.000000] Booting paravirtualized kernel on Xen [ 0.000000] Xen version: 4.1.1 (preserve-AD) [ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1 [ 0.000000] PERCPU: Embedded 26 pages/cpu @ffff880138400000 s75712 r8192 d22592 u2097152 [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1259871 [ 0.000000] Policy zone: Normal [ 0.000000] Kernel command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3 [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) [ 0.000000] __ex_table already sorted, skipping sort [ 0.000000] Checking aperture... [ 0.000000] No AGP bridge found [ 0.000000] Memory: 4943980k/5128192k available (3937k kernel code, 448k absent, 183764k reserved, 1951k data, 524k init) [ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1 [ 0.000000] Hierarchical RCU implementation. [ 0.000000] NR_IRQS:4352 nr_irqs:256 16 [ 0.000000] Console: colour dummy device 80x25 [ 0.000000] console [tty0] enabled [ 0.000000] console [hvc0] enabled [ 0.000000] installing Xen timer for CPU 0 [ 0.000000] Detected 3411.602 MHz processor. [ 0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 6823.20 BogoMIPS (lpj=3411602) [ 0.000999] pid_max: default: 32768 minimum: 301 [ 0.000999] Security Framework initialized [ 0.001355] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes) [ 0.002974] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes) [ 0.003441] Mount-cache hash table entries: 256 [ 0.003595] Initializing cgroup subsys cpuacct [ 0.003599] Initializing cgroup subsys freezer [ 0.003637] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 0.003637] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8) [ 0.003643] CPU: Physical Processor ID: 0 [ 0.003645] CPU: Processor Core ID: 0 [ 0.003702] SMP alternatives: switching to UP code [ 0.011791] Freeing SMP alternatives: 12k freed [ 0.011835] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only. [ 0.011886] Brought up 1 CPUs [ 0.011998] Grant tables using version 2 layout. 
[ 0.012009] Grant table initialized [ 0.012034] NET: Registered protocol family 16 [ 0.012328] PCI: setting up Xen PCI frontend stub [ 0.015089] bio: create slab <bio-0> at 0 [ 0.015158] ACPI: Interpreter disabled. [ 0.015180] xen/balloon: Initialising balloon driver. [ 0.015180] xen-balloon: Initialising balloon driver. [ 0.015180] vgaarb: loaded [ 0.016126] SCSI subsystem initialized [ 0.016314] PCI: System does not support PCI [ 0.016320] PCI: System does not support PCI [ 0.016435] NetLabel: Initializing [ 0.016438] NetLabel: domain hash size = 128 [ 0.016440] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.016447] NetLabel: unlabeled traffic allowed by default [ 0.016475] Switching to clocksource xen [ 0.017434] pnp: PnP ACPI: disabled [ 0.017501] NET: Registered protocol family 2 [ 0.017864] IP route cache hash table entries: 262144 (order: 9, 2097152 bytes) [ 0.019322] TCP established hash table entries: 524288 (order: 11, 8388608 bytes) [ 0.020376] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 0.020497] TCP: Hash tables configured (established 524288 bind 65536) [ 0.020500] TCP: reno registered [ 0.020525] UDP hash table entries: 4096 (order: 5, 131072 bytes) [ 0.020564] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes) [ 0.020624] NET: Registered protocol family 1 [ 0.020658] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 0.020662] software IO TLB [mem 0xfb632000-0xff631fff] (64MB) mapped at [ffff8800fb632000-ffff8800ff631fff] [ 0.020750] platform rtc_cmos: registered platform RTC device (no PNP device found) [ 0.021378] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.023378] msgmni has been set to 9656 [ 0.023544] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.023549] io scheduler noop registered [ 0.023551] io scheduler deadline registered [ 0.023580] io scheduler cfq registered (default) [ 0.023650] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.023845] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 0.024082] Non-volatile memory driver v1.3 [ 0.024085] Linux agpgart interface v0.103 [ 0.024207] Event-channel device installed. [ 0.024265] [drm] Initialized drm 1.1.0 20060810 [ 0.024268] [drm:i915_init] *ERROR* drm/i915 can't work without intel_agp module! [ 0.025145] brd: module loaded [ 0.025565] loop: module loaded [ 0.045646] Initialising Xen virtual ethernet driver. [ 0.198264] i8042: PNP: No PS/2 controller found. Probing ports directly. 
[ 0.199096] i8042: No controller found [ 0.199139] mousedev: PS/2 mouse device common for all mice [ 0.259303] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0 [ 0.259353] rtc_cmos: probe of rtc_cmos failed with error -38 [ 0.259440] md: raid1 personality registered for level 1 [ 0.259542] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [ 0.259732] ip_tables: (C) 2000-2006 Netfilter Core Team [ 0.259747] TCP: cubic registered [ 0.259886] NET: Registered protocol family 10 [ 0.260031] ip6_tables: (C) 2000-2006 Netfilter Core Team [ 0.260070] sit: IPv6 over IPv4 tunneling driver [ 0.260194] NET: Registered protocol family 17 [ 0.260213] Bridge firewalling registered [ 5.360075] XENBUS: Waiting for devices to initialise: 25s...20s...15s...10s...5s...0s...235s...230s...225s...220s...215s...210s...205s...200s...195s...190s...185s...180s...175s...170s...165s...160s...155s...150s...145s...140s...135s...130s...125s...120s...115s...110s...105s...100s...95s...90s...85s...80s...75s...70s...65s...60s...55s...50s...45s...40s...35s...30s...25s...20s...15s...10s...5s...0s... [ 270.360180] XENBUS: Timeout connecting to device: device/vbd/51713 (local state 3, remote state 1) [ 270.360273] md: Waiting for all devices to be available before autodetect [ 270.360277] md: If you don't use raid, use raid=noautodetect [ 270.360388] md: Autodetecting RAID arrays. [ 270.360392] md: Scanned 0 and added 0 devices. [ 270.360394] md: autorun ... [ 270.360395] md: ... autorun DONE. [ 270.360431] VFS: Cannot open root device "xvda1" or unknown-block(0,0): error -6 [ 270.360435] Please append a correct "root=" boot option; here are the available partitions: [ 270.360440] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 270.360444] Pid: 1, comm: swapper/0 Not tainted 3.5.7-gentoo #1 [ 270.360446] Call Trace: [ 270.360454] [<ffffffff813d2205>] ? panic+0xbe/0x1c5 [ 270.360459] [<ffffffff813d2358>] ? printk+0x4c/0x51 [ 270.360464] [<ffffffff815d5fb7>] ? mount_block_root+0x24f/0x26d [ 270.360469] [<ffffffff815d62b6>] ? prepare_namespace+0x168/0x192 [ 270.360474] [<ffffffff815d5ca7>] ? kernel_init+0x1b0/0x1c2 [ 270.360477] [<ffffffff815d5500>] ? loglevel+0x34/0x34 [ 270.360482] [<ffffffff813d5a64>] ? kernel_thread_helper+0x4/0x10 [ 270.360486] [<ffffffff813d4038>] ? retint_restore_args+0x5/0x6 [ 270.360490] [<ffffffff813d5a60>] ? 
gs_change+0x13/0x13

The config:

    name = "dom1"
    bootloader = "/usr/bin/pygrub"
    root = "/dev/xvda1 ro"
    extra = "3" # runlevel
    memory = 5000
    disk = [ 'phy:/dev/md1,xvda1,w' ]
    # vif = [ 'ip=..., vifname=veth1' ] # none for now

Here are some details on the Dom0 kernel (grepping for "xen"):

    CONFIG_XEN=y
    CONFIG_XEN_DOM0=y
    CONFIG_XEN_PRIVILEGED_GUEST=y
    CONFIG_XEN_PVHVM=y
    CONFIG_XEN_MAX_DOMAIN_MEMORY=500
    CONFIG_XEN_SAVE_RESTORE=y
    CONFIG_PCI_XEN=y
    CONFIG_XEN_PCIDEV_FRONTEND=y
    # CONFIG_XEN_BLKDEV_FRONTEND is not set
    CONFIG_XEN_BLKDEV_BACKEND=y
    # CONFIG_XEN_NETDEV_FRONTEND is not set
    CONFIG_XEN_NETDEV_BACKEND=y
    CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
    CONFIG_HVC_XEN=y
    CONFIG_HVC_XEN_FRONTEND=y
    # CONFIG_XEN_WDT is not set
    # CONFIG_XEN_FBDEV_FRONTEND is not set
    # Xen driver support
    CONFIG_XEN_BALLOON=y
    # CONFIG_XEN_SELFBALLOONING is not set
    CONFIG_XEN_SCRUB_PAGES=y
    CONFIG_XEN_DEV_EVTCHN=y
    CONFIG_XEN_BACKEND=y
    CONFIG_XENFS=y
    CONFIG_XEN_COMPAT_XENFS=y
    CONFIG_XEN_SYS_HYPERVISOR=y
    CONFIG_XEN_XENBUS_FRONTEND=y
    CONFIG_XEN_GNTDEV=m
    CONFIG_XEN_GRANT_DEV_ALLOC=m
    CONFIG_SWIOTLB_XEN=y
    CONFIG_XEN_TMEM=y
    CONFIG_XEN_PCIDEV_BACKEND=m
    CONFIG_XEN_PRIVCMD=y
    CONFIG_XEN_ACPI_PROCESSOR=m

And the DomU kernel (grepping for "xen"):

    CONFIG_XEN=y
    CONFIG_XEN_DOM0=y
    CONFIG_XEN_PRIVILEGED_GUEST=y
    CONFIG_XEN_PVHVM=y
    CONFIG_XEN_MAX_DOMAIN_MEMORY=500
    CONFIG_XEN_SAVE_RESTORE=y
    CONFIG_PCI_XEN=y
    CONFIG_XEN_PCIDEV_FRONTEND=y
    CONFIG_XEN_BLKDEV_FRONTEND=y
    CONFIG_XEN_NETDEV_FRONTEND=y
    CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
    CONFIG_HVC_XEN=y
    CONFIG_HVC_XEN_FRONTEND=y
    # CONFIG_XEN_WDT is not set
    # CONFIG_XEN_FBDEV_FRONTEND is not set
    # Xen driver support
    CONFIG_XEN_BALLOON=y
    # CONFIG_XEN_SELFBALLOONING is not set
    CONFIG_XEN_SCRUB_PAGES=y
    CONFIG_XEN_DEV_EVTCHN=y
    # CONFIG_XEN_BACKEND is not set
    CONFIG_XENFS=y
    CONFIG_XEN_COMPAT_XENFS=y
    CONFIG_XEN_SYS_HYPERVISOR=y
    CONFIG_XEN_XENBUS_FRONTEND=y
    CONFIG_XEN_GNTDEV=m
    CONFIG_XEN_GRANT_DEV_ALLOC=m
    CONFIG_SWIOTLB_XEN=y
    CONFIG_XEN_TMEM=y
    CONFIG_XEN_PRIVCMD=y
    CONFIG_XEN_ACPI_PROCESSOR=m

Any ideas what I'm doing wrong here? Thanks a lot!


  • An Introduction to ASP.NET Web API

    - by Rick Strahl
    Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5 and, along with them, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft's new jargon for a service, in case you're wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC, or just a Web API by itself. And you can also self-host Web API in your own Console, Desktop or Service applications. If you're interested in a high-level overview of what ASP.NET Web API is and how it fits into the ASP.NET stack, you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by-example introduction to ASP.NET Web API. All the code discussed in this article is available on GitHub: https://github.com/RickStrahl/AspNetWebApiArticle [republished from my Code Magazine article and updated for the RTM release of ASP.NET Web API]

Getting Started

To start, I'll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I'll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers.

Add a Web API Controller Class

Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1.

Figure 1: This is how you create a new Controller Class in Visual Studio

Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I'll use a Music Album model to demonstrate the basic behavior of Web API. The model consists of albums and related songs, where an album has properties like Name, Artist and YearReleased, and a list of songs with a SongName and SongLength as well as an AlbumId that links it to the album. You can find the code for the model (and the rest of these samples) on GitHub. To add the file manually, create a new folder called Model, add a new class Album.cs and copy the code into it. There's a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current member that I'll use for the examples. Before we look at what goes into the controller class though, let's hook up routing so we can access this new controller.

Hooking up Routing in Global.asax

To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods.
You can use the MapHttpRoute() extension method (in the System.Web.Http namespace) on ASP.NET's static RouteTable class to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1.

    using System;
    using System.Web.Routing;
    using System.Web.Http;

    namespace AspNetWebApi
    {
        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                RouteTable.Routes.MapHttpRoute(
                    name: "AlbumVerbs",
                    routeTemplate: "albums/{title}",
                    defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" }
                );
            }
        }
    }

This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific Route Value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate, respectively.

HTTP Verb Routing

Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I've defined does not include an {action} route value or an action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the controller method to call: a GET request maps to any method that starts with Get, so methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature against route, query string or POST values. In lieu of the method name, the [HttpGet, HttpPost, HttpPut, HttpDelete, etc.] attributes can also be used to designate the accepted verbs explicitly if you don't want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST-style resource APIs, it's not required, and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, HTTP verb routing is ignored. I'll talk more about alternate routes later. When you're finished with the initial creation of files, your project should look like Figure 2.

Figure 2: The initial project with the new API Controller and Album model

Creating a small Album Model

Now it's time to create some controller methods to serve data. For these examples, I'll use a very simple Album and Songs model to play with, as shown in Listing 2.
    public class Song
    {
        public string AlbumId { get; set; }
        [Required, StringLength(80)]
        public string SongName { get; set; }
        [StringLength(5)]
        public string SongLength { get; set; }
    }

    public class Album
    {
        public string Id { get; set; }
        [Required, StringLength(80)]
        public string AlbumName { get; set; }
        [StringLength(80)]
        public string Artist { get; set; }
        public int YearReleased { get; set; }
        public DateTime Entered { get; set; }
        [StringLength(150)]
        public string AlbumImageUrl { get; set; }
        [StringLength(200)]
        public string AmazonUrl { get; set; }

        public virtual List<Song> Songs { get; set; }

        public Album()
        {
            Songs = new List<Song>();
            Entered = DateTime.Now;

            // Poor man's unique Id off GUID hash
            Id = Guid.NewGuid().GetHashCode().ToString("x");
        }

        public void AddSong(string songName, string songLength = null)
        {
            this.Songs.Add(new Song()
            {
                AlbumId = this.Id,
                SongName = songName,
                SongLength = songLength
            });
        }
    }

Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this, and that's what I'll access to retrieve the base data:

    public static class AlbumData
    {
        // sample data - static list
        public static List<Album> Current = CreateSampleAlbumData();

        /// <summary>
        /// Create some sample data
        /// </summary>
        /// <returns></returns>
        public static List<Album> CreateSampleAlbumData()
        { … }
    }

You can check out the full code for the data generation online.

Creating an AlbumApiController

Web API shares many concepts of ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API's binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb-based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title.

    public class AlbumApiController : ApiController
    {
        public IEnumerable<Album> GetAlbums()
        {
            var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
            return albums;
        }

        public Album GetAlbum(string title)
        {
            var album = AlbumData.Current
                            .SingleOrDefault(alb => alb.AlbumName.Contains(title));
            return album;
        }
    }

To access these two endpoints, you can use the following URLs in your browser:

    http://localhost/aspnetWebApi/albums
    http://localhost/aspnetWebApi/albums/Dirty%20Deeds

Note that you're not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead, Web API's routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first URL mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route.
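You can also exercise these endpoints outside the browser. The following is a minimal sketch of my own (not part of the article's sample code) that uses .NET 4.5's HttpClient from a console app; it assumes the sample site runs at http://localhost/aspnetWebApi/ and explicitly asks for JSON, which foreshadows the content negotiation discussed next:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;

    class AlbumClientDemo
    {
        static void Main()
        {
            var client = new HttpClient();

            // Request JSON explicitly instead of relying on a browser's Accept header
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));

            // Adjust the base URL to wherever the sample application is hosted
            string json = client.GetStringAsync(
                "http://localhost/aspnetWebApi/albums/Dirty%20Deeds").Result;

            Console.WriteLine(json); // the matched Album serialized as JSON
        }
    }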
Content Negotiation

When you access any of the URLs above from a browser, you get either an XML or a JSON result back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3.

Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.

Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what's going on here? Shouldn't we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they'd like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) asks for:

    Accept: text/html, application/xhtml+xml, application/xml; q=0.9, */*; q=0.8

IE9 asks for:

    Accept: text/html, application/xhtml+xml, */*

Note that Chrome's Accept header includes application/xml, which Web API finds in its list of supported media types, so it returns an XML response. IE9 does not include an Accept header type that works on Web API by default, and so it returns the default format, which is JSON. This is an important and very useful feature that was missing from any previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default, you don't have to worry about the output format in your code. Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute. Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don't have to do anything in your code – Web API automatically performs the deserialization from the content.

Accessing Web API JSON Data with jQuery

A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it's easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.

    $.getJSON("albums/", function (albums) {
        // make knockout template visible
        $(".album").show();

        // create view object and attach array
        var view = { albums: albums };
        ko.applyBindings(view);
    });

Figure 4 shows this and the next example's HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js).

Figure 4: The Album Display sample uses JSON data loaded from Web API.
The result from the getJSON() call is the deserialized server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it's entirely up to you to decide what to do with the data in your client code. Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:

    $(".albumlink").live("click", function () {
        var id = $(this).data("id"); // title

        $.getJSON("albums/" + id, function (album) {
            ko.applyBindings(album, $("#divAlbumDialog")[0]);
            $("#divAlbumDialog").show();
        });
    });

Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data ID attribute.

Explicitly Overriding Output Format

When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it's easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It's beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR). If automatic processing is not desirable in a particular Controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those Controller methods that return static results – into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API's post-processing of the HTTP response on your behalf. HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation.
To do this, I can adjust the output format and headers as shown in Listing 4.

    public HttpResponseMessage GetAlbums()
    {
        var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

        // Create a new HttpResponse with the JSON formatter explicitly
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new ObjectContent<IEnumerable<Album>>(
                            albums, new JsonMediaTypeFormatter());

        // Get default formatter based on Content Negotiation
        //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

        resp.Headers.ConnectionClose = true;
        resp.Headers.CacheControl = new CacheControlHeaderValue();
        resp.Headers.CacheControl.Public = true;

        return resp;
    }

This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON. If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the response instance using the Request.CreateResponse method:

    var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

This provides you an HttpResponseMessage object that's pre-configured with the default formatter based on Content Negotiation. Once you have an HttpResponseMessage object, you can easily control most HTTP aspects on this object. What's sweet here is that there are many more detailed properties on HttpResponseMessage than on the core ASP.NET Response object, with most options being explicitly configurable with enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks.

Non-Serialized Results

The output returned doesn't have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a Controller method.

    [HttpGet]
    public HttpResponseMessage AlbumArt(string title)
    {
        var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title));
        if (album == null)
        {
            var resp = Request.CreateResponse<ApiMessageError>(
                            HttpStatusCode.NotFound,
                            new ApiMessageError("Album not found"));
            return resp;
        }

        // kinda silly - we would normally serve this directly
        // but hey - it's a demo.
        var http = new WebClient();
        var imageData = http.DownloadData(album.AlbumImageUrl);

        // create response and return
        var result = new HttpResponseMessage(HttpStatusCode.OK);
        result.Content = new ByteArrayContent(imageData);
        result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");

        return result;
    }

The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs – such as a resource that can't be found or a validation error – you can return an error response to the client that's very specific to the error. In AlbumArt(), if the album can't be found, we want to return a 404 Not Found status (and realistically no error, as it's an image).
Note that if you are not using HTTP Verb-based routing or not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn't have an image. When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

    return Request.CreateResponse<ApiMessageError>(
        HttpStatusCode.NotFound,
        new ApiMessageError("Album not found")
    );

If the album can be found, the image will be returned. The image is downloaded into a byte[] array and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and need to handle the default formatter assignments that should be used to send the data out. Although HttpResponseMessage results require more code than returning a plain .NET value from a method, they allow much more control over the actual HTTP processing than automatic processing. They also make it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

Routing Again

OK, let's get back to the image example. With the original HTTP Verb routing setup, there's no good way to serve the image. In order to return my album art image, I'd like to use a URL like this:

    http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

In order to create a URL like this, I have to create a new Controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb-based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action-based routing into an HTTP Verb routing controller – you can only map HTTP Verbs, and each method has to be unique based on parameter signature. You can't have multiple GET operations mapped to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP Verb plus custom routing using the single Albums controller. There are a number of ways around this, but all involve additional controllers. Personally, I think it's easier to use explicit Action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.
For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1.

    RouteTable.Routes.MapHttpRoute(
        name: "AlbumRpcApiAction",
        routeTemplate: "albums/rpc/{action}/{title}",
        defaults: new
        {
            title = RouteParameter.Optional,
            controller = "AlbumRpcApi",
            action = "GetAlbums"
        }
    );

Now I can use either of the following URLs to access the image:

    Custom route: (/albums/{title}/image)
    http://localhost/aspnetWebApi/albums/PowerAge/image

    Action route: (/albums/rpc/{action}/{title})
    http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

Sending Data to the Server

To send data to the server and add a new album, you can use an HTTP POST operation. Since I'm using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

    public HttpResponseMessage PostAlbum(Album album)
    {
        if (!this.ModelState.IsValid)
        {
            // my custom error class
            var error = new ApiMessageError() { message = "Model is invalid" };

            // add errors into our client error model for the client
            foreach (var prop in ModelState.Values)
            {
                var modelError = prop.Errors.FirstOrDefault();
                if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                    error.errors.Add(modelError.ErrorMessage);
                else
                    error.errors.Add(modelError.Exception.Message);
            }

            return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
        }

        // update song id which isn't provided
        foreach (var song in album.Songs)
            song.AlbumId = album.Id;

        // see if album exists already
        var matchedAlbum = AlbumData.Current
            .SingleOrDefault(alb => alb.Id == album.Id ||
                                    alb.AlbumName == album.AlbumName);
        if (matchedAlbum == null)
            AlbumData.Current.Add(album);
        else
            matchedAlbum = album;

        // return a string to show that the value got here
        var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
        resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
                                         Encoding.UTF8, "text/plain");
        return resp;
    }

The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC, you can check the model by looking at ModelState.IsValid. If it's not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the result ApiMessageError object. When a binding error occurs, you'll want to return an HTTP error response, and it's best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client along with my Conflict HTTP Status Code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you've added the album using the Post operation, you can hit the /albums/ URL to see that the new album was added.
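The ApiMessageError class itself isn't shown in this excerpt. Based purely on how it's used in Listings 5 and 6 (a message plus a list of detailed errors), a minimal sketch could look like the following – a hypothetical reconstruction, not the article's actual definition:

    using System.Collections.Generic;

    // Hypothetical reconstruction that matches the usage in Listings 5 and 6
    public class ApiMessageError
    {
        public string message { get; set; }
        public List<string> errors { get; set; }

        public ApiMessageError()
        {
            errors = new List<string>();
        }

        public ApiMessageError(string errorMessage) : this()
        {
            message = errorMessage;
        }
    }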
The jQuery code to call the POST operation from the client is shown in Listing 7.

    var id = new Date().getTime().toString();
    var album = {
        "Id": id,
        "AlbumName": "Power Age",
        "Artist": "AC/DC",
        "YearReleased": 1977,
        "Entered": "2002-03-11T18:24:43.5580794-10:00",
        "AlbumImageUrl": "http://ecx.images-amazon.com/images/…",
        "AmazonUrl": "http://www.amazon.com/…",
        "Songs": [
            { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12 },
            { "SongName": "Downpayment Blues", "SongLength": 4.22 },
            { "SongName": "Riff Raff", "SongLength": 2.42 }
        ]
    }

    $.ajax({
        url: "albums/",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify(album),
        processData: false,
        beforeSend: function (xhr) {
            // not required since JSON is default output
            xhr.setRequestHeader("Accept", "application/json");
        },
        success: function (result) {
            // reload list of albums
            page.loadAlbums();
        },
        error: function (xhr, status, p3, p4) {
            var err = "Error";
            if (xhr.responseText && xhr.responseText[0] == "{")
                err = JSON.parse(xhr.responseText).message;
            alert(err);
        }
    });

The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server as a POST. The data is turned into JSON and the content type set to application/json so that the server knows what to convert when deserializing into the Album instance. The jQuery code hooks up success and failure events. Success receives the result data – here it simply reloads the list of albums. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by deserializing it and accessing the .message property. REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:

    public HttpResponseMessage PutAlbum(Album album)
    {
        return PostAlbum(album);
    }

To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT.

Model Binding with UrlEncoded POST Variables

In the example in Listing 7, I used JSON objects to post a serialized object to a server method that accepted a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to send plain UrlEncoded POST values to the server. Web API supports model binding that works similarly to (but not quite the same as) MVC's model binding, where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:

    $.post("albums/", { AlbumName: "Dirty Deeds", YearReleased: 1976 … }, albumPostCallback);

Although the code looks very similar to the client code we used before when passing JSON, here the data passed is UrlEncoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the server can handle either the JSON object or the UrlEncoded POST values.

Dynamic Access to POST Data

There are also a few options available to access POST data dynamically, if you know what type of data you're dealing with. If you have UrlEncoded POST values, you can access them dynamically using a FormDataCollection:

[HttpPost]
public string PostAlbum(FormDataCollection form)
{
    return string.Format("{0} - released {1}",
                         form.Get("AlbumName"),
                         form.Get("YearReleased"));
}

The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. However, as a general rule, while ASP.NET's functionality is always available when Web API is hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET.

If your client is sending JSON to your server, and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:

[HttpPost]
public string PostAlbum(JObject jsonData)
{
    dynamic json = jsonData;
    JObject jalbum = json.Album;
    JObject juser = json.User;
    string token = json.UserToken;

    var album = jalbum.ToObject<Album>();
    var user = juser.ToObject<User>();

    return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
}

There are quite a few options available for receiving data with Web API, which gives you more choices of the right tool for the job.

Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content; the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here:

Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API

Handling Delete Operations

Finally, to round out the server API code of the album example we've been discussing, here's the DELETE verb controller method that allows removal of an album by its title:

public HttpResponseMessage DeleteAlbum(string title)
{
    var matchedAlbum = AlbumData.Current
        .Where(alb => alb.AlbumName == title)
        .SingleOrDefault();

    if (matchedAlbum == null)
        return new HttpResponseMessage(HttpStatusCode.NotFound);

    AlbumData.Current.Remove(matchedAlbum);

    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

To call this action method using jQuery, you can use:

$(".removeimage").live("click", function () {
    var $el = $(this).parent(".album");
    var txt = $el.find("a").text();
    $.ajax({
        url: "albums/" + encodeURIComponent(txt),
        type: "DELETE",
        success: function (result) {
            $el.fadeOut().remove();
        },
        error: jqError
    });
});

Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via a route value or the query string.
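One more hedged aside on receiving data before moving on to routing, and one this excerpt doesn't cover: if you ever want the POST body completely untouched by model binding, you can read the raw request content yourself from the HttpRequestMessage that every ApiController exposes. PostRawBuffer is a hypothetical method name, but the Request.Content.ReadAsStringAsync() call is standard Web API:

// requires: using System.Threading.Tasks;
[HttpPost]
public Task<string> PostRawBuffer()
{
    // No parameters are declared, so model binding never consumes the body;
    // read it back exactly as the client posted it.
    return Request.Content.ReadAsStringAsync();
}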
Routing Conflicts

In all the requests shown so far, with the exception of the AlbumArt image example, I used the HTTP verb routing that I set up in Listing 1. HTTP verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP verb routing alone. You saw one example that didn't really fit: the return of an image, where I created a custom route, albums/{title}/image, which required a second controller and a custom route to work. HTTP verb routing to a controller does not mix with custom or action routing to the same controller, because of the limited mapping of HTTP verbs that verb routing imposes.

To understand some of the problems with verb routing, let's look at another example. Let's say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController, which is accessed via HTTP verb routing:

[HttpGet]
public IQueryable<Album> GetSortableAlbums()
{
    var albums = AlbumData.Current;

    // AsQueryable() generally should be applied only to real queryable results (EF etc.).
    // It's done here because we're running with a static list, but otherwise it might be slow.
    return albums.AsQueryable();
}

If you compile this code and try to access the /albums/ link, you get an error: Multiple Actions were found that match the request.

HTTP verb routing only allows access to one GET operation per parameter/route-value match. If more than one method exists with the same parameter signature, it doesn't work. As I mentioned earlier for the image display, the only solution to get this method to work is to move it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I should rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL: it looks less like a method and more like a noun. I can then create a new route that handles direct-action mapping:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

Because I am explicitly adding a route segment, rpc, into the route template, I can now reference explicit methods in the Web API controller using URLs like this:

http://localhost/AspNetWebApi/albums/rpc/SortableAlbums

Error Handling

I've already done some minimal error handling in the examples. For example, in Listing 6, I detected some known error scenarios, like model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? Say you have a controller method like this:

[HttpGet]
public void ThrowException()
{
    throw new UnauthorizedAccessException("Unauthorized Access Sucka");
}

You can call it with this:

http://localhost/AspNetWebApi/albums/rpc/ThrowException

The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:

GlobalConfiguration
    .Configuration
    .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never;
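As a brief aside of my own (not from the article): IncludeErrorDetailPolicy is an enum in System.Web.Http that also offers Always and LocalOnly values in addition to Never and Default, so a common pattern is to vary the setting by build configuration. A sketch, assuming the setting is applied once at application startup:

// Typically placed in Application_Start (Global.asax) or a WebApiConfig helper.
#if DEBUG
    // Show full exception detail to any caller while developing.
    GlobalConfiguration.Configuration.IncludeErrorDetailPolicy =
        IncludeErrorDetailPolicy.Always;
#else
    // Never leak exception detail in production responses.
    GlobalConfiguration.Configuration.IncludeErrorDetailPolicy =
        IncludeErrorDetailPolicy.Never;
#endif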
If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the controller action:

[HttpGet]
public void ThrowError()
{
    var resp = Request.CreateResponse<ApiMessageError>(
        HttpStatusCode.BadRequest,
        new ApiMessageError("Your code stinks!"));
    throw new HttpResponseException(resp);
}

Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses the installed exception filters and instead just outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format, XML or JSON. You can pass any content to the HttpResponseMessage, which means you can create your own exception objects and consistently return error messages to the client. Here's a small helper method on the controller that you might use to send exception info back to the client consistently:

private void ThrowSafeException(string message,
    HttpStatusCode statusCode = HttpStatusCode.BadRequest)
{
    var errResponse = Request.CreateResponse<ApiMessageError>(
        statusCode,
        new ApiMessageError() { message = message });

    throw new HttpResponseException(errResponse);
}

You can then use it to output any captured errors from code:

[HttpGet]
public void ThrowErrorSafe()
{
    try
    {
        List<string> list = null;
        list.Add("Rick");
    }
    catch (Exception ex)
    {
        ThrowSafeException(ex.Message);
    }
}

Exception Filters

Another, more global solution is to create an exception filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic exception filter implementation.

public class UnhandledExceptionFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode status = HttpStatusCode.InternalServerError;

        var exType = context.Exception.GetType();

        if (exType == typeof(UnauthorizedAccessException))
            status = HttpStatusCode.Unauthorized;
        else if (exType == typeof(ArgumentException))
            status = HttpStatusCode.NotFound;

        var apiError = new ApiMessageError() { message = context.Exception.Message };

        // create a new response and attach our ApiError object,
        // which now gets returned on ANY exception result
        var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError);
        context.Response = errorResponse;

        base.OnException(context);
    }
}

Exception filter attributes can be assigned to an ApiController class like this:

[UnhandledExceptionFilter]
public class AlbumRpcApiController : ApiController

Or you can globally assign one to all controllers by adding it to the HTTP configuration's Filters collection:

GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter());

The latter is a great way to get global error trapping, so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have the filter create a response in the output format the client expects. For example, on failure an AJAX application can expect to see a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up. You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions; you often don't want to display error information from unknown exceptions, as they may contain sensitive system information or info that's not generally useful to users of your application or site.
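To make that custom-exception idea concrete, here's a minimal hypothetical sketch; the ApiException name and the filter check in the comment are my additions, not code from the article:

// requires: using System; using System.Net;
// A hypothetical application exception whose message is considered safe to show.
public class ApiException : Exception
{
    public HttpStatusCode StatusCode { get; private set; }

    public ApiException(string message,
                        HttpStatusCode statusCode = HttpStatusCode.BadRequest)
        : base(message)
    {
        StatusCode = statusCode;
    }
}

Inside a filter like Listing 8's OnException you could then branch on it, along these lines:

// var apiEx = context.Exception as ApiException;
// var message = apiEx != null ? apiEx.Message : "An internal error occurred.";

That way, known application errors pass their message through to the client, while anything unexpected gets a generic message instead.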
This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one way you can plug into the Web API request flow to modify output. Many more hooks exist, and I'll take a closer look at extensibility in Part 2 of this article in the future.

Summary

Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The keys to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing HTTP semantics and making them easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think that Microsoft has hit a home run with Web API.

Related Resources

Where does ASP.NET Web API fit?
Sample Source Code on GitHub
Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API
Creating a JSONP Formatter for ASP.NET Web API
Removing the XML Formatter from ASP.NET Web API Applications

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web API


  • 26 Days: Countdown to Oracle OpenWorld 2012

    - by Michael Snow
Welcome to our countdown to Oracle OpenWorld! Oracle OpenWorld 2012 is just around the corner. In less than 26 days, San Francisco will be invaded by an expected 50,000 people from all over the world. Here on the Oracle WebCenter team, we've all been working to help make the experience a great one for all our WebCenter customers. For a sneak peek, we'll be spending this week giving you a teaser of what to look forward to if you are joining us in San Francisco from September 30th through October 4th.

We have Oracle WebCenter sessions covering all topics imaginable. Take a look and use the tools we provide to build out your schedule in advance and reserve your seats in your favorite sessions. That gives you plenty of time to plan for your week with us in San Francisco. If, unfortunately, your boss denied your request to attend, there are still some ways that you can join in the experience virtually and on demand.

This year we are expanding even more, up north of Market Street, and will be taking over Union Square as well. Check out this map of San Francisco to get a sense of how much of a footprint Oracle OpenWorld has grown to this year. With so much to see and so many sessions to learn from, it's no wonder that people get excited. Add to that a good mix of fun, and all of the possible WebCenter sessions you could attend, and you won't want to sleep at all if you're going to take full advantage of such an opportunity. We'll also have our annual WebCenter Customer Appreciation reception; stay tuned this week for more info on registration to make sure you'll be able to join us.

If you've been following the America's Cup at all and believe in EXTREME PERFORMANCE, you'll definitely want to take a look at this video from last year's OpenWorld keynote.

Important OpenWorld Links:
Attendee / Presenters Toolkit
Oracle Schedule Builder
WebCenter Sessions (listed in the catalog under Fusion Middleware as "Portals, Sites, Content, and Collaboration")
Oracle Music Festival - AMAZING line up!!
Oracle Customer Appreciation Night - LOOK HERE!!
Oracle OpenWorld LIVE On-Demand

Here are all the WebCenter sessions, broken down by day, for your viewing pleasure.

Monday, October 1st

CON8885 - Simplify CRM Engagement with Contextual Collaboration
Are your sales teams disconnected and disengaged? Do you want a tool for easily connecting expertise across your organization and providing visibility into the complete sales process? Do you want a way to enhance and retain organization knowledge? Oracle Social Network is the answer. Attend this session to learn how to make CRM easy, effective, and efficient for use across virtual sales teams. Also learn how Oracle Social Network can drive sales force collaboration with natural conversations throughout the sales cycle, promote sales team productivity through purposeful social networking without the noise, and build cross-team knowledge by integrating conversations with CRM and other business applications.

CON8268 - Oracle WebCenter Strategy: Engaging Your Customers. Empowering Your Business
Oracle WebCenter is a user engagement platform for social business, connecting people and information. Attend this session to learn about the Oracle WebCenter strategy, and understand where Oracle is taking the platform to help companies engage customers, empower employees, and enable partners. Business success starts with ensuring that everyone is engaged with the right people and the right information and can access what they need through the channel of their choice—Web, mobile, or social. Are you giving customers, employees, and partners the best-possible experience? Come learn how you can!

HOL10208 - Add Social Capabilities to Your Enterprise Applications
Oracle Social Network enables you to add real-time collaboration capabilities into your enterprise applications, so that conversations can happen directly within your business systems. In this hands-on lab, you will try out the Oracle Social Network product to collaborate with other attendees, using real-time conversations with document sharing capabilities. Next you will embed social capabilities into a sample Web-based enterprise application, using embedded UI components. Experts will also write simple REST-based integrations, using the Oracle Social Network API to programmatically create social interactions.

CON8893 - Improve Employee Productivity with Intuitive and Social Work Environments
Social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. Forward-thinking organizations today need technologies and infrastructures to help them advance to the next level and integrate social activities with business applications to deliver a user experience that simplifies business processes and enterprise application engagement. Attend this session to hear from an innovative Oracle Social Network customer and learn how you can improve productivity with intuitive and social work environments and empower your employees with innovative social tools to enable contextual access to content and dynamic personalization of solutions.

CON8270 - Oracle WebCenter Content Strategy and Vision
Oracle WebCenter provides a strategic content infrastructure for managing documents, images, e-mails, and rich media files. With a single repository, organizations can address any content use case, such as accounts payable, HR onboarding, document management, compliance, records management, digital asset management, or Website management. In this session, learn about future plans for how Oracle WebCenter will address new use cases as well as new integrations with Oracle Fusion Middleware and Oracle Applications, leveraging your investments by making your users more productive and error-free.

CON8269 - Oracle WebCenter Sites Strategy and Vision
Oracle's Web experience management solution, Oracle WebCenter Sites, enables organizations to use the online channel to drive customer acquisition and brand loyalty. It helps marketers and business users easily create and manage contextually relevant, social, interactive online experiences across multiple channels on a global scale. In this session, learn about future plans for how Oracle WebCenter Sites will provide you with the tools, capabilities, and integrations you need in order to continue to address your customers' evolving requirements for engaging online experiences and keep moving your business forward.

CON8896 - Living with SharePoint
SharePoint is a popular platform, but it's not always the best fit for Oracle customers. In this session, you'll discover the technical and nontechnical limitations and pitfalls of SharePoint and learn about Oracle alternatives for collaboration, portals, enterprise and Web content management, social computing, and application integration. The presentation shows you how to integrate with SharePoint when business or IT requirements dictate and covers cloud-based (Office 365) and on-premises versions of SharePoint. Presented by a former Microsoft director of SharePoint product management and backed by independent customer research, this session will prepare you to answer the question "Why don't we just use SharePoint for that?" the next time it comes up in your organization.

CON7843 - Content-Enabling Enterprise Processes with Oracle WebCenter
Organizations today continually strive to automate business processes, reduce costs, and improve efficiency. Many business processes are content-intensive and unstructured, requiring ad hoc collaboration, and distributed in nature, requiring many approvals and generating huge volumes of paper. In this session, learn how Oracle and SYSTIME have partnered to help a customer content-enable its enterprise with Oracle WebCenter Content and Oracle WebCenter Imaging 11g and integrate them with Oracle Applications.

CON6114 - Tape Robotics' Newest Superhero: Now Fueled by Oracle Software
For small, midsize, and rapidly growing businesses that want the most energy-efficient, scalable storage infrastructure to meet their rapidly growing data demands, Oracle's most recent addition to its award-winning tape portfolio leverages several pieces of Oracle software. With Oracle Linux, Oracle WebLogic, and Oracle Fusion Middleware tools, the library achieves a higher level of usability than previous products while offering customers a familiar interface for management, plus ease of use. This session examines the competitive advantages of the tape library and how Oracle software raises customer satisfaction. Learn how the combination of Oracle engineered systems, Oracle Secure Backup, and Oracle's StorageTek tape libraries provides end-to-end coverage of your data.

CON9437 - Mobile Access Management
With more than five billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session focuses on how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access.

CON7815 - Customer Experience Online in Cloud: Oracle WebCenter Sites, Oracle ATG Apps, Oracle Exalogic
Oracle WebCenter Sites and Oracle's ATG product line together can provide a compelling marketing and e-commerce experience. When you couple them with the extreme performance of Oracle Exalogic, you'll see unmatched scalability that provides you with a true cloud-based solution. In this session, you'll learn how running Oracle WebCenter Sites and ATG applications on Oracle Exalogic delivers both a private and a public cloud experience. Find out what it takes to get these systems working together and delivering engaging Web experiences. Even if you aren't considering Oracle Exalogic today, the rich Web experience of Oracle WebCenter, paired with the depth of the ATG product line, can provide your business full support, from merchandising through sale completion.

CON8271 - Oracle WebCenter Portal Strategy and Vision
To innovate and keep a competitive edge, organizations need to leverage the power of agile and responsive Web applications. Oracle WebCenter Portal enables you to do just that, by delivering intuitive user experiences for enterprise applications to drive innovation with composite applications and mashups. Attend this session to learn firsthand from customers how Oracle WebCenter Portal extends the value of existing enterprise applications, business processes, and content; delivers a superior business user experience; and maximizes limited IT resources.

CON8880 - The Connected Customer Experience Begins with the Online Channel
There's a lot of talk these days about how to connect the customer journey across various touchpoints—from Websites and e-commerce to call centers and in-store—to provide experiences that are more relevant and engaging and ultimately gain competitive edge. Doing it all at once isn't a realistic objective, so where do you start? Come to this session, and hear about three steps you can take that can help you begin your journey toward delivering the connected customer experience. You'll hear how Oracle now has an integrated digital marketing platform for your corporate Website, your e-commerce site, your self-service portal, and your marketing and loyalty campaigns, and you'll learn what you can do today to begin executing on your customer experience initiatives.

GEN11451 - General Session: Building Mobile Applications with Oracle Cloud
With the prevalence of smart mobile devices, companies are facing an increased demand to provide access to data and applications from new channels. However, developing applications for mobile devices poses some unique challenges. Come to this session to learn how Oracle addresses these challenges, offering a simpler way to develop and deploy cross-device mobile applications. See how Oracle Cloud enables you to access applications, data, and services from mobile channels in an easier way.

CON8272 - Oracle Social Network Strategy and Vision
One key way of increasing employee productivity is by bringing people, processes, and information together—providing new social capabilities to enable business users to quickly correspond and collaborate on business activities. Oracle WebCenter provides a user engagement platform with social and collaborative technologies to empower business users to focus on their key business processes, applications, and content in the context of their role and process. Attend this session to hear how the latest social capabilities in Oracle Social Network are enabling organizations to transform themselves into social businesses.

Tuesday, October 2nd

HOL10194 - Enterprise Content Management Simplified: Oracle WebCenter Content's Next-Generation UI
Regardless of the nature of your business, unstructured content underpins many of its daily functions. Whether you are working with traditional presentations, spreadsheets, or text documents—or even with digital assets such as images and multimedia files—your content needs to be accessible and manageable in convenient and intuitive ways to make working with the content easier. Additionally, you need the ability to easily share documents with coworkers to facilitate a collaborative working environment. Come to this session to see how Oracle WebCenter Content's next-generation user interface helps modern knowledge workers easily manage personal and enterprise documents in a collaborative environment.

CON8877 - Develop a Mobile Strategy with Oracle WebCenter: Engage Customers, Employees, and Partners
Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables users to have information available at their fingertips, enabling them to take action the moment they make a decision, interact in the moment of convenience, and take advantage of new service offerings in their preferred channels. All your employees have your mobile applications in their pocket; now what are you going to do? It is a critical step for companies to think through what their employees, customers, and partners really need on their devices. Attend this session to see how Oracle WebCenter enables you to better engage your customers, employees, and partners by providing a unified experience across multiple channels.

CON9447 - Enabling Access for Hundreds of Millions of Users
How do you grow your business by identifying, authenticating, authorizing, and federating users on the Web, leveraging social identity and the open source OAuth protocol? How do you scale your access management solution to support hundreds of millions of users? With social identity support out of the box, Oracle's access management solution is also benchmarked for 250-million-user deployment according to real-world customer scenarios. In this session, you will learn about the social identity capability and the 250-million-user benchmark testing of Oracle Access Manager and Oracle Adaptive Access Manager running on Oracle Exalogic and Oracle Exadata.

HOL10207 - Build an Intranet Portal with Oracle WebCenter
In this hands-on lab, you'll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you'll manage site resources such as page styles, templates, and navigation. You'll edit content stored in Oracle WebCenter Content directly from your portal. You'll also experience the latest features that promote collaboration, social networking, and personal productivity.

CON2906 - Get Proactive: Best Practices for Maintaining Oracle Fusion Middleware
You chose Oracle Fusion Middleware products to help your organization deliver superior business results. Now learn how to take full advantage of your software with all the great tools, resources, and product updates you're entitled to through Oracle Support. In this session, Oracle product experts provide proven best practices to help you work more efficiently, plan and prepare for upgrades and patching more effectively, and manage risk. Topics include configuration management tools, remote diagnostics, My Oracle Support Community, and My Oracle Support Lifecycle Advisors. New users and Oracle Fusion Middleware experts alike are guaranteed to leave with fresh ideas and practical, easy-to-implement next steps.

CON8878 - Oracle WebCenter's Cloud Strategy: From Social and Platform Services to Mashups
Cloud computing represents a paradigm shift in how we build applications, automate processes, collaborate, and share, and in how we secure our enterprise. Additionally, as you adopt cloud-based services in your organization, it's likely that you will still have many critical on-premises applications running. With these mixed environments, multiple user interfaces, different security, and multiple datasources and content sources, how do you start evolving your strategy to account for these challenges? Oracle WebCenter offers a complete array of technologies enabling you to solve these challenges and prepare you for the cloud. Attend this session to learn how you can use Oracle WebCenter in the cloud as well as create on-premises and cloud application mash-ups.

CON8901 - Optimize Enterprise Business Processes with Oracle WebCenter and Oracle BPM
Do you have business processes that span multiple applications? Are you grappling with how to have visibility across these business processes; how to manage content that is associated with these processes; and, most importantly, how to model and optimize these business processes? Attend this session to hear how Oracle WebCenter and Oracle Business Process Management provide a unique set of integrated solutions to provide a composite application dashboard across these business processes and offer a solution for content-centric business processes.

CON8883 - Deliver Engaging Interfaces to Oracle Applications with Oracle WebCenter
Critical business processes live within enterprise applications, and application users need to manage and execute these processes as effectively as possible. Oracle provides a comprehensive user engagement platform to increase user productivity and optimize overall processes within Oracle Applications—Oracle E-Business Suite and Oracle's Siebel, PeopleSoft, and JD Edwards product families—and third-party applications. Attend this session to learn how you can integrate these applications with Oracle WebCenter to deliver composite application dashboards to your end users—whether they are your customers, partners, or employees—for enhanced usability and Web 2.0–enabled enterprise portals.

Wednesday, October 3rd

CON8895 - Future-Ready Intranets: How Aramark Re-engineered the Application Landscape
There are essential techniques and technologies you can use to deliver employee portals that garner higher productivity, improve business efficiency, and increase user engagement. Attend this session to learn how you can leverage Oracle WebCenter Portal as a user engagement platform for bringing together business process management, enterprise content management, and business intelligence into a highly relevant and integrated experience. Hear how Aramark has leveraged Oracle WebCenter Portal and Oracle WebCenter Content to deliver a unified workspace providing simpler navigation and processing, consolidation of tools, easy access to information, integrated search, and single sign-on.

CON8886 - Content Consolidation: Save Money, Increase Efficiency, and Eliminate Silos
Organizations are looking for ways to save money and be more efficient. With content in many different places, it's difficult to know where to look for a document and whether the document is the most current version. With Oracle WebCenter, content can be consolidated into one best-of-breed repository that is secure, scalable, and integrated with your business processes and applications. Users can find the content they need, where they need it, and ensure that it is the right content. This session covers content challenges that affect your business; content consolidation that can lead to savings in storage and administration costs and can lower risks; and how companies are realizing savings.

CON8911 - Improve Online Experiences for Customers and Partners with Self-Service Portals
Are you able to provide your customers and partners an easy-to-use online self-service experience? Are you processing high-volume transactions and struggling with call center bottlenecks or back-end systems that won't integrate, causing order delays and customer frustration? Are you looking to target content such as product and service offerings to your end users? This session shares approaches to providing targeted delivery as well as strategies and best practices for transforming your business by providing an intuitive user experience for your customers and partners.

CON6156 - Top 10 Ways to Integrate Oracle WebCenter Content
This session covers 10 common ways to integrate Oracle WebCenter Content with other enterprise applications and middleware. It discusses out-of-the-box modules that provide expanded features in Oracle WebCenter Content—such as enterprise search, SOA, and BPEL—as well as developer tools you can use to create custom integrations. The presentation also gives guidance on which integration option may work best in your environment.

HOL10207 - Build an Intranet Portal with Oracle WebCenter
In this hands-on lab, you'll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you'll manage site resources such as page styles, templates, and navigation. You'll edit content stored in Oracle WebCenter Content directly from your portal. You'll also experience the latest features that promote collaboration, social networking, and personal productivity.

CON7817 - Migration to Oracle WebCenter Imaging 11g
Customers today continually strive to automate business processes, reduce costs, and improve efficiency. The accounts payable process—which is often distributed in nature, requires many approvals, and generates huge volumes of paper invoices—is automated by many customers. In this session, learn how Oracle and SYSTIME have partnered to help a customer migrate its existing Oracle Imaging and Process Management Release 7.6 to the latest Oracle WebCenter Imaging 11g and integrate it with Oracle's JD Edwards family of products.

CON8910 - How to Engage Customers Across Web, Mobile, and Social Channels
Whether on desktops at the office, on tablets at home, or on mobile phones when on the go, today's customers are always connected. To engage today's customers, you need to make the online customer experience connected and consistent across a host of devices and multiple channels, including Web, mobile, and social networks. Managing this multichannel environment can result in lots of headaches without the right tools. Attend this session to learn how Oracle WebCenter Sites solves the challenge of multichannel customer engagement.

HOL10206 - Oracle WebCenter Sites 11g: Transforming the Content Contributor Experience
Oracle WebCenter Sites 11g makes it easy for marketers and business users to contribute to and manage Websites with the new visual, contextual, and intuitive Web authoring interface. In this hands-on lab, you will create and manage content for a sports-themed Website, using many of the new and enhanced features of the 11g release.

CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion
Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today's market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners? Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal.

CON9625 - Taking Control of Oracle WebCenter Security
Organizations are increasingly looking to extend their Oracle WebCenter portal for social business, to serve external users and provide seamless access to the right information. In particular, many organizations are extending Oracle WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. This session focuses on how customers are leveraging, securing, and providing access control to Oracle WebCenter portal and mobile solutions. You will learn best practices and hear real-world examples of how to provide flexible and granular access control for Oracle WebCenter deployments, using Oracle Platform Security Services and Oracle Access Management Suite product offerings.

CON8891 - Extending Social into Enterprise Applications and Business Processes
Oracle Social Network is an extensible social platform that enables contextual collaboration within enterprise applications and business processes, providing relevant data from across various enterprise systems in one place. Attend this session to see how an Oracle Social Network customer is integrating multiple applications—such as CRM, HCM, and business processes—into Oracle Social Network and Oracle WebCenter to enable individuals and teams to solve complex cross-organizational business problems more effectively by utilizing the social enterprise.

Thursday, October 4th

CON8899 - Becoming a Social Business: Stories from the Front Lines of Change
What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. It takes a thought-provoking look at becoming a social business from the inside.

CON6851 - Oracle WebCenter and Oracle Business Intelligence Enterprise Edition to Create Vendor Portals
Large manufacturers of grocery items routinely find themselves depending on the inventory management expertise of their wholesalers and distributors. Inventory costs can be managed more efficiently by the manufacturers if they have better insight into the inventory levels of items carried by their distributors. This creates a unique opportunity for distributors and wholesalers to leverage this knowledge into a revenue-generating subscription service. Oracle Business Intelligence Enterprise Edition and Oracle WebCenter Portal play a key part in enabling creation of business-managed business intelligence portals for vendors. This session discusses one customer that implemented this by leveraging Oracle WebCenter and Oracle Business Intelligence Enterprise Edition.

CON8879 - Provide a Personalized and Consistent Customer Experience in Your Websites and Portals
Your customers engage with your company online in different ways throughout their journey—from prospecting by acquiring information on your corporate Website to transacting through self-service applications on your customer portal—and then the cycle begins again when they look for new products and services. Ensuring that the customer experience is consistent and personalized across online properties—from branding and content to interactions and transactions—can be a daunting task. Oracle WebCenter enables you to speak and interact with your customers with one voice across your Websites and portals by providing an integrated platform for delivery of self-service and engagement that unifies and personalizes the online experience. Learn more in this session.

CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
Ten years ago, people were predicting that by this time in history, we'd be some kind of utopian paperless society. As we all know, we're not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you'll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey.

CON8908 - Oracle WebCenter Portal: Creating and Using Content Presenter Templates
Oracle WebCenter Portal applications use task flows to display and integrate content stored in the Oracle WebCenter Content server. Among the most flexible task flows is Content Presenter, which renders various types of content on an Oracle WebCenter Portal page. Although Oracle WebCenter Portal comes with a set of predefined Content Presenter templates, developers can create their own templates for specific rendering needs. This session shows the lifecycle of developing Content Presenter task flows, including how to create, package, import, modify at runtime, and use such templates. In addition to simple examples with Oracle Application Development Framework (Oracle ADF) UI elements to render the content, it shows how to use other UI technologies, CSS files, and JavaScript libraries.

CON8897 - Using Web Experience Management to Drive Online Marketing Success
Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from online marketers how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences.

CON8892 - Oracle's Journey to Social Business
Social business is a revolution, one that is causing rapidly accelerating change in how companies and customers engage with one another and how employees work together. Oracle's goal in becoming a social business is to create a socially connected organization in which working collaboratively across geographical locations, lines of business, and management chains is second nature, enabling innovative solutions to business challenges. We can achieve this by connecting the right people, finding the right content, communicating with the right people, collaborating at the right time, and building the right communities in the right context—all ready in the CLOUD. Attend this session to see how Oracle is transforming itself into a social business.

If you've read all the way to the end here, we are REALLY looking forward to seeing you in San Francisco.

