Search Results

Search found 7110 results on 285 pages for 'community faq proposed'.


  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10-minute speech. He replied "Two weeks". He was then asked how long it would take for a 1-hour speech. "One week", he replied. A 2-hour speech? "I'm ready right now," he replied. Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.) What’s the point of that story? Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time. But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points. I have attended seven of the last eight North American Summit events, and every one of them has been fantastic. The speakers are great, the material is timely and relevant, and the networking opportunities are awesome. And every year, there is one little thing that just bugs me…speakers going over their allotted time. Why does it bother me so? Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another. If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical. All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center. It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks. And frankly, I think it is just rude. Yes, the speakers are the main attraction; after all, they are bringing the content that the rest of us are paying to learn. But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him. Speakers know when they submit their abstract, long before the conference, how much time they will have. It has been the same pattern at the Summit for at least the last eight years. Program Sessions are 75 minutes long. Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase). So there really is no excuse. It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes. In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75. As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly. Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions.
Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.”  To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.”  Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight.  I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session). So here is my recommendation…my plea, even.  TRIM THE FAT!  Now.  Before it’s too late.  Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic.  Delete a few slides.  Test your demos and have them already scripted rather than typing them during your talk.  It is better to cut out too much and end up with plenty of time at the end for Questions & Answers.  And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands.  But I don’t think you will.  And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets.  And they will thank you for that.

    Read the article

  • Is your dream an international experience?

    - by Maria Sandu
    Studying in Poland, having two summer jobs in England, doing one internship in India, working in Thailand for half a year and now working in Prague. Does it sound like an adventure? Well, it is, and I will tell you how I came to have this international experience. Dzien Dobry! My name is Wojciech Jurojc, I am Polish and I am currently a Business Development Consultant within Oracle, based in Prague. I joined Oracle on the 1st of August 2011. I graduated in 2010 and obtained two Master's degrees, in Political Science and Economics. I would like to tell you more about my past and how I joined Oracle. In 2005 I began studying at the Faculty of Political Sciences at Gdansk University. In 2008, I obtained a Bachelor's degree. During these three years I had the opportunity to go to England twice, where I worked as a bartender, first in Blackpool and then in Manchester. This allowed me to improve my language skills and become more confident. In the meantime, I joined the international student organization AIESEC, where I organized conferences and conducted student projects. I also met a lot of interesting people from around the world. After graduation in 2008, I was able to get an internship with a big company in Poland. I worked there as an intern in the Purchase Department. That was my first adventure within a corporate environment. I learnt a lot about purchasing processes and negotiations. In September 2008, I started studying for two Master's degrees: Political Science and Economics. It was very difficult, but it was not impossible. Over the next two years of studying I was able to go on a three-month internship in India, where I worked as a Marketing Assistant in an NGO. I was travelling around northern India and gave presentations to the academic community about green energy and environmental projects. I had the opportunity to visit Nepal and walk in the Himalayas. That was a huge experience as well as a culture shock. It taught me how to deal with many problems and to appreciate what I have. At the end of 2009 I was working as a Marketing Assistant for a leasing company, where I learnt useful sales knowledge and improved my objection handling skills. In July 2010, I graduated with a double Master's and found a job in Thailand as a Sales Representative in an IT company. I worked in Thailand until the end of January 2011. Besides that, I was working in an international company with interesting people and I had the opportunity to travel around Thailand and visit Cambodia. After this adventure I started looking for jobs in Europe where I could further develop my sales skills. I found Oracle and I don’t regret the decision I made. I am currently working in Prague in an international Hardware team and I know that is not the end of my adventures.
    At this moment, I am working in a team of 12 members. Ten of them are based in Prague and two are based in Russia. We come from different countries: the Czech Republic, Russia, Ukraine, Turkey, Slovakia and Kazakhstan. I am working on the Polish market, cooperating with our Hardware customers and partners. What do I enjoy the most about my job? I enjoy every challenge that I face in my daily activities, as there are always new experiences for me and new things that I learn. As part of Oracle, I gain international exposure and therefore more career opportunities to explore. I have planned my next step for the career path I dream of and I am currently working on it. I recommend you check our Career Page if you’re looking for an international career. If you want to find out more about our job opportunities, follow us on https://campus.oracle.com.

    Read the article

  • What is Happening vs. What is Interesting

    - by Geertjan
    Devoxx 2011 was yet another confirmation that all development everywhere is either on the web or on mobile phones. Whether you looked at the conference schedule or attended sessions or talked to speakers at any point at all, it was very clear that no development whatsoever is done anymore on the desktop. In fact, that's something Tim Bray himself told me to my face at the speakers dinner. No new developments of any kind are happening on the desktop. Everyone who is currently on the desktop is working overtime to move all of their applications to the web. They're probably also creating a small subset of their application on an Android tablet, with an even smaller subset on their Android phone. Then you scratch that monolithic surface and find some interesting results. Without naming any names, I asked one of these prominent "ah, forget about the desktop" people at the Devoxx speakers dinner (and I have a witness): "Yes, the desktop is dead, but what about air traffic control, stock trading, oil analysis, risk management applications? In fact, what about any back office application that needs to be usable across all operating systems? Here there is no concern whatsoever with 100% accessibility, which is, after all, the only thing that the web has over the desktop (except when there's a network failure, of course, or when you find yourself in the three-quarters of the world where there are bandwidth problems). There are thousands of hidden applications out there that have processing requirements, security requirements, and the requirement that they'll be available even when the network is down or even completely unavailable. Isn't that a valid use case, and aren't there thousands of applications that fall into this so-called niche category? Are you not, in fact, confusing consumer applications, which are increasingly web-based and mobile-based, with high-end corporate applications, which typically need to do massive processing, of one kind or another, for which the web and mobile worlds are completely unsuited?" And you will not believe what the reply to the above question was. (Again, I have a witness to this discussion.) But here it is: "Yes. But those applications are not interesting. I do not want to spend any of my time or work in any way on those applications. They are boring." I'm sad to say that the leaders of the software development community, including those in the Java world, either share the above opinion or are led by it. Because they find something that is not new to be boring, they move on to what is interesting and start talking like the supposedly-boring developments don't even exist. (Kind of like a rapper pretending classical music doesn't exist.) Time and time again I find myself giving Java desktop development courses (at companies, i.e., not hobbyists, or students, but companies, i.e., the places where dollars are earned), where developers say to me: "The course you're giving about creating cross-platform, loosely coupled, and highly cohesive applications is really useful to us. Why do we never find information about this topic at conferences? Why can we never attend a session at a conference where the story about pluggable cross-platform Java is told? Why do we get the impression that we are uncool because we're not on the web and because we're not on a mobile phone, while the reason for that is that we're creating $1,000,000 simulation software which has nothing to gain from being on the web or on the mobile phone?" And then I say: "Because nobody knows you exist.
    Because you're not submitting abstracts to conferences about your very interesting use cases. And because conferences tend to focus on what is new, which tends to be web-related (especially HTML 5) or mobile-related (especially Android). Because you're not taking the responsibility on yourself to tell the real stories about the real applications being developed all the time and every day. Because you yourself think your work is boring, while in fact it is fascinating. Because desktop developers are working from 9 to 5 on the desktop, in secure environments, such as banks and defense, where you can't spend time on, nor have the interest in, blogging your latest tip or trick, as opposed to web developers, who tend to spend a lot of time on the web anyway and are therefore much more inclined to create buzz about the kind of work they're doing." So, next time you look at a conference program and wonder why there are no stories about large desktop development projects in the program, here's the short answer: "No one is going to put those items on the program until you start submitting those kinds of sessions. And until you start blogging. Until you start creating the buzz that the web developers have been creating around their work for the past 10 years or so. And, yes, indeed, programmers get the conference they deserve." And what about Tim Bray? Ask yourself: as Google's lead web technology evangelist, how many desktop developers do you think he talks to and, more generally, what is his frame of reference and what, clearly, does he consider to be most interesting?

    Read the article

  • Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

    - by Ted McLaughlan
    All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of “applied traction”. By this, I mean the EA activities - whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn’t work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle to this proof is successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project. More and more organizations are now exploring and considering some degree of IT outsourcing, buying and using external services and solutions to deliver their IT and business requirements – vs. building and operating in-house, in their own data centers. The rapid growth and success of “Cloud” services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for “Software as a Service” (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution - are actually, ultimately the most cost-effective approach truly depends on the organization’s business and IT investment strategy. This leads us to Enterprise Architecture, the connectivity between business strategy and investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a “Hybrid IT” solution) is well-served by leveraging EA methods. If an EA framework doesn’t exist, or is simply not mature enough to address complex, integrated IT objectives – a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA. Why is this? For starters, the success of any complex IT integration project - spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership.
Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle’s “Cloud Candidate Selection Tool”. Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The “enterprise”, in this case, becomes something greater than the core organization – it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, distributed users and devices; a whole greater than the sum of its parts. Through facilitated project collaboration, leading to identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, if not actually identified and managed as such. The transition from planning collaboration to actual coordination, where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio, is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice. Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.

    Read the article

  • SOA, Governance, and Drugs

    Why is IT governance important in service-oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder’s responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied to systems, the following benefits are achieved, according to Anne Thomas Manes (2010). Governance makes sure that services conform to standard interface patterns and common data modeling practices, and it promotes the incorporation of existing system functionality by building on top of other available services across a system. Governance defines development standards based on proven design principles and patterns that promote reuse and composition. Governance provides developers a set of proven design principles, standards and practices that promote the reduction in system-based component dependencies. By following these guidelines, individual components will be easier to maintain. For me personally, I am a fan of IT governance, and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect the value of using IT governance. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know for me personally, I would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the most optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working on this particular scenario and I have found that the concept of code reuse and composition is almost nonexistent. Based on this, too much time and money is wasted on redeveloping existing aspects of an application that already exist within the system as a whole. I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach in which the top layer will be defined by enterprise architects who focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer will be defined by solution architects or department managers who further expand on the abstracted guidelines defined by the enterprise architects. This layer will contain further definitions as to when various design patterns, coding standards, and best practices are to be applied based on the context of the solutions that are being developed by the department. The final layer will be defined by the system designer or a solutions architect assigned to a project; they will define what design patterns will be used in a solution and its naming conventions, as well as outline how the system will function based on the best practices defined by the previous layers.
    This layered approach allows IT departments to be flexible in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people: the US government defines rules and regulations in the abstract, and the state governments then take these guidelines and apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules based on how they are interpreted by the local community. To further define my example, the United States government defines that marijuana is illegal. Each individual state has the option to apply this regulation as it wishes: the state of Florida determines that all uses of the drug are illegal, while the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces these rules: a police officer will arrest anyone in the state of Florida for having this drug on them if they walk down the street, but in California a person who has a medical prescription for the drug will not get arrested. REFERENCES: Thomas Manes, Anne. (2010). Understanding SOA Governance. http://www.soamag.com/I40/0610-2.php

    Read the article

  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. /* open a connection to the local HTTP port */ boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: /* not implemented */ Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); /* may throw */ /* ... */ }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); /* ... */ size_t nread = socket->read(&buffer[0], buffer.size()); /* wrong use! */ Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated).
    Here is the original example, using a throwing deleter function: /** A TCP/IP connection. */ class Socket { public: static SharedPtr<Socket> connect(const std::string& address); protected: Socket(const std::string& address); virtual ~Socket() { } private: struct Deleter; /* not implemented */ Socket(const Socket&); Socket& operator=(const Socket&); }; struct Socket::Deleter { void operator()(Socket* socket) { /* Close the connection. If an error occurs, delete the socket and throw an exception. */ delete socket; } }; SharedPtr<Socket> Socket::connect(const std::string& address) { return SharedPtr<Socket>(new Socket(address), Deleter()); } We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown. SharedPtr<Socket> socket = Socket::connect("localhost:80"); /* ... */ socket.reset();
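    To make the proposal concrete, here is a minimal sketch of such a wrapper. It is not the SharedPtr referred to above, only an illustration of the intended behaviour under simplifying assumptions (the name ThrowingDeleterPtr is mine, reference counting is single-threaded, copy assignment is left out): the destructor swallows whatever the deleter throws, while an explicit reset() lets the exception reach the caller.

        #include <functional>
        #include <memory>

        template <typename T>
        class ThrowingDeleterPtr {
        public:
            ThrowingDeleterPtr(T* object, std::function<void(T*)> deleter)
                : object_(object), deleter_(std::move(deleter)),
                  count_(std::make_shared<long>(1)) {}

            ThrowingDeleterPtr(const ThrowingDeleterPtr& other)
                : object_(other.object_), deleter_(other.deleter_), count_(other.count_) {
                ++*count_;
            }

            // The destructor must not throw, so a failing deleter is swallowed here.
            ~ThrowingDeleterPtr() {
                try { release(); } catch (...) { /* last-chance cleanup failed; log if possible */ }
            }

            // Explicit release: a failing deleter propagates to the caller.
            void reset() {
                release();
                object_ = 0;
                count_.reset();
            }

            T* operator->() const { return object_; }

        private:
            void release() {
                if (count_ && --*count_ == 0 && object_) {
                    T* doomed = object_;
                    object_ = 0;      // mark as released before the deleter can throw
                    deleter_(doomed); // may throw, e.g. if closing the socket fails
                }
            }

            /* not implemented */
            ThrowingDeleterPtr& operator=(const ThrowingDeleterPtr&);

            T* object_;
            std::function<void(T*)> deleter_;
            std::shared_ptr<long> count_;
        };

    With a deleter like the Socket::Deleter above that closes the connection and throws on failure, socket.reset() surfaces the error while the destructor stays silent; whether silently dropping that error in the destructor is acceptable is exactly the trade-off this question is asking about.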

    Read the article

  • PeopleSoft @ RECONNECT 14

    - by Marc Weintraub
    Quest’s RECONNECT 14 is just around the corner and will be here before you know it. RECONNECT 14 is Tuesday, July 22 – Thursday, July 24 at the Hyatt Regency O’Hare in Rosemont, IL. Quest’s RECONNECT event is a PeopleSoft-specific deep dive conference for the Quest community. Join Quest and hundreds of other PeopleSoft users for deep-dive education into all things PeopleSoft; from HCM and Financials to Applications Tools and Technology (i.e. PeopleTools) and Procurement (i.e. Supplier Relationship Management). RECONNECT also includes industry specific interest areas like those for Financial Services and Manufacturing and Distribution. This year's event will feature many key players from Oracle’s PeopleSoft team including PeopleSoft Product Strategy leads and PeopleSoft Development leads. Nearly 50 of the more than 175 conference sessions will be led by members of Oracle including pillar-specific roadmap presentations. Create a custom agenda that fits your specific needs and interests. The RECONNECT Advance Program is now available and includes: Who Should Attend?; Keynotes and Super Sessions; Full Listing of Conference Sessions; Ways to Influence Future PeopleSoft Investments; Trainings and Continuing Professional Education (CPE) Offerings; and Onsite User Group Meetings. Don’t wait another moment, register now.

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as: "Display.c", line 65: Function has no return statement : x_io_error_handler "hostx.c", line 341: Function has no return statement : x_io_error_handler from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning so hadn't listed a return value. These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately, that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations to the headers when gcc was being used for the amd64 kernel bringup before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc. To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out as that introduced unreachable code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable for Studio 12.0 and later compilers the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future. Actually getting this integrated into Solaris though took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, actually showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution. 
    For instance: #include <stdlib.h> int callback(int status) { if (status == 0) return status; exit(status); } would previously require a never executed return 0; after the exit() to avoid lint warning "function falls off bottom without returning value". Now the compiler & lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as: #pragma does_not_return(exit) #pragma does_not_return(panic) Hopefully this little extra work is paid for by the compilers & code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors & warning messages.
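    As a concrete illustration of that last suggestion, here is a small sketch of one way to keep a single source file clean under both compilers. It is my own example, not taken from the Solaris headers or the X.Org patches: the Studio-only pragma is wrapped in a guard using the Studio compilers' predefined macros, while gcc builds simply rely on the attribute-based annotations already present in the system headers.

        #include <stdlib.h>

        /* Guard is an assumption for illustration: __SUNPRO_C / __SUNPRO_CC are
           the Studio C and C++ compilers' predefined macros, so other compilers
           never see the pragma. Older Studio releases without the header
           annotations still learn that exit() does not return. */
        #if defined(__SUNPRO_C) || defined(__SUNPRO_CC)
        #pragma does_not_return(exit)
        #endif

        int callback(int status)
        {
            if (status == 0)
                return status;
            exit(status); /* no unreachable "return 0;" needed once the compiler
                             knows exit() never returns */
        }

    Under gcc the guarded block compiles away entirely, so the same file can be shared across releases and compilers without picking up new warnings.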

    Read the article

  • Revisiting the Generations

    - by Row Henson
    I was asked earlier this year to contribute an article to the IHRIM publication – Workforce Solutions Review. My topic focused on the reality of the Gen Y population 10 years after their entry into the workforce. Below is an excerpt from that article: It seems like yesterday that we were all talking about the entry of the Gen Y'ers into the workforce and what a radical impact that would have on how we attract, retain, motivate, reward, and engage this new, younger segment of the workforce. We all heard and read that these youngsters would be more entrepreneurial than their predecessors – the Gen X'ers – who were said to be more loyal to their profession than their employer. And, we heard that these “youngsters” would certainly be far less loyal to their employers than the Baby Boomers or even earlier Traditionalists. It was also predicted that – at least for the developed parts of the world – they would be more interested in work/life balance than financial reward; they would need constant and immediate reinforcement and recognition and we would be lucky to have them in our employment for two to three years. And, to keep them longer than that we would need to promote them often so they would be continuously learning since their long-term (10-year) goal would be to own their own business or be an independent consultant. Well, it occurred to me recently that the first of the Gen Y'ers are now in their early 30s and it is time to look back on some of these predictions. Many really believed the Gen Y'ers would enter the workforce with an attitude – expect everything to be easy for them – have their employers meet their demands or move to the next employer, and I believe that we can now say that, generally, has not been the case. Speaking from personal experience, I have mentored a number of Gen Y'ers and initially felt that with a 40-year career in Human Resources and Human Resources Technology – I could share a lot with them. I found out very quickly that I was learning at least as much from them! Some of the amazing attributes I found in these under-30s were their fearlessness, the ease with which they were able to multi-task, amazing energy and great technical savvy. They were very comfortable with collaborating with colleagues from both inside the company and peers outside their organization to problem-solve quickly. Most were eager to learn and willing to work hard. This brings me to the generation that will follow the Gen Y'ers – the Generation Z'ers – those born after 1998. We have come full circle. If we look at the Silent Generation or Traditionalists, we find a workforce that preceded the television and even very early telephones. We Baby Boomers (as I fall right squarely in this category) remember the invention of the television and telephone – but laptop computers and personal digital assistants (PDAs) were a thing of “Star Trek” and other science fiction movies and publications. Certainly, the Gen X'ers and Gen Y'ers grew up with the comfort of these devices just as we did with calculators. But, what of those under the age of 10 – how will the workplace look in 15 more years and what type of workforce will be required to operate in the mobile, global, virtual world? I spoke to a friend recently who had her four-year-old granddaughter for a visit. She said she found her in the den in front of the TV trying to use her hand to get the screen to move! So, you see – we have come full circle.
    The under-70 Traditionalist grew up in a world without TV and the Generation Z'er may never remember the TV we knew just a few years ago. As with every generation – we spend much time generalizing on their characteristics. The most important thing to remember is every generation – just like every individual – is different. The important thing for those of us in Human Resources to remember is that one size doesn’t fit all. What motivates one employee to come to work for you and stay there and be productive is very different than what the next employee is looking for, and the organization that can provide this fluidity and flexibility will be the survivor for generations to come. And, finally, just when we think we have it figured out, a multitude of external factors such as the economy, world politics, industries, and technologies we haven’t even thought about will come along and change those predictions. As I reach retirement age – I do so believing that our organizations are in good hands with the generations to follow – energetic, collaborative and capable of working hard while still understanding the need for balance at work, at home and in the community!

    Read the article

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest Post by Brenna Johnson, Oracle Commerce Product A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there’s a hidden issue in this story that most of the American people don’t understand: delivering a great commerce customer experience (CX) is hard. It shouldn’t be, but it is. The reality of the government’s issues getting the healthcare site up and running smooth is something we in the online commerce community know too well. If there’s one thing the botched launch of the site has taught us, it’s that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don’t go as planned. It may even give you a moment of solace – we have the same issues! But why? Organizations engage too many separate vendors with different technologies, running sections or pieces of a site to get live. When things go wrong, it takes time to identify the problem – and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But, it’s the reality of running a site with legacy technology constraints in today’s demanding, customer-centric market. This approach also means there are a lot of cooks in lots of different kitchens. You’ve got development and IT, the business and the marketing team, an external Systems Integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and managing different technologies. And again – due to legacy organizational structure and processes, this is all accepted as the normal State of the Union. Digital commerce has been commonplace for 15 years. Yet, getting a site live, maintained and performing requires orchestrating a cast of thousands (or at least, dozens), big dollars, and some finger-crossing. But it shouldn’t. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it’s forced organizations to think from the outside, in. Consumers – whether they’re shopping for shoes or a new healthcare plan – don’t care about what technology issues or processes you have behind the scenes. They just want it to work. They want their experience to be easy, fast, and tailored to them and their needs – whatever they are. This doesn’t sound like a tall order to the American consumer – especially since they interact with sites that do work smoothly. But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels. The last thing on a customer’s mind is making excuses for why they can’t buy a product – just get it to work. So what is the government doing? My guess is that they are working day and night to get the site performing – and having to throw big money at the problem. In the meantime they’re sending frustrated online users to the call center, or even a location where a trained “navigator” can help them in-person to complete their selection. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).
One thing we’ve learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there’s too much at stake to ignore the sea change in how organizations are thinking about their customer. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless if you’re in B2B or B2C, it’s guaranteed that your competitors are making CX a priority. Those early to the game who have made CX a priority have already begun to outpace their competition. So as you’re planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer’s journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, or brick and mortar location in sync? Do you have the technology in place to achieve this? It’s time to give the people what they want!

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie it to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code. To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played: Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this: <MFL> <Position/> <Position/> <Position/> <Team>     <Player>         <Statistic/>     </Player> </Team> </MFL> Statistic will be under its associated Player node, and Player will be under its associated Team node – no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different xml file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id. So now that we have a simple data model laid out, I would like to generate two things based on it: a class for each entity, with properties corresponding to each entity property, and an IO class with methods to get data for each entity, either all instances, by Id or by parent. Now my experience with code generation in the past has consisted of writing up little apps that use the code dom directly to regenerate code on demand (or using tools like CodeSmith). Surely, there has got to be a more fun way to do this given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right click anywhere in the canvas of our model and select Add Code Generation Item: So just adding that code item seemed to do quite a bit towards what I was intending: It apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated. Now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble – there is no color coding, no intellisense, no nothing!
    That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for tt editor in the Online Gallery and quickly found the Tangible T4 Editor: Downloading and installing this was a breeze, and after doing so I got some color coding and intellisense while editing the tt files. If you will be doing any customizing of tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.

    Read the article

  • If I were in a Silverlight focus group, here are ten things I would say.

    - by mbcrump
    Silverlight is a great product right off the shelf. I use it, love it and spend a lot of time helping the community understand it. This, however, doesn’t mean that I don’t think that it can get better. If I were invited to a Microsoft Focus Group about Silverlight, here are 10 things I would say:
    1. We need more navigation templates. I’ve found (4) templates that Microsoft has released (Cosmo, Windows 7, Accent and JetPack). This number needs to be around 16. In order to get more people developing for Silverlight, we need to give them a variety of templates to get them off the ground quickly.
    2. Silverlight needs to ship with the next version of Windows. At least version 4 needs to be pre-installed on Windows going forward. It’s small, in its own sandbox and I cannot find a reason for it not to be included.
    3. Silverlight needs to run on more platforms. iOS and Android are the key here. I think Microsoft should shoot for Android first since I believe Android will take the lead in the mobile market (at least for the short-term). It would also be great to see Microsoft use Silverlight as the focus of their new tablets / “AppleTV”. I would even invest in getting it working with Kinect.
    4. When creating a new project in Silverlight, we should have the option to create a Unit Test. Most Silverlight developers are not unit testing. If this is surprising to you then you need to get out and talk to more developers. I partially blame this on Microsoft. When you create a new ASP.NET MVC application, you simply put a check to create a Unit Test project. We need the same thing for Silverlight. We should steer the developer in the right direction.
    5. Design patterns such as MVVM need to be easier to implement in Silverlight solutions. I’d go so far as to say that MVVM Light should ship with Visual Studio. With the project / item templates and code snippets, Laurent puts you into the right direction. This is the way that it should have been: easy for the 9-5 developer to grasp. I believe the majority of developers use code behind because that’s what is in all the demos provided by Microsoft. They are not trying to write sucky code; it is that they simply don’t know a better way.
    6. The XAP files should be obfuscated and unused references deleted by default when in “Release” mode. A better Silverlight experience starts with a smaller XAP file. The less that a user has to download, the better, even with the majority of people on broadband. I would also recommend built-in obfuscation by Microsoft. People are paranoid that they can rename the .xap to .zip and run it through Reflector.
    7. Get rid of the boring install experiences. Here is a great write up on what I’m talking about. The default “Install Silverlight” and “Loading” screens suck. They suck bad. We need a choice of templates that a professional designer has created.
    8. Silverlight needs to support more image formats. For example: it would be great to use .gif’s without converting them to .png.
    9. Switching between Blend 4 and VS2010 to develop a Silverlight application is a pain. Probably one of the biggest issues that I can’t think of a good solution for. It would be nice if VS2012 had the best of both worlds and you never had to leave VS.
    10. We need reporting controls with SSRS included with the Silverlight Toolkit. I can’t think of another control that we need built into the toolkit. It would also be helpful to have export to .xls, .pdf and .doc included with the control.
    I hope that this post will at least get a few people talking.
Who knows, Microsoft could be working on these things right now. Thanks for reading!

    Read the article

  • YouTube SEO: Video Optimization

    - by Mike Stiles
    SEO is still regarded as one of the primary tools in the digital marketing kit. However and wherever a potential customer is conducting a search, brands want their content to surface in the top results. Makes sense. But without a regular flow of good, relevant content, your SEO opportunities run shallow. We know from several studies that video is one of the most engaging forms of content, so why not make sure that, in addition to being cool, your videos are helping you win the SEO game?
    Keywords:
    - Decide what search phrases make the most sense for your video. Don't dare use phrases that have nothing to do with the content. You'll make people mad.
    - Research those keywords to see how competitive they are. Adjust them so there are still lots of people searching for them, but not as many links showing up for them.
    - Search your potential keywords and phrases to see what comes up. It's amazing how many people forget to do that.
    Video Title:
    - Try to start and/or end with your keyword.
    - When you search on YouTube, visual action words tend to come up as suggested searches. So try to use action words.
    Video Description:
    - Lead with a link to your site (include http://).
    - Don't stuff this with your keyword. It leads to bad writing and it won't work anyway. This is where you convince people to watch, so write for humans. Use some showmanship.
    - At the end, do a call to action (subscribe, see the whole playlist, visit our social channels, etc.).
    Video Tags:
    - Don't over-tag. 5-10 tags per video is plenty.
    - If you're compelled to have more than 10, that means you should probably make more videos specifically targeting all those keywords.
    Find Linking Pals:
    - 45% of videos are discovered on video sites. But 44% are found through links on blogs and sites.
    - Write a blog about your video's content, then link to the video in it.
    - A good site for finding places to guest blog is myblogguest.com.
    - Once you find good linking partners, they'll link to your future videos (as long as they're good and you're returning the favor).
    Tap the Power of Similar Videos:
    - Use Video Reply to associate your video with other topic-related videos. That's when you make a video responding to or referencing a video made by someone else.
    Content:
    - Again, build up a portfolio of videos, not just one that goes after 30 keywords.
    - Create shorter, sequential videos that pull viewers deeper into the content and closer to a desired final action.
    - Organize your video topics separately using Playlists. Playlists show up as a whole in search results like individual videos, so optimize playlists the same as you would a video.
    Meta Data:
    - Too much importance is placed on it. It accounts for only 15% of search success.
    - YouTube reads Captions or Transcripts to determine what a video is about. If you're not using them, you're missing out.
    - You get the SEO benefit of captions and transcripts whether the viewer has them toggled on or not.
    Promotion:
    - This accounts for 25% of search success.
    - Promote the daylights out of your videos using your social channels and digital assets. Don't assume it's going to magically get discovered.
    - You can pay to promote your video. This could surface it on the YouTube home page, YouTube search results, YouTube related videos, and across the Google content network.
    Community:
    - Accounts for 10% of search success.
    - Make sure your YouTube home page is a fun place to spend time. Carefully pick your featured video, and make sure your Playlists are featured.
    - Participate in discussions so users will see you're present. The volume of ratings/comments is as important as the number of views when it comes to where you surface in search.
    Video Sitemaps:
    - As with a web site, a video sitemap helps Google quickly index your video.
    - Google wants to know the title, description, play page URL, the URL of the thumbnail image you want, and the raw video file location.
    - Sitemaps are XML files you host or dynamically generate on your site. Once you've made your sitemap, sign in and submit it using Google Webmaster Tools.
    Just as with the broadcast and cable TV channels, putting a video out there is only step one. You also have to make sure everybody knows it's there so the largest audience possible can see it. Here's hoping you get great ratings. @mikestiles
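    To make the sitemap bullet concrete, here is a rough sketch of a single video entry using Google's video sitemap schema; the URLs, title and description are made-up placeholders, and Google also accepts a player URL (video:player_loc) in place of the raw file location shown here.
        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
          <url>
            <!-- Play page URL: the page the video is embedded on -->
            <loc>http://www.example.com/videos/widget-demo</loc>
            <video:video>
              <!-- Thumbnail image Google should show in results -->
              <video:thumbnail_loc>http://www.example.com/thumbs/widget-demo.jpg</video:thumbnail_loc>
              <video:title>Widget Demo</video:title>
              <video:description>A two-minute walkthrough of the widget in action.</video:description>
              <!-- Raw video file location -->
              <video:content_loc>http://www.example.com/media/widget-demo.mp4</video:content_loc>
            </video:video>
          </url>
        </urlset>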

    Read the article

  • What's New in Oracle VM VirtualBox 4.2?

    - by Fat Bloke
    A year is a long time in the IT industry. Since the last VirtualBox feature release, which was a little over a year ago, we've seen new releases of cool new operating systems, such as Windows 8, ChromeOS, and Mountain Lion; a myriad of new Linux releases, from big Enterprise-class distributions like Oracle Linux 6.3 to accessible desktop distros like Ubuntu 12.04 and Fedora 17; and the spec of a typical PC or laptop double in power. All of these events have influenced the new VirtualBox version which we're releasing today. Here's how...
    Powerful Hosts
    One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more VMs. Some of our users have large libraries of VMs of various vintages, whilst others have groups of VMs that are run together as an assembly of the various tiers in a multi-tiered software solution, for example a database tier, middleware tier, and front-ends. So we're pleased to unveil a more powerful VirtualBox Manager to address the needs of these users:
    VM Groups
    Groups allow you to organize your VM library in a sensible way, e.g. by platform type, by project, by version, by whatever. To create groups you can drag one VM onto another, or select one or more VMs and choose Machine...Group from the menu bar. You can expand and collapse groups to save screen real estate, and you can Enter and Leave a group (think iPad navigation here) by using the right and left arrow keys when groups are selected. But groups are more than passive folders, because you can now also perform operations on groups rather than on all the individual VMs. So if you have a multi-tiered solution you can start the whole stack up with just one click.
    Autostart
    Many VirtualBox users run dedicated services in their VMs, for example running a Wiki. With these types of VM workloads, you really want the VM to start up when the host machine boots up. So with 4.2 we've introduced a cross-platform Auto-start mechanism to allow you to treat VMs as host services.
    Headless VM Launching
    With VMs such as web servers, wikis, and other types of server-class workloads, the Console of the VM is pretty much redundant. For some time now VirtualBox has offered a separate launch mechanism for these VMs, namely the command-line commands VBoxHeadless or VBoxManage startvm ... --type headless. But with 4.2 we also allow you to launch headless VMs from the Manager. Simply hold down Shift when launching the VM from the Manager. It's that easy. But how do you stop a headless VM? Well, with 4.2 we allow you to Close the VM from the Manager. (BTW, it's best to use the ACPI Shutdown method, which allows the guest VM to close down gracefully.)
    Easy VM Creation
    For our expert users, the New VM Wizard was a little tiresome, so now there's a faster 2-click VM creation mode. Just Hide the description when creating a new VM.
    Powerful VMs
    As the hosts have become more powerful, so have the guests that run inside them. Here are some of the 4.2 features to accommodate them:
    Virtual Network Interface Cards
    With 4.2, it's now possible to create VMs with up to 36 NICs when using the ICH9 chipset emulation. But with great power comes great responsibility (didn't Obi-Wan say something similar?), so we have also introduced bandwidth limiting to prevent a rogue VM stealing the whole pipe.
    VLAN Tagging
    Some of our users leverage VLANs extensively, so we've enhanced the E1000 NICs to support this.
    Processor Performance
    If you are running a CPU which supports Nested Paging (aka EPT in the Intel world), such as most of the Core i5 and i7 CPUs, or are running an AMD Bulldozer or later, you should see some performance improvements from our work with these processors. And while we're talking processors, we've added support for some of the more modern VIA CPUs too.
    Powerful Automation
    Because VirtualBox runs atop a full-blown operating system, it makes sense to leverage the capabilities of the host to run scripts that can drive the guest VMs. Guest Automation was introduced in a prior release, but with 4.2 we've revamped the APIs to allow a richer and more powerful set of operations to be executed by the guest. Check out the IGuest APIs in the VirtualBox Programming Guide and Reference (SDK).
    Powerful Platforms
    All the hardcore engineering that has gone into 4.2 has been done for a purpose, and that is to deliver a fast and powerful engine that can run almost any x86 OS because of the integrity of the virtualization. So we're pleased to add support for these platforms: Mac OS X "Mountain Lion", Windows 8, Windows Server 2012, Ubuntu 12.04 ("Precise Pangolin"), Fedora 17, and Oracle Linux 6.3. We don't have time to go into the myriad of smaller improvements, such as support for burning audio CDs from a guest, bi-directional clipboard control, drag-and-drop of files into Linux guests, etc., so we'll leave that as an exercise for the user as soon as you've downloaded from the Oracle or community site and taken a peek at the User Guide. So all in all, a pretty solid release, one that we hope you'll enjoy discovering. - FB
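    For readers who prefer the command line that the post mentions, here is roughly what the headless workflow looks like from a shell; the VM name "wiki-server" is a made-up example.
        # Start a VM without opening a console window
        VBoxManage startvm "wiki-server" --type headless
        # See which VMs are currently running
        VBoxManage list runningvms
        # Ask the guest to shut down gracefully via ACPI, as the post recommends
        VBoxManage controlvm "wiki-server" acpipowerbutton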

    Read the article

  • [Japanese-language post on the Java SE security alert for CVE-2012-4681; original title garbled in this listing]

    - by OTN-J Master
    [The original Japanese text of this post was garbled in extraction; only the following details are recoverable.] Posted on August 30, the entry relays a Java Source blog post by Java Community Lead Tori Wieldt (source: Java Source; author: Tori Wieldt) about the Oracle Java SE security alert covering CVE-2012-4681, CVE-2012-1682, CVE-2012-3136 and CVE-2012-0547, and urges users to update to the newly released JDK/JRE builds. Download links given: http://www.oracle.com/technetwork/jp/java/javase/downloads/index.html (JDK) and http://java.com (JRE); Windows users can also use Java Automatic Update. The post also mentions JUG leader John Yeary and points to the references "Oracle Security Alert for CVE-2012-4681" and "Change to Java SE 7 and Java SE 6 Update Release Numbers".

    Read the article

  • l2tp / ipsec debian Openswan U2.6.38 does not connect

    - by locojay
    i am trying to get ipsec/l2tp running on a debian server with an iphone as a client but always get: Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [RFC 3947] method set to=115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: ignoring Vendor ID payload [FRAGMENTATION 80000000] Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [Dead Peer Detection] Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: responding to Main Mode from unknown peer <clientip> Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: STATE_MAIN_R1: sent MR1, expecting MI2 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): both are NATed Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: STATE_MAIN_R2: sent MR2, expecting MI3 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: Main mode peer ID is ID_IPV4_ADDR: '10.2.210.176' Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: switched from "L2TP-PSK-noNAT" to "L2TP-PSK-noNAT" Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: deleting connection "L2TP-PSK-noNAT" instance with peer <clientip> {isakmp=#0/ipsec=#0} Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: new NAT mapping for #20, was <clientip>:43598, now <clientip>:49826 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: STATE_MAIN_R3: sent MR3, ISAKMP SA 
established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024} Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: Dead Peer Detection (RFC 3706): enabled Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: the peer proposed: <public ip>/32:17/1701 -> 10.2.210.176/32:17/0 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: NAT-Traversal: received 2 NAT-OA. using first, ignoring others Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: responding to Quick Mode proposal {msgid:311d3282} Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: us: 171.138.2.13<171.138.2.13>:17/1701 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: them: <clientip>[10.2.210.176]:17/61719 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: Dead Peer Detection (RFC 3706): enabled Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x05e23c9a <0x216077a9 xfrm=AES_256-HMAC_SHA1 NATOA=10.2.210.176 NATD=<clientip>:49826 DPD=enabled} Dec 2 21:00:26 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received Delete SA(0x05e23c9a) payload: deleting IPSEC State #21 Dec 2 21:00:26 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received and ignored informational message Dec 2 21:00:27 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received Delete SA payload: deleting ISAKMP State #20 Dec 2 21:00:27 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip>: deleting connection "L2TP-PSK-noNAT" instance with peer <clientip> {isakmp=#0/ipsec=#0} Dec 2 21:00:27 vpn pluto[22711]: packet from <clientip>:49826: received and ignored informational message Dec 2 21:00:27 vpn pluto[22711]: ERROR: asynchronous network error report on eth0 (sport=4500) for message to <clientip> port 49826, complainant <clientip>: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)] my setup looks like this verizon fios actiontec -- DMZ-- ddwrt router -- debian xen instance actiontec : 192.168.1.1 ddwrt: 171.138.2.1 debian xen server: 171.138.2.13 forwarded udp 500, 4500, 1701 on ddwrt to debian xen instance. vpn passthrough is enabled /etc/ipsec.conf config setup dumpdir=/var/run/pluto/ nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:25.0.0.0/8,%v6:fd00::/8,%v6:fe80::/10,%v4:!171.138.2.0/24,%v4:!192.168.1.0/24 protostack=netkey # Add connections here conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT authby=secret pfs=no auto=add keyingtries=3 # we cannot rekey for %any, let client rekey rekey=no # Apple iOS doesn't send delete notify so we need dead peer detection # to detect vanishing clients dpddelay=30 dpdtimeout=120 dpdaction=clear # Set ikelifetime and keylife to same defaults windows has ikelifetime=8h keylife=1h # l2tp-over-ipsec is transport mode type=transport # left=171.138.2.13 # # For updated Windows 2000/XP clients, # to support old clients as well, use leftprotoport=17/%any leftprotoport=17/1701 # # The remote user. 
# right=%any # Using the magic port of "%any" means "any one single port". This is # a work around required for Apple OSX clients that use a randomly # high port. rightprotoport=17/%any #force all to be nat'ed. because of ios conn passthrough-for-non-l2tp type=passthrough left=171.138.2.13 leftnexthop=171.138.2.1 right=0.0.0.0 rightsubnet=0.0.0.0/0 auto=route /etc/xl2tp/xl2tp.conf [global] ipsec saref = no listen-addr = 171.138.2.13 ;port = 1701 ;debug network = yes ;debug tunnel = yes ;debug network = yes ;debug packet = yes [lns default] ip range = 171.138.2.231-171.138.2.239 local ip = 171.138.2.13 assign ip = yes require chap = no refuse pap = no require authentication = no ;name = OpenswanVPN ppp debug = yes pppoptfile = /etc/ppp/options.xlt2tpd lenght bit = yes /etc/ppp/options.xl2tpd ;require-mschap-v2 pcp-accept-local ipcp-accept-local ipcp-accept-remote ;ms-dns 171.138.2.1 ms-dns 192.168.1.1 ms-dns 8.8.8.8 name l2tpd noccp auth crtscts idle 1800 mtu 1410 mru 1410 lock proxyarp connect-delay 5000 debug dump logfd 2 logfile /var/log/xl2tpd.log ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.38/K3.0.0-1-amd64 (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Two or more interfaces found, checking IP forwarding [FAILED] Checking NAT and MASQUERADEing [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] The failed can be ignored i guess since cat /proc/sys/net/ipv4/ip_forward returns 1 any help would be much appreciated as i don't have any idea why this is not working
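    For reference only (not necessarily related to the connection failure): the single [FAILED] item in the ipsec verify output is the IP-forwarding check, and on Debian the setting is usually made persistent with sysctl, so the runtime value of 1 shown above survives a reboot. A minimal sketch:
        # /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
        net.ipv4.ip_forward = 1
        # apply immediately without a reboot
        sysctl -p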

    Read the article

  • setup L2TP on Ubuntu 10.10

    - by luca
    I'm following this guide to setup a VPS on my Ubuntu VPS: http://riobard.com/blog/2010-04-30-l2tp-over-ipsec-ubuntu/ My config files are setup as in that guide, openswan version is 2.6.26 I think.. It doesn't work, I can show you my auth.log (on the VPS): Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [RFC 3947] method set to=109 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] method set to=110 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [8f8d83826d246b6fc7a8a6a428c11de8] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [439b59f8ba676c4c7737ae22eab8f582] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [4d1e0e136deafa34c4f3ea9f02ec7285] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [80d0bb3def54565ee84645d4c85ce3ee] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [9909b64eed937c6573de52ace952fa6b] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 110 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 110 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 110 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [Dead Peer Detection] Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: responding to Main Mode from unknown peer 93.36.127.12 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: STATE_MAIN_R1: sent MR1, expecting MI2 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: STATE_MAIN_R2: sent MR2, expecting MI3 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: Main mode peer ID is ID_IPV4_ADDR: '10.0.1.8' Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT" Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: deleting connection "L2TP-PSK-NAT" instance with peer 93.36.127.12 {isakmp=#0/ipsec=#0} Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: new NAT mapping for #7, was 93.36.127.12:500, now 93.36.127.12:36810 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024} Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: 
ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received and ignored informational message Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: the peer proposed: 69.147.233.173/32:17/1701 -> 10.0.1.8/32:17/0 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: responding to Quick Mode proposal {msgid:183463cf} Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: us: 69.147.233.173<69.147.233.173>[+S=C]:17/1701 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: them: 93.36.127.12[10.0.1.8,+S=C]:17/64111===10.0.1.8/32 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x0b1cf725 <0x0b719671 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=93.36.127.12:36810 DPD=none} Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received Delete SA(0x0b1cf725) payload: deleting IPSEC State #8 Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: netlink recvfrom() of response to our XFRM_MSG_DELPOLICY message for policy eroute_connection delete was too long: 100 > 36 Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: netlink recvfrom() of response to our XFRM_MSG_DELPOLICY message for policy [email protected] was too long: 168 > 36 Feb 18 06:11:28 maverick pluto[6909]: | raw_eroute result=0 Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received and ignored informational message Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received Delete SA payload: deleting ISAKMP State #7 Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12: deleting connection "L2TP-PSK-NAT" instance with peer 93.36.127.12 {isakmp=#0/ipsec=#0} Feb 18 06:11:28 maverick pluto[6909]: packet from 93.36.127.12:36810: received and ignored informational message and my system log on OSX (from where I'm connecting): Feb 18 13:11:09 luca-ciorias-MacBook-Pro pppd[68656]: pppd 2.4.2 (Apple version 412.3) started by luca, uid 501 Feb 18 13:11:09 luca-ciorias-MacBook-Pro pppd[68656]: L2TP connecting to server '69.147.233.173' (69.147.233.173)... Feb 18 13:11:09 luca-ciorias-MacBook-Pro pppd[68656]: IPSec connection started Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: Connecting. Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Main-Mode message 2). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Main-Mode message 3). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Main-Mode message 4). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Main-Mode message 5). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Phase1 AUTH: success. 
(Initiator, Main-Mode Message 6). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Main-Mode message 6). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Information message). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Information-Notice: transmit success. (ISAKMP-SA). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: Connected. Feb 18 13:11:10 luca-ciorias-MacBook-Pro pppd[68656]: IPSec connection established Feb 18 13:11:30 luca-ciorias-MacBook-Pro pppd[68656]: L2TP cannot connect to the server Feb 18 13:11:30 luca-ciorias-MacBook-Pro configd[20]: SCNCController: Disconnecting. (Connection tried to negotiate for, 22 seconds). Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Information message). Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Information message). Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Feb 18 13:11:31 luca-ciorias-MacBook-Pro racoon[68453]: Disconnecting. (Connection was up for, 20.157953 seconds).

    Read the article

  • Varnish default.vcl grace period

    - by Vladimir
    These are my settings for a grace period (/etc/varnish/default.vcl) sub vcl_recv { .... set req.grace = 360000s; ... } sub vcl_fetch { ... set beresp.grace = 360000s; ... } I tested Varnish using localhost and nodejs as a server. I started localhost, the site was up. Then I disconnected server and the site got disconnected in less than 2 min. It says: Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 1890127100 Varnish cache server Could you tell me what could be the problem? sub vcl_fetch { if (beresp.ttl < 120s) { ##std.log("Adjusting TTL"); set beresp.ttl = 36000s; ##120s; } # Do not cache the object if the backend application does not want us to. if (beresp.http.Cache-Control ~ "(no-cache|no-store|private|must-revalidate)") { return(hit_for_pass); } # Do not cache the object if the status is not in the 200s if (beresp.status >= 300) { # Remove the Set-Cookie header #remove beresp.http.Set-Cookie; return(hit_for_pass); } # # Everything below here should be cached # # Remove the Set-Cookie header ####remove beresp.http.Set-Cookie; # Set the grace time ## set beresp.grace = 1s; //change this to minutes in case of app shutdown set beresp.grace = 360000s; ## 10 hour - reduce if it has negative impact # Static assets - browser caches tpiphem for a long time. if (req.url ~ "\.(css|js|.js|jpg|jpeg|gif|ico|png)\??\d*$") { /* Remove Expires from backend, it's not long enough */ unset beresp.http.expires; /* Set the clients TTL on this object */ set beresp.http.cache-control = "public, max-age=31536000"; /* marker for vcl_deliver to reset Age: */ set beresp.http.magicmarker = "1"; } else { set beresp.http.Cache-Control = "private, max-age=0, must-revalidate"; set beresp.http.Pragma = "no-cache"; } if (req.url ~ "\.(css|js|min|)\??\d*$") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ##do not duplicate these settings if (req.url ~ ".css") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".js") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".min") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ## If the request to the backend returns a code other than 200, restart the loop ## If the number of restarts reaches the value of the parameter max_restarts, ## the request will be error'ed. max_restarts defaults to 4. This prevents ## an eternal loop in the event that, e.g., the object does not exist at all. 
if (beresp.status != 200 && beresp.status != 403 && beresp.status != 404) { return(restart); } if (beresp.status == 302) { return(deliver); } # Never cache posts if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(hit_for_pass); } ##check this setting to ensure that it does not cause issues for browsers with no gzip if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } if (beresp.http.Set-Cookie) { return(deliver); } ##if (req.url == "/index.html") { set beresp.do_esi = true; ##} ## check if this is needed or should be used # return(deliver); the object return(deliver); } sub vcl_recv { ##avoid leeching of images call hot_link; set req.grace = 360000s; ##2m ## if one backend is down - use another if (req.restarts == 0) { set req.backend = cache_director; ##we can specify individual VMs } else if (req.restarts == 1) { set req.backend = cache_director; } ## post calls should not be cached - add cookie for these requests if using micro-caching # Pass requests that are not GET or HEAD if (req.request != "GET" && req.request != "HEAD") { return(pass); ## return(pass) goes to backend - not cache } # Don't cache the result of a redirect if (req.http.Referer ~ "redir" || req.http.Origin ~ "jumpto") { return(pass); } # Don't cache the result of a redirect (asking for logon) if (req.http.Referer ~ "post" || req.http.Referer ~ "submit" || req.http.Referer ~ "add" || req.http.Referer ~ "ask") { return(pass); } # Never cache posts - ensure that we do not use these strings in our URLs' that need to be cached if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(pass); } ## if (req.http.Authorization || req.http.Cookie) { if (req.http.Authorization) { /* Not cacheable by default */ return (pass); } # Handle compression correctly. Different browsers send different # "Accept-Encoding" headers, even though they mostly all support the same # compression mechanisms. By consolidating these compression headers into # a consistent format, we can reduce the size of the cache and get more hits. # @see: http:// varnish.projects.linpro.no/wiki/FAQ/Compression if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$") { # No point in compressing these remove req.http.Accept-Encoding; } else if (req.http.Accept-Encoding ~ "gzip") { # If the browser supports it, we'll use gzip. set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { # Next, try deflate if it is supported. set req.http.Accept-Encoding = "deflate"; } else { # Unknown algorithm. Remove it and send unencoded. unset req.http.Accept-Encoding; } } # lookup graphics, css, js & ico files in the cache if (req.url ~ "\.(png|gif|jpg|jpeg|css|.js|ico)$") { return(lookup); } ##added on 0918 - check if it causes issues with user specific content if (req.request == "GET" && req.http.cookie) { return(lookup); } # Pipe requests that are non-RFC2616 or CONNECT which is weird. if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { ##closing connection and calling pipe return(pipe); } ##purge content via localhost only if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } ## do we need this? ## return(lookup); }
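    One detail worth noting from the Varnish 3 documentation: the long grace values above only keep stale objects flowing when Varnish knows the backend is sick, which requires a health probe on the backend definition, and no probe appears in the VCL shown. A minimal sketch of the pattern the docs describe (the backend address and probe values are placeholders, not taken from the question):
        backend default {
            .host = "127.0.0.1";
            .port = "3000";
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 1s;
                .window = 5;
                .threshold = 3;
            }
        }
        sub vcl_recv {
            # Short grace while the backend is healthy,
            # long grace once the probe marks it sick.
            if (req.backend.healthy) {
                set req.grace = 30s;
            } else {
                set req.grace = 6h;
            }
        }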

    Read the article

  • Trouble connecting to vsftpd on ubuntu server

    - by littleK
    I have installed Ubuntu Server 10.10 and I am using it to host a domain that I have. I am trying to set up FTP for the server, but I am running into some problems. I have successfully installed vsFTPd and I have opened up ports 20, 21 on my firewall. In my vsFTPd configuration, I have enabled SSL. Every time I try to connect to my server via FTP, I receive a "Connection Refused" error. I have had a little more success with SSL disabled, however the connection process will time out after the LIST command (but it does accept my authentication). Here is my vsFTPd configuration, the SSL stuff is at the bottom: # Example config file /etc/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # # Run standalone? vsftpd can run either from an inetd or as a standalone # daemon started from an initscript. listen=YES # # Run standalone with IPv6? # Like the listen parameter, except vsftpd will listen on an IPv6 socket # instead of an IPv4 one. This parameter and the listen parameter are mutually # exclusive. #listen_ipv6=YES # # Allow anonymous FTP? (Disabled by default) anonymous_enable=NO # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) #local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # If enabled, vsftpd will display directory listings with the time # in your local time zone. The default is to display GMT. The # times returned by the MDTM FTP command are also affected by this # option. use_localtime=YES # # Activate logging of uploads/downloads. xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # You may override where the log file goes if you like. The default is shown # below. #xferlog_file=/var/log/vsftpd.log # # If you want, you can have your log file in standard ftpd xferlog format. # Note that the default log file location is /var/log/xferlog in this case. #xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. 
Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd.banned_emails # # You may restrict local users to their home directories. See the FAQ for # the possible risks in this before using chroot_local_user or # chroot_list_enable below. #chroot_local_user=YES # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_local_user=YES #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd.chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # Debian customization # # Some of vsftpd's settings don't fit the Debian filesystem layout by # default. These settings are more Debian-friendly. # # This option should be the name of a directory which is empty. Also, the # directory should not be writable by the ftp user. This directory is used # as a secure chroot() jail at times vsftpd does not require filesystem # access. secure_chroot_dir=/var/run/vsftpd/empty # # This string is the name of the PAM service vsftpd will use. pam_service_name=vsftpd # # This option specifies the location of the RSA certificate to use for SSL # encrypted connections. rsa_cert_file=/etc/ssl/private/vsftpd.pem # SSL ssl_enable=YES allow_anon_ssl=NO force_local_data_ssl=YES force_local_logins_ssl=YES ssl_tlsv1=YES ssl_sslv2=YES ssl_sslv3=YES Thanks!
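    One thing the configuration above does not cover, offered here only as a hedge rather than a definitive fix: with just ports 20 and 21 open, passive-mode data connections (which most clients, and FTPS sessions in particular, rely on) have no port to land on. A minimal sketch of pinning vsftpd's passive range so it can also be opened on the firewall (the port numbers are arbitrary examples; vsftpd does not allow trailing comments, so each comment is on its own line):
        # /etc/vsftpd.conf additions (example values)
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100
        # then open 40000-40100/tcp on the firewall alongside 20 and 21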

    Read the article

  • Can't Get Virtual Users Setup in VSFTPD -Tried Everything

    - by N.T.
    Have Ubuntu 11.10 with vsftpd installed and working. Can not get virtual users setup at all? Vsftpd will allow main Ubuntu owner account to login, but nothing else? I've followed several tutorials on adding virtual users, but nothing works? I just need to add 2 virtual users and have them be able to upload files to vsftpd Ubuntu computer from other computers on my Lan network. Everywhere I've looked, people just point toward tutorials on adding virtual users, but that just is NOT working. I've been struggling with this for over a week now! PLEASE Help. Thanks. I'll even give a donation if someone can figure this out. here is the vsftpd.conf file I am using. I copied the original, and make a new one, every time I try a tutorial. So far, none have worked. Here is the vsftpd.conf file I'm using. (I hope this helps?) # Example config file /etc/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # # Run standalone? vsftpd can run either from an inetd or as a standalone # daemon started from an initscript. listen=YES # # Run standalone with IPv6? # Like the listen parameter, except vsftpd will listen on an IPv6 socket # instead of an IPv4 one. This parameter and the listen parameter are mutually # exclusive. #listen_ipv6=YES # # Allow anonymous FTP? (Disabled by default) anonymous_enable=YES # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # If enabled, vsftpd will display directory listings with the time # in your local time zone. The default is to display GMT. The # times returned by the MDTM FTP command are also affected by this # option. use_localtime=YES # # Activate logging of uploads/downloads. xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # You may override where the log file goes if you like. The default is shown # below. #xferlog_file=/var/log/vsftpd.log # # If you want, you can have your log file in standard ftpd xferlog format. # Note that the default log file location is /var/log/xferlog in this case. xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. 
#data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: ftpd_banner=Welcome to Sage FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd.banned_emails # # You may restrict local users to their home directories. See the FAQ for # the possible risks in this before using chroot_local_user or # chroot_list_enable below. chroot_local_user=YES # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_local_user=YES #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd.chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # Debian customization # # Some of vsftpd's settings don't fit the Debian filesystem layout by # default. These settings are more Debian-friendly. # # This option should be the name of a directory which is empty. Also, the # directory should not be writable by the ftp user. This directory is used # as a secure chroot() jail at times vsftpd does not require filesystem # access. secure_chroot_dir=/var/run/vsftpd/empty # # This string is the name of the PAM service vsftpd will use. pam_service_name=vsftpd local_root=/media/FilesDrive # # This option specifies the location of the RSA certificate to use for SSL # encrypted connections. rsa_cert_file=/etc/ssl/private/vsftpd.pem
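    For what it's worth, the virtual-user setup most of those tutorials describe boils down to a guest mapping in vsftpd.conf plus a PAM service backed by a Berkeley DB of usernames and passwords. A rough sketch of the moving parts, with the paths, the "vsftpd-virtual" account and the DB name being illustrative assumptions rather than values from the question (vsftpd rejects trailing comments, so comments sit on their own lines):
        # /etc/vsftpd.conf additions (sketch)
        # Map every FTP login onto one local account
        guest_enable=YES
        guest_username=vsftpd-virtual
        # Give each virtual user a home under /srv/ftp/<name>
        user_sub_token=$USER
        local_root=/srv/ftp/$USER
        # Let virtual users write with local-user privileges
        virtual_use_local_privs=YES

        # /etc/pam.d/vsftpd (replace the default contents; pam_userdb adds the .db suffix itself)
        auth    required pam_userdb.so db=/etc/vsftpd/login
        account required pam_userdb.so db=/etc/vsftpd/login

        # Build /etc/vsftpd/login.db from a text file of alternating username/password lines
        db_load -T -t hash -f users.txt /etc/vsftpd/login.db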

    Read the article

  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor which is a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app. What is special about Meteor? Meteor has two jaw-dropping features: Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server then every client shows the changes automatically without a browser refresh. For example, if you change the background color of a page to yellow then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere automatically without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you. Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser and the framework to handle the details of updating the database without slowing down the user. Installing Meteor Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the “early preview” stage. It has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in “More than a month, less than a year.” Don’t be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing. The Meteor project received an $11.2 million round of financing from Andreessen Horowitz. So, it would be a good bet that this project will reach the 1.0 mark. And, if it doesn’t, the framework as it exists right now is still very powerful. Meteor runs on top of Node.js. You write Meteor apps by writing JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (Although the support for Windows is still officially unofficial). If you want to install Meteor on Windows then download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux then run the following CURL command from your terminal: curl https://install.meteor.com | /bin/sh Meteor will install all of its dependencies automatically including Node.js. However, I recommend that you install Node.js before installing Meteor by installing Node.js from the following address: http://nodejs.org/ If you let Meteor install Node.js then Meteor won’t install NPM which is the standard package manager for Node.js. If you install Node.js and then you install Meteor then you get NPM automatically. Creating a New Meteor App To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie. 
The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command: Meteor create MovieApp After you execute this command, you should see something like the following: Follow the instructions: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser and navigate to http://localhost:3000 and you should see the default Meteor Hello World page: Open up your favorite development environment to see what the Meteor app looks like. Open the MovieApp folder which we just created. Here’s what the MovieApp looks like in Visual Studio 2012: Notice that our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file. Just for fun, let’s see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page and modify the text “Hello World!” to “Hello Cruel World!” and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right? Controlling Where JavaScript Executes You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser) and some of the JavaScript executes on the server and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser: if (Meteor.isClient) { console.log("Hello Browser!"); } if (Meteor.isServer) { console.log("Hello Server!"); } console.log("Hello Browser and Server!"); When you run the app, the message “Hello Browser!” is written to the browser JavaScript console. The message “Hello Server!” is written to the command/terminal window where you ran Meteor. Finally, the message “Hello Browser and Server!” is execute on both the browser and server and the message appears in both places. For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app: · client – This folder contains any JavaScript which executes only on the client. · server – This folder contains any JavaScript which executes only on the server. · common – This folder contains any JavaScript code which executes on both the client and server. · lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files. · public – This folder contains static application assets such as images. For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files. We will create new files in the right locations later in this walkthrough. Combining HTML, CSS, and JavaScript Files Meteor combines all of your JavaScript files, and all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app then that is your business. 
However, if you want to build a more maintainable application, then you should break your JavaScript files into many separate JavaScript files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements: <head> — All <head> files are combined into a single <head> and served with the initial page load. <body> — All <body> files are combined into a single <body> and served with the initial page load. <template> — All <template> files are compiled into JavaScript templates. Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files. Displaying a List of Movies Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files: · client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app. · client\moviesTemplate.html – Contains the HTML template for displaying the list of movies. · client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate. · server\movies.js – Contains the JavaScript for seeding the database with movies. After you create these files, your folder structure should looks like this: Here’s what the client\movies.html file looks like: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} </body>   Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client/moviesTemplate.html file: <template name="moviesTemplate"> <ul> {{#each movies}} <li> {{title}} </li> {{/each}} </ul> </template> By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}. The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here’s what this JavaScript file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection. The final file which we need is the server-side server\movies.js file: // Declare server Movies collection Movies = new Meteor.Collection("movies"); // Seed the movie database with a few movies Meteor.startup(function () { if (Movies.find().count() == 0) { Movies.insert({ title: "Star Wars", director: "Lucas" }); Movies.insert({ title: "Memento", director: "Nolan" }); Movies.insert({ title: "King Kong", director: "Jackson" }); } }); The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection. 
When you declare a server-side Meteor collection, a collection is created in the MongoDB database associated with your Meteor app automatically (Meteor uses MongoDB as its database automatically). Second, the server\movies.js file seeds the Movies collection (MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser. Creating New Movies Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie: <template name="movieForm"> <fieldset> <legend>Add New Movie</legend> <form> <div> <label> Title: <input id="title" /> </label> </div> <div> <label> Director: <input id="director" /> </label> </div> <div> <input type="submit" value="Add Movie" /> </div> </form> </fieldset> </template> In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm.html template. Notice that I added {{> movieForm }} to the client\movies.html file: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} {{> movieForm }} </body> After I make these modifications, our Movie app will display the form: The next step is to handle the submit event for the movie form. Below, I’ve modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Movies.insert(newMovie); } }; The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app, no postbacks are allowed! Next, I am grabbing the new movie from the HTML form. I’m taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection. Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes. When Meteor inserts the movie into the server-side collection, the new movie is added to the MongoDB database associated with the Movies app automatically. If server-side insertion fails for whatever reasons – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync. If you open multiple browsers, and add movies, then you should notice that all of the movies appear on all of the open browser automatically. You don’t need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and server for you. 
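If you want to convince yourself that the client and server really are staying in sync, one quick way is to poke at the client-side Movies collection from the browser’s JavaScript console. The snippet below is a rough sketch (the "Jaws" movie is just an illustrative value, not part of the walkthrough); it assumes you run it in the console of a browser that has the Movie app open:
// Run in the browser console while the Movie app is open.
// Movies is the client-side Minimongo collection declared in client\movies.js.
console.log(Movies.find().count());   // how many movies the client currently knows about
console.log(Movies.find().fetch());   // fetch() turns the cursor into a plain array of documents

// Insert a movie from the console; every open browser should show it immediately,
// and Meteor pushes it to the server-side collection behind the scenes.
Movies.insert({ title: "Jaws", director: "Spielberg" });
If the insert is rejected on the server for any reason, latency compensation means the movie simply disappears again from the client-side collection.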
Removing the Insecure Module To make it easier to develop and debug a new Meteor app, by default, you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the meteor remove insecure command from a command/terminal window: Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app. We’ll get an “Access denied” error in our browser console whenever we try to insert a new movie. No worries. I’ll fix this issue in the next section. Creating Meteor Methods By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. By taking advantage of Meteor Methods you can: 1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server. 2. Simulate database operations on the client but actually perform the operations on the server. Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods: Meteor.methods({ addMovie: function (newMovie) { // Perform form validation if (newMovie.title == "") { throw new Meteor.Error(413, "Missing title!"); } if (newMovie.director == "") { throw new Meteor.Error(413, "Missing director!"); } // Insert movie (simulate on client, do it on server) return Movies.insert(newMovie); } }); The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation. If you don’t enter a title or you don’t enter a director then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie. You must add the common\methods.js file to the common folder so it will get executed on both the client and the server. Our folder structure now looks like this: We actually call the addMovie() method within our client code in the client\movies.js file. Here’s what the updated file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Meteor.call( "addMovie", newMovie, function (err, result) { if (err) { alert("Could not add movie " + err.reason); } } ); } }; The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters: · The string name of the method to call. 
· The data to pass to the method (You can actually pass multiple params for the data if you like). · A callback function to invoke after the method completes. In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error. If there is an error then the error reason is displayed in an alert (please don’t use alerts for validation errors in a production app because they are ugly!). Summary The goal of this blog post was to provide you with a brief walk through of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app. I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client. I’m very impressed with the Meteor framework. The support for Live HTML and Latency Compensation are required features for many real world Single Page Apps but implementing these features by hand is not easy. Meteor makes it easy.
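As a final aside, here is a small sketch you could run from the browser console to exercise the validation path in the addMovie method using the Meteor.call() signature described above (the empty title is deliberate, and the expected error reason comes from common\methods.js):
// Call the addMovie method with an empty title to trigger the validation
// defined in common\methods.js. The callback receives (error, result).
Meteor.call("addMovie", { title: "", director: "Nolan" }, function (err, result) {
  if (err) {
    console.log("Rejected: " + err.reason);            // expect "Missing title!"
  } else {
    console.log("Inserted movie with id " + result);   // Movies.insert() returns the new _id
  }
});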

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places, even very remote ones, cell phones and even smartphones are a common sight. I’ll never forget when I was in India in 2011: I was up in the Southern Indian mountains riding an elephant out of a tiny local village, with an elephant herder riding atop the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban, as were quite a few of the locals in this small out of the way and not so touristy village. So we’re slowly trundling along in the forest and he’s lazily using his stick to guide the elephant and… 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting a huge pig jumps out from the side of the trail and he takes a picture of it running across our path in the jungle! So yeah, mobile technology is very pervasive and it’s reached into even very buried and unexpected parts of this world. Apps are still King Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there’s something that you need on your mobile device your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to support 2 or 3 completely separate platforms. There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration issues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development. There’s also PhoneGap/Cordova which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications, and then packaging them to run in a specialized App container that can run on most mobile device platforms using a WebView interface. This allows for the use of HTML technology, but it also still requires all the setup, configuration of APIs, security keys, certification, and the submission and deployment process just like native applications – you actually lose many of the benefits that Web based apps bring. The big selling point of Cordova is that you get to use HTML and have the ability to build your UI once for all platforms and run across all of them – but the rest of the app process remains in place. Apps can be a big pain to create and manage, especially when we are talking about specialized or vertical business applications that aren’t geared at the mainstream market and that don’t fit the ‘store’ model. If you’re building a small intra-department application you don’t want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit both from the development and deployment perspective. 
Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native, from write-once run-anywhere, to remote maintenance and a single point of management and failure, to having full control over the application as opposed to having the app store overlords censor you. In a lot of ways Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies, for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today’s Mobile Web unfortunately looks a little different… Where’s the Love for the Mobile Web? Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and yet we still don’t really have a solid mobile Web platform. I know what you’re thinking: “But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces”. I agree to a point – it’s actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I’ve been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile first UIs and that’s been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good. Not just about the UI But it’s not just about the UI - it’s also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together and integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices. I think there are a number of reasons for this. Slow Standards Adoption HTML standards implementation and ratification have been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that’s missing some of the infrastructure pieces to make it whole on mobile devices. There’s lots of potential, but what’s lacking is that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications. Some of it is the fragmentation of browsers and the slow evolution of the mobile specific HTML APIs. A host of mobile standards exist but many of the standards are in the early review stage and they have been stuck there for long periods of time and seem to move at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud. 
It seems that either the standards bodies or the vendors need to carry the torch forward and that doesn’t seem to be happening quickly enough. Mobile Device Integration just isn’t good enough Current standards are not far-reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs to have access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with GPS, phone, media, messaging, notifications, linking and the contacts system provides benefits that are unique to mobile applications and could be widely used, but are mostly (with the exception of GPS) inaccessible for Web based applications today. Unfortunately trying to do most of this today only with a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova’s app centric model with its own custom API accessing mobile device features and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example there’s no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only implemented partially by mobile devices. There are alternatives and workarounds for some of these interfaces by using browser specific code, but that’s mighty ugly and something that I thought we were trying to leave behind with newer browser standards. But it’s not quite working out that way. It’s utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, Messaging API, Contacts Manager API have only minimal or no traction at all today. Keep in mind we’ve had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that’s a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago. It’s extremely frustrating to build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often can be show stoppers. The remaining 10% have to do with platform integration, browser differences and working around the limitations that browsers and ‘pinned’ applications impose on HTML applications. The maddening part is that these limitations seem arbitrary as they could easily work on all mobile platforms. For example, SMS has a URL Moniker interface that sort of works on Android, works badly with iOS (only works if the address is already in the contact list) and not at all on Windows Phone. There’s no reason this shouldn’t work universally using the same interface – after all, all phones have supported SMS since before the year 2000! But, it doesn’t have to be this way Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers today. It handles security concerns via prompts to avoid unwanted access, a model that would work for most other device APIs in a similar fashion. 
One time approval and occasional re-approval if code changes or caches expire. Simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as representing the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there. Inadequate Web Application Linking and Activation Another important piece of the puzzle missing is the integration of HTML based Web applications. Today HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content there’s a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently and permanent linking is not so critical for this type of content.  But applications have different needs. Applications need to be started up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it’s pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features. Unfortunately this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications, to ‘installing’ them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a Mobile Web ‘App’ and making it sticky is by no means as easy or satisfying. Today the way you’d go about this is: Open the browser Search for a Web Site in the browser with your search engine of choice Hope that you find the right site Hope that you actually find a site that works for your mobile device Click on the link and run the app in a fully chrome’d browser instance (read tiny surface area) Pin the app to the home screen (with all the limitations outline above) Hope you pointed at the right URL when you pinned Even for you and me as developers, there are a few steps in there that are painful and annoying, but think about the average user. First figuring out how to search for a specific site or URL? And then pinning the app and hopefully from the right location? You’ve probably lost more than half of your audience at that point. This experience sucks. For developers too this process is painful since app developers can’t control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with annoying pop-ups that try to get people to create shortcuts via fancy animations that are both annoying and add overhead to each and every application that implements this sort of thing differently. And that’s not the end of it - getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple’s non-standard meta tags are prominent and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires you to create an actual screen or rather a partial screen be captured for a shortcut in the tile manager. Who had that brilliant idea I wonder? Surprisingly Chrome on recent Android versions seems to actually get it right – icons use pngs, pinning is easy and pinned applications properly behave like standalone apps and retain the browser’s active page state and content. 
Each of the platforms has a different way to specify icons (WP doesn’t allow you to use an icon image at all), and the most widely used interface today is a bunch of Apple specific meta tags that other browsers choose to support. The question is: Why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one? Then there’s iOS and the way it treats home screen linked URLs, using a crazy hybrid format that is neither as capable as a Web app running in Safari nor as capable as a WebView hosted application. Moving off the Web ‘app’ link when switching to another app actually causes the browser to ‘blank out’ the Web application and its preview in the Task View (see screenshot on the right). Then, when the ‘app’ is reactivated it ends up completely restarting the browser with the original link. This is crazy behavior that you can’t easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like say Google Maps) that’s not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate, to make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route. App linking and management is a very basic problem – something that we essentially have solved in every desktop browser – yet on mobile devices where it arguably matters a lot more to have easy access to web content we have to jump through hoops to have even a remotely decent linking/activation experience across browsers. Where’s the Money? It’s not surprising that device home screen integration and Mobile Web support in general is in such dismal shape – the mobile OS vendors benefit financially from App store sales and have little to gain from Web based applications that bypass the App store and the cash cow that it presents. On top of that, platform specific vendor lock-in of both end users and developers who have invested in hardware, apps and consumables is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that and so again it’s no surprise that the mobile Web is on a struggling path. But – that may be changing. More and more we’re seeing operations shifting to services that are subscription based or otherwise collect money for usage, and that may drive more progress into the Web direction in the end. Nothing like the almighty dollar to drive innovation forward. Do we need a Mobile Web App Store? As much as I dislike moderated experiences in today’s massive App Stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that could provide something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that would allow searching and browsing of the registry and then also handle installation in the form of providing the home screen linking, plus maybe an initial security configuration that determines what features the app is allowed to access. 
I’m not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well known entity – a mobile Web app store from a no name source is unlikely to gain high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course this would also be an optional search path in addition to the standard open Web search mechanisms to find and access content today. Security Security is a big deal, and one of the perceived reasons why so many IT professionals appear to be willing to go back to the walled garden of deployed apps is that Apps are perceived as safe due to the official review and curation of the App stores. Curated stores are supposed to protect you from malware, illegal and misleading content. It doesn’t always work out that way and all the major vendors have had issues with security and the review process at some time or another. Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run completely externally, in the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device specific features. And as discussed earlier – security for any device interaction can be granted to mobile applications through a Web browser in the same way as it can for native applications, either via explicit policies loaded from the Web, or via prompting as GeoLocation does today. Security is important, but it’s certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn’t be a reason to keep Web apps from being an equal player in mobile applications. Apps are winning, but haven’t we been here before? So now we’re finding ourselves back in an era of installed apps, rather than Web based and managed apps. Only it’s even worse today than with Desktop applications, in that the apps are going through a gatekeeper that charges a toll and censors what you can and can’t do in your apps. Frankly it’s a mystery to me why anybody would buy into this model and why it’s lasted this long when we’ve already been through this process. It’s crazy… It’s really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, yet we’re basically held back by what seems little more than bureaucracy, partisan bickering and self interest of the major parties involved. Back in the day of the desktop it was Internet Explorer’s 98+% market share holding back the Web from improvements for many years – now it’s the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps with simple ways to install and run them consistently and persistently, that would go a long way to making mobile applications much more usable and seriously viable alternatives to native apps. But as it is mobile apps have a severe disadvantage in placement and operation. There are a few bright spots in all of this. Mozilla’s FireFoxOs is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content. 
It supports both packaged and certified package modes (that can be put into the app store), and Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest. Open Web apps are treated as full class citizens in FireFoxOS and run using the same mechanism as installed apps. Unfortunately FireFoxOs is getting a slow start with minimal device support and specifically targeting the low end market. We can hope that this approach will change and catch on with other vendors, but that’s also an uphill battle given the conflict of interest with platform lock in that it represents. Recent versions of Android also seem to be working reasonably well with mobile application integration onto the desktop and activation out of the box. Although it still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons to the desktop on pinning, WebView based full screen activation, and reliable application persistence as the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support. What’s also interesting to me is that Microsoft hasn’t picked up on the obvious need for a solid Web App platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead Microsoft is trying to beat the market leaders at their own game, fighting on their adversary’s terms instead of taking a new tack. Providing a kick ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive that Microsoft could do to improve its miserable position in the mobile device market. Where are we at with Mobile Web? It sure sounds like I’m really down on the Mobile Web, right? I’ve built a number of mobile apps in the last year and while overall result and response has been very positive to what we were able to accomplish in terms of UI, getting that final 10% that required device integration dialed was an absolute nightmare on every single one of them. Big compromises had to be made and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you’re not integrating with device features and you don’t care deeply about a smooth integration with the mobile desktop, mobile Web development is fraught with frustration. So, yes I’m frustrated! But it’s not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn't be difficult for device platform vendors to make Web based applications first class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens. So, what’s your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion. 
Resources: Rick Strahl on DotNet Rocks talking about Mobile Web. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in HTML5, Mobile.

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
    NerdDinner is a website with the audacious goal of “Organizing the world’s nerds and helping them eat in packs.” Because nerds aren’t likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we’ve added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I’ll wait. One of the features we wanted to add was flair. You know about flair, right? It’s a way to let folks who like your site show it off in their own site. For example, here’s my StackOverflow flair: Great! So how could we add some of this flair stuff to NerdDinner? What do we want to show? If we’re going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they’ll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone visits from Egypt visits my blog, they should see information about NerdDinners in Egypt. That’s geolocation – localizing site content based on where the browser’s sitting, and it makes sense for flair as well as entire websites. So we’ll set up a simple little callout that prompts them to host a dinner in their area: Hopefully our flair works and there is a dinner near your viewers, so they’ll see another view which lists upcoming dinners near them: The Geolocation Part Generally website geolocation is done by mapping the requestor’s IP address to a geographic area. It’s not an exact science, but I’ve always found it to be pretty accurate. There are (at least) three ways to handle it: You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well. You use HTML 5 Geolocation API or Google Gears or some other browser based solution. I think those are cool (I use Google Gears a lot), but they’re both in flux right now and I don’t think either has a wide enough of an install base yet to rely on them. You might want to, but I’ve heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don’t mean talk out of line, but we all laugh behind your back a bit. But, hey, it’s up to you. It’s your flair or whatever. There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That’s what we’re doing. I looked at a few different services and settled on IPInfoDB. It’s free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple. 
We hit a URL like this: http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false … and we get an XML response back like this… <?xml version="1.0" encoding="UTF-8"?> <Response> <Ip>74.125.45.100</Ip> <Status>OK</Status> <CountryCode>US</CountryCode> <CountryName>United States</CountryName> <RegionCode>06</RegionCode> <RegionName>California</RegionName> <City>Mountain View</City> <ZipPostalCode>94043</ZipPostalCode> <Latitude>37.4192</Latitude> <Longitude>-122.057</Longitude> </Response> So we’ll build some data transfer classes to hold the location information, like this: public class LocationInfo { public string Country { get; set; } public string RegionName { get; set; } public string City { get; set; } public string ZipPostalCode { get; set; } public LatLong Position { get; set; } } public class LatLong { public float Lat { get; set; } public float Long { get; set; } } And now hitting the service is pretty simple: public static LocationInfo HostIpToPlaceName(string ip) { string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false"; url = String.Format(url, ip); var result = XDocument.Load(url); var location = (from x in result.Descendants("Response") select new LocationInfo { City = (string)x.Element("City"), RegionName = (string)x.Element("RegionName"), Country = (string)x.Element("CountryName"), ZipPostalCode = (string)x.Element("ZipPostalCode"), Position = new LatLong { Lat = (float)x.Element("Latitude"), Long = (float)x.Element("Longitude") } }).First(); return location; } Getting The User’s IP Okay, but first we need the end user’s IP, and you’d think it would be as simple as reading the value from HttpContext: HttpContext.Current.Request.UserHostAddress But you’d be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn’t get you the IP for users behind a proxy. That’s in another header, “HTTP_X_FORWARDED_FOR”. So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity: string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; We’re almost set to wrap this up, but first let’s talk about our views. Yes, views, because we’ll have two. Selecting the View We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We’ll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We’ll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here’s our general flow for our controller action: Get the user’s IP Translate it to a location Grab the top three upcoming dinners that are near that location Select the view based on the format (defaulted to “html”) Return a FlairViewModel which contains the list of dinners and the location information public ActionResult Flair(string format = "html") { string SourceIP = string.IsNullOrEmpty( Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? 
Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; var location = GeolocationService.HostIpToPlaceName(SourceIP); var dinners = dinnerRepository. FindByLocation(location.Position.Lat, location.Position.Long). OrderByDescending(p => p.EventDate).Take(3); // Select the view we'll return. // Using a switch because we'll add in JSON and other formats later. string view; switch (format.ToLower()) { case "javascript": view = "JavascriptFlair"; break; default: view = "Flair"; break; } return View( view, new FlairViewModel { Dinners = dinners.ToList(), LocationName = string.IsNullOrEmpty(location.City) ? "you" : String.Format("{0}, {1}", location.City, location.RegionName) } ); } Note: I’m not in love with the logic here, but it seems like overkill to extract the switch statement away when we’ll probably just have two or three views. What do you think? The HTML View The HTML version of the view is pretty simple – the only thing of any real interest here is the use of an extension method to truncate strings that are would cause the titles to wrap. public static string Truncate(this string s, int maxLength) { if (string.IsNullOrEmpty(s) || maxLength <= 0) return string.Empty; else if (s.Length > maxLength) return s.Substring(0, maxLength) + "..."; else return s; }   So here’s how the HTML view ends up looking: <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Nerd Dinner</title> <link href="/Content/Flair.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="nd-wrapper"> <h2 id="nd-header">NerdDinner.com</h2> <div id="nd-outer"> <% if (Model.Dinners.Count == 0) { %> <div id="nd-bummer"> Looks like there's no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div> <% } else { %> <h3> Dinners Near You</h3> <ul> <% foreach (var item in Model.Dinners) { %> <li> <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li> <% } %> </ul> <% } %> <div id="nd-footer"> More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div> </div> </div> </body> </html> You’d include this in a page using an IFRAME, like this: <IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME> The Javascript view The Javascript flair is written so you can include it in a webpage with a simple script include, like this: <script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script> The goal of this view is very similar to the HTML embed view, with a few exceptions: We’re creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that’s a safe bet. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute. 
That means we can’t use Html.ActionLink, since it generates relative routes. We need to escape everything since it’s being written out as strings. We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType%> directive. <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>'); document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">'); <% if (Model.Dinners.Count == 0) { %> document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>'); <% } else { %> document.write('<h3> Dinners Near You</h3><ul>'); <% foreach (var item in Model.Dinners) { %> document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>'); <% } %> document.write('</ul>'); <% } %> document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>'); Getting IP’s for Testing There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better. Next steps I think the next step here is to minimize load – you know, in case people start actually using this flair. There are two places to think about – the NerdDinner.com servers, and the services we’re using for Geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ: We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). 
Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit. So the first step there is to save the geolocalization information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.
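In NerdDinner's case the Flair controller action would most likely set and read that cookie on the server side, but the mechanics of a time-limited cache cookie are the same either way. Here is a rough JavaScript sketch of the idea; the cookie name and the one-day lifetime are my own choices, not from the article:
// Cache the geolocation lookup result in a cookie so repeat visits
// don't have to hit the geolocation service again.
function saveLocationCookie(locationInfo) {
  var expires = new Date();
  expires.setDate(expires.getDate() + 1); // keep it for one day
  document.cookie = "nd-location=" + encodeURIComponent(JSON.stringify(locationInfo)) +
    "; expires=" + expires.toUTCString() + "; path=/";
}

function readLocationCookie() {
  var match = document.cookie.match(/(?:^|;\s*)nd-location=([^;]*)/);
  return match ? JSON.parse(decodeURIComponent(match[1])) : null;
}

// Only call the geolocation service when there is no cached value.
var cachedLocation = readLocationCookie();
if (!cachedLocation) {
  // ...look up the location, then cache it:
  // saveLocationCookie(lookedUpLocation);
}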

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to separate the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.

    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.

    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.

    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.

    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.

    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.

    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.

    - Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface (roughly the kind of comparison sketched below).

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
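    The article does not include the prototype perl scripts themselves, so the following is only a rough shell approximation of that validation step. The file names are placeholders, and the awk column positions are an assumption about elfdump's output layout rather than a documented contract.

        # Extract the exported global symbols (name and type) from each object
        # and diff the two lists; a faithful stub should produce no differences.
        interface() {
            elfdump -s "$1" | awk '$5 == "GLOB" && $8 != "UNDEF" { print $9, $4 }' | sort -u
        }
        interface /lib/libc.so.1  > /tmp/real.iface
        interface stubs/libc.so.1 > /tmp/stub.iface
        diff /tmp/real.iface /tmp/stub.iface && echo "linking interfaces match"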
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern.

    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.

    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.

    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata to tell you that a stub wasn't a real object. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

    ...and so, I backed away, put it down for a few months and did other work...

    ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.

    - A top level setup rule creates the proto area and performs the operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.

    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.

    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option (a hypothetical sketch of this appears a few paragraphs below). This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.

    - A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule.

    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.

    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld.

    With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration.

    This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
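    Before getting to what those measurements showed, here is a concrete illustration of the "same command line plus -z stub" property described in the list above. The library name, object list, mapfile name, and proto paths are placeholders of mine, not the actual ON makefile rules; only the -z stub flag and the general option set come from the article.

        # Build the stub first: mapfile only, no input objects, delivered to the stub proto...
        ld -64 -z stub -o $STUBROOT/lib/libfoo.so.1 -G -hlibfoo.so.1 \
            -ztext -zdefs -Bdirect -M mapfile-vers

        # ...then the real object, using essentially the same command line, but
        # resolving its dependencies against stubs rather than the real libraries.
        ld -64 -o $ROOT/lib/libfoo.so.1 -G -hlibfoo.so.1 \
            -ztext -zdefs -Bdirect -M mapfile-vers pics/*.o -L$STUBROOT/lib -lc

    Because the two invocations differ so little, the stub and stubinstall rules can share most of their definitions with the existing install machinery, which is what keeps the scheme cheap to maintain.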
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.

    - It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.

    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Setup VPN issue on Ubuntu Server 12.04

    - by Yozone W.
    I have a problem setting up a VPN server on my Ubuntu VPS. Here is my server environment:

        Ubuntu Server 12.04 x86_64
        xl2tpd 1.3.1+dfsg-1
        pppd 2.4.5-5ubuntu1
        openswan 1:2.6.38-1~precise1

    After installing the software and configuring it:

        ipsec verify
        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path [OK]
        Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey)
        Checking for IPsec support in kernel [OK]
        SAref kernel support [N/A]
        NETKEY: Testing XFRM related proc values [OK] [OK] [OK]
        Checking that pluto is running [OK]
        Pluto listening for IKE on udp 500 [OK]
        Pluto listening for NAT-T on udp 4500 [OK]
        Checking for 'ip' command [OK]
        Checking /bin/sh is not /bin/dash [WARNING]
        Checking for 'iptables' command [OK]
        Opportunistic Encryption Support [DISABLED]

    /var/log/auth.log message:

        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000]
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection]
        Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address]
        Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1
        Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52'
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT"
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0}
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024}
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb}
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled}
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0}
        Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message

    xl2tpd -D message:

        xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs
        xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes
        xl2tpd[4289]: setsockopt recvref[30]: Protocol not available
        xl2tpd[4289]: This binary does not support kernel L2TP.
        xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289
        xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
        xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001
        xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002
        xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006
        xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701

    Then it just stops here, with no further response. I can't connect to the VPN from my Mac client; the /var/log/system.log message:

        Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0
        Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501
        Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])...
        Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started
        Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting.
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me).
        Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established
        Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address]
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA).
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA).

    Can anyone help? Thanks a million!
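    For readers who hit the same symptom (IPsec Phase 1 and Phase 2 complete, but the L2TP layer never answers and pppd gives up), a common first check is whether the server's firewall and routing actually allow the L2TP leg. The commands below are a generic, hedged sketch rather than a confirmed fix for this particular setup; interface names and firewall policies may need adjusting.

        # Make sure the server forwards traffic for VPN clients.
        sysctl -w net.ipv4.ip_forward=1

        # Allow the UDP ports involved: IKE (500), NAT-T (4500), and L2TP (1701).
        iptables -A INPUT -p udp --dport 500 -j ACCEPT
        iptables -A INPUT -p udp --dport 4500 -j ACCEPT
        iptables -A INPUT -p udp --dport 1701 -j ACCEPT   # ideally restricted to IPsec-protected traffic

        # Confirm xl2tpd is actually listening on UDP 1701.
        netstat -anu | grep 1701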

    Read the article

< Previous Page | 261 262 263 264 265 266 267 268 269 270 271 272  | Next Page >