Search Results

Search found 17550 results on 702 pages for 'real world'.


  • Secure Your Wireless Router: 8 Things You Can Do Right Now

    - by Chris Hoffman
    A security researcher recently discovered a backdoor in many D-Link routers, allowing anyone to access the router without knowing the username or password. This isn’t the first router security issue and won’t be the last. To protect yourself, you should ensure that your router is configured securely. This is about more than just enabling Wi-Fi encryption and not hosting an open Wi-Fi network. Disable Remote Access Routers offer a web interface, allowing you to configure them through a browser. The router runs a web server and makes this web page available when you’re on the router’s local network. However, most routers offer a “remote access” feature that allows you to access this web interface from anywhere in the world. Even if you set a username and password, if you have a D-Link router affected by this vulnerability, anyone would be able to log in without any credentials. If you have remote access disabled, you’d be safe from people remotely accessing your router and tampering with it. To do this, open your router’s web interface and look for the “Remote Access,” “Remote Administration,” or “Remote Management” feature. Ensure it’s disabled — it should be disabled by default on most routers, but it’s good to check. Update the Firmware Like our operating systems, web browsers, and every other piece of software we use, router software isn’t perfect. The router’s firmware — essentially the software running on the router — may have security flaws. Router manufacturers may release firmware updates that fix such security holes, although they quickly discontinue support for most routers and move on to the next models. Unfortunately, most routers don’t have an auto-update feature like Windows and our web browsers do — you have to check your router manufacturer’s website for a firmware update and install it manually via the router’s web interface. Check to be sure your router has the latest available firmware installed. Change Default Login Credentials Many routers have default login credentials that are fairly obvious, such as the password “admin”. If someone gained access to your router’s web interface through some sort of vulnerability or just by logging onto your Wi-Fi network, it would be easy to log in and tamper with the router’s settings. To avoid this, change the router’s password to a non-default password that an attacker couldn’t easily guess. Some routers even allow you to change the username you use to log into your router. Lock Down Wi-Fi Access If someone gains access to your Wi-Fi network, they could attempt to tamper with your router — or just do other bad things like snoop on your local file shares or use your connection to downloaded copyrighted content and get you in trouble. Running an open Wi-Fi network can be dangerous. To prevent this, ensure your router’s Wi-Fi is secure. This is pretty simple: Set it to use WPA2 encryption and use a reasonably secure passphrase. Don’t use the weaker WEP encryption or set an obvious passphrase like “password”. Disable UPnP A variety of UPnP flaws have been found in consumer routers. Tens of millions of consumer routers respond to UPnP requests from the Internet, allowing attackers on the Internet to remotely configure your router. Flash applets in your browser could use UPnP to open ports, making your computer more vulnerable. UPnP is fairly insecure for a variety of reasons. To avoid UPnP-based problems, disable UPnP on your router via its web interface. 
If you use software that needs ports forwarded — such as a BitTorrent client, game server, or communications program — you’ll have to forward ports on your router without relying on UPnP. Log Out of the Router’s Web Interface When You’re Done Configuring It Cross site scripting (XSS) flaws have been found in some routers. A router with such an XSS flaw could be controlled by a malicious web page, allowing the web page to configure settings while you’re logged in. If your router is using its default username and password, it would be easy for the malicious web page to gain access. Even if you changed your router’s password, it would be theoretically possible for a website to use your logged-in session to access your router and modify its settings. To prevent this, just log out of your router when you’re done configuring it — if you can’t do that, you may want to clear your browser cookies. This isn’t something to be too paranoid about, but logging out of your router when you’re done using it is a quick and easy thing to do. Change the Router’s Local IP Address If you’re really paranoid, you may be able to change your router’s local IP address. For example, if its default address is 192.168.0.1, you could change it to 192.168.0.150. If the router itself were vulnerable and some sort of malicious script in your web browser attempted to exploit a cross site scripting vulnerability, accessing known-vulnerable routers at their local IP address and tampering with them, the attack would fail. This step isn’t completely necessary, especially since it wouldn’t protect against local attackers — if someone were on your network or software was running on your PC, they’d be able to determine your router’s IP address and connect to it. Install Third-Party Firmwares If you’re really worried about security, you could also install a third-party firmware such as DD-WRT or OpenWRT. You won’t find obscure back doors added by the router’s manufacturer in these alternative firmwares. Consumer routers are shaping up to be a perfect storm of security problems — they’re not automatically updated with new security patches, they’re connected directly to the Internet, manufacturers quickly stop supporting them, and many consumer routers seem to be full of bad code that leads to UPnP exploits and easy-to-exploit backdoors. It’s smart to take some basic precautions. Image Credit: Nuscreen on Flickr     
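    The steps above are all point-and-click, but you can sanity-check the "disable remote access" advice from outside your own network. The sketch below is my own illustration, not part of the original article: it simply tries to open a TCP connection to a few ports that router web interfaces commonly listen on. The target address is a placeholder you would replace with your router's public IP, the port list is an assumption, and it assumes .NET 6 or later.

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

// Run this from a network *outside* your own (e.g. a phone hotspot) against your public IP.
// Any port that answers suggests the router's web interface is still reachable remotely.
class RouterPortCheck
{
    static async Task Main()
    {
        string host = "203.0.113.10";          // placeholder public IP - replace with your own
        int[] ports = { 80, 443, 8080, 8443 }; // ports routers commonly use for their web UI

        foreach (var port in ports)
        {
            using var client = new TcpClient();
            try
            {
                var connect = client.ConnectAsync(host, port);
                var finished = await Task.WhenAny(connect, Task.Delay(3000));
                if (finished == connect)
                {
                    await connect; // observe any connection error
                    Console.WriteLine($"Port {port} is OPEN - remote management may still be exposed.");
                    continue;
                }
                Console.WriteLine($"Port {port} timed out - good, nothing is answering.");
            }
            catch (SocketException)
            {
                Console.WriteLine($"Port {port} refused the connection - good, nothing is listening.");
            }
        }
    }
}
```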

    Read the article

  • Testing Workflows &ndash; Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspxThis is the second of two posts on some common strategies for approaching the job of writing tests.  The previous post covered test-after workflows where as this will focus on test-first.  Each workflow presented is a method of attack for adding tests to a project.  The more tools in your tool belt the better.  So here is a partial list of some test-first methodologies. Ping Pong Ping Pong is a methodology commonly used in pair programing.  One developer will write a new failing test.  Then they hand the keyboard to their partner.  The partner writes the production code to get the test passing.  The partner then writes the next test before passing the keyboard back to the original developer. The reasoning behind this testing methodology is to facilitate pair programming.  That is to say that this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. low bus number). Test Blazer Test Blazing, in some respects, is also a pairing strategy.  The developers don’t work side by side on the same task at the same time.  Instead one developer is dedicated to writing tests at their own desk.  They write failing test after failing test, never touching the production code.  With these tests they are defining the specification for the system.  The developer most familiar with the specifications would be assigned this task. The next day or later in the same day another developer fetches the latest test suite.  Their job is to write the production code to get those tests passing.  Once all the tests pass they fetch from source control the latest version of the test project to get the newer tests. This methodology has some of the benefits of pair programming, namely lowering the bus number.  This can be good way adding an extra developer to a project without slowing it down too much.  The production coder isn’t slowed down writing tests.  The tests are in another project from the production code, so there shouldn’t be any merge conflicts despite two developers working on the same solution. This methodology is also a good test for the tests.  Can another developer figure out what system should do just by reading the tests?  This question will be answered as the production coder works there way through the test blazer’s tests. Test Driven Development (TDD) TDD is a highly disciplined practice that calls for a new test and an new production code to be written every few minutes.  There are strict rules for when you should be writing test or production code.  You start by writing a failing (red) test, then write the simplest production code possible to get the code working (green), then you clean up the code (refactor).  This is known as the red-green-refactor cycle. The goal of TDD isn’t the creation of a suite of tests, however that is an advantageous side effect.  The real goal of TDD is to follow a practice that yields a better design.  The practice is meant to push the design toward small, decoupled, modularized components.  This is generally considered a better design that large, highly coupled ball of mud. TDD accomplishes this through the refactoring cycle.  Refactoring is only possible to do safely when tests are in place.  In order to use TDD developers must be trained in how to look for and repair code smells in the system.  Through repairing these sections of smelly code (i.e. 
a refactoring) the design of the system emerges. For further information on TDD, I highly recommend the series “Is TDD Dead?”.  It discusses its pros and cons and when it is best used. Acceptance Test Driven Development (ATDD) Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, Acceptance Tests focuses on the larger integrated environment.  Acceptance Tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer. ATDD generally uses the same tools as TDD.  However, ATDD uses fewer mocks and test doubles than TDD. ATDD often complements TDD; they aren’t competing methods.  A full test suite will usually consist of a large number of unit (created via TDD) tests and a smaller number of acceptance tests. Behaviour Driven Development (BDD) BDD is more about audience than workflow.  BDD pushes the testing realm out towards the client.  Developers, managers and the client all work together to define the tests. Typically different tooling is used for BDD than acceptance and unit testing.  This is done because the audience is not just developers.  Tools using the Gherkin family of languages allow for test scenarios to be described in an English format.  Other tools such as MSpec or FitNesse also strive for highly readable behaviour driven test suites. Because these tests are public facing (viewable by people outside the development team), the terminology usually changes.  You can’t get away with the same technobabble you can with unit tests written in a programming language that only developers understand.  For starters, they usually aren’t called tests.  Usually they’re called “examples”, “behaviours”, “scenarios”, or “specifications”. This may seem like a very subtle difference, but I’ve seen this small terminology change have a huge impact on the acceptance of the process.  Many people have a bias that testing is something that comes at the end of a project.  When you say we need to define the tests at the start of the project many people will immediately give that a lower priority on the project schedule.  But if you say we need to define the specification or behaviour of the system before we can start, you’ll get more cooperation.   Keep these test-first and test-after workflows in your tool belt.  With them you’ll be able to find new opportunities to apply them.
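    To make the red-green-refactor cycle concrete, here is a minimal sketch (my own example, assuming the xUnit test framework; the PrimeChecker name is invented for illustration). The tests are written first and fail, the simplest implementation makes them pass, and the green tests then make refactoring safe.

```csharp
using Xunit;

// Red: these tests are written first, against a PrimeChecker that does not exist yet,
// so at first they do not even compile - that counts as "red".
public class PrimeCheckerTests
{
    [Fact]
    public void Seven_IsPrime()
    {
        Assert.True(PrimeChecker.IsPrime(7));
    }

    [Fact]
    public void Nine_IsNotPrime()
    {
        Assert.False(PrimeChecker.IsPrime(9));
    }
}

// Green: the simplest production code that makes the tests pass.
public static class PrimeChecker
{
    public static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
        {
            if (n % i == 0) return false;
        }
        return true;
    }
}

// Refactor: with the tests green, the implementation can be cleaned up or optimised safely -
// any regression immediately turns a test red again.
```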

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 &ndash; Memory Profiler)

    - by ToStringTheory
    One of the things that people might not know about me, is my obsession to make my code as efficient as possible. Many people might not realize how much of a task or undertaking that this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine what is the best way to implement a feature to accomplish the efficiency that I look to achieve. One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS memory profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products, in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction – Part 2 In my last post, I reviewed the feature packed Red Gate ANTS Performance Profiler.  Separate from the Red Gate Performance Profiler is the Red Gate ANTS Memory Profiler – a simple, easy to use utility for checking how your application is handling memory management…  A tool that I wish I had had many times in the past.  This post will be focusing on the ANTS Memory Profiler and its tool set. The memory profiler has a large assortment of features just like the Performance Profiler, with the new session looking nearly exactly alike: ANTS Memory Profiler Memory profiling is not something that I have to do very often…  In the past, the few cases I’ve had to find a memory leak in an application I have usually just had to trace the code of the operations being performed to look for oddities…  Sadly, I have come across more undisposed/non-using’ed IDisposable objects, usually from ADO.Net than I would like to ever see.  Support is not fun, however using ANTS Memory Profiler makes this task easier.  For this round of testing, I am going to use the same code from my previous example, using the WPF application. This time, I will choose the ‘Profile Memory’ option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project, and then launches the ANTS Memory Profiler to help.  It prepopulates all of the fields with the current project information, and all I have to do is select the ‘Start Profiling’ option. When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler.  You start by getting to the point in your application that you want to profile, and then taking a ‘Memory Snapshot’.  This performs a full garbage collection, and snapshots the managed heap.  Using the same WPF app as before, I will go ahead and take a snapshot now. As you can see, ANTS is already giving me lots of information regarding the snapshot, however this is just a snapshot.  
The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, and then take another snapshot and perform a diff between them to see what has changed.  I am going to go ahead and generate 5000 primes, and then take another snapshot: As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last.  Information such as difference in memory usage, fragmentation, class usage, etc…  If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots. If you beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS.  When you first do the comparison, you start on the Summary screen.  You can either use the charts at the bottom, or switch to the class list screen to get to the next step.  Here is the class list screen: As you can see, it lists information about all of the instances between the snapshots, as well as at the bottom giving you a way to filter by telling ANTS what your problem is.  I am going to go ahead and select the Int16[] to look at the Instance Categorizer Using the instance categorizer, you can travel backwards to see where all of the instances are coming from.  It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control.  This image will probably be even harder to see, however using the ‘Instance Retention Graph’, you can trace an objects memory inheritance up the chain to see its roots as well.  This is a simple example, as this is simply a known element.  Usually you would be profiling an actual problem, and comparing those differences.  I know in the past, I have spotted a problem where a new context was created per page load, and it was rooted into the application through an event.  As the application began to grow, performance and reliability problems started to emerge.  A tool like this would have been a great way to identify the problem quickly. Overview Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.  3 Biggest Pros: Easy to use interface with lots of options for configuring profiling session Intuitive and helpful interface for drilling down from summary, to instance, to root graphs ANTS provides an API for controlling the profiler. Not many options, but still helpful. 2 Biggest Cons: Inability to automatically snapshot the memory by interval Lack of complete integration with Visual Studio via an extension panel Ratings Ease of Use (9/10) – I really do believe that they have brought simplicity to the once difficult task of memory profiling.  I especially liked how it stepped you further into the drilldown by directing you towards the best options. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Features (7/10) – A really great set of features all around in the application, however, I would like to see some ability for automatically triggering snapshots based on intervals or framework level items such as events. Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  their people are friendly, helpful, and happy! UI / UX (9/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat! 
Overall (9/10) – Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  Thank you for reading up to here, or skipping ahead – I told you it would be shorter!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox
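    As a companion to the review, here is a small, self-contained C# illustration (my own code, not Red Gate's samples) of the two leak patterns mentioned above - an instance rooted through a static event, and an undisposed ADO.NET IDisposable. This is exactly the kind of retention chain the Instance Categorizer and Instance Retention Graph are designed to expose.

```csharp
using System;
using System.Data.SqlClient;

// In the Instance Retention Graph, every PageViewModel would show up rooted through the
// static ReportingService.DataRefreshed event long after the "page" that created it is gone.
public static class ReportingService
{
    public static event EventHandler DataRefreshed;

    public static void RaiseRefresh()
    {
        EventHandler handler = DataRefreshed;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

public class PageViewModel
{
    // Makes each leaked instance easy to spot in the class list (1 MB per instance).
    private readonly byte[] _cache = new byte[1024 * 1024];

    public PageViewModel()
    {
        // Leak 1: subscribing to a static event and never unsubscribing. The static event's
        // invocation list now holds a reference to this instance forever.
        ReportingService.DataRefreshed += delegate { Refresh(); };
    }

    private void Refresh()
    {
        // Leak 2: an IDisposable ADO.NET object created without a using block.
        // (Connection string is a placeholder; shown only to illustrate the pattern.)
        var connection = new SqlConnection("Server=.;Database=Reports;Integrated Security=true");
        connection.Open();
        // ... run queries ...
        // connection.Dispose() is never called, so the connection is held until finalization.
    }

    // Fix: implement IDisposable, unsubscribe the event handler there, and wrap the
    // connection in a using block: using (var connection = new SqlConnection(...)) { ... }
}
```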

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
    Today’s CIO job description:  ‘’Align IT infrastructure and solutions with business goals and objectives ; AND while doing so reduce costs; BUT ALSO, be innovative, ensure the architectures are adaptable and agile as we need to act today on the changes that we may request tomorrow.”   Sound like an unachievable request? The fact is, reality dictates that CIO’s are put under this type of pressure to deliver more with less. In a past career phase I spent a few years as an IT Relationship Manager for a large Insurance company. This is a role that we see all too infrequently in many of our customers, and it’s a shame.  The purpose of this role was to build a bridge, a relationship between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and more importantly that objectives were commonly understood - hence service and projects were delivered to time, to budget and actually solved the business problems. In reality IT and the business are already married, but the relationship is most often defined as ‘supplier’ of IT rather than a ‘trusted partner’. To deliver business value they need to understand how to work together effectively to attain this next level of partnership. The Business cannot compete if they do not get a new product to market ahead of the competition, or for example act in a timely manner to address a new industry problem such as a legislative change. An even better example is when the Application or Service fails and the Business takes a hit by bad publicity, being trending topics on social media and losing direct revenue from online channels. For this reason alone Business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester’s recent study that found ‘many IT respondents considering themselves to be trusted partners of the business but their efforts are impaired by the inadequacy of tools and organizations’.  IT Meet the Business; Business Meet IT So what is going on? We talk about aligning the business with IT but the reality is it’s difficult to do. Like any relationship each side has different goals and needs and language can be a barrier; business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way?  Well now we can with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2! Enterprise Manager’s Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria.  No longer does IT need to be focused solely on speeds and feeds, performance and throughput – now IT can understand IT’s impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar.  Similarly, now the line of business can understand which IT services are most critical for the KPIs they care about. There are a good deal of resources on Oracle Technology Network that describe the functionality of these products, so I won’t’ rehash them here.  What I want to talk about is what you do with these products. What’s next after we meet? Where do you start? Step 1:  Identify the Signature Experience. 
This is THE business process (or set of processes) that is core to the business, the one that drives the economic engine, the process that a customer recognises the company brand for, reputation, the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this it gets easy as Enterprise Manager 12c makes it easy. Step 2:  Map the Signature Experience to underlying IT.  Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end, think of it like mapping out a critical path; the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end to end customer satisfaction story transparent. Work with the business and define meaningful key performance indicators (KPI’s) and thresholds to enable you to report and action upon. Step 3:  Observe the data over time.  You now have meaningful insight into every step enabling your signature experience and you understand the implication of that experience on your underlying IT.  Watch if for a few months, see what happens and reconvene with your business stakeholders and set clear and measurable targets which can re-define service levels.  Step 4:  Change the information about which you and the business communicate.  It’s amazing what happens when you and the business speak the same language.  You’ll be able to make more informed business and IT decisions. From here IT can identify where/how budget is spent whether on the level of support, performance, capacity, HA, DR, certification etc. IT SLA’s no longer need be focused on metrics such as %availability but structured around business process requirements. The power of this way of thinking doesn’t end here. IT staff get to see and understand how their own role contributes to the business making them accountable for the business service. Take a step further and appraise your staff on the business competencies that are linked to the service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience core to the business and therefore key to the CEO. Chargeback or show back becomes easier to justify as the ‘cost of day per outage’ can be more easily calculated; the business will be able to translate the cost to the business to the cost/value of the underlying IT that supports it. Used this way, Oracle Enterprise Manager 12c is a key enabler to a harmonious relationship between the end customer the business and IT to deliver ultimate service and satisfaction. Just engage with the business upfront, make the signature experience visible and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned to enable you to implement this new way of working.  

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers already did, I’d like to share with the SQL community the feedback on my PASS Summit 2010 workshop. For those who were not there, my workshop was “BI From A-Z”, and its main objective was to introduce people to the BI world not only from a technical point of view but also to insist a lot on the methodological and “engineered” approach. The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me every day over the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines “just” by putting people together and giving them some rules to follow (of course this is an oversimplification, but I think you get what I mean). The key point of engineering is that, after having defined the project blueprint, you can give a huge number of people the rules to follow, the correct tools to implement those rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? Very big question, so my scope is now limited to BI. So that’s the main point of my workshop: an entry-level approach to BI (the level was 200) to allow attendees to learn the basics, to understand which tools they should use for which purpose and, above all, a set of rules and tools to make a BI solution scalable in terms of people working on it, while still maintaining very good quality. All done not by focusing only on the practice but by explaining the theory behind it, to show how it can help *a lot* to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if technologies change. This is a very demanding challenge nowadays with the coming of Denali, its columnstore storage and the shiny new DAX language. As you may understand, I was looking forward to getting the feedback, since you may have noticed that there’s a lot of “architectural” stuff in IT but really nothing on “engineering”. So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more “engineering” is something only I feel or something more broadly shared. I’m very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (out of nearly 200)! Here are the detailed evaluations (average score, followed by the number of responses per rating):
How would you rate the usefulness of the information presented in your day-to-day environment? 4.75 (3: 1, 4: 12, 5: 42)
How would you rate the Speaker’s presentation skills? 4.80 (3: 1, 4: 9, 5: 45)
How would you rate the Speaker’s knowledge of the subject? 4.95 (4: 3, 5: 52)
How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75 (3: 2, 4: 10, 5: 43)
How would you rate the amount of time allocated to cover the topic/session? 4.44 (3: 7, 4: 17, 5: 31)
How would you rate the quality of the presentation materials? 4.62 (4: 21, 5: 34)
The comments were all very positive.
Many of them asked for more time on the subject (or to shorten the very last topics). I’ll treasure these comments and review the content accordingly. We’ll organize a two-day class on this topic, where more examples will be shown and some arguments will be explained more deeply. I’d just like to answer a comment that asks how much of what I showed is “universally applicable”. I can tell you that all of our BI projects follow these rules, and they’ve been applied to different markets (Insurance, Fashion, GDO) with different people and different teams, and they allowed us to be “Adaptive” towards the customer. The better the rules are defined, and the more tools there are that support their implementation, the easier it is to add new people to the project and to add or change solution features. Think of a car. How come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements. Last but not least: thanks a lot to everyone for coming!

    Read the article

  • Screenshot Tour: Ubuntu Touch 14.04 on a Nexus 7

    - by Chris Hoffman
    Ubuntu 14.04 LTS will “form the basis of the first commercially available Ubuntu tablets,” according to Canonical. We installed Ubuntu Touch 14.04 on our own hardware to see what those tablets will be like. We don’t recommend installing this yourself, as it’s still not a polished, complete experience. We’re using “Ubuntu Touch” as shorthand here — apparently this project’s new name is “Ubuntu For Devices.” The Welcome Screen Ubuntu’s touch interface is all about edge swipes and hidden interface elements — it has a lot in common with Windows 8, actually. You’ll see the welcome screen when you boot up or unlock a Ubuntu tablet or phone. If you have new emails, text messages, or other information, it will appear on this screen along with the time and date. If you don’t, you’ll just see a message saying “No data sources available.” The Dash Swipe in from the right edge of the welcome screen to access the Dash, or home screen. This is actually very similar to the Dash on Ubuntu’s Unity desktop. This isn’t a surprise — Canonical wants the desktop and touch versions of Ubuntu to use the same code. In the future, the desktop and touch versions of Ubuntu will use the same version of Unity and Unity will adjust its interface depending on what type of device your’e using. Here you’ll find apps you have installed and apps available to install. Tap an installed app to launch it or tap an available app to view more details and install it. Tap the My apps or Available headings to view a complete list of apps you have installed or apps you can install. Tap the Search box at the top of the screen to start searching — this is how you’d search for new apps to install. As you’d expect, a touch keyboard appears when you tap in the Search field or any other text field. The launcher isn’t just for apps. Tap the Apps heading at the top of the screen and you’ll see hidden text appear — Music, Video, and Scopes. This hidden navigation is used throughout Ubuntu’s different apps and can be easy to miss at first. Swipe to the left or right to move between these screens. These screens are also similar to the different panels in Unity on the desktop. The Scopes section allows you to view different search scopes you have installed. These are used to search different sources when you start a search from the Dash. Search from the Music or Videos scopes to search for local media files on your device or media files online. For example, searching in the Music scope will show you music results from Grooveshark by default. Navigating Ubuntu Touch Swipe in from the left edge anywhere on the system to open the launcher, a bar with shortcuts to apps. This launcher is very similar to the launcher on the left of Ubuntu’s Unity desktop — that’s the whole idea, after all. Once you’ve opened an app, you can leave the app by swiping in from the left. The launcher will appear — keep moving your finger towards the right edge of teh screen. This will swipe the current app off the screen, taking you back to the Dash. Once back on the Dash, you’ll see your open apps represented as thumbnails under Recent. Tap a thumbnail here to go back to a running app. To remove an app from here, long-press it and tap the X button that appears. Swipe in from the right edge in any app to quickly switch between recent apps. Swipe in from the right edge and hold your finger down to reveal an application switcher that shows all your recent apps and lets you choose between them. Swipe down from the top of the screen to access the indicator panel. 
Here you can connect to Wi-Fi networks, view upcoming events, control GPS and Bluetooth hardware, adjust sound settings, see incoming messages, and more. This panel is for quick access to hardware settings and notifications, just like the indicators on Ubuntu’s Unity desktop. The Apps System settings not included in the pull-down panel are available in the System Settings app. To access it, tap My apps on the Dash and tap System Settings, search for the System Settings app, or open the launcher bar and tap the settings icon. The settings here a bit limited compared to other operating systems, but many of the important options are available here. You can add Evernote, Ubuntu One, Twitter, Facebook, and Google accounts from here. A free Ubuntu One account is mandatory for downloading and updating apps. A Google account can be used to sync contacts and calendar events. Some apps on Ubuntu are native apps, while many are web apps. For example, the Twitter, Gmail, Amazon, Facebook, and eBay apps included by default are all web apps that open each service’s mobile website as an app. Other applications, such as the Weather, Calendar, Dialer, Calculator, and Notes apps are native applications. Theoretically, both types of apps will be able to scale to different screen resolutions. Ubuntu Touch and Ubuntu desktop may one day share the same apps, which will adapt to different display sizes and input methods. Like Windows 8 apps, Ubuntu apps hide interface elements by default, providing you with a full-screen view of the content. Swipe up from the bottom of an app’s screen to view its interface elements. For example, swiping up from the bottom of the Web Browser app reveals Back, Forward, and Refresh buttons, along with an address bar and Activity button so you can view current and recent web pages. Swipe up even more from the bottom and you’ll see a button hovering in the middle of the app. Tap the button and you’ll see many more settings. This is an overflow area for application options and functions that can’t fit on the navigation bar. The Terminal app has a few surprising Easter eggs in this panel, including a “Hack into the NSA” option. Tap it and the following text will appear in the terminal: That’s not very nice, now tracing your location . . . . . . . . . . . .Trace failed You got away this time, but don’t try again. We’d expect to see such Easter eggs disappear before Ubuntu Touch actually ships on real devices. Ubuntu Touch has come a long way, but it’s still not something you want to use today. For example, it doesn’t even have a built-in email client — you’ll have to us your email service’s mobile website. Few apps are available, and many of the ones that are are just mobile websites. It’s not a polished operating system intended for normal users yet — it’s more of a preview for developers and device manufacturers. If you really want to try it yourself, you can install it on a Wi-Fi Nexus 7 (2013), Nexus 10, or Nexus 4 device. Follow Ubuntu’s installation instructions here.

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
    Currently, the Azure PaaS does not offer a distributed\resilient task scheduling service.  If you do want to host a task scheduling product\solution off-premise (and ideally use Azure), what are your options?

PaaS

Option 1: Worker Roles
Use a worker role to schedule and execute actions at specific time periods.  There are a few frameworks available to assist with this:
http://azuretoolkit.codeplex.com
https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler
http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure - This addresses a slightly different set of requirements. It’s a more dynamic approach for queuing up tasks, but not repeatable tasks (e.g. daily).
I found the Azure Toolkit option the most simple to implement.

Step 1: Create a domain entity implementing IJob for each job to schedule.  In this sample, I asynchronously call a WCF service method.

    namespace Acme.WorkerRole.Jobs
    {
        using AzureToolkit;
        using ScheduledTasksService;

        public class UploadEmployeesJob : IJob
        {
            public void Run()
            {
                // Call Tasks Service
                var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
                client.UploadEmployees();
                client.Close();
            }
        }
    }

Step 2: In the worker role Run method, add the jobs to the toolkit engine.

    namespace Acme.WorkerRole
    {
        using AzureToolkit.Engine;
        using Jobs;

        public class WorkerRole : WorkerRoleEntryPoint
        {
            public override void Run()
            {
                var engine = new CloudEngine();

                // Add scheduled jobs (using cron syntax - see http://www.adminschoice.com/crontab-quick-reference).

                // 1. Upload Employees job - 8:00 PM every weekday (Mon-Fri)
                engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; });
                // 2. Purge Data job - 10 AM every Saturday
                engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; });
                // 3. Process Exceptions job - every 5 minutes
                engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; });

                engine.Run();
                base.Run();
            }
        }
    }

Pros: The Azure Toolkit option is simple to implement.
Cons: For the AzureToolkit option, you are limited to a single worker role; otherwise, the jobs will be executed multiple times, once for each worker role instance. You are also paying for a continuously running worker role, even if it just processes a single job once a week. If you only have a few scheduled tasks to run calling asynchronous services hosted in different web roles, an extra small worker role is likely to be sufficient. However, even an extra small worker role still costs $14.40/month (03/09/2012).

Option 2: Use a Scheduled Task on an Azure Web Role calling a console app
Set up a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here:
http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/
http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/
http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx

Pros: Fairly easy to implement.
Cons: Supportability - I RDC’ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started.
I also tried deleting the task and rebooting; the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern.
Scalability - multiple instances would trigger multiple tasks. You can only have one instance of the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance, but if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: if we wanted to use scheduled tasks for another client with a scalable WCF service, then we could include the console app and task scripts in a separate web role (e.g. an empty WCF service with no real purpose to it).

SaaS

Option 3: Azure Marketplace
I thought that someone might be offering this type of service via the Azure Marketplace. At the point of writing this blog post, I did not find anyone doing so. https://datamarket.azure.com/
Cons: Nobody currently offers this on the Azure Marketplace.

Option 4: Online Job Scheduling Service Provider
There are plenty of online providers that offer this type of service on a pay-as-you-go approach.  Some of these are free for small usage.  Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron
Pros: No bespoke development for the scheduler.
Cons: Reliance on a third party.

IaaS

Option 5: Set up Scheduling Software on Azure IaaS VMs
One of the job scheduling software offerings could be installed and configured on Azure VMs.  A list of software options is listed here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software
Pros: Enterprise distributed\resilient task scheduling service.
Cons: VM setup and maintenance; software licence costs.

Option 6: VM Gallery
At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options.
Cons: No current VM template.

Summary
For my current project, which had a small handful of tasks to schedule and a limited project budget, I chose option 1 (a worker role using the Azure Toolkit to schedule tasks).  If I was building an enterprise scale solution for the future, options 4 and 5 are currently worthy of consideration. Hopefully, Microsoft will include task scheduling in the future as part of their PaaS offerings.
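    Going back to Option 2, the console application that the Windows Scheduled Task would invoke is not shown in the post. The sketch below is a hypothetical minimal version that reuses the ScheduledTasksServiceClient proxy from the Option 1 sample; the command-line convention and the PurgeData operation are my own assumptions, not part of the original design.

```csharp
using System;

namespace Acme.TaskRunner
{
    // Minimal sketch of the console app a Windows Scheduled Task would call in Option 2.
    // Which operation to run is picked by a command-line argument (my own convention).
    internal static class Program
    {
        private static int Main(string[] args)
        {
            if (args.Length == 0)
            {
                Console.Error.WriteLine("Usage: Acme.TaskRunner.exe <UploadEmployees|PurgeData>");
                return 1;
            }

            var client = new ScheduledTasksService.ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
            try
            {
                switch (args[0])
                {
                    case "UploadEmployees":
                        client.UploadEmployees();
                        break;
                    case "PurgeData":
                        client.PurgeData(); // assumed to exist alongside UploadEmployees
                        break;
                    default:
                        Console.Error.WriteLine("Unknown task: " + args[0]);
                        return 1;
                }
                client.Close();
                return 0;
            }
            catch (Exception ex)
            {
                client.Abort();
                Console.Error.WriteLine(ex);
                return 1; // non-zero exit code so the Scheduled Task history shows the failure
            }
        }
    }
}
```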

    Read the article

  • What's up with LDoms: Part 5 - A few Words about Consoles

    - by Stefan Hinker
    Back again to look at a detail of LDom configuration that is often forgotten - the virtual console server. Remember, LDoms are SPARC systems.  As such, each guest will have it's own OBP running.  And to connect to that OBP, the administrator will need a console connection.  Since it's OBP, and not some x86 BIOS, this console will be very serial in nature ;-)  It's really very much like in the good old days, where we had a terminal concentrator where all those serial cables ended up in.  Just like with other components in LDoms, the virtualized solution looks very similar. Every LDom guest requires exactly one console connection.  Envision this similar to the RS-232 port on older SPARC systems.  The LDom framework provides one or more console services that provide access to these connections.  This would be the virtual equivalent of a network terminal server (NTS), where all those serial cables are plugged in.  In the physical world, we'd have a list somewhere, that would tell us which TCP-Port of the NTS was connected to which server.  "ldm list" does just that: root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- UART 16 7680M 0.4% 27d 8h 22m jupiter bound ------ 5002 20 8G mars active -n---- 5000 2 8G 0.5% 55d 14h 10m venus active -n---- 5001 2 8G 0.5% 56d 40m pluto inactive ------ 4 4G The column marked "CONS" tells us, where to reach the console of each domain. In the case of the primary domain, this is actually a (more) physical connection - it's the console connection of the physical system, which is either reachable via the ILOM of that system, or directly via the serial console port on the chassis. All the other guests are reachable through the console service which we created during the inital setup of the system.  Note that pluto does not have a port assigned.  This is because pluto is not yet bound.  (Binding can be viewed very much as the assembly of computer parts - CPU, Memory, disks, network adapters and a serial console cable are all put together when binding the domain.)  Unless we set the port number explicitly, LDoms Manager will do this on a first come, first serve basis.  For just a few domains, this is fine.  For larger deployments, it might be a good idea to assign these port numbers manually using the "ldm set-vcons" command.  However, there is even better magic associated with virtual consoles. You can group several domains into one console group, reachable through one TCP port of the console service.  This can be useful when several groups of administrators are to be given access to different domains, or for other grouping reasons.  Here's an example: root@sun # ldm set-vcons group=planets service=console jupiter root@sun # ldm set-vcons group=planets service=console pluto root@sun # ldm bind jupiter root@sun # ldm bind pluto root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- UART 16 7680M 6.1% 27d 8h 24m jupiter bound ------ 5002 200 8G mars active -n---- 5000 2 8G 0.6% 55d 14h 12m pluto bound ------ 5002 4 4G venus active -n---- 5001 2 8G 0.5% 56d 42m root@sun # telnet localhost 5002 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. sun-vnts-planets: h, l, c{id}, n{name}, q:l DOMAIN ID DOMAIN NAME DOMAIN STATE 2 jupiter online 3 pluto online sun-vnts-planets: h, l, c{id}, n{name}, q:npluto Connecting to console "pluto" in group "planets" .... Press ~? for control options .. 
What I did here was add the two domains pluto and jupiter to a new console group called "planets" on the service "console" running in the primary domain.  Simply using a group name will create such a group, if it doesn't already exist.  By default, each domain has its own group, using the domain name as the group name.  The group will be available on port 5002, chosen by LDoms Manager because I didn't specify it.  If I connect to that console group, I will now first be prompted to choose the domain I want to connect to from a little menu. Finally, here's an example how to assign port numbers explicitly: root@sun # ldm set-vcons port=5044 group=pluto service=console pluto root@sun # ldm bind pluto root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- UART 16 7680M 3.8% 27d 8h 54m jupiter active -t---- 5002 200 8G 0.5% 30m mars active -n---- 5000 2 8G 0.6% 55d 14h 43m pluto bound ------ 5044 4 4G venus active -n---- 5001 2 8G 0.4% 56d 1h 13m With this, pluto would always be reachable on port 5044 in its own exclusive console group, no matter in which order other domains are bound. Now, you might be wondering why we always have to mention the console service name, "console" in all the examples here.  The simple answer is because there could be more than one such console service.  For all "normal" use, a single console service is absolutely sufficient.  But the system is flexible enough to allow more than that single one, should you need them.  In fact, you could even configure such a console service on a domain other than the primary (or control domain), which would make that domain a real console server.  I actually have a customer who does just that - they want to separate console access from the control domain functionality.  But this is definately a rather sophisticated setup. Something I don't want to go into in this post is access control.  vntsd, which is the daemon providing all these console services, is fully RBAC-aware, and you can configure authorizations for individual users to connect to console groups or individual domain's consoles.  If you can't wait until I get around to security, check out the man page of vntsd. Further reading: The Admin Guide is rather reserved on this subject.  I do recommend to check out the Reference Manual. The manpage for vntsd will discuss all the control sequences as well as the grouping and authorizations mentioned here.

    Read the article

  • SharePoint logging to a list

    - by Norgean
    I recently worked in an environment with several servers. Locating the correct SharePoint log file for error messages, or development trace calls, is cumbersome. And once the solution hit the cloud, it got even worse, as we had no access to the log files at all. Obviously we are not the only ones with this problem, and the current trend seems to be to log to a list. This had become an off-hour project, so rather than do the sensible thing and find a ready-made solution, I decided to do it the hard way. So! Fire up Visual Studio, create yet another empty SharePoint solution, and start to think of some requirements. Easy on/offI want to be able to turn list-logging on and off.Easy loggingFor me, this means being able to use string.Format.Easy filteringLet's have the possibility to add some filtering columns; category and severity, where severity can be "verbose", "warning" or "error". Easy on/off Well, that's easy. Create a new web feature. Add an event receiver, and create the list on activation of the feature. Tear the list down on de-activation. I chose not to create a new content type; I did not feel that it would give me anything extra. I based the list on the generic list - I think a better choice would have been the announcement type. Approximately: public void CreateLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListName);             if (list == null)             {                 var listGuid = web.Lists.Add(LogListName, "Logging for the masses", SPListTemplateType.GenericList);                 list = web.Lists[listGuid];                 list.Title = LogListTitle;                 list.Update();                 list.Fields.Add(Category, SPFieldType.Text, false);                 var stringColl = new StringCollection();                 stringColl.AddRange(new[]{Error, Information, Verbose});                 list.Fields.Add(Severity, SPFieldType.Choice, true, false, stringColl);                 ModifyDefaultView(list);             }         }Should be self explanatory, but: only create the list if it does not already exist (d'oh). Best practice: create it with a Url-friendly name, and, if necessary, give it a better title. ...because otherwise you'll have to look for a list with a name like "Simple_x0020_Log". I've added a couple of fields; a field for category, and a 'severity'. Both to make it easier to find relevant log messages. Notice that I don't have to call list.Update() after adding the fields - this would cause a nasty error (something along the lines of "List locked by another user"). The function for deleting the log is exactly as onerous as you'd expect:         public void DeleteLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListTitle);             if (list != null)             {                 list.Delete();             }         } So! "All" that remains is to log. Also known as adding items to a list. Lots of different methods with different signatures end up calling the same function. For example, LogVerbose(web, message) calls LogVerbose(web, null, message) which again calls another method which calls: private static void Log(SPWeb web, string category, string severity, string textformat, params object[] texts)         {             if (web != null)             {                 var list = web.Lists.TryGetList(LogListTitle);                 if (list != null)                 {                     var item = list.AddItem(); // NOTE! NOT list.Items.Add… just don't, mkay?                     
var text = string.Format(textformat, texts);                     if (text.Length > 255) // because the title field only holds so many chars. Sigh.                         text = text.Substring(0, 254);                     item[SPBuiltInFieldId.Title] = text;                     item[Degree] = severity;                     item[Category] = category;                     item.Update();                 }             } // omitted: Also log to SharePoint log.         } By adding a params parameter I can call it as if I was doing a Console.WriteLine: LogVerbose(web, "demo", "{0} {1}{2}", "hello", "world", '!'); Ok, that was a silly example, a better one might be: LogError(web, LogCategory, "Exception caught when updating {0}. exception: {1}", listItem.Title, ex); For performance reasons I use list.AddItem rather than list.Items.Add. For completeness' sake, let us include the "ModifyDefaultView" function that I deliberately skipped earlier.         private void ModifyDefaultView(SPList list)         {             // Add fields to default view             var defaultView = list.DefaultView;             var exists = defaultView.ViewFields.Cast<string>().Any(field => String.CompareOrdinal(field, Severity) == 0);               if (!exists)             {                 var field = list.Fields.GetFieldByInternalName(Severity);                 if (field != null)                     defaultView.ViewFields.Add(field);                 field = list.Fields.GetFieldByInternalName(Category);                 if (field != null)                     defaultView.ViewFields.Add(field);                 defaultView.Update();                   var sortDoc = new XmlDocument();                 sortDoc.LoadXml(string.Format("<Query>{0}</Query>", defaultView.Query));                 var orderBy = (XmlElement) sortDoc.SelectSingleNode("//OrderBy");                 if (orderBy != null && sortDoc.DocumentElement != null)                     sortDoc.DocumentElement.RemoveChild(orderBy);                 orderBy = sortDoc.CreateElement("OrderBy");                 sortDoc.DocumentElement.AppendChild(orderBy);                 field = list.Fields[SPBuiltInFieldId.Modified];                 var fieldRef = sortDoc.CreateElement("FieldRef");                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                   fieldRef = sortDoc.CreateElement("FieldRef");                 field = list.Fields[SPBuiltInFieldId.ID];                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                 defaultView.Query = sortDoc.DocumentElement.InnerXml;                 //defaultView.Query = "<OrderBy><FieldRef Name='Modified' Ascending='FALSE' /><FieldRef Name='ID' Ascending='FALSE' /></OrderBy>";                 defaultView.Update();             }         } First two lines are easy - see if the default view includes the "Severity" column. If it does - quit; our job here is done.Adding "severity" and "Category" to the view is not exactly rocket science. But then? Then we build the sort order query. Through XML. The lines are numerous, but boring. All to achieve the CAML query which is commented out. The major benefit of using the dom to build XML, is that you may get compile time errors for spelling mistakes. 
I say 'may', because although the compiler will not let you forget to close a tag, it will cheerfully let you spell "Name" as "Naem". Whichever you prefer, at the end of the day the view will sort by modified date and ID, both descending. I added the ID as there may be several items with the same time stamp. So! Simple logging to a list, with a sensible view, and with the normal functionality for creating your own filters. I should probably have added some more views in code, ready filtered for "only errors", "errors and warnings" etc. And it would be nice to be able to block verbose logging completely, but I'm not happy with the alternatives. (yetanotherfeature or an admin page seem like overkill - perhaps just removing it as one of the choices, and not logging if it isn't there?) Before you comment - yes, try-catches have been removed for clarity. There is nothing worse than having a logging function that breaks your site!
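    To round the post off, here is a hypothetical usage sketch with the try/catch that was "removed for clarity" put back in. The Logger class is a placeholder stand-in for wherever the post's Log methods actually live, and the list title, field names and the "Status" column are my assumptions; the point is the calling pattern: log the failure, and never let the logger itself throw.

```csharp
using System;
using Microsoft.SharePoint;

// Placeholder wrapper following the post's Log(web, category, severity, format, args) pattern.
public static class Logger
{
    public static void LogVerbose(SPWeb web, string category, string format, params object[] args)
    {
        Log(web, category, "Verbose", format, args);
    }

    public static void LogError(SPWeb web, string category, string format, params object[] args)
    {
        Log(web, category, "Error", format, args);
    }

    private static void Log(SPWeb web, string category, string severity, string format, params object[] args)
    {
        SPList list = web.Lists.TryGetList("Simple Log"); // assumed LogListTitle
        if (list == null) return;                         // feature deactivated: logging is off

        SPListItem item = list.AddItem();                 // NOT list.Items.Add, as the post says
        string text = string.Format(format, args);
        item[SPBuiltInFieldId.Title] = text.Length > 255 ? text.Substring(0, 254) : text;
        item["Severity"] = severity;
        item["Category"] = category;
        item.Update();
    }
}

public class EmployeeSync
{
    private const string LogCategory = "EmployeeSync";

    public void UpdateStatus(SPWeb web, SPListItem listItem)
    {
        try
        {
            listItem["Status"] = "Processed"; // hypothetical field
            listItem.Update();
            Logger.LogVerbose(web, LogCategory, "Updated {0}", listItem.Title);
        }
        catch (Exception ex)
        {
            // Wrap the log call itself, so a broken logger can never take the site down.
            try
            {
                Logger.LogError(web, LogCategory, "Exception caught when updating {0}. exception: {1}", listItem.Title, ex);
            }
            catch (Exception) { /* swallow - never let logging break the site */ }
            throw;
        }
    }
}
```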

    Read the article

  • SOA Implementation Challenges

Why do companies think that if they put up a web service they are doing Service-Oriented Architecture (SOA)? Unfortunately, the IT and business worlds love to run on the latest hype or buzz words, of which very few people even understand the meaning. One of the largest issues companies have today as they consider going down the path of SOA is the lack of knowledge regarding the architectural style and the overuse of the term SOA. So how do we solve this issue?

I am sure most of you are thinking by now that you know what SOA is because you developed a few web services. Isn't that SOA? No, that is not SOA, but instead Just Another Web Service (JAWS). For us to better understand what SOA is, let's look at a few definitions.

Douglas K. Barry defines service-oriented architecture as a collection of services. These services are enabled to communicate with each other in order to pass data or to coordinate some activity with other services. If you look at this definition closely you will notice that Barry states that services communicate with each other. Let us compare this statement with my first statement regarding companies that claim to be doing SOA when they have just a collection of web services. In order for these web services to form an SOA application they need to be interdependent on one another, forming some sort of architectural hierarchy. Just because a company has a few web services does not mean that they are all interconnected.

SearchSOA from TechTarget.com states that SOA defines how two computing entities work collectively to enable one entity to perform a unit of work on behalf of another. Once again, just because a company has a few web services does not guarantee that they are even working together, let alone performing work for each other. SearchSOA also points out that service interactions should be self-contained and loosely coupled so that all interactions operate independently of each other.

Of all the definitions regarding SOA, Thomas Erl's seems to shed the most light on this concept. He states that "SOA establishes an architectural model that aims to enhance the efficiency, agility, and productivity of an enterprise by positioning services as the primary means through which solution logic is represented in support of the realization of the strategic goals associated with service-oriented computing." (Erl, 2011) Once again this definition shows that a collection of web services does not mean that a company is doing SOA. It only means that the company has a collection of web services, and that is it.

In order for a company to start down the path of SOA, it must take a hard look at its existing business processes while abstracting away any technology, so that it can define what it really wants to accomplish. Once a company has done this, it can begin to factor out common sub-processes like credit card processing, user authentication or system notifications into small components that can be built independently of each other and then reassembled to form new and dynamic services that are loosely coupled and agile, in that they can change as the business grows.

Another key pitfall for companies doing SOA is letting vendors drive their architecture. Why do companies do this? Vendors do not hold your company's success as their top priority; in fact they hold their own success as their top priority, by selling you as much stuff as you are willing to buy. 
In my experience, companies tend to strive for the maximum amount of benefit at a minimal amount of cost. Does anyone else see a conflict between this and the driving force behind vendors?

Mike Kavis recommends in an article on CIO.com that companies figure out what they need before they talk to a vendor, or at least have some idea of what they need. It is important to thoroughly evaluate each vendor and watch them perform a live demo of their system, so that you as the company fully understand what kind of product or service the vendor is actually offering. In addition, do research on each vendor that you are considering: check out blog posts, online reviews, and any information you can find on the vendor through various search engines. Finally, he recommends that companies verify any recommendations supplied by a vendor. From personal experience this is very important. I can remember when the company I worked for purchased a $200,000 add-on to their phone system that never actually worked as intended. In fact, just after my departure the company started the process of attempting to get their money back from the vendor. This potentially could have been avoided if the company had done the research before selecting this vendor, to ensure that the product and the vendor would live up to their claims.

I know that some SOA vendors offer free training regarding SOA, because they know that there are a lot of misconceptions about the topic. Superficially this is a great thing for companies to take part in, especially if the company is starting to implement an SOA architecture and is still unsure about some topics or is looking for some guidance. However, beware that some vendors will focus the training only on their own product line. As an example, InfoWorld.com claims that some companies provide sales seminars disguised as training, focusing more on ESBs and SOA governance technology, and less on how to approach and solve the architectural issues of the attendees.

In short, it is important to remember that we as software professionals are responsible for guiding a business's technology decisions, and we should be well informed and fully understand any new concepts that may be considered for implementation. As I have demonstrated already, a company having a few web services does not mean that it is doing SOA. Additionally, we must not let the new buzz word of the day drive our technology; instead our technology decisions should be driven by research and proven experience. Finally, it is fine to rely on vendors when necessary; however, always take what they say with a grain of salt and cross-check any claims that they may make, because we have to live with the aftermath of a system after the vendors are gone.

References:
Barry, D. K. (2011). Service-oriented architecture (SOA) definition. Retrieved 12 12, 2011, from Service-Architecture.com: http://www.service-architecture.com/web-services/articles/service-oriented_architecture_soa_definition.html
Connell, B. (2003, 9). Service-oriented architecture (SOA). Retrieved 12 12, 2011, from SearchSOA: http://searchsoa.techtarget.com/definition/service-oriented-architecture
Erl, T. (2011, 12 12). Service-Oriented Architecture. Retrieved 12 12, 2011, from WhatIsSOA: http://www.whatissoa.com/p10.php
InfoWorld. (2008, 6 1). Should you get your SOA knowledge from SOA vendors? Retrieved 12 12, 2011, from InfoWorld.com: http://www.infoworld.com/d/architecture/should-you-get-your-soa-knowledge-soa-vendors-453
Kavis, M. (2008, 6 18). Top 10 Reasons Why People are Making SOA Fail. Retrieved 12 13, 2011, from CIO.com: http://www.cio.com/article/438413/Top_10_Reasons_Why_People_are_Making_SOA_Fail?page=5&taxonomyId=3016

    Read the article

  • Why JSF Matters (to You)

    - by reza_rahman
          "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge."                                                                                                    – Lao Tzu You may have noticed Thoughtworks recently crowned the likes AngularJS, etc imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say as I was reading the analysis I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc bandwagon a good few years ago seemingly similarly crowing these dynamic languages imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long time pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare metal JavaScript, CSS, HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF was not at least a capable if not outstanding technology. If fact if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself is built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum such that you really can think of just writing your view markup and then simply wire up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
You should then extrapolate the additional work to a typical enterprise project to understand what the productivity trade-offs are. This reason alone is enough for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF.

Thanks to its focus on components from the ground up, JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEFaces and of course ADF Faces/Mobile. Taken together, these component libraries constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase and consider the fact that you can readily use the widgets on that showcase with some simple markup while knowing next to nothing about AJAX, JavaScript or CSS.

JSF has the fair and legitimate advantage of being an open, vendor-neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness are virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in, or even create your own compatible implementation!

As you might gather from the quote at the top of the post, I am not a fan of crystal ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem, maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to just JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket, or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC...

Please note that any views expressed here are my own and certainly do not reflect the position of Oracle as a company.

    Read the article

  • RTS style fog of war woes

    - by Fricken Hamster
    So I'm trying to make a rts style line of sight fog of war style engine for my grid based game. Currently I am getting a set of vertices by raycasting in 360 degree. Then I use that list of vertices to do a graphics style polygon scanline fill to get a list of all points within the polygon. The I compare the new list of seen tiles and compare that with the old one and increment or decrement the world vision array as needed. The polygon scanline function is giving me trouble. I'm mostly following this http://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/PolygonFilling.html So far this is my code without cleaning anything up var edgeMinX:Vector.<int> = new Vector.<int>; var edgeMinY:Vector.<int> = new Vector.<int>; var edgeMaxY:Vector.<int> = new Vector.<int>; var edgeInvSlope:Vector.<Number> = new Vector.<Number>; var ilen:int = outvert.length; var miny:int = -1; var maxy:int = -1; for (i = 0; i < ilen; i++) { var curpoint:Point = outvert[i]; if (i == ilen -1) { var nextpoint:Point = outvert[0]; } else { nextpoint = outvert[i + 1]; } if (nextpoint.y == curpoint.y) { continue; } if (curpoint.y < nextpoint.y) { var curslope:Number = ((nextpoint.y - curpoint.y) / (nextpoint.x - curpoint.x)); edgeMinY.push(curpoint.y); edgeMinX.push(curpoint.x); edgeMaxY.push(nextpoint.y); edgeInvSlope.push(1 / curslope); if (curpoint.y < miny || miny == -1) { miny = curpoint.y; } if (nextpoint.y > maxy) { maxy = nextpoint.y; } } else { curslope = ((curpoint.y - nextpoint.y) / (curpoint.x - nextpoint.x)); edgeMinY.push(nextpoint.y); edgeMinX.push(nextpoint.x); edgeMaxY.push(curpoint.y); edgeInvSlope.push(1 / curslope); if (nextpoint.y < miny || miny == -1) { miny = curpoint.y; } if (curpoint.y > maxy) { maxy = nextpoint.y; } } } var activeMaxY:Vector.<int> = new Vector.<int>; var activeCurX:Vector.<Number> = new Vector.<Number>; var activeInvSlope:Vector.<Number> = new Vector.<Number>; for (var scanline:int = miny; scanline < maxy + 1; scanline++) { ilen = edgeMinY.length; for (i = 0; i < ilen; i++) { if (edgeMinY[i] == scanline) { activeMaxY.push(edgeMaxY[i]); activeCurX.push(edgeMinX[i]); activeInvSlope.push(edgeInvSlope[i]); //trace("added(" + edgeMinX[i]); edgeMaxY.splice(i, 1); edgeMinX.splice(i, 1); edgeMinY.splice(i, 1); edgeInvSlope.splice(i, 1); i--; ilen--; } } ilen = activeCurX.length; for (i = 0; i < ilen - 1; i++) { for (var j:int = i; j < ilen - 1; j++) { if (activeCurX[j] > activeCurX[j + 1]) { var tempint:int = activeMaxY[j]; activeMaxY[j] = activeMaxY[j + 1]; activeMaxY[j + 1] = tempint; var tempnum:Number = activeCurX[j]; activeCurX[j] = activeCurX[j + 1]; activeCurX[j + 1] = tempnum; tempnum = activeInvSlope[j]; activeInvSlope[j] = activeInvSlope[j + 1]; activeInvSlope[j + 1] = tempnum; } } } var prevx:int = -1; var jlen:int = activeCurX.length; for (j = 0; j < jlen; j++) { if (prevx == -1) { prevx = activeCurX[j]; } else { for (var k:int = prevx; k < activeCurX[j]; k++) { graphics.lineStyle(2, 0x124132); graphics.drawCircle(k * 20 + 10, scanline * 20 + 10, 5); if (k == prevx || k > activeCurX[j] - 1) { graphics.lineStyle(3, 0x004132); graphics.drawCircle(k * 20 + 10, scanline * 20 + 10, 2); } prevx = -1; //tileLightList.push(k, scanline); } } } ilen = activeCurX.length; for (i = 0; i < ilen; i++) { if (activeMaxY[i] == scanline + 1) { activeCurX.splice(i, 1); activeMaxY.splice(i, 1); activeInvSlope.splice(i, 1); i--; ilen--; } else { activeCurX[i] += activeInvSlope[i]; } } } It works in some cases but some of the x intersections are skipped, primarily when there are more than 2 x 
intersections in one scanline I think. Is there a way to fix this, or a better way to do what I described? Thanks
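For reference, the per-scanline fill being described boils down to something like the sketch below (written in C# rather than ActionScript, with illustrative names). It is the simpler variant that re-computes and sorts the edge intersections for every scanline through the tile centres instead of maintaining an active edge table, which is slower but much harder to get wrong:

    // Sketch: enumerate grid tiles covered by a polygon, one scanline at a time.
    // Assumes tile (x, y) covers the unit square with centre (x + 0.5, y + 0.5).
    using System;
    using System.Collections.Generic;

    static class PolygonRaster
    {
        public static IEnumerable<(int X, int Y)> Fill(IList<(double X, double Y)> verts)
        {
            double minY = double.MaxValue, maxY = double.MinValue;
            foreach (var v in verts) { minY = Math.Min(minY, v.Y); maxY = Math.Max(maxY, v.Y); }

            for (int row = (int)Math.Floor(minY); row <= (int)Math.Ceiling(maxY); row++)
            {
                double scanY = row + 0.5;                 // sample through the tile centre
                var xs = new List<double>();
                for (int i = 0; i < verts.Count; i++)
                {
                    var a = verts[i];
                    var b = verts[(i + 1) % verts.Count];
                    // Half-open test: counts each crossing exactly once and skips horizontal edges.
                    if ((a.Y <= scanY && b.Y > scanY) || (b.Y <= scanY && a.Y > scanY))
                        xs.Add(a.X + (scanY - a.Y) / (b.Y - a.Y) * (b.X - a.X));
                }
                xs.Sort();
                for (int i = 0; i + 1 < xs.Count; i += 2)  // fill between pairs of crossings
                    for (int x = (int)Math.Ceiling(xs[i] - 0.5); x + 0.5 <= xs[i + 1]; x++)
                        yield return (x, row);
            }
        }
    }

The half-open crossing rule is what avoids the missed or doubled intersections that an incremental active-edge approach tends to run into at shared vertices.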

    Read the article

  • CodePlex Daily Summary for Wednesday, September 05, 2012

    CodePlex Daily Summary for Wednesday, September 05, 2012Popular ReleasesDesktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.BLACK ORANGE: HPAD TEXT EDITOR 0.9 Beta: HOW TO RUN THE TEXT EDITOR Download the HPAD ARCHIVED FILES which is in .rar format Extract using Winrar Make sure that extracted files are in the same folder Double-Click on HPAD.exe application fileTelerikMvcGridCustomBindingHelper: Version 1.0.15.247-RC2: TelerikMvcGridCustomBindingHelper 1.0.15.247 RC2 Release notes: This is a RC version (hopefully the last one), please test and report any error or problem you encounter. This release is all about performance and fixes Support: "Or" and "Does Not contain" filter options Improved BooleanSubstitutes, Custom Aggregates and expressions-to-queryover Add EntityFramework examples in ExampleWebApplication Many other improvements and fixes Fix invalid cast on CustomAggregates Support for ...ServiceMon - Extensible Real-time, Service Monitoring Utility: ServiceMon Release 0.9.0.44: Auto-uploaded from build serverJavaScript Grid: Release 09-05-2012: Release 09-05-2012xUnit.net Contrib: xunitcontrib-dotCover 0.6.1 (dotCover 2.1 beta): xunitcontrib release 0.6.1 for dotCover 2.1 beta This release provides a test runner plugin for dotCover 2.1 beta, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release adds support for running xUnit.net tests to dotCover 2.1 beta's Visual Studio plugin. PLEASE NOTE: You do NOT need this if you also have ReSharper and the existing 0.6.1 release installed. DotCover will use ReSharper's test runners if available. This release includes th...B INI Sharp Library: B INI Sharp Library v1.0.0.0 Realsed: The frist realsedActive Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. 
New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsA Simple Eng-Hindi CMS: A simple English- Hindi dual language content management system for small business/personal websites.Active Social Migrator: This project for managing the Active Social migration tool.ANSI Console User Control: Custom console control for .NET WinformsAutoSPInstallerGUI: GUI Configuration Tool for SPAutoInstaller Codeplex ProjectCode Documentation Checkin Policy: This checkin policy for Visual Studio 2012 checks if c# code is documented the way it's configured in the config of the policy. Code Dojo/Kata - Free Time Coding: Doing some katas of the Coding Dojo page. http://codingdojo.org/cgi-bin/wiki.pl?KataCataloguefjycUnifyShow: fjycUnifyShowHidden Capture (HC): HC is simple and easy utility to hidden and auto capture desktop or active windowHRC Integration Services: Fake SQL Server Integration Services. LOLKooboo CMS Sites Switcher: Kooboo CMS Sites SwitcherMod.CookieDetector: Orchard module for detecting whether cookies are enabledMyCodes: Created!MySQL Statement Monitor: MySQL Statement Monitor is a monitoring tool that monitors SQL statements transferred over the network.NeoModulusPIRandom: The idea with PI Random is to use easy string manipulation and simple math to generate a pseudo random number. 
Net Core Tech - Medical Record System: This is a Medical Record System ProjectOraPowerShell: PowerShell library for backup and maintenance of a Oracle Database environment under Microsoft Windows 2008PinDNN: PinDNN is a module that imparts Pinterest-like functionality to DotNetNuke sites. This module works with a MongoDB database and uses the built-in social relatioPyrogen Code Generator: PyroGen is a simple code generator accepting C# as the markup language.restMs: wil be deleted soonScript.NET: Script.NET is a script management utility for web forms and MVC, using ScriptJS-like features to link dependencies between scripts.SpringExample-Pagination: Simple Spring example with PaginationXNA and Component Based Design: This project includes code for XNA and Component Based Design

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

Configuring the size of the pool in the data source is somewhere between an art and a science – this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections.

The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time.

There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the work load is not constant, then database connections can be freed up by shrinking the pool when connections are not in use. 
When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment. Shrinking is an effective technique for clearing the pool when connections are not in use.  In addition to the obvious reason that there times where the workload is lighter,  there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable.  There are also some data source features where the connection has state and cannot be used again unless the state matches the request.  Examples of this are identity based pooling where the connection has a particular owner and XA affinity where the connection is associated with a particular RAC node.  At this point, WLS does not re-purpose (discard/replace) connections and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed. So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity.  Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common.  The applications should be written to only reserve and close connections as needed but multiple statements, if needed, should be done in one reservation (don’t get/close more often than necessary).  This means that the size of the pool is likely to be significantly smaller then the number of users.   If possible, you can pick a size and see how it performs under simulated or real load.  There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used.  In general, you want the size to be big enough so that you never run out of connections but no bigger.   It will need to deal with spikes in usage, which is where shrinking after the spike is important.  Of course, the database capacity also has a big influence on the decision since it’s important not to overload the database machine.  Planning also needs to happen if you are running in a Multi-Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down.  For XA affinity, additional headroom is also recommended.  In summary, setting initial and maximum capacity to be the same may be simple but there are many other factors that may be important in making the decision about sizing.

    Read the article

  • Adding multiple data importers support to web applications

    - by DigiMortal
    I’m building web application for customer and there is requirement that users must be able to import data in different formats. Today we will support XLSX and ODF as import formats and some other formats are waiting. I wanted to be able to add new importers on the fly so I don’t have to deploy web application again when I add new importer or change some existing one. In this posting I will show you how to build generic importers support to your web application. Importer interface All importers we use must have something in common so we can easily detect them. To keep things simple I will use interface here. public interface IMyImporter {     string[] SupportedFileExtensions { get; }     ImportResult Import(Stream fileStream, string fileExtension); } Our interface has the following members: SupportedFileExtensions – string array of file extensions that importer supports. This property helps us find out what import formats are available and which importer to use with given format. Import – method that does the actual importing work. Besides file we give in as stream we also give file extension so importer can decide how to handle the file. It is enough to get started. When building real importers I am sure you will switch over to abstract base class. Importer class Here is sample importer that imports data from Excel and Word documents. Importer class with no implementation details looks like this: public class MyOpenXmlImporter : IMyImporter {     public string[] SupportedFileExtensions     {         get { return new[] { "xlsx", "docx" }; }     }     public ImportResult Import(Stream fileStream, string extension)     {         // ...     } } Finding supported import formats in web application Now we have importers created and it’s time to add them to web application. Usually we have one page or ASP.NET MVC controller where we need importers. To this page or controller we add the following method that uses reflection to find all classes that implement our IMyImporter interface. private static string[] GetImporterFileExtensions() {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;       var extensions = new Collection<string>();     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           foreach (var extension in instance.SupportedFileExtensions)         {             if (extensions.Contains(extension))                 continue;               extensions.Add(extension);         }     }       return extensions.ToArray(); } This code doesn’t look nice and is far from optimal but it works for us now. It is possible to improve performance of web application if we cache extensions and their corresponding types to some static dictionary. We have to fill it only once because our application is restarted when something changes in bin folder. Finding importer by extension When user uploads file we need to detect the extension of file and find the importer that supports given extension. We add another method to our page or controller that uses reflection to return us importer instance or null if extension is not supported. 
private static IMyImporter GetImporterForExtension(string extensionToFind) {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           if (instance.SupportedFileExtensions.Contains(extensionToFind))         {             return instance;         }     }       return null; } Here is example ASP.NET MVC controller action that accepts uploaded file, finds importer that can handle file and imports data. Again, this is sample code I kept minimal to better illustrate how things work. public ActionResult Import(MyImporterModel model) {     var file = Request.Files[0];     var extension = Path.GetExtension(file.FileName).ToLower();     var importer = GetImporterForExtension(extension.Substring(1));     var result = importer.Import(file.InputStream, extension);     if (result.Errors.Count > 0)     {         foreach (var error in result.Errors)             ModelState.AddModelError("file", error);           return Import();     }     return RedirectToAction("Index"); } Conclusion That’s it. Using couple of ugly methods and one simple interface we were able to add importers support to our web application. Example code here is not perfect but it works. It is possible to cache mappings between file extensions and importer types to some static variable because changing of these mappings means that something is changed in bin folder of web application and web application is restarted in this case anyway.
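The static dictionary caching mentioned above could look roughly like this (a sketch with illustrative names; it assumes .NET 4's Lazy&lt;T&gt; is available - a static constructor works just as well on older frameworks - and keeps the first importer found for each extension):

    // Sketch: scan the loaded assemblies once, then look importers up by extension.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class ImporterCache
    {
        private static readonly Lazy<Dictionary<string, Type>> _map =
            new Lazy<Dictionary<string, Type>>(BuildMap);

        private static Dictionary<string, Type> BuildMap()
        {
            var map = new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase);
            var types = AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(a => a.GetTypes())
                .Where(t => typeof(IMyImporter).IsAssignableFrom(t) && !t.IsInterface && !t.IsAbstract);

            foreach (var type in types)
            {
                var instance = (IMyImporter)Activator.CreateInstance(type);
                foreach (var extension in instance.SupportedFileExtensions)
                    if (!map.ContainsKey(extension))
                        map[extension] = type;      // first importer registered for an extension wins
            }
            return map;
        }

        public static string[] GetImporterFileExtensions()
        {
            return _map.Value.Keys.ToArray();
        }

        public static IMyImporter GetImporterForExtension(string extension)
        {
            Type type;
            return _map.Value.TryGetValue(extension, out type)
                ? (IMyImporter)Activator.CreateInstance(type)
                : null;
        }
    }

With something like this in place, the controller action simply calls ImporterCache.GetImporterForExtension(extension) and the assembly scan happens only once per application restart.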

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part,  I was sharing with you how a broad-based transformation makes cloud more than a technology initiative, I will describe in this section how it requires people (organizational) and process changes as well, and these changes are as critical as is the choice of right tools and technology. People: Most IT organizations have a fairly complex organizational structure. There are different groups, managing different pieces of the puzzle, and yet, they don't always work together. Provisioning a new application therefore may require a request to float endlessly through system administrators, DBAs and middleware admin worlds – resulting in long delays and constant finger pointing.  Cloud users expect end-to-end automation - which requires these silos to be greatly simplified, if not completely eliminated.  Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages back will not only impede cloud adoption,  it also risks making the IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.   Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging on to outdated manual approval processes .  While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only this causes delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies, quotas that the administrators can define upfront . For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources. Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it  enables a much higher degree of automation - thereby providing users the required agility while minimizing operational costs.  Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have.  Many organizations are just now waking up to the criticality of automation and it still often gets relegated to back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical.  For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications: Broad and Deep Solution It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about.  Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. 
While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise. Run and Manage Efficiently Your Mission Critical Applications It should not only be able to run your mission critical applications, it should do so better than before.  For enterprises, applications and data are the critical business assets  As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly a "enterprise cloud".  Also, be wary of  vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adopt to your applications - and not the other way around.  Automated, Integrated Set of Cloud Management Capabilities At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic  processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution.  An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.  At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises.  And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as in 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year.  Our cloud solution portfolio is also the broadest and most deep in the industry  - covering public, private, hybrid, Infrastructure, platform and applications clouds. It is no coincidence therefore that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry.  And to a large part, this has been made possible thanks to our years on investment in creating cloud enabling technologies. I will dedicated the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they mapped into our vision of Enterprise Cloud. Stay Tuned.

    Read the article

  • Using Unity – Part 5

    - by nmarun
    In the previous article of the series, I talked about constructor and property (setter) injection. I wanted to write about how to work with arrays and generics in Unity in this blog, after seeing how lengthy this one got, I’ve decided to write about generics in the next one. This one will only concentrate on arrays. My Product4 class has the following definition: 1: public interface IProduct 2: { 3: string WriteProductDetails(); 4: } 5:  6: public class Product4 : IProduct 7: { 8: public string Name { get; set; } 9: public ILogger[] Loggers { get; set; } 10:  11: public Product4(string productName, ILogger[] loggers) 12: { 13: Name = productName; 14: Loggers = loggers; 15: } 16:  17: public string WriteProductDetails() 18: { 19: StringBuilder productDetails = new StringBuilder(); 20: productDetails.AppendFormat("{0}<br/>", Name); 21: for (int i = 0; i < Loggers.Count(); i++) 22: { 23: productDetails.AppendFormat("{0}<br/>", Loggers[i].WriteLog()); 24: } 25: 26: return productDetails.ToString(); 27: } 28: } The key parts are line 4 where we declare an array of ILogger and line 5 where-in the constructor passes an instance of an array of ILogger objects. I’ve created another class – FakeLogger: 1: public class FakeLogger : ILogger 2: { 3: public string WriteLog() 4: { 5: return string.Format("Type: {0}", GetType()); 6: } 7: } It’s implementation is the same as what we had for the FileLogger class. Coming to the web.config file, first add the following aliases. The alias for FakeLogger should make sense right away. ILoggerArray defines an array of ILogger objects. I’ll tell why we need an alias for System.String data type. 1: <typeAlias alias="string" type="System.String, mscorlib" /> 2: <typeAlias alias="ILoggerArray" type="ProductModel.ILogger[], ProductModel" /> 3: <typeAlias alias="FakeLogger" type="ProductModel.FakeLogger, ProductModel"/> Next is to create mappings for the FileLogger and FakeLogger classes: 1: <type type="ILogger" mapTo="FileLogger" name="logger1"> 2: <lifetime type="singleton" /> 3: </type> 4: <type type="ILogger" mapTo="FakeLogger" name="logger2"> 5: <lifetime type="singleton" /> 6: </type> Finally, for the real deal: 1: <type type="IProduct" mapTo="Product4" name="ArrayProduct"> 2: <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement,Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"> 3: <constructor> 4: <param name="productName" parameterType="string" > 5: <value value="Product name from config file" type="string"/> 6: </param> 7: <param name="loggers" parameterType="ILoggerArray"> 8: <array> 9: <dependency name="logger2" /> 10: <dependency name="logger1" /> 11: </array> 12: </param> 13: </constructor> 14: </typeConfig> 15: </type> Here’s where I’m saying, that if a type of IProduct is requested to be resolved, map it to type Product4. Furthermore, the Product4 has two constructor parameters – a string and an array of type ILogger. You might have observed the first parameter of the constructor is named ‘productName’ and that matches the value in the name attribute of the param element. The parameterType of ‘string’ maps to ‘System.String, mscorlib’ and is defined in the type alias above. The set up is similar for the second constructor parameter. The name matches the name of the parameter (loggers) and is of type ILoggerArray, which maps to an array of ILogger objects. 
We’ve also decided to add two elements to this array when unity resolves it – an instance of FileLogger and one of FakeLogger. The click event of the button does the following: 1: //unityContainer.RegisterType<IProduct, Product4>(); 2: //IProduct product4 = unityContainer.Resolve<IProduct>(); 3: IProduct product4 = unityContainer.Resolve<IProduct>("ArrayConstructor"); 4: productDetailsLabel.Text = product4.WriteProductDetails(); It’s worth mentioning here about the change in the format of resolving the IProduct to create an instance of Product4. You cannot use the regular way (the commented lines) to get an instance of Product4. The reason is due to the behavior of Unity which Alex Ermakov has brilliantly explained here. The corresponding output of the action is: You have a couple of options when it comes to adding dependency elements in the array node. You can: - leave it empty (no dependency elements declared): This will only create an empty array of loggers. This way you can check for non-null condition, in your mock classes. - add multiple dependency elements with the same name 1: <param name="loggers" parameterType="ILoggerArray"> 2: <array> 3: <dependency name="logger2" /> 4: <dependency name="logger2" /> 5: </array> 6: </param> With this you’ll see two instances of FakeLogger in the output. This article shows how Unity allows you to instantiate objects with arrays. Find the code here.
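For comparison, here is roughly the same registration expressed in code instead of web.config - a sketch written against the Unity 2.0-style API (InjectionConstructor, ResolvedArrayParameter), so treat the exact overloads as an assumption if you are on the 1.2 assemblies:

    // Sketch: programmatic equivalent of the config mapping above.
    // Requires the Microsoft.Practices.Unity namespace.
    var container = new UnityContainer();

    container.RegisterType<ILogger, FileLogger>("logger1", new ContainerControlledLifetimeManager());
    container.RegisterType<ILogger, FakeLogger>("logger2", new ContainerControlledLifetimeManager());

    container.RegisterType<IProduct, Product4>("ArrayProduct",
        new InjectionConstructor(
            "Product name from config file",
            new ResolvedArrayParameter<ILogger>(
                new ResolvedParameter<ILogger>("logger2"),
                new ResolvedParameter<ILogger>("logger1"))));

    IProduct product4 = container.Resolve<IProduct>("ArrayProduct");
    productDetailsLabel.Text = product4.WriteProductDetails();

The array elements resolve in the order they are listed, mirroring the order of the dependency elements inside the array node of the config file.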

    Read the article

  • Cisco ASA 5505 - L2TP over IPsec

    - by xraminx
    I have followed this document on cisco site to set up the L2TP over IPsec connection. When I try to establish a VPN to ASA 5505 from my Windows XP, after I click on "connect" button, the "Connecting ...." dialog box appears and after a while I get this error message: Error 800: Unable to establish VPN connection. The VPN server may be unreachable, or security parameters may not be configured properly for this connection. ASA version 7.2(4) ASDM version 5.2(4) Windows XP SP3 Windows XP and ASA 5505 are on the same LAN for test purposes. Edit 1: There are two VLANs defined on the cisco device (the standard setup on cisco ASA5505). - port 0 is on VLAN2, outside; - and ports 1 to 7 on VLAN1, inside. I run a cable from my linksys home router (10.50.10.1) to the cisco ASA5505 router on port 0 (outside). Port 0 have IP 192.168.1.1 used internally by cisco and I have also assigned the external IP 10.50.10.206 to port 0 (outside). I run a cable from Windows XP to Cisco router on port 1 (inside). Port 1 is assigned an IP from Cisco router 192.168.1.2. The Windows XP is also connected to my linksys home router via wireless (10.50.10.141). Edit 2: When I try to establish vpn, the Cisco device real time Log viewer shows 7 entries like this: Severity:5 Date:Sep 15 2009 Time: 14:51:29 SyslogID: 713904 Destination IP = 10.50.10.141, Decription: No crypto map bound to interface... dropping pkt Edit 3: This is the setup on the router right now. Result of the command: "show run" : Saved : ASA Version 7.2(4) ! hostname ciscoasa domain-name default.domain.invalid enable password HGFHGFGHFHGHGFHGF encrypted passwd NMMNMNMNMNMNMN encrypted names name 192.168.1.200 WebServer1 name 10.50.10.206 external-ip-address ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address external-ip-address 255.0.0.0 ! interface Vlan3 no nameif security-level 50 no ip address ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! 
ftp mode passive dns server-group DefaultDNS domain-name default.domain.invalid object-group service l2tp udp port-object eq 1701 access-list outside_access_in remark Allow incoming tcp/http access-list outside_access_in extended permit tcp any host WebServer1 eq www access-list outside_access_in extended permit udp any any eq 1701 access-list inside_nat0_outbound extended permit ip any 192.168.1.208 255.255.255.240 access-list inside_cryptomap_1 extended permit ip interface outside interface inside pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool PPTP-VPN 192.168.1.210-192.168.1.220 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-524.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface www WebServer1 www netmask 255.255.255.255 access-group outside_access_in in interface outside timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute http server enable http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport crypto ipsec transform-set TRANS_ESP_3DES_MD5 esp-3des esp-md5-hmac crypto ipsec transform-set TRANS_ESP_3DES_MD5 mode transport crypto map outside_map 1 match address inside_cryptomap_1 crypto map outside_map 1 set transform-set TRANS_ESP_3DES_MD5 crypto map outside_map interface inside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash md5 group 2 lifetime 86400 telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd enable inside ! group-policy DefaultRAGroup internal group-policy DefaultRAGroup attributes dns-server value 192.168.1.1 vpn-tunnel-protocol IPSec l2tp-ipsec username myusername password FGHFGHFHGFHGFGFHF nt-encrypted tunnel-group DefaultRAGroup general-attributes address-pool PPTP-VPN default-group-policy DefaultRAGroup tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key * tunnel-group DefaultRAGroup ppp-attributes no authentication chap authentication ms-chap-v2 ! ! prompt hostname context Cryptochecksum:a9331e84064f27e6220a8667bf5076c1 : end

    Read the article

  • Un-failing over a Cisco PIX 515e

    - by ABrown
    We had a power outage at our data center last week and when our dual PIX 515E running IOS 7.0(8) (configured with a failover cable) came back, they were in a failed over state where the Secondary unit is active and the Primary unit is standby I have tried 'failover reset', 'failover active', and 'failover reload-standby' as well as executing reloads on both units in a variety of orders, and they don't come back Primary/Active Secondary/Standby. The only thing in my arsenal that I haven't tried is driving to the data center and performing a hard reboot, which I hate to do. I have read How Failover Works on the Cisco Secure Firewall and it seems like this should be wicked straight forward. output of show failover on Primary: Failover On Cable status: Normal Failover unit Primary Failover LAN Interface: N/A - Serial-based failover enabled Unit Poll frequency 15 seconds, holdtime 45 seconds Interface Poll frequency 15 seconds Interface Policy 1 Monitored Interfaces 2 of 250 maximum Version: Ours 7.0(8), Mate 7.0(8) Last Failover at: 02:52:05 UTC Mar 10 2010 This host: Primary - Standby Ready Active time: 0 (sec) Interface outside (x.x.x.165): Normal Interface inside (y.y.y.3): Normal Other host: Secondary - Active Active time: 897045 (sec) Interface outside (x.x.x.164): Normal Interface inside (y.y.y.4): Normal Stateful Failover Logical Update Statistics Link : Unconfigured. output of show failover on Secondary: Failover On Cable status: Normal Failover unit Secondary Failover LAN Interface: N/A - Serial-based failover enabled Unit Poll frequency 15 seconds, holdtime 45 seconds Interface Poll frequency 15 seconds Interface Policy 1 Monitored Interfaces 2 of 250 maximum Version: Ours 7.0(8), Mate 7.0(8) Last Failover at: 02:03:04 UTC Feb 28 2010 This host: Secondary - Active Active time: 896925 (sec) Interface outside (x.x.x.164): Normal Interface inside (y.y.y.4): Normal Other host: Primary - Standby Ready Active time: 0 (sec) Interface outside (x.x.x.165): Normal Interface inside (y.y.y.3): Normal Stateful Failover Logical Update Statistics Link : Unconfigured. I'm seeing the following in my syslog: Mar 10 03:05:00 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. Mar 10 03:05:09 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reload-standby' command. Mar 10 03:05:12 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed. Mar 10 03:05:12 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed. Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed. Mar 10 03:06:09 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down. Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed. Mar 10 03:06:10 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up. Mar 10 03:06:10 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed. Mar 10 03:06:23 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready. Mar 10 03:06:23 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready. Mar 10 03:06:24 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready. Mar 10 03:07:05 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. 
Mar 10 03:07:31 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover active' command. Mar 10 03:08:04 fw1 %PIX-5-611103: User logged out: Uname: enable_1 Mar 10 03:08:04 fw1 %PIX-6-315011: SSH session from admin1_int on interface inside for user "pix" terminated normally Mar 10 03:08:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed. Mar 10 03:08:39 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed. Mar 10 03:09:10 fw1 %PIX-6-605005: Login permitted from admin1_int/36891 to inside:192.168.4.4/ssh for user "pix" Mar 10 03:09:23 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. Mar 10 03:09:38 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed. Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down. Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed. Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up. Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed. Mar 10 03:09:52 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready. Mar 10 03:09:52 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready. Mar 10 03:09:53 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready. I'm not exactly sure how to interpret that syslog data. Primary doesn't seem to even try to become Active. When I reload the individual units separately, my connections are retained, so it doesn't seem like I have a real hardware failure. Is there something I can query (IOS or SNMP) to check for hardware issues? Any thoughts? My IOS-fu is weak. Thanks for any help you might provide, Aaron

    Read the article

  • debian lenny xen bridge networking problem

    - by Sasha
    DomU isn't talking to the world, but it talks to Dom0. Here are the tests I made:

    Dom0 (external networking is working):

        ping 188.40.96.238    # which is the DomU's IP
        PING 188.40.96.238 (188.40.96.238) 56(84) bytes of data.
        64 bytes from 188.40.96.238: icmp_seq=1 ttl=64 time=0.092 ms

    DomU:

        ping 188.40.96.215    # which is Dom0's IP
        PING 188.40.96.215 (188.40.96.215) 56(84) bytes of data.
        64 bytes from 188.40.96.215: icmp_seq=1 ttl=64 time=0.045 ms

        ping 188.40.96.193    # which is the gateway - fail
        PING 188.40.96.193 (188.40.96.193) 56(84) bytes of data.
        ^C
        --- 188.40.96.193 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1013ms

    The system is Debian Lenny with a normal setup. Here are my configs:

        uname -a
        Linux green0 2.6.26-2-xen-686 #1 SMP Wed Aug 19 08:47:57 UTC 2009 i686 GNU/Linux

        cat /etc/xen/green1.cfg | grep -v '#'
        kernel  = '/boot/vmlinuz-2.6.26-2-xen-686'
        ramdisk = '/boot/initrd.img-2.6.26-2-xen-686'
        memory  = '2000'
        root    = '/dev/xvda2 ro'
        disk    = [ 'file:/home/xen/domains/green1/swap.img,xvda1,w', 'file:/home/xen/domains/green1/disk.img,xvda2,w', ]
        name    = 'green1'
        vif     = [ 'ip=188.40.96.238,mac=00:16:3E:1F:C4:CC' ]
        on_poweroff = 'destroy'
        on_reboot   = 'restart'
        on_crash    = 'restart'

        ifconfig
        eth0    Link encap:Ethernet  HWaddr 00:24:21:ef:2f:86
                inet addr:188.40.96.215  Bcast:188.40.96.255  Mask:255.255.255.192
                inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:3296 errors:0 dropped:0 overruns:0 frame:0  TX packets:2204 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0  RX bytes:262717 (256.5 KiB)  TX bytes:330465 (322.7 KiB)
        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        peth0   Link encap:Ethernet  HWaddr 00:24:21:ef:2f:86
                inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link
                UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                RX packets:3407 errors:0 dropped:657431448 overruns:0 frame:0  TX packets:2291 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000  RX bytes:319941 (312.4 KiB)  TX bytes:338423 (330.4 KiB)
                Interrupt:16 Base address:0x8000
        vif2.0  Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
                inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
                UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                RX packets:27 errors:0 dropped:0 overruns:0 frame:0  TX packets:151 errors:0 dropped:33 overruns:0 carrier:0
                collisions:0 txqueuelen:32  RX bytes:1164 (1.1 KiB)  TX bytes:20974 (20.4 KiB)

        ip a s
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host  valid_lft forever preferred_lft forever
        2: peth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
            link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::224:21ff:feef:2f86/64 scope link  valid_lft forever preferred_lft forever
        4: vif0.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        5: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        6: vif0.1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        7: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        8: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        9: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        10: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        11: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        12: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff
            inet 188.40.96.215/26 brd 188.40.96.255 scope global eth0
            inet6 fe80::224:21ff:feef:2f86/64 scope link  valid_lft forever preferred_lft forever
        14: vif2.0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 32
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
            inet6 fe80::fcff:ffff:feff:ffff/64 scope link  valid_lft forever preferred_lft forever

        brctl show
        bridge name     bridge id               STP enabled     interfaces
        eth0            8000.002421ef2f86       no              peth0
                                                                vif2.0

        ip r l
        Dom0:
        188.40.96.192/26 dev eth0  proto kernel  scope link  src 188.40.96.215
        default via 188.40.96.193 dev eth0
        DomU:
        188.40.96.192/26 dev eth0  proto kernel  scope link  src 188.40.96.238
        default via 188.40.96.193 dev eth0
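    A hedged diagnostic sketch, not from the original post, assuming the bridged setup shown above: when a DomU can reach Dom0 but not the gateway, the usual suspects are Xen's vif antispoof/forward rules and ARP that never makes it to or from the upstream router. These commands only observe; the addresses are the ones from the post:

        # Is the DomU's traffic (and its ARP) actually reaching the physical NIC?
        tcpdump -ni peth0 host 188.40.96.238 or arp

        # Has the bridge learned the DomU's MAC (00:16:3E:1F:C4:CC)?
        brctl showmacs eth0

        # Are Xen's vif antispoof/forward rules dropping the packets? Watch the counters.
        iptables -L FORWARD -v -n

        # Does Dom0 itself resolve the gateway?
        arp -n | grep 188.40.96.193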

    Read the article

  • howto only tunnel specific hosts route through openvpn client on tomato

    - by kcome
    I am relatively new to the networking world, although I have done coding and have had some sysadmin background for a long time, and here I'm only one step from my destination. The whole picture: at home I use one Linksys E3000 as the gateway (not sure that's the right term), wireless AP, and only routing/switching device. It serves 1 PC and 1 Mac over the LAN, and 1 Mac mini, 1 iPad and 2 smartphones over Wi-Fi. My goal is to run an OpenVPN client on the E3000 (with Tomato firmware) and send all Wi-Fi traffic from my iPad and smartphones through it, while the other devices keep their normal, non-VPN route. So far I'm able to connect the OpenVPN client on the E3000 to an OpenVPN server and tunnel all my devices' traffic through that connection. What's left is how to selectively route by source IP (at least that's my guess) into the tunnel while not bothering the others. I have learned some 'iptables' and 'route' in the past few days, however without much luck, so here comes my question. Here is some info which will help you get the structure.

    ifconfig -a output (some useless lines stripped); in the web interface C0:C1:C0:1A:E0:28 is the WAN, C0:C1:C0:1A:E0:27 is the LAN, C0:C1:C0:1A:E0:29 is the 2.4G Wi-Fi AP, and C0:C1:C0:1A:E0:2A is the 5G Wi-Fi AP:

        root@router:/tmp/home/root# ifconfig -a
        br0     Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:27
                inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        eth0    Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:27
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        eth1    Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:29
                UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
        eth2    Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:2A
                UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
        ppp0    Link encap:Point-to-Point Protocol
                inet addr:172.200.1.43  P-t-P:172.200.0.1  Mask:255.255.255.255
                UP POINTOPOINT RUNNING MULTICAST  MTU:1480  Metric:1
        vlan1   Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:27
                UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
        vlan2   Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:28
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        wl0.1   Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:29
                BROADCAST MULTICAST  MTU:1500  Metric:1

    brctl show output:

        root@router:/tmp/home/root# brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.c0c1c01ae027       no              vlan1
                                                                eth1
                                                                eth2

    Before the OpenVPN route-up script:

        root@router:/tmp/home/root# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref Use Iface
        172.200.0.1     0.0.0.0         255.255.255.255 UH    0      0   0   ppp0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0   0   br0
        127.0.0.0       0.0.0.0         255.0.0.0       U     0      0   0   lo
        0.0.0.0         172.200.0.1     0.0.0.0         UG    0      0   0   ppp0

    OpenVPN server push:

        PUSH: Received control message: 'PUSH_REPLY,redirect-gateway,dhcp-option DNS 8.8.8.8,route 172.20.0.1,topology net30,ping 10,ping-restart 120,ifconfig 172.20.0.6 172.20.0.5'

    OpenVPN's stock route-up script:

        Apr 24 14:52:06 router daemon.notice openvpn[1768]: /sbin/ifconfig tun11 172.20.0.6 pointopoint 172.20.0.5 mtu 1500
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 72.14.177.29 netmask 255.255.255.255 gw 172.200.0.1
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 172.20.0.5
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 172.20.0.5
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 172.20.0.1 netmask 255.255.255.255 gw 172.20.0.5

    Routes after OpenVPN comes up:

        root@router:/tmp/home/root# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref Use Iface
        172.20.0.5      0.0.0.0         255.255.255.255 UH    0      0   0   tun11
        72.14.177.29    172.200.0.1     255.255.255.255 UGH   0      0   0   ppp0
        172.200.0.1     0.0.0.0         255.255.255.255 UH    0      0   0   ppp0
        172.20.0.1      172.20.0.5      255.255.255.255 UGH   0      0   0   tun11
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0   0   br0
        127.0.0.0       0.0.0.0         255.0.0.0       U     0      0   0   lo
        0.0.0.0         172.20.0.5      128.0.0.0       UG    0      0   0   tun11
        128.0.0.0       172.20.0.5      128.0.0.0       UG    0      0   0   tun11
        0.0.0.0         172.200.0.1     0.0.0.0         UG    0      0   0   ppp0

    Something I noticed and tried: on the web interface of the OpenVPN client there is an option "Create NAT on tunnel". If I check it, the following script appears (probably executed after the OpenVPN connection is established):

        root@router:/tmp/home/root# cat /tmp/etc/openvpn/fw/client1-fw.sh
        #!/bin/sh
        iptables -I INPUT -i tun11 -j ACCEPT
        iptables -I FORWARD -i tun11 -j ACCEPT
        iptables -t nat -I POSTROUTING -s 192.168.1.0/255.255.255.0 -o tun11 -j MASQUERADE

    If I uncheck this option, the last line does not appear. So I guess my issue can probably be solved with iptables- and NAT-related commands; I just don't have enough knowledge to figure them out. I tried running

        iptables -t nat -I POSTROUTING -s 192.168.1.6 -o tun11 -j MASQUERADE

    manually after OpenVPN connected (192.168.1.6 is the IP address of my iPad); then my iPad gets internet through the OpenVPN tunnel, however all other devices can't reach the internet. In case it's needed, here is the iptables NAT table:

        root@router:/tmp/home/root# iptables -t nat -L -n
        Chain PREROUTING (policy ACCEPT)
        target          prot opt source          destination
        DROP            all  --  0.0.0.0/0       192.168.1.0/24
        WANPREROUTING   all  --  0.0.0.0/0       172.200.1.43
        upnp            all  --  0.0.0.0/0       172.200.1.43

        Chain POSTROUTING (policy ACCEPT)
        target          prot opt source          destination
        MASQUERADE      all  --  0.0.0.0/0       0.0.0.0/0
        SNAT            all  --  192.168.1.0/24  192.168.1.0/24  to:192.168.1.1

        Chain OUTPUT (policy ACCEPT)
        target          prot opt source          destination

        Chain WANPREROUTING (1 references)
        target          prot opt source          destination
        DNAT            icmp --  0.0.0.0/0       0.0.0.0/0       to:192.168.1.1

        Chain upnp (1 references)
        target          prot opt source          destination
        DNAT            udp  --  0.0.0.0/0       0.0.0.0/0       udp dpt:5353 to:192.168.1.3:5353

    Thanks in advance for helping and for reading this much; I hope I've given you all the info you need to help. :)
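    A hedged sketch of one common way to do this on Tomato, not from the original question: stop the VPN from replacing the default route, give the tunnel its own routing table, and use source-based rules plus per-host NAT. Table number 200 and the second client address are illustrative, and this assumes the firmware's ip utility supports rules and extra tables:

        # OpenVPN client custom config: ignore the pushed redirect-gateway and routes
        route-nopull

        # After tun11 is up (e.g. from a route-up script); 172.20.0.5 is the tunnel peer above
        ip route add default via 172.20.0.5 dev tun11 table 200
        ip rule add from 192.168.1.6 lookup 200      # iPad
        ip rule add from 192.168.1.7 lookup 200      # smartphone (illustrative address)
        ip route flush cache

        # NAT only the tunneled hosts out of the tunnel
        iptables -t nat -A POSTROUTING -s 192.168.1.6 -o tun11 -j MASQUERADE
        iptables -t nat -A POSTROUTING -s 192.168.1.7 -o tun11 -j MASQUERADE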

    Read the article

  • Poor home office network performance and cannot figure out where the issue is

    - by Jeff Willener
    This is the most bizarre issue. I have worked with small to mid-size networks for quite a long time and can say I'm comfortable connecting hardware. Where you will start to lose me is with managed switches and firewalls. To start, let me describe my network (sigh, I shouldn't, but I MUST solve this).

        1) Comcast Cable Internet
        2) Motorola SURFboard eXtreme Cable Modem
           a) Model: SB6120
           b) DOCSIS 3.0 and 2.0 support
           c) IPv4 and IPv6 support
        3-A) Cisco Small Business RV220W Wireless N Firewall
           a) Latest firmware
           b) Model: RV220W-A-K9-NA
           c) WAN port to modem (2)
           d) vlan 1: work
           e) vlan 2: everything else
        3-B) D-Link DIR-615 Draft 802.11 N Wireless Router
           a) Latest firmware
           b) WAN port to modem (2)
        4) Servers connected directly to firewall
           a) If firewall 3-A, then vlan 1
           b) CAT5e patch cables
           c) Dell PowerEdge 1400SC w/ 10/100 integrated NIC (Domain Controller, DNS, former DHCP)
           d) Dell PowerEdge 400SC w/ 10/100/1000 integrated NIC (VMware Server)
        4) Linksys EZXS88W unmanaged Workgroup 10/100 switch
           a) If firewall 3-A, then vlan 2
           b) 25' CAT5e patch cable to firewall (3-A or 3-B)
           c) Connects xBox 360, Blu-Ray player, PC at TV
        5) Office equipment connected directly to firewall
           a) If firewall 3-A, then vlan 1
           b) ~80' CAT6 or CAT5e patch cable to firewall (3-A or 3-B)
           c) Connects:
              1) Dell Latitude laptop 10/100/1000
              2) Dell Inspiron laptop 10/100
              3) Dell Workstation 10/100/1000 (pristine host, VMware Workstation 7.x with many bridged VMs)
              4) Brother laser printer 10/100
              5) Epson All-In-One Workforce 310 10/100
        5-A) NetGear FS116 unmanaged 10/100 switch
           a) I've had this switch for a long time and never had issues.
        5-B) NetGear GS108 unmanaged 10/100/1000 switch
           a) Bought new for this issue and returned.
        5-C) Linksys SE2500 unmanaged 10/100/1000 switch
           a) Bought new for this issue and returned.
        5-D) TP-Link TL-SG10008D unmanaged 10/100/1000 switch
           a) Bought new for this issue and still have.
        6) VLAN 1 wireless connections (on same subnet if 3-B)
           a) Any of those at 5c
           b) HP laptop
        7) VLAN 2 wireless connections (on same subnet if 3-B)
           a) iPad, iPod
           b) Compaq laptop
           c) Epson wireless printer

    Whew; without hosting a diagram, I hope that paints a good picture.

    The Issue

    The breakdown is at item 5. No matter what I do, I cannot have a switch at 5 and have to run everything wireless, regardless of router.

    Issues related to using a switch (point 5 above):
        - SpeedTest is good.
        - Poor throughput to other devices, if they can communicate at all.
        - Usually cannot ping other devices even on the same switch, although, when able, ping times are good.
        - Eventual loss of connectivity, which can "sometimes" be restored by unplugging everything for several days; not minutes or hours, we're talking a week, if at all.
        - Connecting a computer directly gives a good internet connection, however throughput to other devices connected to the firewall is at best horrible. Yet printing doesn't seem to be an issue as long as the printers are connected via wireless.
        - I have to force the RV220W to 1000Mb on the respective port if using a gigabit switch.

    Issues related to using wireless in place of a switch (point 5 above):
        - Poor throughput to other devices, if they can communicate.
        - SpeedTest is good.

    Bottom line:
        - Internet speeds are awesome. By the way, Comcast went WAY above and beyond to make sure it was not them; they rewired EVERYTHING, which did solve the internet drops.
        - Computer-to-computer connections are garbage.
        - Cannot get a switch at 5 to work, yet the one at 4 has never had an issue.
        - A direct connection, bypassing the switch, is good for DHCP and internet.
        - DNS must be on the server, not the firewall.

    Cisco insists it's my switches, but as you can see I have used four switches and two different cables with the same result. My gut feeling is that something is happening with routing, but I'm not smart enough to know that answer. I run a lot of VMs at 5-c-3; could that cause it? What's different compared to my previous house is that I have introduced gigabit hardware (firewall/switches/computers). Some of my computers might have IPv6 turned on if I haven't turned it off already. I'm truly at a loss and hope someone has a crazy idea how to solve this. Bottom line, I need a switch in my office behind the firewall. I've changed everything. The real crux is that I will find a working solution and, again, after days it will stop working, which means I cannot isolate whether it's a computer, since I have to use them. Oh, and a solution is not throwing more money at this; I'm well into $1k already. Yah, lame.
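    Not part of the original post: one hedged way to narrow this down is to measure host-to-host throughput through the suspect switch with iperf (assuming it is installed on two wired machines) and to watch ARP when connectivity dies, since a workstation running many bridged VMs can leave stale or conflicting entries in its neighbours' ARP tables. The address below is illustrative:

        REM On wired host A:
        iperf -s

        REM On wired host B, through the suspect switch:
        iperf -c 192.168.10.20 -t 30

        REM When hosts stop seeing each other, check ARP and which adapter answered:
        arp -a
        ping -n 5 192.168.10.20
        ipconfig /all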

    Read the article

  • Segmentation Fault (11) with modwsgi on CentOS 5.7 when running pyramid app

    - by carbotex
    I'm getting a Segmentation fault error when trying to access the "Hello World" Pyramid app. The error only occurs when running against a CentOS 5.7 setup; there's no problem whatsoever when tested against OS X and Arch Linux. Could it be a CentOS-specific issue?

        [error] [client 10.211.55.2] Premature end of script headers: pyramid.wsgi
        [notice] child pid 31212 exit signal Segmentation fault (11)

    I have tried to follow the troubleshooting guide posted at http://code.google.com/p/modwsgi/wiki/InstallationIssues, which suggests it might be caused by a missing shared library. A quick check reveals that shared libraries are not the issue:

        [centos57@localhost modules]$ ldd mod_wsgi.so
        linux-gate.so.1 => (0x00e6a000)
        libpython2.7.so.1.0 => /home/python/lib/libpython2.7.so.1.0 (0x0024c000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x00da8000)
        libdl.so.2 => /lib/libdl.so.2 (0x00cd6000)
        libutil.so.1 => /lib/libutil.so.1 (0x00110000)
        libm.so.6 => /lib/libm.so.6 (0x0085c000)
        libc.so.6 => /lib/libc.so.6 (0x00682000)
        /lib/ld-linux.so.2 (0x0012b000)

    Then I found another clue that might solve my problem, http://code.google.com/p/modwsgi/wiki/IssuesWithExpatLibrary, but unfortunately libexpat is not the source of the problem either:

        [centos57@localhost bin]$ ldd ~/httpd/bin/httpd | grep expat
        libexpat.so.1 => /usr/local/lib/libexpat.so.1 (0x00b00000)
        [centos57@localhost bin]$ strings /usr/local/lib/libexpat.so.1 | grep expat
        libexpat.so.1
        expat_2.0.1
        [centos57@localhost bin]$ python
        Python 2.7.2 (default, Nov 26 2011, 08:08:44)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import pyexpat
        >>> pyexpat.version_info
        (2, 0, 0)
        >>>

    I've been pulling my hair out trying to figure out what I'm missing in my setup. Why does the problem only occur on CentOS? Here is the detailed setup: Apache 2.2.19, Python 2.7.2, mod_wsgi-3.3.

    /home/httpd/conf/extra/pyramid.wsgi:

        from pyramid.paster import get_app
        application = get_app('/home/homecamera/hcadmin/root/production.ini', 'main')

    /home/httpd/conf/extra/modwsgi.conf:

        LoadModule wsgi_module modules/mod_wsgi.so
        WSGIScriptAlias /myapp /home/root/test.wsgi
        <Directory /home/root>
            WSGIProcessGroup pyramid
            Order allow,deny
            Allow from all
        </Directory>

        # Use only 1 Python sub-interpreter. Multiple sub-interpreters
        # play badly with C extensions.
        WSGIApplicationGroup %{GLOBAL}
        WSGIPassAuthorization On
        WSGIDaemonProcess pyramid user=daemon group=daemon processes=1 \
            threads=4 \
            python-path=/home/python/lib/python2.7/site-packages
        WSGIScriptAlias /hello /home/httpd/conf/extra/pyramid.wsgi
        <Directory /home/httpd/conf/extra>
            WSGIProcessGroup pyramid
            Order allow,deny
            Allow from all
        </Directory>

    Again, this same setup works on OS X and Arch Linux but not on CentOS 5.7. Could someone out there point me in the right direction before I run out of hair?

    ====================================================================================

    When Apache is started under gdb, I get a couple of warnings:

        Reading symbols from /home/httpd/bin/httpd...done.
        Attaching to program: /home/httpd/bin/httpd, process 1821
        warning: .dynamic section for "/lib/libcrypt.so.1" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        warning: .dynamic section for "/lib/libutil.so.1" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations

    gdb output after hitting the refresh button to load Pyramid:

        (gdb) cont
        Continuing.
        warning: .dynamic section for "/usr/lib/libgssapi_krb5.so.2" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        warning: .dynamic section for "/usr/lib/libkrb5.so.3" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        warning: .dynamic section for "/lib/libresolv.so.2" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        Program received signal SIGSEGV, Segmentation fault.
        [Switching to Thread 0x8edbb90 (LWP 1824)]
        0x0814c120 in EVP_PKEY_CTX_dup ()

    apache_error_log:

        [info] mod_wsgi (pid=1821): Starting process 'pyramid' with threads=1.
        [info] mod_wsgi (pid=1821): Initializing Python.
        [info] mod_wsgi (pid=1821): Attach interpreter ''.
        [info] mod_wsgi (pid=1821): Create interpreter 'web.domain.com:20000|/hcadmin'.
        [info] [client 10.211.55.2] mod_wsgi (pid=1821, process='pyramid', application='web.domain.com:20000|/hcadmin'): Loading WSGI script '/home/httpd/conf/extra/pyramid.wsgi'.
        [error] hello 1
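    Not from the original post, but the gdb stop inside EVP_PKEY_CTX_dup() is an OpenSSL symbol, which often points at two different OpenSSL builds being pulled into the same httpd process (for example mod_ssl against the system OpenSSL and the custom /home/python Python against another). A hedged sketch of how one might check; the _ssl.so path is illustrative for a --prefix=/home/python install:

        # Which OpenSSL does each component link against? Compare the resolved paths.
        ldd /home/httpd/modules/mod_wsgi.so | grep -iE 'ssl|crypto'
        ldd /home/httpd/bin/httpd | grep -iE 'ssl|crypto'          # and mod_ssl.so, if loaded
        ldd /home/python/lib/libpython2.7.so.1.0 | grep -iE 'ssl|crypto'
        ldd /home/python/lib/python2.7/lib-dynload/_ssl.so | grep -iE 'ssl|crypto'

        # Confirm mod_wsgi really was built against the Python 2.7 in /home/python
        strings /home/httpd/modules/mod_wsgi.so | grep -i 'python2\.'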

    Read the article

  • Three ways to upload/post/convert iMovie to YouTube

    - by user44251
    For Mac users, iMovie is a convenient tool for making and editing their own home movies to upload to YouTube and share with more people. However, uploading iMovie files to YouTube isn't always a smooth run; I did notice many people complaining about it. This article guides those who are haunted by that nightmare through three common ways to upload iMovie files to YouTube.

    YouTube and iMovie

    YouTube is the most popular video sharing website, letting users upload, share and view videos. It gives anyone with an Internet connection the ability to upload video clips and share them with friends, family and the world. Users are invited to leave comments, pick favourites, send messages to each other and watch videos sorted into subjects and channels. YouTube accepts videos uploaded in most container formats, including WMV (Windows Media Video), 3GP (cell phones), AVI (Windows), MOV (Mac), MP4 (iPod/PSP), FLV (Adobe Flash) and MKV (H.264). These include video codecs such as MP4, MPEG and WMV.

    iMovie is a video editing application that comes with every Mac and lets users edit their own home movies. It imports video footage to the Mac either over the FireWire interface on most MiniDV digital video cameras, over the USB port, or from files on a hard drive, after which users can edit the video clips, add titles, and add music. Since 1999, Apple has released eight versions of iMovie, each with its own functions and characteristics, and each handling video somewhat differently. As far as my research goes, the formats iMovie most commonly handles are MOV, DV, HDV and MPEG-4.

    Three ways to successfully upload iMovie files to YouTube

    Solutions one and two suit those who are certain their iMovie files are fully compatible with YouTube. For smooth uploading, you need a YouTube account first.

    Solution 1: Directly upload iMovie to YouTube
    Step 1: Launch iMovie and select the project you want to upload to YouTube.
    Step 2: Go to the File menu, click Share, and select Export Movie.
    Step 3: Specify the output file name and directory, and then choose the video type and video size.

    Solution 2: Post iMovie to YouTube directly
    Step 1: Launch iMovie and choose the project you want to post to YouTube.
    Step 2: From the Share menu, choose YouTube.
    Step 3: In the pop-up YouTube window, specify the name of your YouTube account and the password, choose the Category, and fill in the description and tags of the project. Tick "Make this movie more private" at the bottom of the window, if appropriate, to limit who can view the project. Click Next, and then click Publish. iMovie will automatically export and upload the movie to YouTube.
    Step 4: Click "Tell a Friend" to email friends and family about your film. You can also copy the URL from the Tell a Friend window and paste it into an email you create in your favourite email application. Anyone you send the email to will be able to follow the URL directly to your movie.
    Note: Videos uploaded to YouTube are limited to ten minutes in length and a file size of 2 GB.

    Solution 3: Upload to YouTube after conversion
    If neither of the above methods works, there is still a third way to turn to.
    Sometimes your iMovie files are not recognized by YouTube because of the iMovie version (settings and functions vary between versions), the video itself (differences in file extension, resolution, size and length), or outright incompatibility (videos in formats YouTube does not accept). In that case, the most reliable method is to convert your iMovie files into files YouTube accepts, and an iMovie to YouTube converter is the obvious choice. Such a converter turns iMovie files into YouTube-workable WMV, 3GP, AVI, MOV, MP4, FLV or MKV for smooth uploading, with fast conversion and good output quality. It can also convert between almost all popular file formats, such as AVI, WMV, MPG, MOV, VOB, DV, MP4, FLV, 3GP, RM, ASF, SWF, MP3, AAC, AC3, AIFF, AMR, WAV and WMA, so you can put videos on various portable devices, import them into video editing software, or play them in a wide range of video players. An iMovie to YouTube converter can also serve as a video editing tool for specific requirements. For example, you can cut your video files to a certain length, or split them into smaller ones and select a resolution suitable for YouTube, using Clip or Settings respectively. Crop lets you cut unwanted black edges off your videos. You can also keep control of the whole process and snapshot your favourite frames from the preview window. More can be expected if you give it a try.
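    One option the article does not mention: if you would rather not buy a converter, a free command-line route is ffmpeg (assuming a reasonably recent build is installed, for example via a package manager). A minimal sketch that re-encodes an exported iMovie .mov into a YouTube-friendly H.264/AAC MP4; the file names are illustrative:

        # Re-encode the iMovie export as H.264 video + AAC audio in an MP4 container
        ffmpeg -i my_imovie_export.mov -c:v libx264 -preset medium -crf 20 -c:a aac -b:a 192k my_upload.mp4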

    Read the article

  • OpenIndiana (illumos): vmxnet3 interface lost on reboot

    - by protomouse
    I want my VMware vmxnet3 interface to be brought up with DHCP on boot. I can manually configure the NIC with:

        # ifconfig vmxnet3s0 plumb
        # ipadm create-addr -T dhcp vmxnet3s0/v4dhcp

    But after creating /etc/dhcp.vmxnet3s0 and rebooting, the interface is down and the logs show:

        Aug 13 09:34:15 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no
        Aug 13 09:34:15 neumann vmxnet3s: [ID 715698 kern.notice] vmxnet3s:0: stop()
        Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no
        Aug 13 09:34:17 neumann vmxnet3s: [ID 920500 kern.notice] vmxnet3s:0: start()
        Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(TxRingSize) -> 256
        Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxRingSize) -> 256
        Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxBufPoolLimit) -> 512
        Aug 13 09:34:17 neumann nwamd[491]: [ID 605049 daemon.error] 1: nwamd_set_unset_link_properties: dladm_set_linkprop failed: operation not supported
        Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x20000) -> no
        Aug 13 09:34:17 neumann nwamd[491]: [ID 751932 daemon.error] 1: nwamd_down_interface: ipadm_delete_addr failed on vmxnet3s0: Object not found
        Aug 13 09:34:17 neumann nwamd[491]: [ID 819019 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv4 failed for vmxnet3s0: Operation not supported on disabled object
        Aug 13 09:34:17 neumann nwamd[491]: [ID 160156 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv6 failed for vmxnet3s0: Operation not supported on disabled object
        Aug 13 09:34:17 neumann nwamd[491]: [ID 771489 daemon.error] 1: add_ip_address: ipadm_create_addr failed on vmxnet3s0: Operation not supported on disabled object
        Aug 13 09:34:17 neumann nwamd[491]: [ID 405346 daemon.error] 9: start_dhcp: ipadm_create_addr failed for vmxnet3s0: Operation not supported on disabled object

    I then tried disabling network/physical:nwam in favour of network/physical:default. This works in that the interface is brought up, but physical:default fails and my network services (e.g. NFS) refuse to start.

        # ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1  inet 127.0.0.1 netmask ff000000
        vmxnet3s0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2  inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:1: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2  inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:2: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2  inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:3: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2  inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:4: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2  inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:5: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2  inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:6: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2  inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:7: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2  inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:8: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2  inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1  inet6 ::1/128
        vmxnet3s0: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 9000 index 2  inet6 ::/0

        # cat /var/svc/log/network-physical\:default.log
        [ Aug 16 09:46:39 Enabled. ]
        [ Aug 16 09:46:41 Executing start method ("/lib/svc/method/net-physical"). ]
        [ Aug 16 09:46:41 Timeout override by svc.startd. Using infinite timeout. ]
        starting DHCP on primary interface vmxnet3s0
        ifconfig: vmxnet3s0: DHCP is already running
        [ Aug 16 09:46:43 Method "start" exited with status 96. ]

    The NFS server is not running:

        # svcs -xv network/nfs/server
        svc:/network/nfs/server:default (NFS server)
         State: offline since August 16, 2012 09:46:40 AM UTC
        Reason: Service svc:/network/physical:default is not running because a method failed.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/milestone/network:default
                svc:/network/physical:default
        Reason: Service svc:/network/physical:nwam is disabled.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/milestone/network:default
                svc:/network/physical:nwam
        Reason: Service svc:/network/nfs/nlockmgr:default is disabled.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/network/nfs/nlockmgr:default
           See: man -M /usr/share/man -s 1M nfsd
        Impact: This service is not running.

    I'm new to the world of Solaris, so any help solving this would be much appreciated. Thanks!
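    Not from the original post: a hedged sketch of settling on network/physical:default instead of nwam, using the traditional per-interface boot files and re-enabling the NFS dependency chain. The service and file names are the standard illumos ones, but verify them against your build; the "DHCP is already running" failure above suggests two mechanisms are currently trying to configure the interface, so make sure only one is left enabled.

        # Use the classic network configuration service rather than nwam
        svcadm disable svc:/network/physical:nwam
        svcadm enable  svc:/network/physical:default

        # Have physical:default plumb the NIC and run DHCP on it at boot
        touch /etc/hostname.vmxnet3s0
        touch /etc/dhcp.vmxnet3s0

        # Bring the NFS server back along with its disabled dependencies (nlockmgr, etc.)
        svcadm enable -r svc:/network/nfs/server
        svcs -xv network/nfs/server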

    Read the article
