Search Results

Search found 2592 results on 104 pages for 'backbone routing'.


  • How to Avoid Your Next 12-Month Science Project

    - by constant
    While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory make Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades? On the surface, this may be true. And some people tried exactly that: They tried to put together their own version of Exalogic, only to discover that there's a lot more to building a system than buying hardware and assembling it. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting together a bunch of servers, a storage array and an InfiniBand network into a rack? Let's explore some of the special sauce that makes Exalogic unique and un-copyable, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company. Engineering Systems is Hard Work! The backbone of Exalogic is its InfiniBand network: 4 times the bandwidth of even 10 Gigabit Ethernet, and only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed: It is true that Exalogic uses a standard, open-source OpenFabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with maximum speed in mind (which is good) but, despite the name, not many other enterprise-class requirements are included (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users didn't have the need to implement them: More than 100 bug fixes in the pieces that were not related to the Message Passing Interface (MPI), which is the protocol that HPC users use most of the time, but which is less useful in the enterprise. Performance optimizations and tuning across the whole IB stack: from switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance. Ethernet over IB (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with the datacenters around it. That's why Oracle added Ethernet over InfiniBand technology, which allows for creating many virtual 10GbE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB-to-10GbE gateway switches, and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which conveniently ship with a whole server infrastructure attached! 
IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class. HA capability for SDP. High Availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy. If any cable or port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work. So in addition to Weblogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level. Security, for example spoof-protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers. InfiniBand Partitioning and Quality-of-Service (QoS): One of the first questions we get from customers about Exalogic is: “How can we implement multi-tenancy?” The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition to that, QoS allows administrators to prioritize traffic flow in multi-tenancy environments so they can keep their service levels where it matters most. Resilient IB Fabric Management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's Infiniband switches come with pre-integrated, highly available fabric management with seamless integration into Oracle Enterprise Manager Ops Center. In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack! Exabus: Because it's not About the Size of Your Network, it's How You Use it! So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one? Anyway, the next step is to actually leverage that InfiniBand performance. Here are the choices: Use traditional TCP/IP on top of the InfiniBand stack, Develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols. While more bandwidth is always a good thing, it's actually the low latency that enables superior performance for your applications when running on any networking infrastructure: The lower the latency, the faster the response travels through the network and the more transactions you can close per second. The reason why InfiniBand is such a low latency technology is that it gets rid of most if not all of your traditional networking protocol stack: Data is literally beamed from one region of RAM in one server into another region of RAM in another server with no kernel/drivers/UDP/TCP or other networking stack overhead involved! Which makes option 1 a no-go: Adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be ok in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which only leaves option 2: Integrating your middleware with fast, low-level InfiniBand protocols. 
And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic: RDMA and SDP integration at the JDBC driver level (SDP), for Oracle Weblogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware can communicate a lot faster with each other and the Oracle database than by using standard networking protocols, Seamless Integration of Ethernet over InfiniBand from Exalogic's Gateway switches into the OS, Oracle Weblogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your ramps so you can feed it with many cars in parallel. Integration of Weblogic with Oracle Exadata for faster performance, optimized session management and failover. As you see, “Exabus” is Oracle's word for describing all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware level. All working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware! The Extras: Less Hassle, More Productivity, Faster Time to Market And then there are the other advantages of Engineered Systems that are true for Exalogic the same as they are for every other Engineered System: One simple purchasing process: No headaches due to endless RFPs and no “Will X work with Y?” uncertainties. Everything has been engineered together: All kinds of bugs and problems have been already fixed at the design level that would have only manifested themselves after you have built the system from scratch. Everything is built, tested and integrated at the factory level . Less integration pain for you, faster time to market. Every Exalogic machine world-wide is identical to Oracle's own machines in the lab: Instant replication of any problems you may encounter, faster time to resolution. Simplified patching, management and operations. One throat to choke: Imagine finger-pointing hell for systems that have been put together using several different vendors. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved. For more business-centric values, read The Business Value of Engineered Systems. Conclusion: Buy Exalogic, or get ready for a 6-12 Month Science Project And here's the reason why it's not easy to "build your own Exalogic": There's a lot of work required to make such a system fly. In fact, anybody who is starting to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project. And the outcome is likely to not be very enterprise-class. And it won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: It takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand has been reserved for HPC scientists for such a long time. 
    And only Oracle can bring the power of InfiniBand in an enterprise-class, ready-to-use, pre-integrated version to customers, without the develop/integrate/support pain. For more details, check the new Exalogic overview white paper, which was updated only recently. P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!

    Read the article

  • Adopting Technologies for the Sake of Technologies

    - by shiju
    Unlike other engineering industries, the software engineering industry really lacks maturity. This lack of maturity shows in different aspects of the entire software development life cycle. I think other engineering industries are well organised and structured around common, proven engineering practices. The software engineering industry is a greatly diverse industry, with different operating systems and a variety of development platforms, programming languages, frameworks and tools. These days, people chase hype and intellectual fashions without understanding their core business problems, adopting technologies and practices for the sake of technologies and practices and simply becoming a “poster child” of those technologies and practices. Understanding the core business problem and providing the best, solid solution with a platform-neutral approach will give you more business value and ROI than blindly adopting technologies and tailoring your applications for the sake of technologies and practices. People have been simply migrating their solutions to new technologies and different versions of frameworks without any business need. The “Pepsi Challenge” in Software Development  The Pepsi Challenge marketing campaign of the 1980s was a popular and very interesting promotion in which people tasted one cup of Pepsi and another cup of Coca-Cola. In the taste test, more than 50% of people preferred Pepsi over Coca-Cola. The success behind Pepsi was its extra sweetness: they had simply added more sugar, and more people preferred the sweeter flavour. But you can't identify the better cola after sipping a single cup, based only on how sweet it is. The same thing has been happening in the software industry when choosing development frameworks and technologies. People have been choosing frameworks based on the initial sugary feeling, without understanding their core strengths and weaknesses. The sugary framework might be more harmful when you develop real-world systems. There is no silver bullet for solving every kind of problem, and frameworks and tools have both strengths and weaknesses, so it is better to understand them. And please keep in mind that you have to develop real apps to understand the real capabilities and weaknesses of a framework. Evaluating a technology based on a few blog posts will harm your projects, and those bloggers might lack real-world experience with the framework. The Problem with Aligning a Development Practice with Tools  Recently I observed a discussion in a group where someone asked for suggestions on practicing Continuous Delivery (CD) as part of agile application engineering. The discussion quickly turned to choosing a Continuous Integration (CI) tool, and different people suggested different CI tools simply for practicing Continuous Delivery. If you have worked with core agile engineering practices, you know that the real essence of agile is neither choosing a tool nor choosing a process. Simply choosing a CI tool from a particular vendor will not ensure that you are delivering evolving software based on customer feedback. You have to understand the real essence of an engineering practice and choose the right tool for practicing it, instead of simply focusing on a particular tool. 
    If you want to adopt a practice, you need a solid understanding of it and its real essence; tools just help us with better automation. Adopting New Technologies for the Sake of Technologies  Another problem is that developers have a tendency to adopt new technologies and simply migrate their existing apps to them. It is okay if your existing system has a problem with its technology stack or a maintainability challenge, and you move to a new technology to solve those current problems. We also adopt new technologies to solve new challenges, such as scalability when the application or user base grows unpredictably. Please keep in mind that every new technology becomes old after you have worked with it for a few years. The Facebook status update of Janakiraman below expresses the attitude of a typical customer. For example, Node.js is becoming one of the hottest buzzwords in the software industry, and many developers are trying to adopt it for their apps. The important thing is that Node.js is a minimalist framework that does some great things for some problems, but it's not a silver bullet. I have also been working with Node.js, which is good for some problems but a really bad choice for every kind of problem. Adopting new technologies for new projects is good if we get real business value from them, because a newer framework may solve well-known existing problems and provide better solutions for the latest challenges. But adopting a new technology for the sake of the new technology is a really bad idea. Another example: JavaScript is getting a lot of attention, so many developers are building heavy, JavaScript-centric web apps. First they adopt a client-side JavaScript MV* framework such as AngularJS, Ember or Backbone and develop a Single Page App (SPA), repeating the mistakes we made in the past on the server side - the server-side mistakes are simply being transferred to the client side. The problem is that people are just adopting new technologies, not improving their solutions. I predict that many Single Page Apps will suck in the future. We need a hybrid approach where we can leverage both the server side and the client side for developing next-generation web apps. Another problem is liking a particular framework and then using it for every kind of app. In the past I knew some passionate Silverlight guys who tried to use that framework for every kind of app, including large line-of-business apps. And these days developers are migrating their existing Silverlight apps in favour of the HTML5 buzzword. So the real question is: what business value are we getting from these apps when we develop them for the sake of a particular technology instead of a business need? Yet another problem is that our solutions consultants try to provide unnecessary solutions for the sake of a particular technology or a hype. For example, Big Data solutions are great for solving the problem of the three Vs: volume, velocity and variety. But trying to apply this to every application creates problems. Say there is a small web site running on a limited budget, and someone insists it needs a recommendation engine built on a Hadoop-based solution with a 16-node cluster - that would be really horrible. If we really need a Hadoop-based solution, go for it, but trying to apply it to every application would be a big disaster. 
    It would be great if we could understand the core business problems first, and then choose the right framework for solving the actual business problem, instead of trying to provide so many solutions. The Problem with Being Tied to a Platform Vendor  Some organizations and teams are tied to a particular platform vendor and don't want to use any product other than those from their preferred or existing vendor. They will accept any product provided by the vendor regardless of its capability. This gives you some benefits with regard to integration and collaboration between different products from the same vendor, but you lose the opportunity to provide better solutions for your business problems. As a real-world sample scenario, a lot of companies use SAP for their ERP solutions. When they think about mobility or about developing hybrid mobile apps, they can easily find a framework from SAP: SAP provides a framework for HTML5-based UI development named SAPUI5. If you adopt that framework only because of your preference for the existing platform vendor, you might lose other opportunities to provide a better solution. Initially you might enjoy the sugary feeling provided by the platform vendor, but you have to think about developing apps capable of solving future challenges. I am not saying that any particular framework is bad; I believe every framework is better than the others for solving at least one problem. My point is that we should not be tied to any specific platform vendor unless the organization has resource availability problems. Being Polyglot to Provide the Right Solutions  The modern software engineering industry is greatly diverse, with different tools and platforms. Lots of open-source frameworks and new programming languages are being released to the developer community, and choosing the right platform without a biased opinion is really a difficult task. But it would be really great if we could develop a platform-neutral mindset and be polyglot developers, providing better solutions based on the actual business problems. IMHO, we should learn a new programming language and a new framework every year. This will improve the quality of our capabilities as developers and also improve the quality of our primary programming language skills. Being polyglot, as individual developers and as organizational teams, gives greater opportunities both to your developer experience and to your applications. Organizations can analyse their business problems without being tied to any technology, and later provide solutions by choosing different platforms and tools. Summary  In this blog post, what I was trying to say is that we should not be tied to or biased toward any development platform, technology, vendor or programming language, and we should not adopt technologies and practices for the sake of technologies. If we adopt a technology or a practice for its own sake, we simply become a “poster child” of that technology or practice. We should not become poster children of other people's intellectual thoughts and theories; instead we should become solutions developers and solutions consultants who can provide better solutions for the business problems. Being a polyglot developer is a good way to improve your developer skills, which lets you provide better solutions for the business problems. 
    The most important thing is that we should become platform-neutral developers whose passion is providing brilliant solutions. It would be great if we could provide minimalist, pragmatic business solutions. You can follow me on Twitter @shijucv

    Read the article

  • Our winners- and some BBQ for everyone

    - by Steve Tunstall
    Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest…Now if Steve happens to live in Manly, we may actually have a tie going…) OK, ok, for everyone else, you can be winners, too. How, you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting. Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where it connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have. Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane; you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometimes it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a competitive BBQ chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his rub recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once, make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
    You can get a shaker with medium-sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in fridge overnight. Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice. Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chunks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chunks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever. Step 5. Smoke. Add your ribs to the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. With this cheap guy it’s a little harder to maintain the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part of making the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours are only to make the smoke bark. You don’t need smoke anymore; since the ribs are wrapped, you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!! Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor. Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • Linux router: ping doesn't route back

    - by El Barto
    I have a Debian box which I'm trying to set up as a router and an Ubuntu box which I'm using as a client. My problem is that when the Ubuntu client tries to ping a server on the Internet, all the packets are lost (though, as you can see below, they seem to go to the server and back without problem). I'm doing this in the Ubuntu Box: # ping -I eth1 my.remote-server.com PING my.remote-server.com (X.X.X.X) from 10.1.1.12 eth1: 56(84) bytes of data. ^C --- my.remote-server.com ping statistics --- 13 packets transmitted, 0 received, 100% packet loss, time 12094ms (I changed the name and IP of the remote server for privacy). From the Debian Router I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 7, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 8, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 8, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 9, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 9, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 10, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 10, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 11, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 11, length 64 ^C 9 packets captured 9 packets received by filter 0 packets dropped by kernel # tcpdump -i eth2 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 213, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 213, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 214, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 214, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 215, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 215, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 216, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 216, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 217, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 217, length 64 ^C 10 packets captured 10 packets received by filter 0 packets dropped by kernel And at the remote server I see this: # tcpdump -i eth0 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 1, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 1, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 2, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 2, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 3, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 3, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 4, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 4, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 5, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 5, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 6, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 6, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 7, 
length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 7, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 8, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 8, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 9, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 9, length 64 18 packets captured 228 packets received by filter 92 packets dropped by kernel Here "X.X.X.X" is my remote server's IP and "Y.Y.Y.Y" is my local network's public IP. So, what I understand is that the ping packets are coming out of the Ubuntu box (10.1.1.12), to the router (10.1.1.1), from there to the next router (192.168.1.1) and reaching the remote server (X.X.X.X). Then they come back all the way to the Debian router, but they never reach the Ubuntu box back. What am I missing? Here's the Debian router setup: # ifconfig eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:105761 errors:0 dropped:0 overruns:0 frame:0 TX packets:48944 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:40298768 (38.4 MiB) TX bytes:44831595 (42.7 MiB) Interrupt:19 Base address:0x6000 eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:38335992 errors:0 dropped:0 overruns:0 frame:0 TX packets:37097705 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:4260680226 (3.9 GiB) TX bytes:3759806551 (3.5 GiB) Interrupt:27 eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:20 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:3408 errors:0 dropped:0 overruns:0 frame:0 TX packets:3408 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:358445 (350.0 KiB) TX bytes:358445 (350.0 KiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:2767779 errors:0 dropped:0 overruns:0 frame:0 TX packets:1569477 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3609469393 (3.3 GiB) TX bytes:96113978 (91.6 MiB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Ubuntu box (on 10.1.1.12 and 192.168.1.12) Address HWtype HWaddress Flags Mask Iface 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.102 ether NN:NN:NN:NN:NN:NN C eth2 10.1.1.12 ether 00:1e:67:15:2b:f0 C eth1 192.168.1.86 ether NN:NN:NN:NN:NN:NN C 
eth2 192.168.1.2 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.40 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.12 ether 00:1e:67:15:2b:f1 C eth2 192.168.1.77 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.41 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.123 ether NN:NN:NN:NN:NN:NN C eth2 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- 10.1.1.0/24 !10.1.1.0/24 MASQUERADE all -- !10.1.1.0/24 10.1.1.0/24 Chain OUTPUT (policy ACCEPT) target prot opt source destination And here's the Ubuntu box: # ifconfig eth0 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f1 inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:28785139 errors:0 dropped:0 overruns:0 frame:0 TX packets:19050735 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:32068182803 (32.0 GB) TX bytes:6061333280 (6.0 GB) Interrupt:16 Memory:b1a00000-b1a20000 eth1 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f0 inet addr:10.1.1.12 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:285086 errors:0 dropped:0 overruns:0 frame:0 TX packets:12719 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:30817249 (30.8 MB) TX bytes:2153228 (2.1 MB) Interrupt:16 Memory:b1900000-b1920000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:86048 errors:0 dropped:0 overruns:0 frame:0 TX packets:86048 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:11426538 (11.4 MB) TX bytes:11426538 (11.4 MB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.1.1.1 0.0.0.0 UG 100 0 0 eth1 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.8.0.0 192.168.1.10 255.255.255.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Debian box (on 10.1.1.1 and 192.168.1.10) Address HWtype HWaddress Flags Mask Iface 192.168.1.70 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.97 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.103 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.13 ether 
NN:NN:NN:NN:NN:NN C eth0 192.168.1.120 (incomplete) eth0 192.168.1.111 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.51 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.102 (incomplete) eth0 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.74 (incomplete) eth0 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.121 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.71 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.88 (incomplete) eth0 192.168.1.82 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.98 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.73 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.11 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.85 (incomplete) eth0 192.168.1.112 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.81 ether NN:NN:NN:NN:NN:NN C eth0 10.1.1.1 ether 94:0c:6d:82:0d:98 C eth1 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.10 ether 6c:f0:49:a4:47:38 C eth0 192.168.1.86 (incomplete) eth0 192.168.1.119 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth1 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth0 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination Edit: Following Patrick's suggestion, I did a tcpdump con the Ubuntu box and I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 1, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 1, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 2, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 2, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 3, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 3, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 4, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 4, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 5, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 5, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 6, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 6, length 64 ^C 12 packets captured 12 packets received by filter 0 packets dropped by kernel So the question is: if all packets seem to be coming and going, why does ping report 100% packet loss?

    Read the article

  • Removing the XML Formatter from ASP.NET Web API Applications

    - by Rick Strahl
    ASP.NET Web API's default output format is supposed to be JSON, but when I access my Web APIs using the browser address bar I'm always seeing an XML result instead. When working on AJAX applications I like to test many of my AJAX APIs with the browser while working on them. While I can't debug all requests this way, GET requests are easy to test in the browser especially if you have JSON viewing options set up in your various browsers. If I preview a Web API request in most browsers I get an XML response like this: Why is that? Web API checks the HTTP Accept headers of a request to determine what type of output it should return by looking for content types that it has formatters registered for. This automatic negotiation is one of the great features of Web API because it makes it easy and transparent to request different kinds of output from the server. In the case of browsers it turns out that most send Accept headers that look like this (Chrome in this case): Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Web API inspects the entire list of headers from left to right (plus the quality/priority flag q=) and tries to find a media type that matches its list of supported media types in the list of formatters registered. In this case it matches application/xml to the Xml formatter and so that's what gets returned and displayed. To verify that Web API indeed defaults to JSON output, you can open the request in Fiddler and pop it into the Request Composer, remove the application/xml header and see that the output returned comes back in JSON instead. An Accept header like this: Accept: text/html,application/xhtml+xml,*/*;q=0.9 or leaving the Accept header out altogether should give you a JSON response. Interestingly enough Internet Explorer 9 also displays JSON because it doesn't include an application/xml Accept header: Accept: text/html, application/xhtml+xml, */* which for once actually seems more sensible. Removing the XML Formatter We can't easily change the browser Accept headers (actually you can by delving into the config but it's a bit of a hassle), so can we change the behavior on the server? When working on AJAX applications I tend to not be interested in XML results and I always want to see JSON results at least during development. Web API uses a collection of formatters and you can go through this list and remove the ones you don't want to use - in this case the XmlMediaTypeFormatter. To do this you can work with the HttpConfiguration object and the static GlobalConfiguration object used to configure it:

    protected void Application_Start(object sender, EventArgs e)
    {
        // Action based routing (used for RPC calls)
        RouteTable.Routes.MapHttpRoute(
            name: "StockApi",
            routeTemplate: "stocks/{action}/{symbol}",
            defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" }
        );

        // WebApi Configuration to hook up formatters and message handlers
        RegisterApis(GlobalConfiguration.Configuration);
    }

    public static void RegisterApis(HttpConfiguration config)
    {
        // remove default Xml handler
        var matches = config.Formatters
                            .Where(f => f.SupportedMediaTypes
                                         .Where(m => m.MediaType.ToString() == "application/xml" ||
                                                     m.MediaType.ToString() == "text/xml")
                                         .Count() > 0)
                            .ToList();

        foreach (var match in matches)
            config.Formatters.Remove(match);
    }

    That LINQ code is quite a mouthful of nested collections, but it does the trick to remove the formatter based on the content type. 
    You can also look for the specific formatter (XmlMediaTypeFormatter) by its type name, which is simpler, but it's better to search for the supported types as this will work even if there are other custom formatters added. Once removed, the browser request now results in a JSON response: It's a simple solution to a small debugging task that's made my life easier. Maybe you find it useful too… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api  ASP.NET
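    For reference, here is a minimal sketch of that type-based alternative. It is not from the original post; it simply removes any registered XmlMediaTypeFormatter instances by type, and it assumes the standard System.Net.Http.Formatting and System.Web.Http namespaces that ship with ASP.NET Web API:

    using System.Linq;
    using System.Net.Http.Formatting;
    using System.Web.Http;

    public static class FormatterConfig
    {
        // Remove XML output by formatter type rather than by supported media type
        public static void RemoveXmlFormatter(HttpConfiguration config)
        {
            // Find every registered XmlMediaTypeFormatter (built-in or derived)
            var xmlFormatters = config.Formatters
                                      .OfType<XmlMediaTypeFormatter>()
                                      .ToList();

            foreach (var formatter in xmlFormatters)
                config.Formatters.Remove(formatter);

            // For just the built-in instance, the convenience property also works:
            // config.Formatters.Remove(config.Formatters.XmlFormatter);
        }
    }

    You would call it the same way RegisterApis is called above, for example FormatterConfig.RemoveXmlFormatter(GlobalConfiguration.Configuration); from Application_Start.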

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Years! 2010 has been a busy blogging year for me (this is the 100th blog post I’ve done in 2010).  Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year.  Below is a quick listing of some of my favorite posts organized by topic area: VS 2010 and .NET 4 Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April: Visual Studio 2010 and .NET 4 Released Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export) Box Selection and Multi-line Editing Support with VS 2010 VS 2010 Extension Manager (and the cool new PowerCommands Extension) Pinning Projects and Solutions VS 2010 Web Deployment Debugging Tips/Tricks with Visual Studio Search and Navigation Tips/Tricks with Visual Studio Visual Studio Below are some additional Visual Studio posts I’ve done (not in the first series above) that I thought were nice: Download and Share Visual Studio Color Schemes Visual Studio 2010 Keyboard Shortcuts VS 2010 Productivity Power Tools Fun Visual Studio 2010 Wallpapers Silverlight We shipped Silverlight 4 in April, and announced Silverlight 5 the beginning of December: Silverlight 4 Released Silverlight 4 Tools for VS 2010 and WCF RIA Services Released Silverlight 4 Training Kit Silverlight PivotViewer Now Available Silverlight Questions Announcing Silverlight 5 Silverlight for Windows Phone 7 We shipped Windows Phone 7 this fall and shipped free Visual Studio development tools with great Silverlight and XNA support in September: Windows Phone 7 Developer Tools Released Building a Windows Phone 7 Twitter Application using Silverlight ASP.NET MVC We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer.  
ASP.NET MVC 3 will RTM in less than 2 weeks from today: ASP.NET MVC 2: Strongly Typed Html Helpers ASP.NET MVC 2: Model Validation Introducing ASP.NET MVC 3 (Preview 1) Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack) Announcing ASP.NET MVC 3 Release Candidate 1  Announcing ASP.NET MVC 3 Release Candidate 2 Introducing Razor – A New View Engine for ASP.NET ASP.NET MVC 3: Layouts with Razor ASP.NET MVC 3: New @model keyword in Razor ASP.NET MVC 3: Server-Side Comments with Razor ASP.NET MVC 3: Razor’s @: and <text> syntax ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor ASP.NET MVC 3: Layouts and Sections with Razor IIS and Web Server Stack The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year: Fix Common SEO Problems using the URL Rewrite Extension Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Introducing IIS Express SQL CE 4 (New Embedded Database Support with ASP.NET) Introducing Web Matrix EF Code First EF Code First is a really nice new data option that enables a very clean code-oriented data workflow: Announcing Entity Framework Code-First CTP5 Release Class-Level Model Validation with EF Code First and ASP.NET MVC 3 Code-First Development with Entity Framework 4 EF 4 Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database jQuery and AJAX Contributions My team began making some significant source code contributions to the jQuery project this year: jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins jQuery Templates and Data Linking (and Microsoft contributing to jQuery) jQuery Globalization Plugin from Microsoft Patches and Hot Fixes Some useful fixes you can download prior to VS 2010 SP1: Patch for Cut/Copy “Insufficient Memory” issue with VS 2010 Patch for VS 2010 Find and Replace Dialog Growing Patch for VS 2010 Scrolling Context Menu Videos of My Talks Some recordings of technical talks I’ve done this year: ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe VS 2010 and ASP.NET 4 Web Forms Talk in Arizona Other About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular) ASP.NET Security Fix Now on Windows Update Upcoming Web Camps I’d like to say a big thank you to everyone who follows my blog – I really appreciate you reading it (the comments you post help encourage me to write it).  See you in the New Year! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • The Mysterious ARR Server Farm to URL Rewrite link

    - by OWScott
    Application Request Routing (ARR) is a reverse proxy plug-in for IIS7+ that does many things, including functioning as a load balancer.  For this post, I’m assuming that you already have an understanding of ARR.  Today I wanted to find out how the mysterious link between ARR and URL Rewrite is maintained.  Let me explain… ARR is unique in that it doesn’t work by itself.  It sits on top of IIS7 and uses URL Rewrite.  As a result, ARR depends on URL Rewrite to ‘catch’ the traffic and redirect it to an ARR Server Farm. As the last step of creating a new Server Farm, ARR will prompt you with the following: If you accept the prompt, it will create a URL Rewrite rule for you.  If you say ‘No’, then you’re on your own to create a URL Rewrite rule. When you say ‘Yes’, the Server Farm’s checkbox for “Use URL Rewrite to inspect incoming requests” will be checked.  See the following screenshot. However, I’m not a fan of this auto-rule.  The problem is that if I make any changes to the URL Rewrite rule, which I always do, and then make the wrong change in ARR, it will blow away my settings.  So, I prefer to create my own rule and manage it myself. Since I had some old rules that were managed by ARR, I wanted to update them so that they were no longer managed that way.  I took a look at a config in applicationHost.config to try to find out what property would bind the two together.  I assumed that there would be a property on the ServerFarm called something like urlRewriteRuleName that would serve as the link between ARR and URL Rewrite.  I found no such property.  After a bit of testing, I found that the name of the URL Rewrite rule is the only link between ARR and URL Rewrite.  I wouldn’t have guessed.  The URL Rewrite rule needs to be exactly ARR_{ServerFarm Name}_loadBalance, although it’s not case sensitive. Consider the following auto-created URL Rewrite rule: And, the link between ARR and URL Rewrite exists: Now, as soon as I rename that to anything else, for example, site.com ARR Binding, the link between ARR and URL Rewrite is broken. To be certain of the relationship, I renamed it back again and sure enough, the relationship was reestablished. Why is this important?  It’s only important if you want to decouple the relationship between ARR the URL Rewrite rule, but if you want to do so, the best way to do that is to rename the URL Rewrite rule.  If you uncheck the “Use URL Rewrite to inspect incoming requests” checkbox, it will delete your rule for you without prompting.  Conclusion The mysterious link between ARR and URL Rewrite only exists through the ARR Rule name.  If you want to break the link, simply rename the URL Rewrite rule.  It’s completely safe to do so, and, in my opinion, this is a rule that you should manage yourself anyway. 
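    To make the naming convention concrete, here is a small, hypothetical sketch (not from the post) that renames the auto-created rule programmatically with Microsoft.Web.Administration instead of through the IIS Manager UI. The farm name "myfarm" and the globalRules section path are assumptions for illustration; the only point is that renaming the rule away from ARR_{ServerFarm Name}_loadBalance is what breaks the link.

    using System;
    using Microsoft.Web.Administration; // ships with IIS; run with administrative rights

    class RenameArrRule
    {
        static void Main()
        {
            // Hypothetical example: a server farm named "myfarm", so the auto-created
            // rule is "ARR_myfarm_loadBalance". Any other name breaks the ARR link.
            const string autoRuleName = "ARR_myfarm_loadBalance";
            const string newRuleName  = "myfarm ARR Binding";

            using (var serverManager = new ServerManager())
            {
                var config = serverManager.GetApplicationHostConfiguration();

                // ARR's auto-created rules live in the server-level global rules section
                // (section path assumed here as system.webServer/rewrite/globalRules).
                var globalRules = config.GetSection("system.webServer/rewrite/globalRules")
                                        .GetCollection();

                foreach (ConfigurationElement rule in globalRules)
                {
                    if (string.Equals((string)rule["name"], autoRuleName, StringComparison.OrdinalIgnoreCase))
                        rule["name"] = newRuleName;
                }

                serverManager.CommitChanges();
            }
        }
    }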

    Read the article

  • List of blogs - year 2010

    - by hajan
    This is the last day of year 2010 and I would like to add links to all blogs I have posted in this year. First, I would like to mention that I started blogging in ASP.NET Community in May / June 2010 and have really enjoyed writing for my favorite technologies, such as: ASP.NET, jQuery/JavaScript, C#, LINQ, Web Services etc. I also had great feedback either through comments on my blogs or in Twitter, Facebook, LinkedIn where I met many new experts just as a result of my blog posts. Thanks to the interesting topics I have in my blog, I became DZone MVB. Here is the list of blogs I made in 2010 in my ASP.NET Community Weblog: (newest to oldest) Great library of ASP.NET videos – Pluralsight! NDepend – Code Query Language (CQL) NDepend tool – Why every developer working with Visual Studio.NET must try it! jQuery Templates in ASP.NET - Blogs Series jQuery Templates - XHTML Validation jQuery Templates with ASP.NET MVC jQuery Templates - {Supported Tags} jQuery Templates – tmpl(), template() and tmplItem() Introduction to jQuery Templates ViewBag dynamic in ASP.NET MVC 3 - RC 2 Today I had a presentation on "Deep Dive into jQuery Templates in ASP.NET" jQuery Data Linking in ASP.NET How do you prefer getting bundles of technologies?? Case-insensitive XPath query search on XML Document in ASP.NET jQuery UI Accordion in ASP.NET MVC - feed with data from database (Part 3) jQuery UI Accordion in ASP.NET WebForms - feed with data from database (Part 2) jQuery UI Accordion in ASP.NET – Client side implementation (Part 1) Using Images embedded in Project’s Assembly Macedonian Code Camp 2010 event has finished successfully Tips and Tricks: Deferred execution using LINQ Using System.Diagnostics.Stopwatch class to measure the elapsed time Speaking at Macedonian Code Camp 2010 URL Routing in ASP.NET 4.0 Web Forms Conflicts between ASP.NET AJAX UpdatePanels & jQuery functions Integration of jQuery DatePicker in ASP.NET Website – Localization (part 3) Why not to use HttpResponse.Close and HttpResponse.End Calculate Business Days using LINQ Get Distinct values of an Array using LINQ Using CodeRun browser-based IDE to create ASP.NET Web Applications Using params keyword – Methods with variable number of parameters Working with Code Snippets in VS.NET  Working with System.IO.Path static class Calculating GridView total using JavaScript/JQuery The new SortedSet<T> Collection in .NET 4.0 JavaScriptSerializer – Dictionary to JSON Serialization and Deserialization Integration of jQuery DatePicker in ASP.NET Website – JS Validation Script (part 2) Integration of jQuery DatePicker in ASP.NET Website (part 1) Transferring large data when using Web Services Forums dedicated to WebMatrix Microsoft WebMatrix – Short overview & installation Working with embedded resources in Project's assembly Debugging ASP.NET Web Services Save and Display YouTube Videos on ASP.NET Website Hello ASP.NET World... In addition, I would like to mention that I have big list of blog posts in CodeASP.NET Community (total 60 blogs) and the local MKDOT.NET Community (total 61 blogs). You may find most of my weblogs.asp.net/hajan blogs posted there too, but there you can find many others. In my blog on MKDOT.NET Community you can find most of my ASP.NET Weblog posts translated in Macedonian language, some of them posted in English and some other blogs that were posted only there. By reading my blogs, I hope you have learnt something new or at least have confirmed your knowledge. 
And also, if you haven't, I encourage you to start blogging and share your Microsoft Tech. thoughts with all of us... Sharing and spreading knowledge is definitely one of the noblest things which we can do in our life. "Give a man a fish and he will eat for a day. Teach a man to fish and he will eat for a lifetime" HAPPY NEW 2011 YEAR!!! Best Regards, Hajan

    Read the article

  • What I don’t like about WIF’s Claims-based Authorization

    - by Your DisplayName here!
    In my last post I wrote about what I like about WIF’s proposed approach to authorization – I also said that I definitely would build upon that infrastructure for my own systems. But implementing such a system is a little harder than it needs to be. Here’s why (and that’s purely my perspective): First of all WIF’s authorization comes in two “modes”: Per-request authorization. When an ASP.NET/WCF request comes in, the registered authorization manager gets called. For SOAP the SOAP action gets passed in. For HTTP requests (ASP.NET, WCF REST) the URL and verb. Imperative authorization. This happens when you explicitly call the claims authorization API from within your code. There you have full control over the values for action and resource. In ASP.NET per-request authorization is optional (it depends on whether you have added the ClaimsAuthorizationHttpModule). In WCF you always get the per-request checks as soon as you register the authorization manager in configuration. I personally prefer the imperative authorization because first of all I don’t believe in URL based authorization. Especially in the times of MVC and routing tables, URLs can be easily changed – but then you also have to adjust your authorization logic every time. Also – you typically need more knowledge than a simple “if user x is allowed to invoke operation x”. One problem I have is that both the per-request calls as well as the standard WIF imperative authorization APIs wrap actions and resources in the same claim type. This makes it hard to distinguish between the two authorization modes in your authorization manager. But you typically need that feature to structure your authorization policy evaluation in a clean way. The second problem (which is somewhat related to the first one) is the standard API for interacting with the claims authorization manager. The API comes as an attribute (ClaimsPrincipalPermissionAttribute) as well as a class to use programmatically (ClaimsPrincipalPermission). Both only allow you to pass in simple strings (which results in the wrapping with standard claim types mentioned earlier). Both throw a SecurityException when the check fails. The attribute is a code access permission attribute (like PrincipalPermission). That means it will always be invoked regardless of how you call the code. This may be exactly what you want, or not. In a unit testing situation (like an MVC controller) you typically want to test the logic in the function – not the security check. The good news is, the WIF API is flexible enough that you can build your own infrastructure around their core. For my own projects I implemented the following extensions: A way to invoke the registered claims authorization manager with more overloads, e.g. with different claim types or a complete AuthorizationContext. A new CAS attribute (with the same calling semantics as the built-in one) with custom claim types. An MVC authorization attribute with custom claim types. A way to use branching – as opposed to catching a SecurityException. I will post the code for these various extensions here – so stay tuned.
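
    To make the API the post is criticizing a bit more tangible, here is a minimal sketch of the two built-in options, assuming the WIF 1.0 types in Microsoft.IdentityModel.Claims (the resource and operation names are invented):

        using System.Security.Permissions;
        using Microsoft.IdentityModel.Claims;

        public class CustomerService
        {
            // Declarative: a code access security attribute, evaluated no matter how the method is called
            [ClaimsPrincipalPermission(SecurityAction.Demand, Resource = "Customer", Operation = "Delete")]
            public void DeleteCustomer(int id)
            {
                // delete logic would go here
            }

            // Imperative: you choose the resource/action strings yourself,
            // but a failed check still surfaces as a SecurityException
            public void CloseAccount(int id)
            {
                ClaimsPrincipalPermission.CheckAccess("Account", "Close");
                // close logic would go here
            }
        }

    Both paths end up in the registered ClaimsAuthorizationManager, and both are limited to plain strings, which is exactly the wrapping problem described above.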

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
    There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in original announcement post, but we've since added the sample code so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also if you're following along with the samples that, since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up. Part 1: Your First Web API [Video and code on the ASP.NET site] This screencast starts with an overview of why you'd want to use ASP.NET Web API: Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.) Scale (who doesn't love the cloud?!) Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions) Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project. using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace NewProject_Mvc4BetaWebApi.Controllers { public class ValuesController : ApiController { // GET /api/values public IEnumerable<string> Get() { return new string[] { "value1", "value2" }; } // GET /api/values/5 public string Get(int id) { return "value"; } // POST /api/values public void Post(string value) { } // PUT /api/values/5 public void Put(int id, string value) { } // DELETE /api/values/5 public void Delete(int id) { } } } Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note: This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API's and HTML returning controller uses are similar, and different where API and HTML use differ. A good example of where those things are different is in the routing conventions. In an HTTP controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions. 
We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController. The comments above the API methods show sample URL's and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL: That sample action returns a list of values. To get just one value back, we'd browse to /values/5: That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
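
    As a small companion to the screencast, here is a rough sketch of exercising those two GET actions from code instead of a browser, using HttpClient from System.Net.Http (the port number is made up; use whatever IIS Express assigns to your project):

        using System;
        using System.Net.Http;

        class ApiSmokeTest
        {
            static void Main()
            {
                var client = new HttpClient { BaseAddress = new Uri("http://localhost:12345/") };

                // GET /api/values  ->  ["value1","value2"]
                Console.WriteLine(client.GetStringAsync("api/values").Result);

                // GET /api/values/5  ->  "value"
                Console.WriteLine(client.GetStringAsync("api/values/5").Result);
            }
        }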

    Read the article

  • POP Forums v9 Beta 1 for ASP.NET MVC 3 posted to CodePlex!

    - by Jeff
    As promised, I posted a beta build of my forum app for ASP.NET MVC 3. Get the new goodies here: http://popforums.codeplex.com/releases/view/58228 This is the first beta for the ASP.NET MVC 3 version of POP Forums. It is nearly feature complete, and ready for testing and feedback. For previous release notes, look here, here and here.Check out the live preview: http://preview.popforums.com/ForumsSetup instructions are on the home page of this project. The new hotness in the beta, or what has been done since the last preview: All views converted to use Razor E-mail subscription/notification of new posts New post indicators/mark read buttons Permalinks to posts Jump to newest post (from new post indicators) Recent topics Favorite topics Moderator functions for topics (pin/close/delete, plus move and rename) Search, ported from v8. Not a ton of optimization here, or new unit testing, but the old version worked pretty well User posts (topics the user posted in) Forgot password Vanity items (signatures and avatars) Hide vanity items per user preference Some minor data caching where appropriate A little bit of UI refinement Lots-o-bug fixes Lots-o-unit tests What's next? The plan between now and the next beta is as follows: Continue working through features/tasks, and fix bugs as they're reported Integrate the forum into a real, production site Refine the UI Refactor as much as possible... the code organization is not entirely logical in some places After the second beta, a release candidate will follow, with a real "final" release after that. Subsequent releases should come relatively frequently and without a lot of risk. The trick in building this thing has been that it mostly tossed the previous WebForms version, which was all full of crusties. The time table for this is a little harder to pin down, as day jobs and families will have their effect. Other notes Refactoring will be a priority. As the features of MVC have evolved, so have my desires to use it in a fashion that makes things clear and easy to follow. I don't even know if anyone will ever start mucking around in the code, but on the off chance they do, I'd like what they find to not suck. Other nice-to-haves are builds to target Windows Azure and SQL CE. A nice setup UI would be super too. I think the ASP.NET MVC world has gone long enough without a decent forum.The biggest challenge that I've found is making the forum something that can be dropped in any app. While it does rope its views into an area, areas are mostly just routing details. I haven't thought of a clever way yet to limit dependency injection, for example, to just the forum bits. I mean, everyone should be using Ninject, but how realistic is that? ;)How much time and effort should you spend on POP Forums in its current state? Change is inevitable, but at this point I'm reasonably committed to not changing the database schema. I really think it will stay as-is. All bets are off for the various interfaces throughout the app, but the data should generally resist change. It's not even that different from v8, which was one of the original goals because I didn't want to rewrite SQL or introduce a new ORM or whatever. My point is that if you wanted to build a site around this today, even though it's not entirely functional, I think it's low risk in terms of data loss. I can't vouch for whether or not you know what you're doing.I've been having some chats with people lately about quoting posts, and honestly there has to be something better and straight forward. 
That continues to be a holy grail of mine, and some day, I hope to find it.Enjoy... it's starting to feel more real every day!

    Read the article

  • Optional Parameters and Named Arguments in C# 4 (and a cool scenario w/ ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] This is the seventeenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers two new language feature being added to C# 4.0 – optional parameters and named arguments – as well as a cool way you can take advantage of optional parameters (both in VB and C#) with ASP.NET MVC 2. Optional Parameters in C# 4.0 C# 4.0 now supports using optional parameters with methods, constructors, and indexers (note: VB has supported optional parameters for awhile). Parameters are optional when a default value is specified as part of a declaration.  For example, the method below takes two parameters – a “category” string parameter, and a “pageIndex” integer parameter.  The “pageIndex” parameter has a default value of 0, and as such is an optional parameter: When calling the above method we can explicitly pass two parameters to it: Or we can omit passing the second optional parameter – in which case the default value of 0 will be passed:   Note that VS 2010’s Intellisense indicates when a parameter is optional, as well as what its default value is when statement completion is displayed: Named Arguments and Optional Parameters in C# 4.0 C# 4.0 also now supports the concept of “named arguments”.  This allows you to explicitly name an argument you are passing to a method – instead of just identifying it by argument position.  For example, I could write the code below to explicitly identify the second argument passed to the GetProductsByCategory method by name (making its usage a little more explicit): Named arguments come in very useful when a method supports multiple optional parameters, and you want to specify which arguments you are passing.  For example, below we have a method DoSomething that takes two optional parameters: We could use named arguments to call the above method in any of the below ways: Because both parameters are optional, in cases where only one (or zero) parameters is specified then the default value for any non-specified arguments is passed. ASP.NET MVC 2 and Optional Parameters One nice usage scenario where we can now take advantage of the optional parameter support of VB and C# is with ASP.NET MVC 2’s input binding support to Action methods on Controller classes. For example, consider a scenario where we want to map URLs like “Products/Browse/Beverages” or “Products/Browse/Deserts” to a controller action method.  We could do this by writing a URL routing rule that maps the URLs to a method like so: We could then optionally use a “page” querystring value to indicate whether or not the results displayed by the Browse method should be paged – and if so which page of the results should be displayed.  For example: /Products/Browse/Beverages?page=2. With ASP.NET MVC 1 you would typically handle this scenario by adding a “page” parameter to the action method and make it a nullable int (which means it will be null if the “page” querystring value is not present).  You could then write code like below to convert the nullable int to an int – and assign it a default value if it was not present in the querystring: With ASP.NET MVC 2 you can now take advantage of the optional parameter support in VB and C# to express this behavior more concisely and clearly.  Simply declare the action method parameter as an optional parameter with a default value: C# VB If the “page” value is present in the querystring (e.g. 
/Products/Browse/Beverages?page=22) then it will be passed to the action method as an integer.  If the “page” value is not in the querystring (e.g. /Products/Browse/Beverages) then the default value of 0 will be passed to the action method.  This makes the code a little more concise and readable. Summary There are a bunch of great new language features coming to both C# and VB with VS 2010.  The above two features (optional parameters and named parameters) are but two of them.  I’ll blog about more in the weeks and months ahead. If you are looking for a good book that summarizes all the language features in C# (including C# 4.0), as well provides a nice summary of the core .NET class libraries, you might also want to check out the newly released C# 4.0 in a Nutshell book from O’Reilly: It does a very nice job of packing a lot of content in an easy to search and find samples format. Hope this helps, Scott
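
    Since the code figures from the original post do not survive in this excerpt, here is a small sketch of the same ideas (the GetProductsByCategory and Browse signatures follow the prose above; the bodies and data are placeholders):

        using System;
        using System.Collections.Generic;

        public class ProductCatalog
        {
            // "pageIndex" is optional because the declaration supplies a default value
            public IEnumerable<string> GetProductsByCategory(string category, int pageIndex = 0)
            {
                Console.WriteLine("category={0}, pageIndex={1}", category, pageIndex);
                return new[] { "placeholder product" };
            }
        }

        class Demo
        {
            static void Main()
            {
                var catalog = new ProductCatalog();
                catalog.GetProductsByCategory("Beverages", 2);                      // both arguments supplied
                catalog.GetProductsByCategory("Beverages");                         // pageIndex defaults to 0
                catalog.GetProductsByCategory("Beverages", pageIndex: 2);           // named argument
                catalog.GetProductsByCategory(pageIndex: 2, category: "Beverages"); // order is irrelevant with names
            }
        }

    And the ASP.NET MVC 2 action described at the end of the post, with an optional parameter taking the place of the nullable int:

        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            // "page" is 0 unless a ?page= querystring value is present for MVC to bind
            public ActionResult Browse(string category, int page = 0)
            {
                ViewData["Category"] = category;
                ViewData["Page"] = page;
                return View();
            }
        }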

    Read the article

  • Introducing .NET 4.0 with Visual Studio 2010 by Alex Mackey - Book review

    - by Malisa L. Ncube
    Alex (http://simpleisbest.co.uk/) does a very good job in covering the new features of .NET 4.0 and Visual Studio 2010. His focus is on the developers that have experience in development using previous versions of Visual Studio, more specifically Visual Studio 2008.     The following are my views towards his book. 1. Scope / Coverage Even as the book is labeled as introduction, it is covers a broad spectrum of technologies, features and references that are focused into helping a developer quickly decide what to use in the new .NET framework. a. Content The content included covers as much as possible the new additions that are included in the new .NET version 4.0. He shows the Visual Studio 2010 new features and quickly shows how to extend it using Managed Extensibility Framework. Some of my favorites are parallel debugging enhancements. The author delves into JQuery, which Microsoft has decided to support. Some of the very interesting content is on the out-of-band releases including ASP.NET MVC, Windows Azure Silverlight 3 and WCF Data Services. b. What is not included? Windows Phone 7 Series. This was only talked about in the MIX10. The data may not have been available at the time of writing. Microsoft Pinpoint (Microsoft code name "Dallas") Windows Embedded development. c. Table of Contents Chapter 1: Introduction Chapter 2: Visual Studio IDE and MEF Chapter 3: Language and Dynamic Changes Chapter 4: CLR and BCL Changes Chapter 5: Parallelization and Threading Enhancements Chapter 6: Windows Workflow Foundation 4 Chapter 7: Windows Communication Foundation Chapter 8: Entity Framework Chapter 9: WCF Data Services Chapter 10: ASPNET Chapter 11: Microsoft AJAX Library Chapter 12: jQuery Chapter 13: ASPNET MVC Chapter 14: Silverlight Introduction Chapter 15: WPF 4.0 and Silverlight 3.0 Chapter 16: Windows Azure 2. Depth Avoids getting into depth on the topics presented, to present the new concepts in assumption of the developer’s existing knowledge. Code samples are on book and exist mostly as snippets and very easy to follow. There are no downloadable examples. 3. Complexity The book is written in a very simple way and easy to follow. There are no irrelevant intimidating details. So it’s a book that you can grab and never put down until you’ve finished reading the entire book. 4. References The author includes reference links to blogs, Wikis and a lot of online resources including the MSDN documentation, which is a very convenient strategy to avoid flooding the reader with details which may not be of interest to them. Most sites do not use url routing and that is really not nice. There are notes from interviews between the author and people behind the new technologies, in which they explain what some specific areas that need clarifications and what their future views are in relation to the features they are working on. 5. Target The author targets experts that want to make a transition from .NET 3.5 to 4.0. Some obvious 3.5 features have been purposely excluded from the text 6. Overrall It is evident that the author has made extensive research into the breadth of what MS is working on, in relation to .NET and Visual Studio and has also been watching the online community. What I would like to see in the next edition are some details on OData protocol, Expression Blend 4 and Embedded development and Windows Phone development. I should say I’m one of the beneficiaries of this book. Excellent work Alex.   Technorati Tags: .NET,Book-Review,Visual Studio

    Read the article

  • Tuxedo 11gR1 Client Server Affinity

    - by todd.little
    One of the major new features in Oracle Tuxedo 11gR1 is the ability to define an affinity between clients and servers. In previous releases of Tuxedo, the only way to ensure that multiple requests from a client went to the same server was to establish a conversation with tpconnect() and then use tpsend() and tprecv(). Although this works it has some drawbacks. First for single-threaded servers, the server is tied up for the entire duration of the conversation and cannot service other clients, an obvious scalability issue. I believe the more significant drawback is that the application programmer has to switch from the simple request/response model provided by tpcall() to the half duplex tpsend() and tprecv() calls used with conversations. Switching between the two typically requires a fair amount of redesign and recoding. The Client Server Affinity feature in Tuxedo 11gR1 allows by way of configuration an application to define affinities that can exist between clients and servers. This is done in the *SERVICES section of the UBBCONFIG file. Using new parameters for services defined in the *SERVICES section, customers can determine when an affinity session is created or deleted, the scope of the affinity, and whether requests can be routed outside the affinity scope. The AFFINITYSCOPE parameter can be MACHINE, GROUP, or SERVER, meaning that while the affinity session is in place, all requests from the client will be routed to the same MACHINE, GROUP, or SERVER. The creation and deletion of affinity is defined by the SESSIONROLE parameter and a service can be defined as either BEGIN, END, or NONE, where BEGIN starts an affinity session, END deletes the affinity session, and NONE does not impact the affinity session. Finally customers can define how strictly they want the affinity scope adhered to using the AFFINITYSTRICT parameter. If set to MANDATORY, all requests made during an affinity session will be routed to a server in the affinity scope. Thus if the affinity scope is SERVER, all subsequent tpcall() requests will be sent to the same server the affinity scope was established with. If the server doesn't offer that service, even though other servers do offer the service, the call will fail with TPNOENT. Setting AFFINITYSTRICT to PRECEDENT tells Tuxedo to try and route the request to a server in the affinity scope, but if that's not possible, then Tuxedo can try to route the request to servers out of scope. All of this begs the question, why? Why have this feature? There many uses for this capability, but the most common is when there is state that is maintained in a server, group of servers, or in a machine and subsequent requests from a client must be routed to where that state is maintained. This might be something as simple as a database cursor maintained by a server on behalf of a client. Alternatively it might be that the server has a connection to an external system and subsequent requests need to go back to the server that has that connection. A more sophisticated case is where a group of servers maintains some sort of cache in shared memory and subsequent requests need to be routed to where the cache is maintained. Although this last case might be able to be handled by data dependent routing, using client server affinity allows the cache to be partitioned dynamically instead of statically.
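
    To make that a little more concrete, here is a rough sketch of how these parameters might be combined in the *SERVICES section of a UBBCONFIG file; the service names are invented and the exact syntax should be checked against the Tuxedo 11gR1 reference:

        *SERVICES
        # Starts an affinity session; subsequent requests stick to the same group
        OPEN_ACCOUNT    SESSIONROLE=BEGIN AFFINITYSCOPE=GROUP AFFINITYSTRICT=MANDATORY
        # Runs inside the session without changing it; may fall back out of scope
        DEBIT_ACCOUNT   SESSIONROLE=NONE  AFFINITYSCOPE=GROUP AFFINITYSTRICT=PRECEDENT
        # Ends the affinity session
        CLOSE_ACCOUNT   SESSIONROLE=END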

    Read the article

  • OSI Model

    - by kaleidoscope
    The Open System Interconnection Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical Layers. It is therefore often referred to as the OSI Seven Layer Model. A layer is a collection of conceptually similar functions that provide services to the layer above it and receives service from the layer below it. Description of OSI layers: Layer 1: Physical Layer ·         Defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium. ·         Establishment and termination of a connection to a communications medium. ·         Participation in the process whereby the communication resources are effectively shared among multiple users. ·         Modulation or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. Layer 2: Data Link Layer ·         Provides the functional and procedural means to transfer data between network entities. ·         Detect and possibly correct errors that may occur in the Physical Layer. The error check is performed using Frame Check Sequence (FCS). ·         Addresses is then sought to see if it needs to process the rest of the frame itself or whether to pass it on to another host. ·         The Layer is divided into two sub layers: The Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. ·         MAC sub layer controls how a computer on the network gains access to the data and permission to transmit it. ·         LLC layer controls frame synchronization, flow control and error checking.   Layer 3: Network Layer ·         Provides the functional and procedural means of transferring variable length data sequences from a source to a destination via one or more networks. ·         Performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors. ·         Network Layer Routers operate at this layer—sending data throughout the extended network and making the Internet possible.   Layer 4: Transport Layer ·         Provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers. ·         Controls the reliability of a given link through flow control, segmentation/de-segmentation, and error control. ·         Transport Layer can keep track of the segments and retransmit those that fail. Layer 5: Session Layer ·         Controls the dialogues (connections) between computers. ·         Establishes, manages and terminates the connections between the local and remote application. ·         Provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. ·         Implemented explicitly in application environments that use remote procedure calls. Layer 6: Presentation Layer ·         Establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol data units, and moved down the stack. 
·         Provides independence from differences in data representation (e.g., encryption) by translating from application to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer. Layer 7: Application Layer ·         This layer interacts with software applications that implement a communicating component. ·         Identifies communication partners, determines resource availability, and synchronizes communication. o       When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. o       When determining resource availability, the application layer must decide whether sufficient network or the requested communication exists. o       In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer. Technorati Tags: Kunal,OSI,Networking

    Read the article

  • Odd MVC 4 Beta Razor Designer Issue

    - by Rick Strahl
    This post is a small cry for help, along with an explanation of a problem that is hard to describe on Twitter or even in a Connect bug, written in the hope that somebody has seen this before and has an idea of what might cause it. Lots of helpful people had comments on Twitter for me, but they all assumed that the code doesn't run, which is not the case - it's a designer issue. A few days ago I started getting some odd problems in my MVC 4 designer for an app I've been working on for the past 2 weeks. Basically the MVC 4 Razor designer keeps popping up errors about the call signature to various Html Helper methods being incorrect. It also complains about the ViewBag object not supporting dynamic, requesting to load assemblies into the project. Here's what the designer errors look like: You can see the red error underlines under the ViewBag and an Html Helper I plopped in at the top to demonstrate the behavior. Basically any HtmlHelper I'm accessing is showing the same errors. Note that the code *runs just fine* - it's just the designer that is complaining with errors. What's odd about this is that *I think* this started only a few days ago, and nothing consequential that I can think of has happened to the project or overall installations. These errors aren't critical since the code runs, but they are pretty annoying, especially if you're building and have .cshtml files open in Visual Studio, mixing these fake errors with real compiler errors. What I've checked: Looking at the errors it indeed looks like certain references are missing. I can't make sense of the Html Helpers error, but certainly the ViewBag dynamic error looks like the System.Core or Microsoft.CSharp assemblies are missing. Rest assured they are there and the code DOES run just fine at runtime. This is a designer issue only. I went ahead and checked the namespaces that MVC has access to in Razor, which live in the Views folder's web.config file: /Views/web.config For good measure I added <system.web.webPages.razor> <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, <split for layout> Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <pages pageBaseType="System.Web.Mvc.WebViewPage"> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> <add namespace="System.Linq" /> <add namespace="System.Linq.Expressions" /> <add namespace="ClassifiedsBusiness" /> <add namespace="ClassifiedsWeb"/> <add namespace="Westwind.Utilities" /> <add namespace="Westwind.Web" /> <add namespace="Westwind.Web.Mvc" /> </namespaces> </pages> </system.web.webPages.razor> For good measure I added System.Linq and System.Linq.Expressions, on which some of the Html.xxxxFor() methods rely, but no luck. So, has anybody seen this before? Any ideas on what might be causing these issues only at design time, when the final compiled code runs just fine? © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Razor, MVC.

    Read the article

  • Wireless internet is connected to an open network but has no internet

    - by Joshua Reeder
    I just installed Ubuntu on my laptop yesterday and it connected to the wireless fine. Then I took it to school, put it on their wired connection, downloaded some stuff, and now the wireless doesn't work. At first it would detect networks, but not connect. I restarted it and now it can connect, but it acts like it doesn't have internet in the browser. Wired connection still works fine on it. I know it isn't the network because my ipad is working on the wireless connection fine. I found another solution on here switching the security settings for the wireless, but this is the apartment's wireless so they have it open, and I won't be able to mess with it at all. Here is lspci output: 00:00.0 Host bridge: Intel Corporation Core Processor DMI (rev 11) 00:03.0 PCI bridge: Intel Corporation Core Processor PCI Express Root Port 1 (rev 11) 00:08.0 System peripheral: Intel Corporation Core Processor System Management Registers (rev 11) 00:08.1 System peripheral: Intel Corporation Core Processor Semaphore and Scratchpad Registers (rev 11) 00:08.2 System peripheral: Intel Corporation Core Processor System Control and Status Registers (rev 11) 00:08.3 System peripheral: Intel Corporation Core Processor Miscellaneous Registers (rev 11) 00:10.0 System peripheral: Intel Corporation Core Processor QPI Link (rev 11) 00:10.1 System peripheral: Intel Corporation Core Processor QPI Routing and Protocol Registers (rev 11) 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) 00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05) 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05) 00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 05) 00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 05) 00:1c.3 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 4 (rev 05) 00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 05) 00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5) 00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05) 01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [GeForce 310M] (rev a2) 01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1) 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05) 07:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8191SEvB Wireless LAN Controller (rev 10) 16:00.0 System peripheral: JMicron Technology Corp. SD/MMC Host Controller (rev 20) 16:00.2 SD Host controller: JMicron Technology Corp. Standard SD Host Controller (rev 20) 16:00.3 System peripheral: JMicron Technology Corp. MS Host Controller (rev 20) 16:00.4 System peripheral: JMicron Technology Corp. 
xD Host Controller (rev 20) ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-Core Registers (rev 04) ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 04) ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 04) ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 04) ff:03.0 Host bridge: Intel Corporation Core Processor Integrated Memory Controller (rev 04) ff:03.1 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Target Address Decoder (rev 04) ff:03.4 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Test Registers (rev 04) ff:04.0 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Control Registers (rev 04) ff:04.1 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Address Registers (rev 04) ff:04.2 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Rank Registers (rev 04) ff:04.3 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Thermal Control Registers (rev 04) ff:05.0 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Control Registers (rev 04) ff:05.1 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Address Registers (rev 04) ff:05.2 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Rank Registers (rev 04) ff:05.3 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Thermal Control Registers (rev 04) Update: I re-installed Ubuntu 12.04 (I assumed I messed something up while toying with it) but it did not solve the problem. Eventually, I got it to work with my school's wireless internet (the default network settings were wrong), but the internet still doesn't work on my apartment's wifi (it has no security on it).

    Read the article

  • The type 'XXX' is defined in an assembly that is not referenced exception after upgrade to ASP.NET 4

    - by imran_ku07
    Introduction: I found two posts in the ASP.NET MVC forums complaining that they are getting the exception, The type XXX is defined in an assembly that is not referenced, after upgrading their application to Visual Studio 2010 and .NET Framework 4.0, here and here. Description: The reason why they are getting the above exception is the use of the new, clean web.config without referencing the assemblies which were present in the ASP.NET 3.5 web.config. The quick solution for this problem is to add the old assemblies to the new web.config.
        <assemblies>
            <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            <add assembly="System.Data.Linq, Version=4.0.0.0, Culture=neutral, publicKeyToken=b77a5c561934e089" />
        </assemblies>
    How It Works: Currently I have not tested the above scenario in ASP.NET 4.0 because I do not have it yet. But the above scenario can easily be tested and verified in VS 2008. Just open the root web.config and remove
        <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
    Even if you add a reference to System.Core in your project, you will still get the above exception, because aspx pages are compiled into a separate assembly. You can check this yourself by checking Show Detailed Compiler Output: below in the yellow screen of death, where you will find something like
        /out:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\e907aee4\5fa0acc8\App_Web_y5rd6bdg.dll"
    This shows that aspx pages are compiled into a separate assembly in Temporary ASP.NET Files. Summary: After getting the above exception, make sure to add the assemblies in web.config or add the Assembly directive at Page level. Hopefully this will help to solve your problem.
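
    As a rough illustration of the page-level alternative mentioned in the summary, the same assembly can be referenced from a single page with the @ Assembly directive (System.Web.Abstractions is used here only as an example; substitute whichever assembly the compiler is complaining about):

        <%@ Assembly Name="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" %>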

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound with ASP.NET web site map file? Ever wanted to have cool css design stuffs implemented on your ASP.NET data bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound with ASP.NET data controls? If your answers with respect to above questions are ‘yes’, then you will probably happy to try out latest release (v5.0) of Employee Starter Kit, which is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the concept ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production. This project template is titled as “Employee Info Starter Kit”, which was initially hosted on Microsoft Code Gallery and been downloaded 1, 50,000+ of copies afterword.  The latest version of this starter kit is hosted in Codeplex. Release Highlights User End Functional Specification The user end functionalities of this starter kit are pretty simple and straight forward that are focused in to perform CRUD operation on employee records as described below. Creating a new employee record Read existing employee records Update an existing employee record Delete existing employee records Architectural Overview Simple 3 layer architecture (presentation, business logic and data access layer) ASP.NET web form based user interface Built-in code generators for logical layers, implemented in Visual Studio default template engine (T4) Built-in Entity Framework entities as business entities (aka: data containers) Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework Domain Model design pattern based Business Logic Layer, implemented in C# Object Model for Cross Cutting Concerns (such as validation, logging, exception management) Minimum System Requirements Visual Studio 2010 (Web Developer Express Edition) or higher Sql Server 2005 (Express Edition) or higher Technology Utilized Programming Languages/Scripts Browser side: JavaScript Web server side: C# Code Generation Template: T-4 Template Frameworks .NET Framework 4.0 JavaScript Framework: jQuery 1.5.1 CSS Framework: 960 grid system .NET Framework Components .NET Entity Framework .NET Optional/Named Parameters (new in .net 4.0) .NET Tuple (new in .net 4.0) .NET Extension Method .NET Lambda Expressions .NET Anonymous Type .NET Query Expressions .NET Automatically Implemented Properties .NET LINQ .NET Partial Classes and Methods .NET Generic Type .NET Nullable Type ASP.NET Meta Description and Keyword Support (new in .net 4.0) ASP.NET Routing (new in .net 4.0) ASP.NET Grid View (CSS support for sorting - (new in .net 4.0)) ASP.NET Repeater ASP.NET Form View ASP.NET Login View ASP.NET Site Map Path ASP.NET Skin ASP.NET Theme ASP.NET Master Page ASP.NET Object Data Source ASP.NET Role Based Security Getting Started Guide To see Employee Info Starter Kit in action is pretty easy! Download the latest version. Extract the file. From the extracted folder click the C# project file (Eisk.Web.csproj) to open it in Visual Studio 2010 Hit Ctrl+F5! 
The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in details, just check the following links: Release Home Page Installation Walkthrough Hand on Coding Walkthrough Technical Reference Enjoy!

    Read the article

  • 30 Steps to Master ASP.NET MVC Application development

    - by Rajesh Pillai
    Welcome Readers!, I am starting out a new series on ASP.NET MVC skill building which will be posted over the next couple of weeks.  Let me know your thoughts on the content, which I have planned, and a couple of them have been taken from ASP.NET MVC2 Cookbook. (NOTE: Only the heading has been taken, the content will not be :)).   Do let me know what you would like to see, or any additional inputs or ideas to cover in these topics.  The 30 steps are outlined below for quick reference.  Will start filling this out quickly.   Outlined is the ‘30’ step to master ASP.NET MVC.   A Peek Into Model What is a model? Different types of model Presentation/ViewModel Model Mapping (AutoMapper)   A Peek into View How view works in ASP.NET MVC? View Engine Design Custom View Engine View Best Practices Templated Helpers Partial Views   A Peek into Controller Introduction Controller Design Controller Best Practices Asynchronous Controller Custom Action Result Action Filters Controller Factory to use with IOC   Routes Explanation Routes from the database Routes from XML More complex routing   Master Pages Basics Setting Master Page Dynamically   Working with data in the view Repeating Views Array of check boxes Array of radio buttons Paged data CRUD Client side action Confirmation Dialog (modal window) jqGrid   Working with Forms   Validation Model Validation with DataAnnotations Using the xVal validation framework Client side validation with jQuery Validation Fluent Validation Model Binders   Templating Create strongly typed helper using T4 Custom View Templates with T4 Create custom MVC project template using T4   IOC AutoFac Ninject Unity Application   Areas   jQuery, Ajax and jQuery Plugins   State Maintenance Application State User state Cookies Webfarm   Error Handling View error handling Controller error handling ELMAH (Error Logging Modules and Handlers)   Authentication and Authorization User Registration form SignOn Process Password Reminder Membership and Roles Windows authentication Restricting access to all pages Restricting access to selected pages Restricting access to pages by role Restricting access to a controller Restricting access to selected area   Profiles and Themes Using Profiles Inheriting a Profile Migrating an anonymous profile Creating custom themes Using themes User personalized themes   Configuration Adding custom application settings in web.config Displaying custom error messages Accessing other web.config configuration elements Adding custom configuration elements to web.config Encrypting web.config sections   Tracing, Debugging and Logging   Caching Caching a whole page Caching pages based on route details Caching pages based on browser type and version Caching pages based on custom strings Caching partial pages Caching application data Object Caching Using Microsoft Velocity Using MemCache Using AppFabric cache   Localization   HTTP Handlers and Modules   Security XSS/CSRF AntiForgery Encoding   HtmlHelpers Strongly typed helpers Writing custom helpers   Repository Pattern (Data
access)   WF/WCF   Unit Testing   Mocking Framework   Integration Testing   Load / Performance Testing   Deployment    Once again let me know your thoughts on this.   Till then, Enjoy MVC'ing!!!

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services.  But as usual, you lack the time and resources that you need in order to develop the services properly.  So you googled or bing’ed, found this blog post, and began crying in gratitude.  Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about.  So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand.  The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API.  And that legacy API is built in the image of the silo applications.  Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping. The Straight and Narrow Path This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory.  Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide.  Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch.  According to Lafe, building their service inventory is a three-phase process: Model a business process.  This requires intense collaboration between the IT and business wings of the organization, of course.  The FBI uses IBM Websphere tools to model the process with BPMN. Identify candidate services to facilitate the business process. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process.  The FBI uses ActiveVOS for orchestration services. The 12 Step Program to End Your Legacy API Addiction Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint.  For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing.  Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services.  Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat.  (Astute readers will note that Erl’s diagram, restricted to analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.) 
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • Creating a new naming context in OUD

    - by Sylvain Duloutre
    A naming context (also known as a directory suffix) is a DN that identifies the top entry in a locally held directory hierarchy. A new naming context can be created using ODSM, the OUD GUI admin console, as described in http://docs.oracle.com/cd/E29407_01/admin.111200/e22648/server_config.htm#CBDGCJGF It can also be created using the dsconfig command line as described below. Creation of a new naming context consists of 3 steps: First, create a Local Backend Workflow element (myNewDb in this example), responsible for the naming context base dn, e.g. o=example.
        dsconfig create-workflow-element \
                  --set base-dn:o=example \
                  --set enabled:true \
                  --type db-local-backend \
                  --element-name myNewDb \
                  --hostname <your host> \
                  --port <admin port> \
                  --bindDN cn=Directory\ Manager \
                  --bindPasswordFile ****** \
                  --no-prompt
    Second, create a Workflow element (workFlowForMyNewDb in this example) associated with the Local Backend Workflow element. Workflow elements are used to route LDAP requests to the appropriate database, based on the target base dn.
        dsconfig create-workflow \
                  --set base-dn:o=example \
                  --set enabled:true \
                  --set workflow-element:myNewDb \
                  --type generic \
                  --workflow-name workFlowForMyNewDb \
                  --hostname <your host name> \
                  --port <admin port> \
                  --bindDN cn=Directory\ Manager \
                  --bindPasswordFile ****** \
                  --no-prompt
    Then, the workflow element must be made visible outside of the directory, i.e. added to the internal "routing table". This is done by adding the Workflow to the appropriate Network Group. A Network Group is used to classify incoming client connections and route requests to workflows.
        dsconfig set-network-group-prop \
                  --group-name network-group \
                  --add workflow:workFlowForMyNewDb \
                  --hostname <your hostname> \
                  --port <admin port> \
                  --bindDN cn=Directory\ Manager \
                  --bindPasswordFile ****** \
                  --no-prompt
    At that stage, it is possible to import entries to the new naming context o=example.
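
    As a hedged illustration of that last step, the base entry itself could be added with a small LDIF file and the ldapmodify tool shipped with OUD (host, port and password file are placeholders, and option names may differ slightly between versions):

        # base-entry.ldif
        dn: o=example
        objectClass: top
        objectClass: organization
        o: example

        ldapmodify -a -h <your host> -p <ldap port> \
                   -D "cn=Directory Manager" -j <password file> \
                   -f base-entry.ldif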

    Read the article

  • Stir Trek 2: Iron Man Edition

    Next month (7 May 2010) I'll be presenting at the second annual Stir Trek event in Columbus, Ohio. Stir Trek (so named because last year its themes mixed MIX and the opening of the Star Trek movie) is a very cool local event. It's a lot of fun to present at and to attend because of its unique venue: a movie theater. And what's more, the cost of admission includes a private showing of a new movie (this year: Iron Man 2). The sessions cover a variety of topics (not just Microsoft), similar to CodeMash. The event recently sold out, so I'm not telling you all of this so that you can go and sign up (though I believe you can still get on the waitlist). Rather, this is pretty much just an excuse for me to talk about my session as a way to organize my thoughts.

    I'm actually speaking on the same topic as I did last year, but the key difference is that last year the subject of my session was nowhere close to being released, and this year it's RTM (as of last week). That's right, the topic is What's New in ASP.NET 4. How did you guess?

    What's New in ASP.NET 4

    So, just what *is* new in ASP.NET 4? Hasn't Microsoft been spending all of their time on Silverlight and MVC the last few years? Well, actually, no. There are some pretty cool things that are now available out of the box in ASP.NET 4. There's a nice summary of the new features on MSDN. Here is my super-brief summary:

    - Extensible output caching: use providers like a distributed cache or the file system
    - Preload web applications: IIS 7.5 only; avoid the startup tax for your site by preloading it
    - Permanent (301) redirects: finally supported by the framework in one line of code, not two
    - Session state compression: can speed up session access in a web farm environment; test it to see
    - Web Forms features, several of which mirror ASP.NET MVC advantages (ViewState, control IDs):
      - Set meta keywords and description easily
      - Granular and inheritable control over ViewState
      - Support for more recent browsers and devices
      - Routing (introduced in 3.5 SP1): some new features, and zero web.config changes required
      - Client ID control makes client manipulation of DOM elements much simpler
      - Row selection in data controls fixed (ID-based, not row-index-based)
      - FormView and ListView enhancements (less markup, more CSS compliant)
      - New QueryExtender control makes filtering data from other data source controls easy
      - More CSS and accessibility support
      - Reduction of tables and more control over output for other template controls
    - Dynamic Data enhancements: more control templates, support for inheritance in the data model, new attributes
    - ASP.NET Chart control
    - Lots of IDE enhancements
    - Web Deploy tool

    My session will cover many but not all of these features. There's only an hour (3pm-4pm), and it's right before the prize giveaway and movie showing, so I'll be moving quickly and most likely answering questions off-line via email after the talk. Hope to see you there!
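    To make a couple of those items concrete (a quick hypothetical sketch, not code from the session; the page name, path, and text values are made up), the one-line permanent redirect and the new meta properties look roughly like this in a Web Forms code-behind:

    using System;
    using System.Web.UI;

    public partial class Products : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // New in ASP.NET 4: one call issues an HTTP 301, replacing the old
            // pattern of Response.Redirect plus manually setting the status code.
            if (Request.Path.StartsWith("/old-catalog", StringComparison.OrdinalIgnoreCase))
            {
                Response.RedirectPermanent("~/Products.aspx");
                return;
            }

            // Also new in ASP.NET 4: meta keywords and description set from code.
            MetaKeywords = "stir trek, asp.net 4";
            MetaDescription = "What's new in ASP.NET 4";
        }
    }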

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD web server on TCP port 80, or, if secure connections are enabled (SSL/TLS), TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive-mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically closed, and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are multiplexed onto a single well-known TCP port: port 443, the HTTPS port. This is also known as single-port firewall traversal. The technique takes advantage of the fact that, as a well-known service, port 443 is usually open, allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ: it accepts, authenticates, and terminates incoming client connections, then re-encrypts and proxies them, routing them on to SGD servers deeper in the network. The client no longer needs to know the names or IP addresses of the SGD servers; it connects only to the gateway, which takes care of the internal network details.

    The Secure Gateway supports the same single-port firewall capability as Firewall Forwarding, but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over Firewall Forwarding for single-port firewall traversal, due to its architectural advantages, greater flexibility, and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".
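    To put the firewall side of that in concrete terms (a hypothetical perimeter rule set for illustration only; this is not SGD or Secure Gateway configuration), single-port traversal means only HTTPS has to be allowed in from the Internet:

    # Allow SGD client traffic multiplexed over HTTPS
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT
    # The native AIP ports no longer need to be opened at the perimeter
    iptables -A INPUT -p tcp --dport 3144 -j DROP
    iptables -A INPUT -p tcp --dport 5307 -j DROP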

    Read the article

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
    Dr. Christina Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA

    What does the Conflict Minerals regulation mean?
    Conflict Minerals has recently become a new buzzword in the manufacturing industry, particularly in electronics and medical devices. Known as "Dodd-Frank Section 1502", this regulation requires SEC-listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold that originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries controlled by violent armed militias abusing human rights. Manufacturers now request information from their suppliers to see whether their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose. Soon, even companies that are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information, since their customers will request the data from their suppliers. Furthermore, it is expected that public opinion and consumer interests will force manufacturers to avoid the use of metals of questionable origin.

    Impact on existing products
    Several departments are involved in the process of collecting data and providing Conflict Minerals compliance information. For products already on the market, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management is usually responsible for keeping track of all parts, raw materials, and their suppliers so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, affect the Conflict Minerals compliance status.

    Influence on product development
    It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts and whether these could originate from problematic sources. The answer influences the cost and risk analysis during development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated and costly changes at a later stage can be avoided.

    Integrated compliance management
    Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is then directly visible across the entire BOM, at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away without having to switch to another system. The entire process, from identification of the relevant parts, through requesting information, handling responses, and data entry, to compliance calculation, is covered end-to-end while being transparent for all stakeholders.

    Agile PLM Product Governance and Compliance (PG&C)
    The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, permissions, and so on can be quickly and easily tailored to a customer's needs.
    Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which obviates the need for a separate supplier request in many cases. Suppliers can access the system directly for data entry through a special portal. Responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally, and manual data entry is also supported. A set of compliance-specific dashboards and reports complements the functionality.

    Conclusion
    The increasing number of product compliance regulations, of which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness of the issue is growing as well, so an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.
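    As a purely illustrative sketch (this is not the Agile PLM PG&C data model or API; the types and part numbers are invented), the kind of BOM-level compliance roll-up described above can be pictured as taking the worst declared status found anywhere in the part tree:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Invented types for illustration only -- not the Agile PLM PG&C data model.
    enum ComplianceStatus { Compliant = 0, MissingDeclaration = 1, NonCompliant = 2 }

    class Part
    {
        public string Number = "";
        public ComplianceStatus? Declared;            // supplier declaration, if any
        public List<Part> Children = new List<Part>();

        // A leaf part reports its own declaration (or flags a missing one);
        // an assembly is only as compliant as its worst constituent part.
        public ComplianceStatus RollUp()
        {
            if (Children.Count == 0)
                return Declared ?? ComplianceStatus.MissingDeclaration;
            return (ComplianceStatus)Children.Max(c => (int)c.RollUp());
        }
    }

    class Demo
    {
        static void Main()
        {
            var device = new Part { Number = "FIN-100", Children =
            {
                new Part { Number = "CAP-220", Declared = ComplianceStatus.Compliant },
                new Part { Number = "PCB-001" }       // declaration not yet received
            }};
            Console.WriteLine(device.RollUp());       // prints: MissingDeclaration
        }
    }

    In PG&C itself, as the article describes, this calculation, the supplier requests, and the dashboards are part of the configured module rather than custom code.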

    Read the article

< Previous Page | 87 88 89 90 91 92 93 94 95 96 97 98  | Next Page >