Search Results

Search found 4721 results on 189 pages for 'traffic'.

Page 151/189 | < Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >

  • The Loneliest Road in America and the OTN Garage

    - by rickramsey
    Source I never told anyone how the image of the OTN Garage on Facebook came to be. I took the Facebook picture on Route 50 in Nevada, USA, in October of 2010. I was riding from Colorado to Oracle OpenWorld in San Francisco, so it was probably October. Route 50 is known as "The Loneliest Road in America." There are roads across Nevada that have even LESS traffic, but Route 50 is still one. desolate. road. Although I have seen stranger things while riding along Nevada's Extraterrestrial Highway, I still run across notable oddities every time I ride Route 50. Like the old man with a bandolier of water bottles jogging along the side of the highway in the middle of the day, 50 miles from the closest town. First ultra-marathoner I'd seen in action. He waved at me. Or the dozen Corvettes with California license plates driving toward me, all doing the speed limit in the middle of nowhere because they were being tailed by half a dozen Nevada state troopers. #fail. I don't remember which town I was in, but I noticed the building when I stopped at the gas station. While standing there pouring fuel into the Harley, the store caught my eye. So I pulled the bike in front and walked inside. The owner is a little old lady, about 100 years old. Most of the goods she had on the shelves looked like they had been placed there during WWII. She was itty bitty and could barely see over the counter, but she was so happy when I bought a bar of Hershey's chocolate that she gave me a five cent discount. I took a few pictures and, when I got back, Kemer Thomson, who sometimes blogs here, photoshopped the OTN Garage and Oil Change signs onto it. The bike is a 2009 Road King Classic with a Bob Dron fairing and a Corbin heated seat. The seat came in handy when I rode home over Tioga Pass. The Road King is a very comfy touring bike with a great Harley rumble. I'm kinda sorry I sold it. When I stopped for fuel about 75 miles down the road at the next town, I peeled back the chocolate bar. It had turned into powder. Probably 50 years ago. - Rick Website Newsletter Facebook Twitter

    Read the article

  • Idea to develop a caching server between IIS and SQL Server

    - by John
    I work on a few high traffic websites that all share the same database and that are all heavily database driven. Our SQL Server is maxed out and, although we have already implemented many changes that have helped, the server is still working too hard. We employ some caching in our websites but the type of queries we use negates using SQL dependency caching. We tried SQL replication to try and load balance, but that didn't prove very successful because the replication process is quite demanding on the servers too and it needed to be done frequently, as it is important that data is up to date. We do use a Varnish web caching server (Linux based) to take a bit of the load off both the web and database server, but as a lot of the sites are customised based on the user we can only do so much. Anyway, the reason for this question... Varnish gave me an idea for a possible application that might help in this situation. Just like Varnish sits between a web browser and the web server and caches responses from the web server, I was wondering about the possibility of creating something that sits between the web server and the database server. Imagine that all SQL queries go through this SQL caching server. If it's a first-time query then it will get recorded, and the result requested from the SQL server and stored locally on the cache server. If it's a repeat request within a set time then the result gets retrieved from the local copy without the query being sent to the SQL server. The caching server could also take advantage of SQL dependency caching notifications. This seems like a good idea in theory. There's still the same amount of data moving back and forward from the web server, but the SQL server is relieved of the work of processing the repeat queries. I wonder how difficult it would be to build a service that sort of emulates requests and responses from SQL Server, whether SQL Server's own caching is doing enough of this already that this wouldn't be a benefit, or whether someone has done this before and I haven't found it. I would welcome any feedback or any references to any relevant projects.
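    A minimal sketch of the idea (not from the original question): a read-through cache in the web tier that keys on the query text and parameters, so a repeat request within the time window never reaches SQL Server. The class name, the key format and the TTL parameter are hypothetical choices for illustration only.

      using System;
      using System.Data;
      using System.Data.SqlClient;
      using System.Runtime.Caching;
      using System.Text;

      public class CachedQueryExecutor
      {
          private readonly string _connectionString;
          private readonly MemoryCache _cache = MemoryCache.Default;

          public CachedQueryExecutor(string connectionString)
          {
              _connectionString = connectionString;
          }

          // Returns the cached result if the same query ran recently; otherwise hits SQL Server.
          public DataTable Query(string sql, TimeSpan ttl, params SqlParameter[] parameters)
          {
              string key = BuildKey(sql, parameters);
              var cached = _cache.Get(key) as DataTable;
              if (cached != null)
                  return cached;                        // repeat request: no round trip to SQL Server

              var table = new DataTable();
              using (var connection = new SqlConnection(_connectionString))
              using (var command = new SqlCommand(sql, connection))
              {
                  command.Parameters.AddRange(parameters);
                  using (var adapter = new SqlDataAdapter(command))
                  {
                      adapter.Fill(table);              // first request: run the query and remember the result
                  }
              }
              _cache.Set(key, table, DateTimeOffset.Now.Add(ttl));
              return table;
          }

          private static string BuildKey(string sql, SqlParameter[] parameters)
          {
              var key = new StringBuilder(sql);
              foreach (var p in parameters)
                  key.Append('|').Append(p.ParameterName).Append('=').Append(p.Value);
              return key.ToString();
          }
      }

    The standalone "SQL caching server" described above would expose the same behaviour as a network service shared by all the web servers; the in-process version already keeps the repeat queries away from SQL Server, at the cost of each web server holding its own copy.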

    Read the article

  • What are the options for hosting a small Plone site? [closed]

    - by Tina Russell
    Possible Duplicate: How to find web hosting that meets my requirements? I’ve developed a portfolio website for myself using Plone 4, and I’m looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and fits a small, single-admin website. My understanding is that my basic options are thus: I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I’d have two stipulations for a Plone hosting service: (a) It needs to use Plone 4, for which I’ve developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I’m a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don’t know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable for when a rush of visitors drive up my bandwidth bill? One last note: I know Plone isn’t common for websites for individuals, but please don’t try to talk me out of it here; that’s a completely different subject. For now, assume I’m sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org—it’s twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn’t much help. Thank you!

    Read the article

  • Silverlight Cream for December 26, 2010 -- #1015

    - by Dave Campbell
    In this all-submittal Issue: Michael Washington(-2-), Ian T. Lackey(-2-, -3-), Sandrino Di Mattia, Colin Eberhardt(-2-), and Antoni Dol. Above the Fold: Silverlight: "A Style for the Silverlight CoverFlow Control Slider" Antoni Dol WP7: "Getting the right behaviors in your Phone 7 App – Part 1 Phone Home" (and the other two parts) Ian T. Lackey Silverlight/WPF: "A Simplified Grid Markup for Silverlight and WPF" Colin Eberhardt Shoutouts: Dennis Doomen has updated his Coding Guidelines and provided a new WhitePaper, A4 cheat sheet, and VS2010 rule sets: December Update of the Coding Guidelines for C# 3.0 and C# 4.0 From SilverlightCream.com: Windows Phone 7: Saving Data when Keyboard is visible MIchael Washington takes a possible desktop approach to a data-saving issue on WP7... good solution, and one of the commenters brought up another. Windows Phone 7 View Model (MVVM) ApplicationBar Since I'm catching up, there's another post by Michael Washington... this one is looking at the WP7 ApplicationBar, and issues if you're trying to stay MVVM-proper. Michael gets around that by creating the AppBar with a behavior, and shares with all of us! Getting the right behaviors in your Phone 7 App – Part 1 Phone Home Ian T. Lackey has begun a series where he's packaging common tasks into reusable behaviors. First up is a phone dialer launching action that can be dropped on any control in your app. Getting the right behaviors in your Phone 7 App – Part 2 Binding & Browsing In his next post, Ian T. Lackey digs into the WebBrowserTask and provides a behavior allowing you to launch a browser session straight to an URL from any WP7 control. Getting the right behaviors in your Phone 7 App – Part 3 Email ‘em In his last post (all in one day), Ian T. Lackey looks at EmailComposeTask, ending up with a behavior to pre-populate EmailAddress and Subject. Cracking a Microsoft contest or why Silverlight-WCF security is important Sandrino Di Mattia was working on an app while also having a page up for a MSDN/TechNet game, and noticed some interesting WCF traffic that he was easily able to get access to. A Simplified Grid Markup for Silverlight and WPF Colin Eberhardt built us all an attached property for the Grid control that bails us out from the ugly layout we always have to put into position... oh, also for WPF! #uksnow #silverlight The Movie! – Happy Christmas Colin Eberhardt also took some time to have fun with his Twitter/BingMaps mashup for the UKSnow hashtag... you can now playback the snowfall reports, and mouse-over the snowflakes to see the original tweet... very cool stuff, Colin! A Style for the Silverlight CoverFlow Control Slider Antoni Dol got tired of the Silverlight Slider in the CoverFlow control and crafted a very nice-looking style for the Slider ... check it out and grab the source. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Recap: Oracle at the Gartner Business Intelligence Summit

    - by kimberly.billings
    Getting to Vegas was no fun. As anyone who lives in the Bay Area knows, the SF airport shuts down one runway when it rains, causing major havoc. So rain, rain, rain on Sunday meant delay, delay, delay at the airport. Needless to say, my 6:30 pm flight didn't land in Vegas until 3:00 am! But the travel pains were worth it. There was a lot to be learned at the Gartner BI Summit this year, and the uptick in attendance was reflected in strong booth traffic and engaging conversations in the Oracle booth. Oracle customer Dawn Conant, Director of Business Intelligence at Beckman Coulter, generated a lot of interest in her presentation about migrating from Business Objects to Oracle Business Intelligence Enterprise Edition with Oracle Database 11g. Dawn's story was a very relatable one, as many of the attendees had plans for similar projects. One of the most interesting Gartner-led sessions compared the BI/DW megavendors: IBM, Oracle, SAP and Microsoft. According to Gartner analyst Rita Sallam, these megavendors control about two-thirds of the BI market. Sallam attributes this in part to the fact that organizations are expanding their definitions of BI to also include analytics and performance management. In doing so, they require greater integration of BI applications with a broader set of applications and middleware. In a related session, a panel of Gartner analysts compared the Magic Quadrants for BI Platforms; CPM; Data Quality; Data Integration Tools; and Data Warehouses. Oracle is a leader in all of the Magic Quadrants in which it participates and has the most complete stack including hardware and software, according to Donald Feinberg. Feinberg also commented that in situations with VLDW and solid mixed workloads, Oracle Exadata is making a big difference!

    Read the article

  • Extreme Performance and Scale Delivered by SOA on Oracle Exalogic

    - by J Swaroop
    Demands to incorporate internet-scale applications, data, and social media traffic with existing IT infrastructure require extreme availability, reliability, and scalability. In this session on industrial-strength SOA, learn how Oracle Exalogic and Oracle Exadata engineered systems address these requirements. Topics covered: (1) how SOA and BPM benefit from “hardware and software engineered for each other,” (2) how Oracle Exadata provides the data tier with unparalleled scalability and performance for SOA and BPM running on Oracle Exalogic (3) customer case studies (4) best practices and topology guidelines (5) information on tools that help operate, manage, provision, and deploy—to help reduce overall TCO. Extreme engineering at its best! Session details: 10/2/12 (Tuesday) 11:45 AM - Moscone South -308

    Read the article

  • ETPM Environment Health Monitoring Tools

    - by Paula Speranza-Hadley
    This post is to provide some useful information about the tools typically used by Oracle ETPM implementations for performance tuning and analysis. This includes tools to monitor and gather performance information and statistics on the Database, Application Server, and Client (browser). Enterprise Monitoring Tools Oracle Enterprise Manager - OEM Grid Control comes with a comprehensive set of performance and health metrics that allow monitoring of key components in your environment such as applications, application servers, databases, as well as the back-end components on which they rely, such as hosts, operating systems and storage. Tools for the Database Oracle Diagnostics Pack Automatic Workload Repository (AWR) - this tool gets statistics from memory about the Time Model or DB Time, Wait Events, Active Session History and High Load SQL queries Automatic Database Diagnostic Monitor (ADDM) - This self-diagnostic software is built into the database. It examines and analyzes data captured in AWR to determine possible performance issues. It locates the root cause of the issue, provides recommendations for correcting the issues and quantifies the expected benefit. Oracle Database Tuning Pack SQL Tuning Advisor - This enables you to submit one or more SQL statements as input and receive output in the form of specific advice or recommendations on how to tune statements. The recommendations relate to collection of statistics on objects, creation of new indexes and restructuring of SQL statements. SQL Access Advisor - This enables you to optimize data access paths of SQL queries by recommending a proper set of materialized views, indexes and partitions for a given SQL workload. Tools for the Application Server WebLogic Console - a web-based user interface used to configure and control a set of WebLogic servers or clusters (i.e. a "domain"). In any logical group of WebLogic servers there must exist one admin server, which hosts the WebLogic Admin console application and manages the associated configuration files. WebLogic Administrators will use the Administration Console for a number of tasks, including: Starting and stopping WebLogic servers or entire clusters. Configuring server parameters, security, database connections and deployed applications. Viewing server status, health and metrics. YourKit for Profiling - helps analyze synchronization issues, including: Which threads were calling wait(), and for how long Which threads were blocked on an attempt to acquire a monitor held by another thread (synchronized methods/blocks), and for how long Tools for the Client Fiddler - allows you to inspect traffic logs, debug and set breakpoints. Firebug – allows you to inspect and edit HTML, monitor network activity and debug JavaScript

    Read the article

  • Avoid SEO loss after URL structure change

    - by Eric Nguyen
    We recently re-wrote our site from Umbraco to WordPress. This has been done by third-party developers. I have been the project manager and it is my mistake that I didn't notice the change of URLs that affects SEO until now. The new site was launched last Thursday. The old URL for a "place" (a WordPress custom post type, in case you're a WordPress expert and want/need to point me to another discussion on WP Stackexchange) page is as follows: ourdomain.com/singapore/central/alexandra/an-interesting-place Now it has been changed to ourdomain.com/places/an-interesting-place I have already requested the third-party developers to work on rewriting the URLs to emulate the old URL structure. However, it's taking quite a lot of time (we have multiple custom post types, e.g. events etc., so it might be complicated; the developers seemed quite lost when I first mentioned rewriting URLs for the custom post types). In the meantime, I wonder if there is a quicker workaround for this: 1) Use .htaccess to rewrite ourdomain.com/singapore/central/alexandra/an-interesting-place to ourdomain.com/places/an-interesting-place This should avoid 90% loss of the search traffic. I suppose I can learn how to do this quite quickly but there's no harm mentioning it here. 2) Use rel="canonical" to indicate that ourdomain.com/places/an-interesting-place is the exact duplicate of ourdomain.com/singapore/central/alexandra/an-interesting-place I will definitely go for both approaches (and I'm also changing the 404 page to cater for this temporary issue) but I wonder if 2) is even feasible and if I have missed anything. Is there anything else you could recommend in this situation? Let me know if my question is not clear anywhere. Clarifications The old website is on a Windows Server EC2 instance completely separate from the Linux EC2 instance on which the new site is running. In addition, the same domain "ourdomain.com" is used here (an A record is used to point to an EC2 Elastic IP). Therefore, the old server is completely inaccessible at the moment, unless we use the IP address of the old server (which doesn't help me at all in this case). Even if the old server were accessible, I can't see where one could put the .htaccess or an HTML file to do a 301 redirect here. Unless I'm successful with approach 1) or the developers can rewrite the URLs in code, the 404 page is really the only option left for me.
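    For approach 1), a hedged sketch of what the Apache rewrite might look like, assuming the new WordPress site runs on Apache with mod_rewrite enabled and that nothing else on the new site uses four path segments (the three location segments are treated purely as wildcards here):

      RewriteEngine On
      # Old structure: /{region}/{district}/{area}/{slug}  ->  new structure: /places/{slug}
      # 301 so that search engines transfer the old URLs' ranking to the new ones.
      RewriteRule ^[^/]+/[^/]+/[^/]+/([^/]+)/?$ /places/$1 [R=301,L]

    The rule would need to sit above WordPress's own rewrite block in .htaccess so it runs first. For approach 2), rel="canonical" is just a <link rel="canonical" href="..."> element in the page head; a 301 redirect is generally considered the stronger of the two signals.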

    Read the article

  • Extending Currying: Partial Functions in Javascript

    - by kerry
    Last week I posted about function currying in javascript.  This week I am taking it a step further by adding the ability to call partial functions. Suppose we have a graphing application that will pull data via Ajax and perform some calculation to update a graph, using a method with the signature ‘updateGraph(id,value)’. To do this, we have to do something like this: 1: for(var i=0;i<objects.length;i++) { 2: Ajax.request('/some/data',{id:objects[i].id},function(json) { 3: updateGraph(json.id, json.value); 4: } 5: } This works, but using this method we need to return the id in the json response from the server, which is not that elegant and increases network traffic. Using partial function currying we can bind the id parameter and add the second parameter later (when returning from the asynchronous call).  To do this, we will need the updated curry method.  I have added support for sending additional parameters at runtime for curried methods. 1: Function.prototype.curry = function(scope) { 2: scope = scope || window 3: var args = []; 4: for (var i=1, len = arguments.length; i < len; ++i) { 5: args.push(arguments[i]); 6: } 7: var m = this; 8: return function() { 9: for (var i=0, len = arguments.length; i < len; ++i) { 10: args.push(arguments[i]); 11: } 12: return m.apply(scope, args); 13: }; 14: } To partially curry this method we will call the curry method with the id parameter, then the request will call back on it with just the value.  Any additional parameters are appended to the method call. 1: for(var i=0;i<objects.length;i++) { 2: var id=objects[i].id; 3: Ajax.request('/some/data',{id: id}, updateGraph.curry(id)); 4: } As you can see, partial currying gives us a very useful tool and this simple method should be a part of every developer’s toolbox.

    Read the article

  • Windows Azure Training Kit October 2012 Release

    - by Clint Edmonson
    The Windows Azure Technical Evangelism team have been busy bees lately and we want to share with you what they’ve been working on. As you know we release the Windows Azure Training Kit on a regular cadence, so I’m pleased to announce the Windows Azure Training Kit October 2012 Release. This update of the training kit includes 47 hands-on labs, 24 demos and 38 presentations designed to help you learn how to build applications that use Windows Azure services, including updated hands-on labs to use the latest version of Visual Studio 2012 and Windows 8, new demos and presentations. Essential Links: Windows Azure Training Kit Download Windows Azure Training Kit Github [Issues] Updated Presentations With Speaker Notes Your voices were heard loud and clear! I am excited to announce Speaker Notes have been added to a the majority of the content we have available. Find the new updated decks which contain speaker notes below: Foundation SQL Federation Virtual Machine Overview Virtual Networks Windows 8 and Windows Azure Web Sites Windows Azure Cloud Services Windows Azure Overview Windows Azure Service Bus Deploying Active Directory Building Apps With IaaS and PaaS Identity and Access Control Linux Virtual Machines Managing Virtual Machines PowerShell Migrating Apps and Workloads Scalable Global and Highly Available Apps Security and Identity SQL Database SQL Database Migration Cloud Service Life Cycle DevCamps Cloud Services iOS, Android and Windows Azure Windows 8 and Windows Azure Web Sites Windows 8 and Windows Azure Mobile Services Added Localized Content Due to the excitement in the community surrounding the mobile services launch, it was apparent that we needed to make localized content available to continue to deliver the exciting message around Windows Azure Mobile Services. Localized content is available in the following languages: French Japanese German Chinese (Taiwan) Spanish Italian Korean Portuguese (Brazilian) Russian Updated Hands-On Labs To support those who have upgraded to Visual Studio 2012 or those trying out the Visual Studio 2012 Express Editions, we have made sure that the content is available and supported (selected labs only) in Visual Studio 2012 Express and up. Visual Studio 2012 Windows Azure Traffic Manager Introduction to Cloud Services Service Bus Messaging Introduction to Access Control Service This adds a significant amount of additional content, so we have revamped the Hands-On Lab Navigation page to include subsections for Visual Studio 2012 Labs, Visual Studio 2010 Labs, Open Source Labs, Scenario Labs, All Labs. Added Demos Demos are available for a number of presentations which are available in Foundation, DevCamp, ITPro Event & Device + Service DevCamps. You can browse through the demos on the respective Demo Navigation page or on Github (links provided in Demo listing below). HelloASP Connecting Cloud Services Service Bus Relay Windows 8 and Mobile Services URL Shortener iOS Client Migrating a Web Farm Deploying Active Directory URL Shortener Service  (PHP) Geo-Location Service (PHP) Geo-Location Android Client Getting Started with VMs Load Balancing Availability Deploying Hybrid Apps Migrate VM AppController Geo-Location iOS Client Scale Up/Down Using CSUpload URL Shortener Android Client Imaging Virtual Machines The Windows Azure Training Kit is open source and available on GitHub, enabling you in the community to Report Issues or Fork and either extend the solution or commit bug fixes back to the Training Kit. 
You can find out more details about  the training kit from our GitHub Page including guidelines on how to commit back to the project. Stay tuned to my twitter feed for Windows Azure and other Microsoft announcements, updates, and links: @clinted

    Read the article

  • Why do some user agents have spam urls in them?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) to the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every User Agent that has the keywords "Opera" and "Presto" inside it will almost certainly have a web link (URL/Web Address) inside it, and it won't just be a normal web address, but an HTML anchor tag/link to that address. Why is this so? I could not find a single discussion about it anywhere on the internet, even though I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesn't mean it will have this weblink, but it means there is about an 80% chance that it will. A typical anchor tag/link inside a user agent will look like this: Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60 If you check it out at the website, http://www.botsvsbrowsers.com/recent/listings/index.html you will notice that the back and forward arrows are in their unescaped format. This isn't just true for botsvsbrowsers, but for several other user agent listing sites. I'm really confused and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e. are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browse the web), or is there something else going on? The fact that it is so consistent in terms of its format leads me to believe that it is an automated process (the setting or alteration of the user agent), but I cannot decide or understand the process by which this change is made (I know how to change a user agent), and I'm unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond, I think, an 8 or 9 point something browser version. I've run some statistical tests, parsing entries from all over the place, writing custom programs, to get a better understanding of this. Keep in mind that I see normal URLs in user agents infrequently; they are just text such as +http://www.someSite.com appended to a user agent, especially if it's a crawler or bot providing its service URL. This is normal and isn't done with an embedded link (A HREF=) etc., so I'm not talking about "those".
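    If the answer turns out to be "exclude them", the filter is straightforward: the suspect entries contain an ordinary HTML anchor, so a regular expression that matches an <a href="...">...</a> pair inside the User-Agent string will flag them. A small C# sketch of that idea (the pattern is an assumption based on the sample above, not an established definition of a "spam" user agent):

      using System;
      using System.Text.RegularExpressions;

      public static class UserAgentFilter
      {
          // Matches an embedded HTML anchor such as:
          //   <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a>
          private static readonly Regex EmbeddedAnchor = new Regex(
              @"<a\s+href\s*=\s*[""'][^""']+[""']\s*>.*?</a>",
              RegexOptions.IgnoreCase | RegexOptions.Compiled);

          public static bool HasEmbeddedLink(string userAgent)
          {
              return !string.IsNullOrEmpty(userAgent) && EmbeddedAnchor.IsMatch(userAgent);
          }
      }

      // Example: drop the suspect rows before computing browser statistics.
      // var cleaned = logEntries.Where(e => !UserAgentFilter.HasEmbeddedLink(e.UserAgent));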

    Read the article

  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now, through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in its limited beta release, the LLAPI gives developers the ability to use standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected]. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: CloudTran,data grid,M,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • Implementing the Reactive Manifesto with Azure and AWS

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/31/implementing-the-reactive-manifesto-with-azure-and-aws.aspx My latest Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, has just been published! I’d planned to do a course on dual-running a messaging-based solution in Azure and AWS for super-high availability and scale, and the Reactive Manifesto encapsulates exactly what I wanted to do. A “reactive” application describes an architecture which is inherently resilient and scalable, being event-driven at the core, and using asynchronous communication between components. In the course, I compare that architecture to a classic n-tier approach, and go on to build out an app which exhibits all the reactive traits: responsive, event-driven, scalable and resilient. I use a suite of technologies which are enablers for all those traits: ASP.NET SignalR for presentation, with server push notifications to the user Messaging in the middle layer for asynchronous communication between presentation and compute Azure Service Bus Queues and Topics AWS Simple Queue Service AWS Simple Notification Service MongoDB at the storage layer for easy HA and scale, with minimal locking under load. Starting with a couple of console apps to demonstrate message sending, I build the solution up over 7 modules, deploying to Azure and AWS and running the app across both clouds concurrently for the whole stack - web servers, messaging infrastructure, message handlers and database servers. I demonstrate failover by killing off bits of infrastructure, and show how a reactive app deployed across two clouds can survive machine failure, data centre failure and even whole cloud failure. The course finishes by configuring auto-scaling in AWS and Azure for the compute and presentation layers, and running a load test with blitz.io. The test pushes masses of load into the app, which is deployed across four data centres in Azure and AWS, and the infrastructure scales up seamlessly to meet the load – the blitz report is pretty impressive: That’s a 99.9% success rate for hits to the website, with the potential to serve over 36,000,000 hits per day – all from a few hours’ build time, and a fairly limited set of auto-scale configurations. When the load stops, the infrastructure scales back down again to a minimal set of servers for high availability, so the app doesn’t cost much to host unless it’s getting a lot of traffic. This is my third course for Pluralsight, with Nginx and PHP Fundamentals and Caching in the .NET Stack: Inside-Out released earlier this year. Now that it’s out, I’m starting on the fourth one, which is focused on C#, and should be out by the end of the year.
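    As a rough illustration of the middle-layer idea only (this is not code from the course), the same event can be pushed to a queue in each cloud so that handlers on either side can process it if the other becomes unavailable. Queue names and connection settings are placeholders, and the exact client API surface varies between SDK versions:

      using Amazon.SQS;
      using Amazon.SQS.Model;
      using Microsoft.ServiceBus.Messaging;

      public class DualCloudPublisher
      {
          private readonly QueueClient _azureQueue;   // Azure Service Bus queue client
          private readonly IAmazonSQS _awsSqs;        // AWS Simple Queue Service client
          private readonly string _sqsQueueUrl;

          public DualCloudPublisher(string serviceBusConnectionString, string queueName,
                                    IAmazonSQS sqsClient, string sqsQueueUrl)
          {
              _azureQueue = QueueClient.CreateFromConnectionString(serviceBusConnectionString, queueName);
              _awsSqs = sqsClient;
              _sqsQueueUrl = sqsQueueUrl;
          }

          // Publish the same event to both clouds; message handlers in either cloud can pick it up.
          public void Publish(string eventBody)
          {
              _azureQueue.Send(new BrokeredMessage(eventBody));

              _awsSqs.SendMessage(new SendMessageRequest
              {
                  QueueUrl = _sqsQueueUrl,
                  MessageBody = eventBody
              });
          }
      }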

    Read the article

  • How to deal with malicious domain redirections?

    - by user359650
    It is possible for anybody to buy a domain name containing negative terms and point it to someone's website in order to damage their reputation. For instance someone could buy the domain child-pornography.com and point it to the address 64.34.119.12 which is the address behind stackoverflow.com and people navigating to the domain in question would end up visualizing content from StackExchange which would be detrimental to StackExchange's image. To illustrate this, I added the entry 64.34.119.12 child-pornography.com to my /etc/hosts file and tested. Here is what I obtained: I personally found this user experience terrible as someone could think that Stack Exchange are in favor of child pornography and awaiting support from the community to create a Q&A site about it. I tested with other websites and experienced other behaviors that I would categorize as follows: 1 - Useful 404 page (happens with stackoverflow.com): For me the worst way of handling this as the image of the targeted website is directly associated with the offending domain. The more useful the 404 page, the bigger the impression that the targeted website would be willing to help with child pornography. 2 - Redirection (happens with microsoft.com): For instance when accessing child-pornography.com you get redirected to www.microsoft.com. It isn't as bad as above as the offending domain name never appears alongside the targeted website's content, but still bad in my opinion as it gives the impression the targeted website bought the offending domain and redirected it to their website to get more traffic. 3 - Server error (happens with lemonde.fr): You get an error from the webserver which page doesn't contain any content that can be associated with the targeted website (e.g. default Apache 404 page, completely blank page). I believe that is good as the identify of the targeted website isn't revealed. Above are the various behaviors I experienced, but I also thought about a fourth way of dealing with this which is described below. 4 - Disclaimer page (haven't found any website implementing that technique): Display a message such as : "You ended here because someone bought and linked the child-pornography.com domain to our website. We do not own this domain and do not associate ourselves with it. This request has been logged by our servers and we will raise this issue with the competent authorities to have this domain taken down. If you want to access our website, please click here." The good thing about this method is that it can be implemented at application layer (good if you don't have control over web server which happens with some hosting solutions), allows you to protect yourself from any liability, and offer the visitor to be redirected to your own website. Which of the above options would you implement to deal with malicious domain linking (IMO only options 3 and 4 are worth considering) ?
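    Option 4 can be implemented at the application layer with a simple check: compare the Host header of each incoming request against the domains you actually own, and serve the disclaimer (plus a log entry) for anything else. A minimal ASP.NET sketch of the idea; the domain list and the wording are placeholders:

      using System;
      using System.Web;

      public class UnknownHostModule : IHttpModule
      {
          // Placeholder list: every hostname the site legitimately answers to.
          private static readonly string[] OwnedHosts = { "ourdomain.com", "www.ourdomain.com" };

          public void Init(HttpApplication application)
          {
              application.BeginRequest += (sender, e) =>
              {
                  var context = ((HttpApplication)sender).Context;
                  string host = context.Request.Url.Host.ToLowerInvariant();

                  if (Array.IndexOf(OwnedHosts, host) >= 0)
                      return;                              // normal traffic, serve the site as usual

                  // Unknown domain pointing at our IP: show the disclaimer instead of the site
                  // (real logging of the offending domain would go to your own log store).
                  context.Response.StatusCode = 200;
                  context.Response.ContentType = "text/html";
                  context.Response.Write(
                      "<p>You reached this page because the domain '" + HttpUtility.HtmlEncode(host) +
                      "' points at our servers. We do not own that domain and are not associated with it. " +
                      "<a href=\"http://ourdomain.com/\">Click here to visit our website.</a></p>");
                  context.Response.End();
              };
          }

          public void Dispose() { }
      }

    The module would be registered in web.config; the same check could equally be done in a global filter or in a front-end proxy.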

    Read the article

  • Multiple country-specific domains or one global domain [closed]

    - by CJM
    Possible Duplicate: How should I structure my urls for both SEO and localization? My company currently has its main (English) site on a .com domain with a .co.uk alias. In addition, we have separate sites for certain other countries - these are also hosted in the UK but are distinct sites with a country-specific domain names (.de, .fr, .se, .es), and the sites have differing amounts of distinct but overlapping content, For example, the .es site is entirely in Spanish and has a page for every section of the UK site but little else. Whereas the .de site has much more content (but still less than the UK site), in German, and geared towards our business focus in that country. The main point is that the content in the additional sites is a subset of the UK, is translated into the local language, and although sometimes is simply only a translated version of UK content, it is usually 'tweaked' for the local market, and in certain areas, contains unique content. The other sites get a fraction of the traffic of the UK site. This is perfectly understandable since the biggest chunk of work comes from the UK, and we've been established here for over 30 years. However, we are wanting to build up our overseas business and part of that is building up our websites to support this. The Question: I posed a suggestion to the business that we might consider consolidating all our websites onto the .com domain but with /en/de/fr/se/etc sections, as plenty of other companies seem to do. The theory was that the non-english sites would benefit from the greater reputation of the parent .com domain, and that all the content would be mutually supporting - my fear is that the child domains on their own are too small to compete on their own compared to competitors who are established in these countries. Speaking to an SEO consultant from my hosting company, he feels that this move would have some benefit (for the reasons mentioned), but they would likely be significantly outweighed by the loss of the benefits of localised domains. Specifically, he said that since the Panda update, and particularly the two sets of changes this year, that we would lose more than we would gain. Having done some Panda research since, I've had my eyes opened on many issues, but curiously I haven't come across much that mentions localised domain names, though I do question whether Google would see it as duplicated content. It's not that I disagree with the consultant, I just want to know more before I make recommendations to my company. What is the prevailing opinion in this case? Would I gain anything from consolidating country-specific content onto one domain? Would Google see this as duplicate content? Would there be an even greater penalty from the loss of country-specific domains? And is there anything else I can do to help support the smaller, country-specific domains?

    Read the article

  • Access Offline or Overloaded Webpages in Firefox

    - by Asian Angel
    What do you do when you really want to access a webpage only to find that it is either offline or overloaded from too much traffic? You can get access to the most recent cached version using the Resurrect Pages extension for Firefox. The Problem If you have ever encountered a website that has become overloaded and unavailable due to sudden popularity (i.e. Slashdot, Digg, etc.) then this is the result. No satisfaction to be had here… Resurrect Pages in Action Once you have installed the extension you can add the Toolbar Button if desired…it will give you the easiest access to Resurrect Pages. Or you can wait for a problem to occur when trying to access a particular website and have it appear as shown here. As you can see there is a very nice selection of cache services to choose from, therefore increasing your odds of accessing a copy of that webpage. If you would prefer to have the access attempt open in a new tab or window then you should definitely use the Toolbar Button. Clicking on the Toolbar Button will give you access to the popup window shown here…otherwise the access attempt will happen in the current tab. Here is the result for the website that we wanted to view using the Google Listing. Followed by the Google (text only) Listing. The results with the different services will depend on how recently the webpage was published/set up. View Older Versions of Currently Accessible Websites Just for fun we decided to try the extension out on the How-To Geek website to view an older version of the homepage. Using the Toolbar Button and clicking on The Internet Archive brought up the following page…we decided to try the Nov. 28, 2006 listing. As you can see things have really changed between 2006 and now…Resurrect Pages can be very useful for anyone who is interested in how websites across the web have grown and changed over the years. Conclusion If you encounter a webpage that is offline or overloaded by sudden popularity then the Resurrect Pages extension can help you get access to the information that you need using a cached version. Links Download the Resurrect Pages extension (Mozilla Add-ons)

    Read the article

  • How do you automate a SharePoint 2010 deployment?

    - by Enrique Lima
    In the last couple of months SharePoint traffic (consulting, training and speaking) has picked up.  And with that also the requests for deployments.  There are good, great, bad and really bad things around this. But that is for another topic.  However part of the good and great has been the fact of organizations wanting to do a proof of concept deployment (even when WSS or MOSS has been deployed). We can go through a session (Microsoft has the SDPS concept, SharePoint Deployment Planning Services) of discovering what the customer wants to achieve from their investment in the platform and then also proceed to model the solution that would fit their needs.  But it should not stop there.  The next step should be a POC (as many have requested) to test out. Now, on to the meat of this post.  How do I deploy?  While it is a good process to watch and see all of it take place, not many have the time to sit through that.  Even more so, when that has been part of the description of deploying the platform in the sessions mentioned above. I will, though, break it into a deployment for development purposes and a deployment of a farm. Two tools (or scripts) for those two different types of deployment. First, let me address the development environment.  Around the last week in October, Chris Johnson (SharePoint Product Team) announced a SharePoint Easy Setup for Developers.  The kit itself will assist you in installing SharePoint Server (in standalone mode), the tools that go around Visual Studio, Expression Studio and the Office 2010 tools. Here is the link to Chris’ post: http://blogs.msdn.com/b/cjohnson/archive/2010/10/28/announcing-sharepoint-easy-setup-for-developers.aspx The other scenario is the use of a script in assisting you through the deployment of a farm. Now, this is not to override planning.  It should highlight the need for planning even more.  How?  Having your service accounts planned, the structure of the sites and the scale of your deployment.  Enter AutoSPInstaller.  This is a CodePlex project, and the intent behind this is not only to automate the installation but to give some meaning and get some sense out of what goes on during a SharePoint deployment. How?  Take for example the creation of the databases, when we do the initial OOB deployment by using the wizard, more times than not, we leave the names as they are.  How is that a “bad thing”?  Let’s make it a better practice to rename those Databases, and have them take on a name that is not “GUID-ized”. Having a better naming convention will not hurt, on the other hand will allow for consistency. Here is the link to AutoSPInstaller’s site on CodePlex: http://autospinstaller.codeplex.com/

    Read the article

  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as the command-line input argument. The format of this configuration file has changed between PS4 and PS5. In PS4, the configuration file was XML based and in PS5, it is name-value property based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set. PS4 - Configuration File The sample configuration file for PS4 is shown below. The configuration file contains information about the following items: Directory for incoming and outgoing files for the host running SOA Suite Healthcare Polling Interval for the directory External Endpoint Logical Names External Endpoint Server Host Name and Ports Message throughput to be simulated for generating outbound messages Documents to be handled by different endpoints A copy of this file can be downloaded from here. PS5 - Configuration File The corresponding sample configuration file for PS5 is shown below. The configuration file contains similar information about the sample scenario but is not in XML format. It has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here. Simulator Configuration Before running the simulator, the environment has to be set by defining the proper ANT_HOME and JAVA_HOME. The following extract is taken from a working sample shell script to set the environment: Also, as a part of setting the environment, template jndi.properties and logging.properties can be generated by using the following ant command: ant -f ant-b2bsimulator-util.xml b2bsimulator-prop Sample jndi.properties and logging.properties are shown below and can be modified, as needed. The jndi.properties contains information about connectivity to the local Weblogic Managed Server instance and the logging.properties file controls the amount of logging that can be generated from the running simulator process. Simulator Usage - Start and Stop The command syntax to launch the simulator via ant is the same in PS4 and PS5. Only the appropriate configuration file has to be supplied as the command-line argument, for example: ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml" This will start the simulator and will keep running to provide an active external endpoint for SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used, for example: ant -f ant-b2bsimulator-util.xml b2bsimulatorstop

    Read the article

  • Enterprise Manager 12c: New DSS Demos Available

    - by Javier Puerta
    Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade     Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! We are pleased to announce the availability of the Oracle Application Replay demo that showcases some of the key capabilities of performing realistic, production scale testing of your web and packaged Oracle applications. This demo specifically focuses on capturing production web traffic from an E-Business Suite application and replaying the captured workload on a test E-Business Suite application to assess the impact of an application infrastructure change on the workload. The target audiences are application developers, quality assurance teams, IT managers and production control staff that deal in day-to-day change management activities and trouble shooting of production environments. Demo Highlights: Enterprise Manager 12c workflows for capturing application workload Seamless integration of Application Replay with Real User Experience Insight for application workload capture Enterprise Manager 12c centralized workflows for replaying captured application workloads in a test environment Demonstrates how to minimize risk when deploying a complex EBusiness Suite application infrastructure change. Rich reporting capability for performance analysis and problem detection User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! We are pleased to announce the availability of the Oracle Real User Experience Insight demo that showcases some of the key capabilities of user experience monitoring. This demo specifically focuses on business reporting, integrated performance diagnostics, tracking of customer journey’s through RUEI’s userflow tracking capabilities and it’s Key Performance Indicators tracking and configuration. Demo Highlights: Application-centric dashboard Integration with Oracle Enterprise Manager 12c – JVMD, ADP and BTM Session diagnostics and user session replay Monitoring through “Key Performance Indicators” (KPI) --- create alerts/incidents FUSION Application centric dashboards & integrated BI Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade DSS is pleased to announce an upgrade to the Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo. While retaining the content from the initial release of the demo—Diagnostic and Tuning Packs, Test Data Management and Data Masking, and Real Application Testing—the demo now includes a new Data Masking for Real Application Testing scenario. Demo Features: Diagnostic and Tuning Packs SQL Performance Analyzer Database Replay Data Masking Masking Real Application Testing workloads Testing pending Optimizer statistics Test Data Management

    Read the article

  • At $20/month Windows Azure host my website with 99.97% uptime

    - by Gopinath
    A couple of years ago, reliable and decent-performing Windows hosting was not affordable for many enthusiastic developers who wanted to try a startup idea or build a hobby site. I tried to start an ASP.NET website a few years ago to provide services like Mobile Tracing and Vehicle Tracing, but due to the high cost of Windows hosting I developed those services using PHP (not an easy task for a .NET developer) and hosted them on Linux servers. But with the recent evolution of Windows Azure, hosting ASP.NET websites on highly reliable servers is affordable. Today anyone can host a highly responsive and available ASP.NET website for just $20/month using Windows Azure. My website coziie.com is running on Windows Azure and serves close to a quarter million visitors a month with 99.97% uptime, and most of the page load times are less than 3 seconds. All I spend to run this website is just around $20; if you translate it to Indian rupees it's roughly Rs.1000. The web server of coziie.com is powered by a single Extra Small Web role instance and the backend is powered by a SQL Azure instance. Azure has been quite impressive, providing 99.97% uptime. Response times during peaks are around 3 seconds and on normal loads they are around 1.5 seconds. Here is the report of uptime provided by Royal Pingdom over the last year: For just $20/month Windows Azure takes care of the following apart from hosting: Patches up Windows OS to the latest version Upgrades ASP.NET to the latest version – coziie.com is running on ASP.NET MVC 3 and soon I'll upgrade it to ASP.NET MVC 4 Hosts data on the latest version of the SQL Server database SQL Azure maintains 3 copies of the database and automatically recovers in case of server failures and disasters. I never worry about database backups/restores. Provides a staging environment for deploying applications for testing purposes and moving them to production – I upgrade twice a month on average With Windows Azure I no longer focus on server maintenance or data backups. They are taken care of by the Microsoft team and I just focus on building my website. Wish there were a low-cost Linux version of Windows Azure so that I could stop worrying about server maintenance of this blog!! If you are looking for Windows hosting, look no further than Windows Azure. If you find $20/month a bit expensive to start with, you may explore Azure Websites (a sort of shared hosting environment), which is free to start with, and as your traffic grows you can move to paid hosting.

    Read the article

  • Use Expressions with LINQ to Entities

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Recently I've been putting together a generic approach for paging the response from a WCF service. Paging changes the service signature, so it's not as simple as adding a behavior to an existing service in config, but the complexity of the paging is isolated in a generic base class. We're using the Entity Framework talking to SQL Server, so when we ask for a page using LINQ's .Take() method we get a nice efficient SQL query for just the rows we want, with minimal impact on SQL Server and network traffic. We use the maximum ID of the record returned as a high-water mark (rather than using .Skip() to go to the next record), so the approach caters for records being deleted between page requests. In the paged response we include a HasMorePages indicator, computed by comparing the max ID in the page of results to the max ID for the whole resultset - if the latter is bigger, then there are more pages. In some quick performance testing, the paged version of the service performed much more slowly than the unpaged version, which was unexpected. We narrowed it down to the code which gets the max ID for the full resultset - instead of building an efficient MAX() SQL query, EF was returning the whole resultset and then computing the max ID in the service layer. It's easy to reproduce - take this AdventureWorks query:             var context = new AdventureWorksEntities();             var query = from od in context.SalesOrderDetail                         where od.ModifiedDate >= modified                          && od.SalesOrderDetailID.CompareTo(id) > 0                         orderby od.SalesOrderDetailID                         select od;   We can find the maximum SalesOrderDetailID like this:             var maxIdEfficiently = query.Max(od => od.SalesOrderDetailID);   which produces our efficient MAX() SQL query. If we're doing this generically and we already have the ID function in a Func:             Func<SalesOrderDetail, int> idFunc = od => od.SalesOrderDetailID;             var maxIdInefficiently = query.Max(idFunc);   This fetches all the results from the query and then runs the Max() function in code. If you look at the difference in Reflector, the first call passes an Expression to the Max(), while the second call passes a Func. So it's an easy fix - wrap the Func in an Expression:             Expression<Func<SalesOrderDetail, int>> idExpression = od => od.SalesOrderDetailID;             var maxIdEfficientlyAgain = query.Max(idExpression);   - and we're back to running an efficient MAX() statement. Evidently the EF provider can dissect an Expression and build its equivalent in SQL, but it can't do that with Funcs.

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because they eliminate database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for fully ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of the LLAPI is the ability to join transactions. This is a common need for SOA applications that reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services, but it introduces new architectural considerations for maintaining scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG: highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected]. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community – please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog | Twitter | LinkedIn | Mix | Forum | Wiki. Technorati Tags: Coherence,cloudtran,cache,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Setting MTU on Exalogic

    - by csoto
    For many reasons, a system administrator may want to change the MTU settings of a server. But in a system like Exalogic, which contains lots of interconnected nodes and various other components, it's important to understand how this applies to the different networks. For example, when bringing up bonding of InfiniBand, an error like the following may be thrown:

        Bringing up interface bond1: SIOCSIFMTU: Invalid argument

    Both scripts ifcfg-ib0 and ifcfg-ib1 (in the /etc/sysconfig/network-scripts/ directory) have MTU set to 65500, which is a valid MTU value only if all IPoIB slaves operate in connected mode and are configured with the same value. So the line below must be added to both network scripts, and then the network must be restarted:

        CONNECTED_MODE=yes

    By the way, an error of the form “SIOCSIFMTU: Invalid argument” indicates that the requested MTU was rejected by the kernel, typically because it exceeds the maximum value supported by the interface hardware. In that case you must either reduce the MTU to a supported value or obtain more capable hardware. This problem has been seen when trying to modify the MTU using the ifconfig command, as in the example below:

        [root@elxxcnxx ~]# ifconfig ib1 mtu 65520
        SIOCSIFMTU: Invalid argument

    It's important to stress that in most cases the nodes must be rebooted after the MTU size has been changed. Although in some circumstances it may work without a reboot, that is not how it is typically documented. Now, in order to reduce memory consumption and improve performance for network traffic received on IPoIB related interfaces, it is recommended to reduce the MTU value in the interface configuration files for IPoIB related bonds from 65520 to 64000. The change needs to be made to interface configuration files under the /etc/sysconfig/network-scripts directory and applies to the interface configuration files for bonds over IPoIB related slave devices, for example /etc/sysconfig/network-scripts/ifcfg-bond1. However, keep in mind that the numeric portion of the interface filenames corresponding to IPoIB interfaces is expected to vary across compute nodes and vServers, and so cannot be relied upon to identify which interface files are for bonds over IPoIB rather than EoIB related slave interfaces. To fix these MTU values to the recommended settings, there are very useful instructions and a script in MOS Note 1624434.1, which is applicable to both physical and virtual configurations of Exalogic. Regarding the recommended MTU value for EoIB related interfaces, its maximum appropriate value is 1500. If for some reason a vServer has been created with a higher value (set in the /etc/sysconfig/network-scripts/ifcfg-bond0 file), then it must be fixed. An error like the following could be thrown in that case:

        [root@vServer ~]# service network restart
        ...
        Bringing up interface bond0:  SIOCSIFMTU: Invalid argument

    An error like the one below can also be seen in the /var/log/messages file of the vServer:

        kernel: T5074835532 [mlx4_vnic] eth1:vnic_change_mtu:360: failed: new_mtu 64000 2026

    MOS Note 1611657.1 is very useful for this purpose.
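
    As a rough sketch of the edits described above (interface and bond file names vary across compute nodes and vServers, so treat the names here as examples):

        # /etc/sysconfig/network-scripts/ifcfg-ib0 and ifcfg-ib1 (IPoIB slaves)
        CONNECTED_MODE=yes    # all IPoIB slaves must run in connected mode for a large MTU

        # /etc/sysconfig/network-scripts/ifcfg-bond1 (bond over IPoIB slave devices)
        MTU=64000             # recommended value for IPoIB related bonds, reduced from 65520

        # /etc/sysconfig/network-scripts/ifcfg-bond0 (bond over EoIB slave devices)
        MTU=1500              # maximum appropriate value for EoIB related interfaces

        # apply the changes; in most cases the node must then be rebooted
        service network restart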

    Read the article

  • Oracle Application in DMZ (Demilitarized Zone)

    - by PRajkumar
     Business Needs: Large organizations want to expose their Oracle Application services outside their private network (HTTP/HTTPS and SSL). Usually these exposures must exist to promote external communication, so they want to keep the external network from directly referencing the internal network.

    Business Challenges
    ·  The business does not want to compromise security information
    ·  The business cannot expose internal domain or internal URL information

    Business Solution: A DMZ is the solution to this problem. In Oracle Application we can achieve this in the following way –
    ·  Oracle Application consists of a fleet of nodes (FND_NODES), so first decide which nodes have to be exposed to the public
    ·  To expose a node to the public, use the "Node Trust Level" profile
    ·  Set the node to Public/Private (Normal -> private, External -> public)
    ·  Set the "Responsibility Trust Level" profile to decide whether an Application Responsibility is exposed inside or outside the firewall

    Solution Features
    ·  Exposed web services can be accessed by both internal and external users
    ·  The setup is configurable and can be rolled out very easily
    ·  The internal network and business data are secured from outside traffic
    ·  Unauthorized access to the internal network from outside is prohibited
    ·  No need for a VPN or a secure FTP server

    Benefits
    ·  Large organizations running Oracle Application can expose their web services (HTTP/HTTPS and SSL) to the internet without compromising security information and without exposing their internal domain

    Possible Weak Points
    ·  If the external firewall is compromised, then the external application server is also compromised, exposing the E-Business Suite database to attack
    ·  There is nothing to prevent internal users from attacking the internal application server, which also exposes the E-Business Suite database to attack

    Reference Links
    ·  https://blogs.oracle.com/manojmadhusoodanan/tags/dmz

    Read the article

  • Java @Contended annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality. Fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments. And the ideal layout problem itself is NP-hard. Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
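
    As a minimal illustrative sketch (not from the original post), assuming the JDK 8 sun.misc.Contended annotation (moved to jdk.internal.vm.annotation.Contended in later JDKs) and assuming the application is run with -XX:-RestrictContended so the JVM honors the annotation outside the JDK's own classes:

        import sun.misc.Contended;   // jdk.internal.vm.annotation.Contended on later JDKs

        public class HitCounter {

            // Hot, frequently written shared field: ask the JVM to pad/isolate it
            // so writers do not invalidate the cache line holding the cold fields.
            @Contended
            volatile long hits;

            // Cold, mostly read-only fields; cheap to let these share a cache line.
            final long createdAtMillis = System.currentTimeMillis();
            final String name = "requests";

            // Illustrative writer (a real hot counter would use an atomic update).
            void record() { hits++; }
        }

    The idea is simply to separate the write-hot field from the read-mostly ones, since read-sharing is cheap and write-sharing is expensive.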

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >