Search Results

Search found 5014 results on 201 pages for 'amazon kindle fire'.

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record 18,000 concurrent users experiencing subsecond response time while executing a PeopleSoft Payroll batch job of 500,000 employees in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running PeopleSoft HCM 9.1 application server and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization on the SPARC T4-2 server in the web tier was 17%, on the SPARC T4-4 server in the application tier it was 59%, and on the SPARC T4-4 server in the database tier was 47% (online and batch) leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database files management with I/O performance equivalent to raw devices. Performance Landscape Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows significant improvement in the payroll batch processing time with little impact on the self-service component response time. PeopleSoft HRMS Self-Service and Payroll Benchmark Systems Users Ave Response Search (sec) Ave Response Save (sec) Batch Time (min) Streams SPARC T4-2 (web) SPARC T4-4 (app) SPARC T4-4 (db) 18,000 0.988 0.539 32.4 128 SPARC T4-2 (web) SPARC T4-4 (app) SPARC T4-4 (db) 18,000 0.944 0.503 43.3 64 The following results are for the PeopleSoft HRMS Self-Service benchmark that was previous run. The results are not directly comparable with the combined results because they do not include the payroll component. PeopleSoft HRMS Self-Service 9.1 Benchmark Systems Users Ave Response Search (sec) Ave Response Save (sec) Batch Time (min) Streams SPARC T4-2 (web) SPARC T4-4 (app) 2x SPARC T4-2 (db) 18,000 1.048 0.742 N/A N/A The following results are for the PeopleSoft Payroll benchmark that was previous run. The results are not directly comparable with the combined results because they do not include the self-service component. PeopleSoft Payroll (N.A.) 
9.1 - 500K Employees (7 Million SQL PayCalc, Unicode) Systems Users Ave Response Search (sec) Ave Response Save (sec) Batch Time (min) Streams SPARC T4-4 (db) N/A N/A N/A 30.84 96 Configuration Summary Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz 512 GB memory Oracle Solaris 11 11/11 PeopleTools 8.52 PeopleSoft HCM 9.1 Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031 Java Platform, Standard Edition Development Kit 6 Update 32 Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz 256 GB memory Oracle Solaris 11 11/11 Oracle Database 11g Release 2 PeopleTools 8.52 Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031 Micro Focus Server Express (COBOL v 5.1.00) Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz 256 GB memory Oracle Solaris 11 11/11 PeopleTools 8.52 Oracle WebLogic Server 10.3.4 Java Platform, Standard Edition Development Kit 6 Update 32 Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data 4 x Intel Xeon X7550, 2.0 GHz 128 GB memory 1 x Sun Storage F5100 Flash Array (80 flash modules) 1 x Sun Storage F5100 Flash Array (40 flash modules) 1 x Sun Fire X4275 as a COMSTAR head for redo logs 12 x 2 TB SAS disks with Niwot Raid controller Benchmark Description This benchmark combines PeopleSoft HCM 9.1 HR Self Service online and PeopleSoft Payroll batch workloads to run on a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is a Oracle standard benchmark kit run by all platform vendors to measure the performance. It's an OLTP benchmark where DB SQLs are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. The benchmark consist of 14 scenarios which emulate users performing typical HCM transactions such as viewing paycheck, promoting and hiring employees, updating employee profile and other typical HCM application transactions. All these transactions are well-defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. This benchmark metric is the weighted average response search/save time for all the transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of a ERP environment during a mass update. The benchmark measures five application business process run times for a database representing large organization. They are Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes. The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state. 
Key Points and Best Practices Two PeopleSoft Domain sets with 200 application servers each on a SPARC T4-4 server were hosted in 2 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (total 120 threads). The default set (1 core from the first and third processor sockets, total 16 threads) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency (using the physical memory closest to the processors) and by offloading I/O interrupt handling to the default set threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads. A total of 128 PeopleSoft streams server processes were used on the database node to complete the payroll batch job of 500,000 employees in 32.4 minutes. See Also Oracle PeopleSoft Benchmark White Papers oracle.com SPARC T4-2 Server oracle.com OTN SPARC T4-4 Server oracle.com OTN PeopleSoft Enterprise Human Capital Management oracle.com OTN PeopleSoft Enterprise Human Capital Management (Payroll) oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.

    Read the article

  • A Good Developer is So Hard to Find

    - by James Michael Hare
    Let me start out by saying I want to damn the writers of the Toughest Developer Puzzle Ever – 2. It is eating every last shred of my free time! But as I've been churning through each puzzle and marvelling at the brain teasers and trivia within, I began to think about interviewing developers and why it seems to be so hard to find good ones. The problem is, it seems like no matter how hard we try to find the perfect way to separate the chaff from the wheat, inevitably someone will get hired who falls far short of expectations, or someone who could have been an excellent team member will get passed over for missing a piece of trivia or a tricky brain teaser. In shops that are primarily software-producing businesses or other heavily IT-oriented businesses (Microsoft, Amazon, etc.) there often exists a much tighter bond between HR and the hiring development staff because development is their life-blood. Unfortunately, many of us work in places where IT is viewed as a cost or just a means to an end. In these shops, too often, HR and development staff may work against each other due to differences in opinion as to what a good developer is or what one is worth. It seems that if you ask two different people what makes a good developer, often you will get three different opinions. With the exception of those shops that are purely development-centric (you guys have it much easier!), most other shops have management who have very little knowledge about the development process. Their view can often be that development is simply a skill that one learns and then, once acquired, that developer can produce widgets as good as the next like workers on an assembly-line floor. On the other side, you have many developers who feel that software development is an art unto itself, and that the ability to create the purest design, or know the most obscure of keywords, or write the shortest-possible obfuscated piece of code makes a good coder. So is it a skill? An Art? Or something entirely in between? Saying that software is merely a skill and one just needs to learn the syntax and tools would be akin to saying anyone who knows English and can use Word can write a 300 page book that is accurate, meaningful, and stays true to the point. This just isn't so. It takes more than mere skill to take words and form a sentence, join those sentences into paragraphs, and those paragraphs into a document. I've interviewed candidates who could answer obscure syntax and keyword questions and once they were hired could not code effectively at all. So development must be more than a skill. But on the other end, we have art. Is development an art? Is our end result to produce art? I can marvel at a piece of code -- see it as concise and beautiful -- and yet that code must perform some stated function with accuracy and efficiency and maintainability. None of these three things have anything to do with art, per se. Art is beauty for its own sake and is a wonderful thing. But if you apply that same thought to development it just doesn't hold. I've had developers tell me that all that matters is the end result and how you code it is entirely part of the art, and I couldn't disagree more. Yes, the end result, the accuracy, is the prime criterion to be met. But if code is not maintainable and efficient, it would be just as useless as a beautiful car that breaks down once a week or that gets 2 miles to the gallon.
Yes, it may work in that it moves you from point A to point B and is pretty as hell, but if it can't be maintained or is not efficient, it's not a good solution.  So development must be something less than art.   In the end, I think I feel like development is a matter of craftsmanship.  We use our tools and we use our skills and set about to construct something that satisfies a purpose and yet is also elegant and efficient.  There is skill involved, and there is an art, but really it boils down to being able to craft code.  Crafting code is far more than writing code.  Anyone can write code if they know the syntax, but so few people can actually craft code that solves a purpose and craft it well.  So this is what I want to find, I want to find code craftsman!  But how?   I used to ask coding-trivia questions a long time ago and many people still fall back on this.  The thought is that if you ask the candidate some piece of coding trivia and they know the answer it must follow that they can craft good code.  For example:   What C++ keyword can be applied to a class/struct field to allow it to be changed even from a const-instance of that class/struct?  (answer: mutable)   So what do we prove if a candidate can answer this?  Only that they know what mutable means.  One would hope that this would infer that they'd know how to use it, and more importantly when and if it should ever be used!  But it rarely does!  The problem with triva questions is that you will either: Approve a really good developer who knows what some obscure keyword is (good) Reject a really good developer who never needed to use that keyword or is too inexperienced to know how to use it (bad) Approve a really bad developer who googled "C++ Interview Questions" and studied like hell but can't craft (very bad) Many HR departments love these kind of tests because they are short and easy to defend if a legal issue arrises on hiring decisions.  After all it's easy to say a person wasn't hired because they scored 30 out of 100 on some trivia test.  But unfortunately, you've eliminated a large part of your potential developer pool and possibly hired a few duds.  There are times I've hired candidates who knew every trivia question I could throw out them and couldn't craft.  And then there are times I've interviewed candidates who failed all my trivia but who I took a chance on who were my best finds ever.    So if not trivia, then what?  Brain teasers?  The thought is, these type of questions measure the thinking power of a candidate.  The problem is, once again, you will either: Approve a good candidate who has never heard the problem and can solve it (good) Reject a good candidate who just happens not to see the "catch" because they're nervous or it may be really obscure (bad) Approve a candidate who has studied enough interview brain teasers (once again, you can google em) to recognize the "catch" or knows the answer already (bad). Once again, you're eliminating good candidates and possibly accepting bad candidates.  In these cases, I think testing someone with brain teasers only tests their ability to answer brain teasers, not the ability to craft code. So how do we measure someone's ability to craft code?  Here's a novel idea: have them code!  Give them a computer and a compiler, or a whiteboard and a pen, or paper and pencil and have them construct a piece of code.  It just makes sense that if we're going to hire someone to code we should actually watch them code.  
When they're done, we can judge them on several criteria: Correctness - does the candidate's solution accurately solve the problem proposed? Accuracy - is the candidate's solution reasonably syntactically correct? Efficiency - did the candidate write or use the most efficient data structures or algorithms for the job? Maintainability - was the candidate's code free of obfuscation and clever tricks that diminish readability? Persona - are they eager and willing or aloof and egotistical? Will they work well within your team? It may sound simple, or it may sound crazy, but when I'm looking to hire a developer, I want to see them actually develop well-crafted code.
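To make that concrete, here is the kind of short exercise I have in mind, with one possible solution sketched in Java. The problem and the code are purely illustrative (not from any real interview kit), but even something this small surfaces most of the criteria above: correctness, a sensible data structure choice, readable code, and a conversation about edge cases like punctuation or an empty string.

    import java.util.HashSet;
    import java.util.Set;

    public class InterviewExercise {
        // Exercise: return the first word that occurs more than once in a sentence,
        // ignoring case, or null if every word is unique.
        public static String firstRepeatedWord(String sentence) {
            Set<String> seen = new HashSet<String>();
            for (String word : sentence.toLowerCase().split("\\s+")) {
                if (!seen.add(word)) {   // add() returns false if the word was already in the set
                    return word;
                }
            }
            return null;
        }

        public static void main(String[] args) {
            // prints "the" - the first word the loop sees twice
            System.out.println(firstRepeatedWord("the quick brown fox jumps over the lazy dog"));
        }
    }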

    Read the article

  • Android - Create a custom multi-line ListView bound to an ArrayList

    - by Bill Osuch
    The Android HelloListView tutorial shows how to bind a ListView to an array of string objects, but you'll probably outgrow that pretty quickly. This post will show you how to bind the ListView to an ArrayList of custom objects, as well as create a multi-line ListView. Let's say you have some sort of search functionality that returns a list of people, along with addresses and phone numbers. We're going to display that data in three formatted lines for each result, and make it clickable. First, create your new Android project, and create two layout files. Main.xml will probably already be created by default, so paste this in: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"  android:orientation="vertical"  android:layout_width="fill_parent"   android:layout_height="fill_parent">  <TextView   android:layout_height="wrap_content"   android:text="Custom ListView Contents"   android:gravity="center_vertical|center_horizontal"   android:layout_width="fill_parent" />   <ListView    android:id="@+id/ListView01"    android:layout_height="wrap_content"    android:layout_width="fill_parent"/> </LinearLayout> Next, create a layout file called custom_row_view.xml. This layout will be the template for each individual row in the ListView. You can use pretty much any type of layout - Relative, Table, etc., but for this we'll just use Linear: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"  android:orientation="vertical"  android:layout_width="fill_parent"   android:layout_height="fill_parent">   <TextView android:id="@+id/name"   android:textSize="14sp"   android:textStyle="bold"   android:textColor="#FFFF00"   android:layout_width="wrap_content"   android:layout_height="wrap_content"/>  <TextView android:id="@+id/cityState"   android:layout_width="wrap_content"   android:layout_height="wrap_content"/>  <TextView android:id="@+id/phone"   android:layout_width="wrap_content"   android:layout_height="wrap_content"/> </LinearLayout> Now, add an object called SearchResults. Paste this code in: public class SearchResults {  private String name = "";  private String cityState = "";  private String phone = "";  public void setName(String name) {   this.name = name;  }  public String getName() {   return name;  }  public void setCityState(String cityState) {   this.cityState = cityState;  }  public String getCityState() {   return cityState;  }  public void setPhone(String phone) {   this.phone = phone;  }  public String getPhone() {   return phone;  } } This is the class that we'll be filling with our data, and loading into an ArrayList. Next, you'll need a custom adapter. This one just extends the BaseAdapter, but you could extend the ArrayAdapter if you prefer. 
public class MyCustomBaseAdapter extends BaseAdapter {  private static ArrayList<SearchResults> searchArrayList;    private LayoutInflater mInflater;  public MyCustomBaseAdapter(Context context, ArrayList<SearchResults> results) {   searchArrayList = results;   mInflater = LayoutInflater.from(context);  }  public int getCount() {   return searchArrayList.size();  }  public Object getItem(int position) {   return searchArrayList.get(position);  }  public long getItemId(int position) {   return position;  }  public View getView(int position, View convertView, ViewGroup parent) {   ViewHolder holder;   if (convertView == null) {    convertView = mInflater.inflate(R.layout.custom_row_view, null);    holder = new ViewHolder();    holder.txtName = (TextView) convertView.findViewById(R.id.name);    holder.txtCityState = (TextView) convertView.findViewById(R.id.cityState);    holder.txtPhone = (TextView) convertView.findViewById(R.id.phone);    convertView.setTag(holder);   } else {    holder = (ViewHolder) convertView.getTag();   }      holder.txtName.setText(searchArrayList.get(position).getName());   holder.txtCityState.setText(searchArrayList.get(position).getCityState());   holder.txtPhone.setText(searchArrayList.get(position).getPhone());   return convertView;  }  static class ViewHolder {   TextView txtName;   TextView txtCityState;   TextView txtPhone;  } } (This is basically the same as the List14.java API demo) Finally, we'll wire it all up in the main class file: public class CustomListView extends Activity {     @Override     public void onCreate(Bundle savedInstanceState) {         super.onCreate(savedInstanceState);         setContentView(R.layout.main);                 ArrayList<SearchResults> searchResults = GetSearchResults();                 final ListView lv1 = (ListView) findViewById(R.id.ListView01);         lv1.setAdapter(new MyCustomBaseAdapter(this, searchResults));                 lv1.setOnItemClickListener(new OnItemClickListener() {          @Override          public void onItemClick(AdapterView<?> a, View v, int position, long id) {           Object o = lv1.getItemAtPosition(position);           SearchResults fullObject = (SearchResults)o;           Toast.makeText(ListViewBlogPost.this, "You have chosen: " + " " + fullObject.getName(), Toast.LENGTH_LONG).show();          }          });     }         private ArrayList<SearchResults> GetSearchResults(){      ArrayList<SearchResults> results = new ArrayList<SearchResults>();            SearchResults sr1 = new SearchResults();      sr1.setName("John Smith");      sr1.setCityState("Dallas, TX");      sr1.setPhone("214-555-1234");      results.add(sr1);            sr1 = new SearchResults();      sr1.setName("Jane Doe");      sr1.setCityState("Atlanta, GA");      sr1.setPhone("469-555-2587");      results.add(sr1);            sr1 = new SearchResults();      sr1.setName("Steve Young");      sr1.setCityState("Miami, FL");      sr1.setPhone("305-555-7895");      results.add(sr1);            sr1 = new SearchResults();      sr1.setName("Fred Jones");      sr1.setCityState("Las Vegas, NV");      sr1.setPhone("612-555-8214");      results.add(sr1);            return results;     } } Notice that we first get an ArrayList of SearchResults objects (normally this would be from an external data source...), pass it to the custom adapter, then set up a click listener. The listener gets the item that was clicked, converts it back to a SearchResults object, and does whatever it needs to do. 
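Two quick notes on the code above. The Toast call references ListViewBlogPost.this, which only compiles if that happens to be your activity's name; with the activity named CustomListView as in this example it should read CustomListView.this. Also, making searchArrayList static in the adapter means every adapter instance shares the last list passed in, so an ordinary instance field is safer if you ever show more than one of these lists. Finally, if the results can change after the first load, the ListView won't notice on its own; a minimal sketch of one way to refresh it is below (refreshResults() is a hypothetical helper, and it assumes lv1 and searchResults have been promoted to fields of the activity):

    // Inside CustomListView, assuming lv1 and searchResults are now fields of the activity.
    // refreshResults() is a made-up helper name used only for this sketch.
    private void refreshResults() {
        searchResults.clear();                      // reuse the list the adapter already holds
        searchResults.addAll(GetSearchResults());   // e.g. re-run the search against your data source
        // tell the adapter its data changed so the visible rows are redrawn
        ((MyCustomBaseAdapter) lv1.getAdapter()).notifyDataSetChanged();
    }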
Fire it up in the emulator, and you should wind up with something like this:

    Read the article

  • CLR Version issues with CorBindToRuntimeEx

    - by Rick Strahl
    I’m working on an older FoxPro application that’s using .NET Interop and this app loads its own copy of the .NET runtime through some of our own tools (wwDotNetBridge). This all works fine and it’s fairly straightforward to load and host the runtime and then make calls against it. I’m writing this up for myself mostly because I’ve been bitten by these issues repeatedly and spend 15 minutes each However, things get tricky when calling specific versions of the .NET runtime since .NET 4.0 has shipped. Basically we need to be able to support both .NET 2.0 and 4.0 and we’re currently doing it with the same assembly – a .NET 2.0 assembly that is the AppDomain entry point. This works as .NET 4.0 can easily host .NET 2.0 assemblies and the functionality in the 2.0 assembly provides all the features we need to call .NET 4.0 assemblies via Reflection. In wwDotnetBridge we provide a load flag that allows specification of the runtime version to use. Something like this: do wwDotNetBridge LOCAL loBridge as wwDotNetBridge loBridge = CreateObject("wwDotNetBridge","v4.0.30319") and this works just fine in most cases.  If I specify V4 internally that gets fixed up to a whole version number like “v4.0.30319” which is then actually used to host the .NET runtime. Specifically the ClrVersion setting is handled in this Win32 DLL code that handles loading the runtime for me: /// Starts up the CLR and creates a Default AppDomain DWORD WINAPI ClrLoad(char *ErrorMessage, DWORD *dwErrorSize) { if (spDefAppDomain) return 1; //Retrieve a pointer to the ICorRuntimeHost interface HRESULT hr = CorBindToRuntimeEx( ClrVersion, //Retrieve latest version by default L"wks", //Request a WorkStation build of the CLR STARTUP_LOADER_OPTIMIZATION_MULTI_DOMAIN | STARTUP_CONCURRENT_GC, CLSID_CorRuntimeHost, IID_ICorRuntimeHost, (void**)&spRuntimeHost ); if (FAILED(hr)) { *dwErrorSize = SetError(hr,ErrorMessage); return hr; } //Start the CLR hr = spRuntimeHost->Start(); if (FAILED(hr)) return hr; CComPtr<IUnknown> pUnk; WCHAR domainId[50]; swprintf(domainId,L"%s_%i",L"wwDotNetBridge",GetTickCount()); hr = spRuntimeHost->CreateDomain(domainId,NULL,&pUnk); hr = pUnk->QueryInterface(&spDefAppDomain.p); if (FAILED(hr)) return hr; return 1; } CorBindToRuntimeEx allows for a specific .NET version string to be supplied which is what I’m doing via an API call from the FoxPro code. The behavior of CorBindToRuntimeEx is a bit finicky however. The documentation states that NULL should load the latest version of the .NET runtime available on the machine – but it actually doesn’t. As far as I can see – regardless of runtime overrides even in the .config file – NULL will always load .NET 2.0 even if 4.0 is installed. <supportedRuntime> .config File Settings Things get even more unpredictable once you start adding runtime overrides into the application’s .config file. In my scenario working inside of Visual FoxPro this would be VFP9.exe.config in the FoxPro installation folder (not the current folder). If I have a specific runtime override in the .config file like this: <?xml version="1.0"?> <configuration> <startup> <supportedRuntime version="v2.0.50727" /> </startup> </configuration> Not surprisingly with this I can load a .NET 2.0  runtime, but I will not be able to load Version 4.0 of the .NET runtime even if I explicitly specify it in my call to ClrLoad. Worse I don’t get an error – it will just go ahead and hand me a V2 version of the runtime and assume that’s what I wanted. Yuck! 
However, if I set the supported runtime to V4 in the .config file: <?xml version="1.0"?> <configuration> <startup> <supportedRuntime version="v4.0.30319" /> </startup> </configuration> Then I can load both V4 and V2 of the runtime. Specifying NULL however will STILL only give me V2 of the runtime. Again this seems pretty inconsistent. If you’re hosting runtimes make sure you check which version of the runtime is actually loading first to ensure you get the one you’re looking for. If the wrong version loads – say 2.0 and you want 4.0 - and you then proceed to load 4.0 assemblies they will all fail to load due to version mismatches. This is how all of this started – I had a bunch of assemblies that weren’t loading and it took a while to figure out that the host was running the wrong version of the CLR and therefore caused the assemblies loading to fail. Arrggh! <supportedRuntime> and Debugger Version <supportedRuntime> also affects the use of the .NET debugger when attached to the target application. Whichever runtime is specified in the key is the version of the debugger that fires up. This can have some interesting side effects. If you load a .NET 2.0 assembly but <supportedRuntime> points at V4.0 (or vice versa) the debugger will never fire because it can only debug in the appropriate runtime version. This has bitten me on several occasions where code runs just fine but the debugger will just breeze by breakpoints without notice. The default version for the debugger is the latest version installed on the system if <supportedRuntime> is not set. Summary Besides all the hassels, I’m thankful I can build a .NET 2.0 assembly and have it host .NET 4.0 and call .NET 4.0 code. This way we’re able to ship a single assembly that provides functionality that supports both .NET 2 and 4 without having to have separate DLLs for both which would be a deployment and update nightmare. The MSDN documentation does point at newer hosting API’s specifically for .NET 4.0 which are way more complicated and even less documented but that doesn’t help here because the runtime needs to be able to host both .NET 4.0 and 2.0. Not pleased about that – the new APIs look way more complex and of course they’re not available with older versions of the runtime installed which in our case makes them useless to me in this scenario where I have to support .NET 2.0 hosting (to provide greater ‘built-in’ platform support). Once you know the behavior above, it’s manageable. However, it’s quite easy to get tripped up here because there are multiple combinations that can really screw up behaviors.© Rick Strahl, West Wind Technologies, 2005-2011Posted in .NET  FoxPro  

    Read the article

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
    What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on… these questions keep coming up and all try to fundamentally explain what “cloud” means relative to other concepts. At the risk of over simplification, answering these questions becomes simpler once you understand the primary foundations of cloud computing: Elasticity and Availability.   Elasticity The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security…).  This also implies that application components can grow independently from each other. So if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring or changing the other tiers. Basically, cloud applications behave like a sponge; when you add water to a sponge, it grows in size; in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you in making your applications elastic; neither will Virtual Machines. The smallest elasticity unit of an IaaS provider and a Virtual Machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough. The application has yet to use this hardware.  If the process of adding computing resources is not transparent to the application, the application is not elastic.   As you can see from the above description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.   Availability The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff. On top of it, it's expensive. Many companies are required to run multiple data centers due to high availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in a case of a failover. Other organizations are able to achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex. In fact it is so complex that many companies establish yearly tests to validate failover procedures.   To a certain degree certain IaaS provides can assist with complex disaster recovery planning and setting up data centers that can achieve successful failover. However the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing on the other hand removes most of the disaster recovery requirements by hiding many of the underlying complexities.   Cloud Providers A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premise. 
For example, Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two on a screen (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which allows you to scale the database tier seamlessly and automatically). Other cloud providers offer certain features that are not available on-premise as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores and is somewhat equivalent to the Microsoft Azure Table offering (offering a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions, or running multiple data centers. Some cloud providers will give you more control (which puts some of that burden back on the customers' shoulders) and others will tend to make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult to achieve (and very costly) on premise. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others. Not for Everyone Note however that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional host provider unless you also need high availability. Leveraging the technologies of cloud providers can be difficult and can become a journey for companies that build their solutions in a scale up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale or you do not need high availability, then cloud computing may not be for you. In fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone. Consistent Customer Experience, Predictable Cost With all its complexities, buzz and foggy definition, cloud computing boils down to a simple objective: consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to the first, while keeping your operating costs directly proportional to the number of customers you have. Making your application elastic and highly available across all its tiers, with as much automation as possible, achieves the first objective of a consistent customer experience. And the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost containment objective. Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (Apress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focused on helping customers understand and adopt cloud computing technologies. For more information contact herve at hroggero @ bluesyntax.net.

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me. Background While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I’d been working on a data access layer at work to call a market data web service.  This web service would take a list of symbols to quote and would return back the quote data.  To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data.  Not quite long enough to where customers will typically notice a quote go “stale”, but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time. A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information.  The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage. First Impressions The main thing I’ve always loved about the ANTS tools is their ease of use.  Pretty much everything is right there in front of you in a way that makes it easy for you to find what you need with little digging required.  I’ve worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind boggling experience to figure out how to do simple tasks. Not so with AMP.  The opening dialog is very straightforward.  You can choose from here whether to debug an executable, a web application (either in IIS or from VS’s web development server), windows services, etc. So I chose a .NET Executable and navigated to the build location of my test harness.  Then began profiling. At this point while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections.  At any given point in time, you can take snapshots (to compare states) zoom in, or choose to stop at any time.  Snapshots Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes.  Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer.  I once3 made the mistake once of caching data for 30 minutes and found it didn’t get collected very quick after I released my reference because it had been promoted to Gen 2 – doh! Analysis It looks like (from the second pie chart) that the majority of the allocations were in the string class.  
This also is expected for me because the majority of the memory allocated is in the web service responses, so it doesn't seem that the entities I'm adapting to (to prevent being too tightly coupled to the web service proxy classes, which can change easily out from under me) are taking a significant portion of memory. I also appreciate that they have clear summary text in key places such as "No issues with large object heap fragmentation were detected". For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom for "What to look for on the summary" which loads a web page of help on key points to look for. Clicking over to the session overview, it's easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same. Looking at my snapshots, I'm pretty happy with the fact that memory allocation and heap size seem to be fairly stable and in control: Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends. Back on the analysis tab, we can go to the [Class List] button to get an idea of what classes are making up the majority of our memory usage. As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second. I was curious about this, so I selected it and went into the [Instance Categorizer]. This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph, and discovered that these were being held by the System.ServiceModel.ChannelFactoryRefCache and I was satisfied this was just an artifact of my WCF proxy. I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find. For example, if I suspected a memory leak, I might try to filter for survivors in growing classes. This shows, for instances of a class that are growing in memory (more are being created than cleaned up), which ones are survivors of garbage collection (not collected). This might allow me to drill down and find places where I'm holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the "Instance Retention Graph" which creates a graph showing what references are being held in memory and who is holding onto them. Visual Studio Integration Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you'll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio. Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in. It doesn't integrate with the Visual Studio windows themselves (like the VS profiler does), but still the plethora of information it provides and the clear and concise manner in which it presents it makes it well worth it. Summary If you like the ANTS series of tools, you shouldn't be disappointed with the ANTS Memory Profiler.
It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I've used other profilers before that came with 3-inch thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all. It's built for your everyday developer to get in and find their problems quickly, and I like that!
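As an aside, the 5-second quote cache described at the top of this post is a simple pattern worth having in your toolbox. The production code is .NET, but the idea is language-neutral; here is an illustrative sketch in Java with invented names (QuoteCache, QuoteService), not the actual implementation:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative only: a tiny time-based cache in front of a slow quote lookup.
    public class QuoteCache {
        private static final long TTL_MILLIS = 5000;   // 5-second lifetime, as in the post

        public interface QuoteService {
            String fetchQuote(String symbol);           // stands in for the real web service call
        }

        private static class Entry {
            final String quote;
            final long fetchedAt;
            Entry(String quote, long fetchedAt) { this.quote = quote; this.fetchedAt = fetchedAt; }
        }

        private final Map<String, Entry> cache = new ConcurrentHashMap<String, Entry>();
        private final QuoteService service;

        public QuoteCache(QuoteService service) { this.service = service; }

        public String getQuote(String symbol) {
            long now = System.currentTimeMillis();
            Entry entry = cache.get(symbol);
            if (entry == null || now - entry.fetchedAt > TTL_MILLIS) {
                // miss or stale: hit the service once and remember the result for the next 5 seconds
                entry = new Entry(service.fetchQuote(symbol), now);
                cache.put(symbol, entry);
            }
            return entry.quote;
        }
    }

A production version would also collapse concurrent misses for the same symbol, so that two threads asking at the same instant result in a single web service call rather than two.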

    Read the article

  • People, Process & Engagement: WebCenter Partner Keste

    - by Michael Snow
    Within the WebCenter group here at Oracle, discussions about people, process and engagement cross over many vertical industries and products. Amidst our growing partner ecosystem, the community provides us insight into great customer use cases every day. Such is the case with our partner, Keste, who provides us a guest post on our blog today with an overview of their innovative solution for a customer in the transportation industry. Keste is an Oracle software solutions and development company headquartered in Dallas, Texas. As a Platinum member of the Oracle® PartnerNetwork, Keste designs, develops and deploys custom solutions that automate complex business processes. Seamless Customer Self-Service Experience in the Trucking Industry with Oracle WebCenter Portal  Keste, Oracle Platinum Partner Customer Overview Omnitracs, Inc., a Qualcomm company provides mobility solutions for trucking fleets to companies in the transportation industry. Omnitracs’ mobility services include basic communications such as text as well as advanced monitoring services such as GPS tracking, temperature tracking of perishable goods, load tracking and weighting distribution, and many others. Customer Business Needs Already the leading provider of mobility solutions for large trucking fleets, they chose to target smaller trucking fleets as new customers. However their existing high-touch customer support method would not be a cost effective or scalable method to manage and service these smaller customers. Omnitracs needed to provide several self-service features to make customer support more scalable while keeping customer satisfaction levels high and the costs manageable. The solution also had to be very intuitive and easy to use. The systems that Omnitracs sells to these trucking customers require professional installation and smaller customers need to track and schedule the installation. Information captured in Oracle eBusiness Suite needed to be readily available for new customers to track these purchases and delivery details. Omnitracs wanted a high impact User Interface to significantly improve customer experience with the ability to integrate with EBS, provisioning systems as well as CRM systems that were already implemented. Omnitracs also wanted to build an architecture platform that could potentially be extended to other Portals. Omnitracs’ stated goal was to deliver an “eBay-like” or “Amazon-like” experience for all of their customers so that they could reach a much broader market beyond their large company customer base. Solution Overview In order to manage the increased complexity, the growing support needs of global customers and improve overall product time-to-market in a cost-effective manner, IT began to deliver a self-service model. This self service model not only transformed numerous business processes but is also allowing the business to keep up with the growing demands of the (internal and external) customers. This solution was a customer service Portal that provided self service capabilities for large and small customers alike for Activation of mobility products, managing add-on applications for the devices (much like the Apple App Store), transferring services when trucks are sold to other companies as well as deactivation all without the involvement of a call service agent or sending multiple emails to different Omnitracs contacts.
This is a conceptual view of the Customer Portal showing the details of the components that make up the solution. 12.00 The portal application for transactions was entirely built using ADF 11g R2. Omnitracs’ business had a pressing requirement to have a portal available 24/7 for its customers. Since there were interactions with EBS in the back-end, the downtimes on the EBS would negate this availability. Omnitracs devised a decoupling strategy at the database side for the EBS data. The decoupling of the database was done using Oracle Data Guard and completely insulated the solution from any eBusiness Suite down time. The customer has no knowledge whether eBS is running or not. Here are two sample screenshots of the portal application built in Oracle ADF. Customer Benefits The Customer Portal not only provided the scalability to grow the business but also provided the seamless integration with other disparate applications. Some of the key benefits are: Improved Customer Experience: With a modern look and feel and a Portal that has the aspects of an App Store, the customer experience was significantly improved. Page response times went from several seconds to sub-second for all of the pages. Enabled new product launches: After successfully dominating the large fleet market, Omnitracs now has a scalable solution to sell and manage smaller fleet customers giving them a huge advantage over their nearest competitors. Dozens of new customers have been acquired via this portal through an onboarding process that now takes minutes Seamless Integrations Improves Customer Support: ADF 11gR2 allowed Omnitracs to bring a diverse list of applications into one integrated solution. This provided a seamless experience for customers to route them from Marketing focused application to a customer-oriented portal. Internally, it also allowed Sales Representatives to have an integrated flow for taking a prospect through the various steps to onboard them as a customer. Key integrations included: Unity Core Salesforce.com Merchant e-Solution for credit card Custom Omnitracs Applications like CUPS and AUTO Security utilizing OID and OVD Back end integration with EBS (Data Guard) and iQ Database Business Impact Significant business impacts were realized through the launch of customer portal. It not only allows the business to push through in underserved segments, but also reduces the time it needs to spend on customer support—allowing the business to focus more on sales and identifying the market for new products. Some of the Immediate Benefits are The entire onboarding process is now completely automated and now completes in minutes. This represents an 85% productivity improvement over their previous processes. And it was 160 times faster! With the success of this self-service solution, the business is now targeting about 3X customer growth in the next five years. This represents a tripling of their overall customer base and significant downstream revenue for the ongoing services. 90%+ improvement of customer onboarding and management process by utilizing, single sign on integration using OID/OAM solution, performance improvements and new self-service functionality Unified login for all Customers, Partners and Internal Users enables login to a common portal and seamless access to all other integrated applications targeted at the respective audience Significantly improved customer experience with a better look and feel with a more user experience focused Portal screens. 
Helped sales of the new product by having an easy way of ordering and activating the product. Data Guard helped increase availability of the Portal to 99%+ and make it independent of EBS downtime. This gave customers the feel of high availability of the portal application. Some of the anticipated longer term Benefits are: Platform that can be leveraged to launch any new product introduction and enable all product teams to reach new customers and new markets Easy integration with content management to allow business owners more control of the product catalog Overall reduced TCO with standardization of the Oracle platform Managed IT support cost savings through optimization of technology skills needed to support and modify this solution

    Read the article

  • Java collision detection and player movement: tips

    - by Loris
    I have read a short guide for game develompent (java, without external libraries). I'm facing with collision detection and player (and bullets) movements. Now i put the code. Most of it is taken from the guide (should i link this guide?). I'm just trying to expand and complete it. This is the class that take care of updates movements and firing mechanism (and collision detection): public class ArenaController { private Arena arena; /** selected cell for movement */ private float targetX, targetY; /** true if droid is moving */ private boolean moving = false; /** true if droid is shooting to enemy */ private boolean shooting = false; private DroidController droidController; public ArenaController(Arena arena) { this.arena = arena; this.droidController = new DroidController(arena); } public void update(float delta) { Droid droid = arena.getDroid(); //droid movements if (moving) { droidController.moveDroid(delta, targetX, targetY); //check if arrived if (droid.getX() == targetX && droid.getY() == targetY) moving = false; } //firing mechanism if(shooting) { //stop shot if there aren't bullets if(arena.getBullets().isEmpty()) { shooting = false; } for(int i = 0; i < arena.getBullets().size(); i++) { //current bullet Bullet bullet = arena.getBullets().get(i); System.out.println(bullet.getBounds()); //angle calculation double angle = Math.atan2(bullet.getEnemyY() - bullet.getY(), bullet.getEnemyX() - bullet.getX()); //increments x and y bullet.setX((float) (bullet.getX() + (Math.cos(angle) * bullet.getSpeed() * delta))); bullet.setY((float) (bullet.getY() + (Math.sin(angle) * bullet.getSpeed() * delta))); //collision with obstacles for(int j = 0; j < arena.getObstacles().size(); j++) { Obstacle obs = arena.getObstacles().get(j); if(bullet.getBounds().intersects(obs.getBounds())) { System.out.println("Collision detect!"); arena.removeBullet(bullet); } } //collisions with enemies for(int j = 0; j < arena.getEnemies().size(); j++) { Enemy ene = arena.getEnemies().get(j); if(bullet.getBounds().intersects(ene.getBounds())) { System.out.println("Collision detect!"); arena.removeBullet(bullet); } } } } } public boolean onClick(int x, int y) { //click on empty cell if(arena.getGrid()[(int)(y / Arena.TILE)][(int)(x / Arena.TILE)] == null) { //coordinates targetX = x / Arena.TILE; targetY = y / Arena.TILE; //enables movement moving = true; return true; } //click on enemy: fire if(arena.getGrid()[(int)(y / Arena.TILE)][(int)(x / Arena.TILE)] instanceof Enemy) { //coordinates float enemyX = x / Arena.TILE; float enemyY = y / Arena.TILE; //new bullet Bullet bullet = new Bullet(); //start coordinates bullet.setX(arena.getDroid().getX()); bullet.setY(arena.getDroid().getY()); //end coordinates (enemie) bullet.setEnemyX(enemyX); bullet.setEnemyY(enemyY); //adds bullet to arena arena.addBullet(bullet); //enables shooting shooting = true; return true; } return false; } As you can see for collision detection i'm trying to use Rectangle object. 
Droid example: import java.awt.geom.Rectangle2D; public class Droid { private float x; private float y; private float speed = 20f; private float rotation = 0f; private float damage = 2f; public static final int DIAMETER = 32; private Rectangle2D rectangle; public Droid() { rectangle = new Rectangle2D.Float(x, y, DIAMETER, DIAMETER); } public float getX() { return x; } public void setX(float x) { this.x = x; //rectangle update rectangle.setRect(x, y, DIAMETER, DIAMETER); } public float getY() { return y; } public void setY(float y) { this.y = y; //rectangle update rectangle.setRect(x, y, DIAMETER, DIAMETER); } public float getSpeed() { return speed; } public void setSpeed(float speed) { this.speed = speed; } public float getRotation() { return rotation; } public void setRotation(float rotation) { this.rotation = rotation; } public float getDamage() { return damage; } public void setDamage(float damage) { this.damage = damage; } public Rectangle2D getRectangle() { return rectangle; } } For now, if i start the application and i try to shot to an enemy, is immediately detected a collision and the bullet is removed! Can you help me with this? If the bullet hit an enemy or an obstacle in his way, it must disappear. Ps: i know that the movements of the bullets should be managed in another class. This code is temporary. update I realized what happens, but not why. With those for loops (which checks collisions) the movements of the bullets are instantaneous instead of gradual. In addition to this, if i add the collision detection to the Droid, the method intersects returns true ALWAYS while the droid is moving! public void moveDroid(float delta, float x, float y) { Droid droid = arena.getDroid(); int bearing = 1; if (droid.getX() > x) { bearing = -1; } if (droid.getX() != x) { droid.setX(droid.getX() + bearing * droid.getSpeed() * delta); //obstacles collision detection for(Obstacle obs : arena.getObstacles()) { if(obs.getRectangle().intersects(droid.getRectangle())) { System.out.println("Collision detected"); //ALWAYS HERE } } //controlla se è arrivato if ((droid.getX() < x && bearing == -1) || (droid.getX() > x && bearing == 1)) droid.setX(x); } bearing = 1; if (droid.getY() > y) { bearing = -1; } if (droid.getY() != y) { droid.setY(droid.getY() + bearing * droid.getSpeed() * delta); if ((droid.getY() < y && bearing == -1) || (droid.getY() > y && bearing == 1)) droid.setY(y); } }
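Two things in the code above could explain both symptoms. First, the units are mixed: onClick divides the click by Arena.TILE, so targetX, the droid position and the bullet start/end positions are tile indices, while the rectangles are DIAMETER (32) wide and the obstacle/enemy bounds are presumably in pixels; with positions that small, almost everything overlaps near the origin, so intersects() fires immediately, and speed * delta (pixels per second) crosses the whole "distance" in a frame or two, which is why the movement looks instantaneous. Keeping every position in pixels and converting the click once is the usual fix. Second, removing bullets from arena.getBullets() while index-looping over the same list skips elements; an Iterator is safer. A sketch of both ideas follows; it assumes Arena.TILE is the tile size in pixels and that getBounds()/getRectangle() return pixel-space rectangles, so adjust it to your actual classes:

    // Sketch only: keep gameplay coordinates in pixels and remove bullets with an Iterator.
    // (Needs java.util.Iterator; moveDroid would treat targetX/targetY as pixels too.)
    public boolean onClick(int x, int y) {
        int col = x / Arena.TILE;
        int row = y / Arena.TILE;
        if (arena.getGrid()[row][col] == null) {
            targetX = col * Arena.TILE;   // pixel position of the clicked cell, not the cell index
            targetY = row * Arena.TILE;
            moving = true;
            return true;
        }
        return false;                     // enemy/firing branch omitted for brevity
    }

    private void updateBullets(float delta) {
        Iterator<Bullet> it = arena.getBullets().iterator();
        while (it.hasNext()) {
            Bullet bullet = it.next();
            double angle = Math.atan2(bullet.getEnemyY() - bullet.getY(),
                                      bullet.getEnemyX() - bullet.getX());
            bullet.setX((float) (bullet.getX() + Math.cos(angle) * bullet.getSpeed() * delta));
            bullet.setY((float) (bullet.getY() + Math.sin(angle) * bullet.getSpeed() * delta));
            for (Obstacle obs : arena.getObstacles()) {
                if (bullet.getBounds().intersects(obs.getBounds())) {
                    it.remove();          // safe removal while iterating
                    break;
                }
            }
        }
    }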

    Read the article

  • Are Chromebooks the New Netbooks, and What Does That Mean?

    - by Chris Hoffman
    Netbooks — small, cheap, slow laptops — were once very popular. They fell out of favor — people bought them because they seemed cheap and portable, but the actual experience was lackluster. Most netbooks now sit unused. Windows netbooks have vanished from stores today, but there’s a new super-cheap laptop — the Chromebook. Chromebook sales numbers are impressive, but their usage statistics tell a different story. Are Chromebooks just the new netbook? The Problem With Netbooks Netbooks seemed appealing, especially in an age before tablets and lightweight ultrabooks. You could buy a netbook for $200 or so and have a portable device that let you get on the Internet. The name “netbook” spelled that out — it was a portable device for getting on the ‘net. They weren’t really that great. The original netbook was a lightweight Asus Eee PC that ran Linux alone and had a small amount of fast flash storage. Netbooks eventually ran heavier Windows XP operating systems — Windows Vista was out, but it was just too bloated to run on netbooks. Manufacturers added slow magnetic hard drives, bloatware, and even DVD drives! They couldn’t run most Windows software very well. The build quality was poor and their keyboards were tiny and cramped. People liked the idea of a lightweight device that let them get on the Internet and loved the cheap price, but the actual experience wasn’t great. Chromebook Sales Chromebook sales numbers seem surprisingly high. NPD reported that Chromebooks were 21% of all notebooks sold in the US in 2013. If you combine laptop and tablet sales into a single statistic, Chromebooks were 9.6% of all those devices sold. That’s 2/3 as many Chromebooks sold as iPads in the US! Of Amazon’s best-selling laptop computers, two of the top three are Chromebooks. These definitely look like successful products. Unlike netbooks, Chromebooks are taking off in a big way in the education market. Many schools are buying Chromebooks for their students instead of more expensive Windows laptops. They’re easier to manage and lock down than Windows laptops, but — more importantly for cash-strapped schools — they’re very cheap. Netbooks never had this sort of momentum in schools. Chromebook Usage Statistics Here’s where the rosy picture of Chromebooks starts to become more realistic. StatCounter’s browser usage statistics show how widely used different operating systems are. For example, Windows 7 has the highest share with 35.71% of web activity in April, 2014. The chart doesn’t even show Chrome OS at all, although there is an “Other” number near the bottom. Click the Download Data link to download a CSV file and we can view more detailed information. Chrome OS only accounted for 0.38% of web usage in April, 2014. Desktop Linux, which people often shrug at, accounted for 1.52% in the same month. To its credit, Chrome OS usage has increased. Chromebooks were widely mocked back in November, 2013 when the sales numbers came out. After all, they only accounted for 0.11% of web usage globally in November, 2013! But Chrome OS numbers have been improving: Nov, 2013: 0.11% Dec, 2013: 0.22% Jan, 2014: 0.31% Feb, 2014: 0.35% Mar, 2014: 0.36% Apr, 2014: 0.38% Chrome OS is climbing, but it’s definitely still in the “Other” category. It isn’t as high as we’d expect to see it with those types of sales numbers. Chromebooks vs. Netbooks Chromebooks are more limited devices than traditional PCs. You can do quite a few things, but you have to do it all using Chrome or Chrome apps. 
Most people won’t be enabling developer mode and installing a Linux desktop. You don’t have access to the powerful desktop software available for Windows and even Mac OS X. On the other hand, these Chromebooks are less compromised than netbooks in many ways. They come with a lightweight operating system designed for portable, mobile devices. They don’t come packed with any bloatware, like the bloatware you’ll find on competing Windows PCs and the original netbooks. They’re cheaper because the manufacturer doesn’t have to pay for a Windows license. There’s no need for antivirus software weighing the operating system down. They’re larger than the original netbooks, with many of them being 11.6-inches instead of the original 8-inch bodies many older netbooks came with. They have larger, more comfortable keyboards and fast solid-state storage. Really, Chromebooks are what netbooks wanted to be. People didn’t buy netbooks to use typical Windows software — they just wanted a lightweight PC. Of course, for many people, the real successor to netbooks is tablets. If all you want is a portable device to throw in a bag so you can get online, maybe a tablet is better. Where Does This Leave Chromebooks? So, are Chromebooks the new netbooks? It’s a bit early to answer that question. Chromebooks are definitely not out of the competition — their sales look good and their usage share is increasing. On the other hand, Chrome OS is still pretty far behind. They’re not catching fire like tablets did. Maybe netbooks were just before their time and Chromebooks were what they were always meant to be. Just as Microsoft’s Windows XP tablets failed, Windows XP netbooks also failed. Tablets took off with a more refined operating system on better hardware years later. “Netbooks” — or Chromebooks — are now taking off with a more purpose-built operating system on better hardware, too. It’s hard to count Chromebooks out because they provide a much better experience than netbooks ever did. If you’re one of the people who wants to use old Windows desktop apps on your portable laptop, you may think netbooks were better — but most people don’t want that. But maybe people either want a full desktop PC experience or a full mobile tablet experience. Is there a place for a laptop with a keyboard that can only view websites? We’ll have to wait and see. Image Credit: Kevin Jarret on Flickr, Clive Darra on Flickr, Sean Freese on Flickr

    Read the article

  • How to ensure custom serverListener events fires before action events

    - by frank.nimphius
    Using JavaScript in ADF Faces you can queue custom events defined by an af:serverListener tag. If, however, the custom event is queued from an af:clientListener on a command component, then the command component's action and action listener methods fire before the queued custom event. If you have a use case, for example in combination with client-side integration of 3rd party technologies like HTML, Applets or similar, then you want to change the order of execution. The way to change the execution order is to invoke the command item action from the client event method that handles the custom event propagated by the af:serverListener tag. The following four steps ensure you do this successfully: 1. Call cancel() on the event object passed to the client JavaScript function invoked by the af:clientListener tag. 2. Queue the custom event as an immediate event by setting the last argument in the custom event call to true: function invokeCustomEvent(evt){ evt.cancel(); var custEvent = new AdfCustomEvent( evt.getSource(), "mycustomevent", {message:"Hello World"}, true); custEvent.queue(); } 3. When handling the custom event on the server, look up the command item, for example a button, to queue its action event. This way you simulate a user clicking the button. Use the following code: ActionEvent event = new ActionEvent(component); event.setPhaseId(PhaseId.INVOKE_APPLICATION); event.queue(); The component reference needs to be replaced with the handle to the command item whose action method you want to execute. 4. If the command component has behavior tags, like af:fileDownloadActionListener or af:setPropertyListener, defined, then these are also executed when the action event is queued. However, behavior tags like the file download action listener may require a full page refresh to work, in which case the custom event cannot be issued as a partial submit. File download action tag: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_fileDownloadActionListener.html "Since file downloads must be processed with an ordinary request - not XMLHttp AJAX requests - this tag forces partialSubmit to be false on the parent component, if it supports that attribute."
To issue a custom event as a non-partial submit, the previously shown sample code would need to be changed as shown below: function invokeCustomEvent(evt){ evt.cancel(); var custEvent = new AdfCustomEvent( evt.getSource(), "mycustomevent", {message:"Hello World"}, true); custEvent.queue(false); } To learn more about custom events and the af:serverListener, please refer to the tag documentation: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_serverListener.html
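To make step 3 more concrete, below is a minimal Java sketch of a managed bean method that could back the af:serverListener and queue the command component's action event. It is an illustration rather than code from the original post: the class and method names and the component id "cb1" are assumptions, and the lookup is performed relative to the component that raised the custom event.

    import javax.faces.component.UIComponent;
    import javax.faces.event.ActionEvent;
    import javax.faces.event.PhaseId;
    import oracle.adf.view.rich.render.ClientEvent;

    public class CustomEventHandlerBean {

        // Hypothetical handler referenced by the af:serverListener "method" attribute,
        // e.g. method="#{customEventHandlerBean.handleCustomEvent}".
        public void handleCustomEvent(ClientEvent clientEvent) {
            // Payload sent from JavaScript: {message:"Hello World"} (not used further in this sketch)
            String message = (String) clientEvent.getParameters().get("message");

            // Look up the command component; the id "cb1" is an assumption.
            UIComponent button = clientEvent.getComponent().findComponent("cb1");

            // Queue the button's action event so its action/actionListener run
            // after this custom event has been handled.
            ActionEvent actionEvent = new ActionEvent(button);
            actionEvent.setPhaseId(PhaseId.INVOKE_APPLICATION);
            actionEvent.queue();
        }
    }

With something like this in place, the custom event queued by custEvent.queue() is handled first, and the simulated button click then drives the action method and any behavior tags as described above.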

    Read the article

  • Neo4J and Azure and VS2012 and Windows 8

    - by Chris Skardon
    Now, I know that this has been written about, but both of the main places (http://www.richard-banks.org/2011/02/running-neo4j-on-azure.html and http://blog.neo4j.org/2011/02/announcing-neo4j-on-windows-azure.html) utilise VS2010, and well, I’m on VS2012 and Windows 8. Not that I think Win 8 had anything to do with it really, anyhews! I’m going to begin from the beginning, this is my first foray into running something on Azure, so it’s been a bit of a learning curve. But luckily the Neo4J guys have got us started, so let’s download the VS2010 solution: http://neo4j.org/get?file=Neo4j.Azure.Server.zip OK, the other thing we’ll need is the VS2012 Azure SDK, so let’s get that as well: http://www.windowsazure.com/en-us/develop/downloads/ (I just did the full install). Now, unzip the VS2010 solution and let’s open it in VS2012: <your location>\Neo4j.Azure.Server\Neo4j.Azure.Server.sln One-way-upgrade? Yer! Ignore the migration report – we don’t care! Let’s build that sucker… Ahhh 14 errors… WindowsAzure does not exist in the namespace ‘Microsoft’ Not a problem right? We’ve installed the SDK, just need to update the references: We can ignore the Test projects, they don’t use Azure, we’re interested in the other projects, so what we’ll do is remove the broken references, and add the correct ones, so expand the references bit of each project: hunt out those yellow exclamation marks, and delete them! You’ll need to add the right ones back in (listed below), when you go to the ‘Add Reference’ dialog make sure you have ‘Assemblies’ and ‘Framework’ selected before you seach (and search for ‘microsoft.win’ to narrow it down) So the references you need for each project are: CollectDiagnosticsData Microsoft.WindowsAzure.Diagnostics Microsoft.WindowsAzure.StorageClient Diversify.WindowsAzure.ServiceRuntime Microsoft.WindowsAzure.CloudDrive Microsoft.WindowsAzure.ServiceRuntime Microsoft.WindowsAzure.StorageClient Right, so let’s build again… Sweet! No errors.   Now we need to setup our Blobs, I’m assuming you are using the most up-to-date Java you happened to have downloaded :) in my case that’s JRE7, and that is located in: C:\Program Files (x86)\Java\jre7 So, zip up that folder into whatever you want to call it, I went with jre7.zip, and stuck it in a temp folder for now. In that same temp folder I also copied the neo4j zip I was using: neo4j-community-1.7.2-windows.zip OK, now, we need to get these into our Blob storage, this is where a lot of stuff becomes unstuck - I didn’t find any applications that helped me use the blob storage, one would crash (because my internet speed is so slow) and the other just didn’t work – sure it looked like it had worked, but when push came to shove it didn’t. So this is how I got my files into Blob (local first): 1. Run the ‘Storage Emulator’ (just search for that in the start menu) 2. That takes a little while to start up so fire up another instance of Visual Studio in the mean time, and create a new Console Application. 3. 
Manage Nuget Packages for that solution and add ‘Windows Azure Storage’ Now you’re set up to add the code: public static void Main() { CloudStorageAccount cloudStorageAccount = CloudStorageAccount.DevelopmentStorageAccount; CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient(); client.Timeout = TimeSpan.FromMinutes(30); CloudBlobContainer container = client.GetContainerReference("neo4j"); //This will create it as well   UploadBlob(container, "jre7.zip", "c:\\temp\\jre7.zip"); UploadBlob(container, "neo4j-community-1.7.2-windows.zip", "c:\\temp\\neo4j-community-1.7.2-windows.zip"); }   private static void UploadBlob(CloudBlobContainer container, string blobName, string filename) { CloudBlob blob = container.GetBlobReference(blobName);   using (FileStream fileStream = File.OpenRead(filename)) blob.UploadFromStream(fileStream); } This will upload the files to your local storage account (to switch to an Azure one, you’ll need to create a storage account, and use those credentials when you make your CloudStorageAccount above) To test you’ve got them uploaded correctly, go to: http://localhost:10000/devstoreaccount1/neo4j/jre7.zip and you will hopefully download the zip file you just uploaded. Now that those files are there, we are ready for some final configuration… Right click on the Neo4jServerHost role in the Neo4j.Azure.Server cloud project: Click on the ‘Settings’ tab and we’ll need to do some changes – by default, the 1.7.2 edition of neo4J unzips to: neo4j-community-1.7.2 So, we need to update all the ‘neo4j-1.3.M02’ directories to be ‘neo4j-community-1.7.2’, we also need to update the Java runtime location, so we start with this: and end with this: Now, I also changed the Endpoints settings, to be HTTP (from TCP) and to have a port of 7410 (mainly because that’s straight down on the numpad) The last ‘gotcha’ is some hard coded consts, which had me looking for ages, they are in the ‘ConfigSettings’ class of the ‘Neo4jServerHost’ project, and the ones we’re interested in are: Neo4jFileName JavaZipFileName Change those both to what that should be. OK Nearly there (I promise)! Run the ‘Compute Emulator’ (same deal with the Start menu), in your system tray you should have an Azure icon, when the compute emulator is up and running, right click on the icon and select ‘Show Compute Emulator UI’ The last steps! Make sure the ‘Neo4j.Azure.Server’ cloud project is set up as the start project and let’s hit F5 tension mounts, the build takes place (you need to accept the UAC warning) and VS does it’s stuff. If you look at the Compute Emulator UI you’ll see some log stuff (which you’ll need if this goes awry – but it won’t don’t worry!) In a bit, the console and a Java window will pop up: Then the console will bog off, leaving just the Java one, and if we switch back to the Compute Emulator UI and scroll up we should be able to see a line telling us the port number we’ve been assigned (in my case 7411): (If you can’t see it, don’t worry.. press CTRL+A on the emulator, then CTRL+C, copy all the text and paste it into something like Notepad, then just do a Find for ‘port’ you’ll soon see it) Go to your favourite browser, and head to: http://localhost:YOURPORT/ and you should see the WebAdmin! See you on the cloud side hopefully! Chris PS Other gotchas! 
OK, I’ve been caught out a couple of times: I had an instance of Neo4J running as a service on my machine, the Azure instance wanted to run the https version of the server on the same port as the Service was running on, and so Java would complain that the port was already in use.. The first time I converted the project, it didn’t update the version of the Azure library to load, in the App.Config of the Neo4jServerHost project, and VS would throw an exception saying it couldn’t find the Azure dll version 1.0.0.0.
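Related to the blob upload step earlier in this post: the console application targets the local development storage account, and the post notes that switching to a real Azure storage account means creating one and using its credentials when constructing the CloudStorageAccount. Below is a minimal sketch of that switch, assuming the same 1.x StorageClient library as the snippet above; the account name and key are placeholders, and the CreateIfNotExist() call reflects the 1.x method name (later storage libraries rename it CreateIfNotExists).

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class AzureBlobSetup
    {
        public static CloudBlobContainer GetNeo4jContainer()
        {
            // Placeholder credentials - replace with the values for your storage
            // account from the Windows Azure management portal.
            string connectionString =
                "DefaultEndpointsProtocol=https;"
                + "AccountName=YOURACCOUNT;"
                + "AccountKey=YOURKEY";

            CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            CloudBlobClient client = account.CreateCloudBlobClient();
            client.Timeout = TimeSpan.FromMinutes(30);

            CloudBlobContainer container = client.GetContainerReference("neo4j");
            // GetContainerReference only builds a local reference, so make sure
            // the container actually exists before uploading.
            container.CreateIfNotExist();
            return container;
        }
    }

The UploadBlob helper from the earlier snippet can then be pointed at the returned container unchanged.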

    Read the article

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last 2 weeks immersing myself in a number of Windows Azure and SQL Azure technologies.  And in setting up a new business (I’ll speak more about that in the future), I have also become a customer of Microsoft’s BPOS (Business Productivity Online Services).  In short, it has been a fortnight of Microsoft cloud computing. On the Azure side, I’ve looked, of course, at Web Roles and Worker Roles.  But I’ve also looked at Azure Storage’s REST API (including coding to it directly), I’ve looked at Azure Drive and the new VM Role; I’ve looked quite a bit at SQL Azure (including the project “Houston” Silverlight UI) and I’ve looked at SQL Azure labs’ OData service too. I’ve also looked at DataMarket and its integration with both PowerPivot and native Excel.  Then there’s AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials.  And to round it out with some user stuff, on the BPOS side, I’ve been working with Exchange Online, SharePoint Online and LiveMeeting. I have to say I like a lot of what I’ve been seeing.  Azure’s not perfect, and BPOS certainly isn’t either.  But there’s good stuff in all these products, and there’s a lot of value. Azure Goes Deep Most people know that Web and Worker roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now.  The still-in-beta VM Role gives you the power to craft the machine (much as does Amazon’s EC2), though it takes away the platform’s self-managing attributes.  It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable.  Whether with Azure Storage or SQL Azure, Azure does data.  And OData is everywhere.  Azure Table Storage supports an OData Interface.  So does SQL Azure and so does DataMarket (the former project “Dallas”).  That means that Azure data repositories aren’t just straightforward to provision and configure…they’re also easy to program against, from just about any programming environment, in a RESTful manner.  And for more .NET-centric implementations, Azure AppFabric caching takes the technology formerly known as “Velocity” and throws it up into the cloud, speeding data access even more. Snapping in Place Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand.  I wasn’t expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond’s innovation right now.  The products belie this and so do my observations of the product teams’ motivation and high morale.  It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so.  With Azure, both requirements are in place.   BPOS: Bad Acronym, Easy Setup BPOS is about products you already know; Exchange, SharePoint, Live Meeting and Office Communications Server.  As such, it’s hard not to be underwhelmed by BPOS.  Until you realize how easy it makes it to get all that stuff set up.  
I would say that getting from sign-up to productive use took me about 45 minutes…and that included the time necessary to wrestle with my DNS provider, set up Outlook and my SmartPhone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster. What I want from my Azure Christmas Next Year Not everything about Microsoft's cloud is good. I close this post with a list of things I'd like to see addressed: BPOS offerings are still based on the 2007 Wave of Microsoft server technologies. We need to get to 2010, and fast. Arguably, the 2010 products should have been released to the off-premises channel before the on-premises one. Office 365 can't come fast enough. Azure's Internet tooling and domain naming is scattered and confusing. Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure storage work off windows.net. The Azure portal and Project Houston are at azure.com. Then there's appfabriclabs.com and sqlazurelabs.com. There is a new Silverlight portal that replaces most, but not all, of the HTML ones. And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling. Microsoft is the king of tooling. They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces and do so much CTRL-C/CTRL-V work. I'd like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup. Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast. Beta programs, if they're open, should onboard me quickly. I know the process is difficult and everyone's going as fast as they can. But I don't know why it's so difficult or why it takes so long. Getting developers up to speed on new features quickly helps popularize the platform. Make this a priority. Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch. Support .NET 4 now. Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier. Have LightSwitch work with SQL Azure and not require SQL Express. LightSwitch has some promising Azure integration now. But we need more. WebMatrix has none and that's just silly, now that the Extra Small Instance is being introduced. The Windows Azure Platform Training Kit is great. But I want Microsoft to make it even better and I want them to evangelize it much more aggressively. There's a lot of good material on Azure development out there, but it's scattered in the same way that the platform is. The Training Kit ties a lot of disparate stuff together nicely. Make it known. Should Old Acquaintance Be Forgot All in all, diving deep into Azure was a good way to end the year. Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

    Read the article

  • Why you need to learn async in .NET

    - by PSteele
    I had an opportunity to teach a quick class yesterday about what’s new in .NET 4.0.  One of the topics was the TPL (Task Parallel Library) and how it can make async programming easier.  I also stressed that this is the direction Microsoft is going with for C# 5.0 and learning the TPL will greatly benefit their understanding of the new async stuff.  We had a little time left over and I was able to show some code that uses the Async CTP to accomplish some stuff, but it wasn’t a simple demo that you could jump in to and understand so I thought I’d thrown one together and put it in a blog post. The entire solution file with all of the sample projects is located here. A Simple Example Let’s start with a super-simple example (WindowsApplication01 in the solution). I’ve got a form that displays a label and a button.  When the user clicks the button, I want to start displaying the current time for 15 seconds and then stop. What I’d like to write is this: lblTime.ForeColor = Color.Red; for (var x = 0; x < 15; x++) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); Thread.Sleep(1000); } lblTime.ForeColor = SystemColors.ControlText; (Note that I also changed the label’s color while counting – not quite an ILM-level effect, but it adds something to the demo!) As I’m sure most of my readers are aware, you can’t write WinForms code this way.  WinForms apps, by default, only have one thread running and it’s main job is to process messages from the windows message pump (for a more thorough explanation, see my Visual Studio Magazine article on multithreading in WinForms).  If you put a Thread.Sleep in the middle of that code, your UI will be locked up and unresponsive for those 15 seconds.  Not a good UX and something that needs to be fixed.  Sure, I could throw an “Application.DoEvents()” in there, but that’s hacky. The Windows Timer Then I think, “I can solve that.  I’ll use the Windows Timer to handle the timing in the background and simply notify me when the time has changed”.  Let’s see how I could accomplish this with a Windows timer (WindowsApplication02 in the solution): public partial class Form1 : Form { private readonly Timer clockTimer; private int counter;   public Form1() { InitializeComponent(); clockTimer = new Timer {Interval = 1000}; clockTimer.Tick += UpdateLabel; }   private void UpdateLabel(object sender, EventArgs e) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); counter++; if (counter == 15) { clockTimer.Enabled = false; lblTime.ForeColor = SystemColors.ControlText; } }   private void cmdStart_Click(object sender, EventArgs e) { lblTime.ForeColor = Color.Red; counter = 0; clockTimer.Start(); } } Holy cow – things got pretty complicated here.  I use the timer to fire off a Tick event every second.  Inside there, I can update the label.  Granted, I can’t use a simple for/loop and have to maintain a global counter for the number of iterations.  And my “end” code (when the loop is finished) is now buried inside the bottom of the Tick event (inside an “if” statement).  I do, however, get a responsive application that doesn’t hang or stop repainting while the 15 seconds are ticking away. But doesn’t .NET have something that makes background processing easier? 
The BackgroundWorker Next I try .NET’s BackgroundWorker component – it’s specifically designed to do processing in a background thread (leaving the UI thread free to process the windows message pump) and allows updates to be performed on the main UI thread (WindowsApplication03 in the solution): public partial class Form1 : Form { private readonly BackgroundWorker worker;   public Form1() { InitializeComponent(); worker = new BackgroundWorker {WorkerReportsProgress = true}; worker.DoWork += StartUpdating; worker.ProgressChanged += UpdateLabel; worker.RunWorkerCompleted += ResetLabelColor; }   private void StartUpdating(object sender, DoWorkEventArgs e) { var workerObject = (BackgroundWorker) sender; for (int x = 0; x < 15; x++) { workerObject.ReportProgress(0); Thread.Sleep(1000); } }   private void UpdateLabel(object sender, ProgressChangedEventArgs e) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); }   private void ResetLabelColor(object sender, RunWorkerCompletedEventArgs e) { lblTime.ForeColor = SystemColors.ControlText; }   private void cmdStart_Click(object sender, EventArgs e) { lblTime.ForeColor = Color.Red; worker.RunWorkerAsync(); } } Well, this got a little better (I think).  At least I now have my simple for/next loop back.  Unfortunately, I’m still dealing with event handlers spread throughout my code to co-ordinate all of this stuff in the right order. Time to look into the future. The async way Using the Async CTP, I can go back to much simpler code (WindowsApplication04 in the solution): private async void cmdStart_Click(object sender, EventArgs e) { lblTime.ForeColor = Color.Red; for (var x = 0; x < 15; x++) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); await TaskEx.Delay(1000); } lblTime.ForeColor = SystemColors.ControlText; } This code will run just like the Timer or BackgroundWorker versions – fully responsive during the updates – yet is way easier to implement.  In fact, it’s almost a line-for-line copy of the original version of this code.  All of the async plumbing is handled by the compiler and the framework.  My code goes back to representing the “what” of what I want to do, not the “how”. I urge you to download the Async CTP.  All you need is .NET 4.0 and Visual Studio 2010 sp1 – no need to set up a virtual machine with the VS2011 beta (unless, of course, you want to dive right in to the C# 5.0 stuff!).  Starting playing around with this today and see how much easier it will be in the future to write async-enabled applications.
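Since the post was written against the Async CTP, here is a minimal sketch of what the same click handler looks like with the async/await support that later shipped in C# 5.0 / .NET 4.5, where the CTP's TaskEx.Delay became Task.Delay. This is my adaptation of the sample above, wrapped in a runnable form, not code from the original post.

    using System;
    using System.Drawing;
    using System.Threading.Tasks;
    using System.Windows.Forms;

    public class ClockForm : Form
    {
        private readonly Label lblTime = new Label { AutoSize = true, Top = 10, Left = 10 };
        private readonly Button cmdStart = new Button { Text = "Start", Top = 40, Left = 10 };

        public ClockForm()
        {
            Controls.Add(lblTime);
            Controls.Add(cmdStart);
            cmdStart.Click += cmdStart_Click;
        }

        // async void is acceptable here because this is a UI event handler.
        private async void cmdStart_Click(object sender, EventArgs e)
        {
            lblTime.ForeColor = Color.Red;
            for (var x = 0; x < 15; x++)
            {
                lblTime.Text = DateTime.Now.ToString("HH:mm:ss");
                await Task.Delay(1000);   // Task.Delay replaces the CTP's TaskEx.Delay
            }
            lblTime.ForeColor = SystemColors.ControlText;
        }

        [STAThread]
        public static void Main()
        {
            Application.Run(new ClockForm());
        }
    }

Functionally this matches the CTP sample above; only the type and method names changed when the feature shipped.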

    Read the article

  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can either use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn’t supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise. Note: Azure Connect is currently in Technical Preview. About Azure Connect Let’s do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn’t provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups. We will review this later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility on how you want to build your virtual network across various environments. So Azure Connect makes sense when you want to: Connect machines located in different cloud providers Connect on-premise machines running in different locations Connect Azure VMs with on-premise (if you do not have a VPN device, or if your device is not supported) Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or in other cloud providers The diagram below shows you a high level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. You should note that the only required component in this diagram is the Relay itself. The other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal. High Level Network Topology With Azure Connect Azure Connect Agent Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; that’s because the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed.  To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. 
You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon will begin the download and installation process for the agent. Activating Roles for Azure Connect As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, you will need to click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx. Firewall Rules Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx. CA Certificates You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more. Groups To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups allowing you to manage network communication differently. For example you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows you that I created a group called LocalGroup and assigned two machines (both on-premise) to that group. Groups and Computers in Azure Connect As I mentioned previously you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows you the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other Groups, allowing you to manage inter-group communication. Defining a Group in Azure Connect Testing the Connection Now that my agents have been installed on my two machines, the group defined and the Interconnected option checked, I can test the connection between my machines. The next screenshot shows you that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the time is in the hundreds of milliseconds on average. That is to be expected because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests. Sending a PING Request Through The Relay Conclusion As you can see, creating a network topology between machines using the Azure Connect service is simple. 
It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx. About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net. Special Thanks I would like to thank those that helped me figure out how Azure Connect works: Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/ Michael Wood - Http://www.mvwood.com Glenn Block - http://www.codebetter.com/glennblock Yves Goeleven - http://cloudshaper.wordpress.com/ Sandrino Di Mattia - http://fabriccontroller.net/ Mike Martin - http://techmike2kx.wordpress.com
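Relating to the connectivity test described earlier in this post: if you want to script the same check rather than run ping by hand, for example to gauge the extra latency the Relay introduces, a minimal C# sketch is shown below. The host name DEVLAP02 is the endpoint used in the post; the program structure and the five-sample loop are my own illustration, not part of the original walkthrough.

    using System;
    using System.Net.NetworkInformation;

    public static class RelayLatencyCheck
    {
        public static void Main()
        {
            const string host = "DEVLAP02"; // remote endpoint in the Azure Connect group
            using (var ping = new Ping())
            {
                for (int i = 0; i < 5; i++)
                {
                    PingReply reply = ping.Send(host, 5000); // 5 second timeout
                    Console.WriteLine(reply.Status == IPStatus.Success
                        ? string.Format("Reply from {0}: {1} ms", reply.Address, reply.RoundtripTime)
                        : string.Format("Ping failed: {0}", reply.Status));
                }
            }
        }
    }

Comparing the reported round-trip times with a direct (non-relayed) connection gives a quick sense of the overhead of going through the Relay.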

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz. - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `k` int(10) unsigned NOT NULL DEFAULT '0', `c` char(120) NOT NULL DEFAULT '', `pad` char(60) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select Development Release tab). You can learn more about MySQL 5.6 from the documentation  Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver. However, consider that without the sender imposing flow control, and a slow link to a peer, TCP will simply fill up its window with sent segments. Dealing with multiple TCP implementations filling their peer TCP's receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need to have some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also. Slow Start and Congestion Avoidance From RFC2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission" Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold ssthresh value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements. We send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments etc. When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip-time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset. It is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh.
Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since it often comes from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out-of-order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would be excessive therefore, so fast recovery is used instead. Observing slow start and congestion avoidance The following script counts TCP segments sent when under slow start (cwnd < ssthresh) and congestion avoidance (cwnd > ssthresh). #!/usr/sbin/dtrace -s #pragma D option quiet tcp:::connect-request / start[args[1]->cs_cid] == 0/ { start[args[1]->cs_cid] = 1; } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh / { @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd > args[3]->tcps_cwnd_ssthresh / { @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } As we can see, the script only works on connections initiated since it is started (using the start[] associative array with the connection ID as index to set whether it's a new connection, start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd > ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system. # dtrace -s tcp_slow_start.d ^C ALGORITHM RADDR RPORT #SEG Slow start 10.153.125.222 20 6 Slow start 138.3.237.7 80 14 Slow start 10.153.125.222 21 18 Congestion avoidance 10.153.125.222 20 1164 We see that in the case of the YouTube video, slow start was exclusively used. Most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the case of the FTP session, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session (port 21) relied exclusively on slow start. For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently. For example "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS. And to finish, here's a neat oneliner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range.
# dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }' dtrace: description 'tcp:::send ' matched 10 probes ^C 80 value ------------- Distribution ------------- count -1 | 0 0 |@@@@@@ 5 1 | 0 2 | 0 4 | 0 8 | 0 16 | 0 32 | 0 64 | 0 128 | 0 256 | 0 512 | 0 1024 | 0 2048 |@@@@@@@@@ 8 4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@ 23 8192 | 0

    Read the article

  • Conducting Effective Web Meetings

    - by BuckWoody
    There are several forms of corporate communication. From immediate, rich communications like phones and IM messaging to historical transactions like e-mail, there are a lot of ways to get information to one or more people. From time to time, it's even useful to have a meeting. (This is where a witty picture of a guy sleeping in a meeting goes. I won't bother actually putting one here; you're already envisioning it in your mind) Most meetings are pointless, and a complete waste of time. This is the fault, completely and solely, of the organizer. It's because he or she hasn't thought things through enough to think about alternate forms of information passing. Here's the criteria for a good meeting - whether in-person or over the web: 100% of the content of a meeting should require the participation of 100% of the attendees for 100% of the time It doesn't get any simpler than that. If it doesn't meet that criteria, then don't invite that person to that meeting. If you're just conveying information and no one has the need for immediate interaction with that information (like telling you something that modifies the message), then send an e-mail. If you're a manager, and you need to get status from lots of people, pick up the phone.If you need a quick answer, use IM. I once had a high-level manager that called frequent meetings. His real need was status updates on various processes, so 50 of us would sit in a room while he asked each one of us questions. He believed this larger meeting helped us "cross pollinate ideas". In fact, it was a complete waste of time for most everyone, except in the one or two moments that they interacted with him. So I wrote some code for a Palm Pilot (which was a kind of SmartPhone but with no phone and no real graphics, but this was in the days when we had just discovered fire and the wheel, although the order of those things is still in debate) that took an average of the salaries of the people in the room (I guessed at it) and ran a timer which multiplied the number of people against the salaries. I left that running in plain sight for him, and when he asked about it, I explained how much the meetings were really costing the company. We had far fewer meetings after. Meetings are now web-enabled. I believe that's largely a good thing, since it saves on travel time and allows more people to participate, but I think the rule above still holds. And in fact, there are some other rules that you should follow to have a great meeting - and fewer of them. Be Clear About the Goal This is important in any meeting, but all of us have probably gotten an invite with a web link and an ambiguous title. Then you get to the meeting, and it's a 500-level deep-dive on something everyone expects you to know. This is unfair to the "expert" and to the participants. I always tell people that invite me to a meeting that I will be as detailed as I can - but the more detail they can tell me about the questions, the more detailed I can be in my responses. Granted, there are times when you don't know what you don't know, but the more you can say about the topic the better. There's another point here - and it's that you should have a clearly defined "win" for the meeting. When the meeting is over, and everyone goes back to work, what were you expecting them to do with the information? Have that clearly defined in your head, and in the meeting invite. Understand the Technology There are several web-meeting clients out there. I use them all, since I meet with clients all over the world. 
They all work differently - so I take a few moments and read up on the different clients and find out how I can use the tools properly. I do this with the technology I use for everything else, and it's important to understand it if the meeting is to be a success. If you're running the meeting, know the tools. I don't care if you like the tools or not, learn them anyway. Don't waste everyone else's time just because you're too bitter/snarky/lazy to spend a few minutes reading. Check your phone or mic. Check your video size. Install (and learn to use)  ZoomIT (http://technet.microsoft.com/en-us/sysinternals/bb897434.aspx). Format your slides or screen or output correctly. Learn to use the voting features of the meeting software, and especially it's whiteboard features. Figure out how multiple monitors work. Try a quick meeting with someone to test all this. Do this *before* you invite lots of other people to your meeting.   Use a WebCam I'm not a pretty man. I have a face fit for radio. But after attending a meeting with clients where one Microsoft person used a webcam and another did not, I'm convinced that people pay more attention when a face is involved. There are tons of studies around this, or you can take my word for it, but toss a shirt on over those pajamas and turn the webcam on. Set Up Early Whether you're attending or leading the meeting, don't wait to sign on to the meeting at the time when it starts. I can almost plan that a 10:00 meeting will actually start at 10:10 because the participants/leader is just now installing the web client for the meeting at 10:00. Sign on early, go on mute, and then wait for everyone to arrive. Mute When Not Talking No one wants to hear your screaming offspring / yappy dog / other cubicle conversations / car wind noise (are you driving in a desert storm or something?) while the person leading the meeting is trying to talk. I use the Lync software from Microsoft for my meetings, and I mute everyone by default, and then tell them to un-mute to talk to the group. Share Collateral If you have a PowerPoint deck, mail it out in case you have a tech failure. If you have a document, share it as an attachment to the meeting. Don't make people ask you for the information - that's why you're there to begin with. Even better, send it out early. "But", you say, "then no one will come to the meeting if they have the deck first!" Uhm, then don't have a meeting. Send out the deck and a quick e-mail and let everyone get on with their productive day. Set Actions At the Meeting A meeting should have some sort of outcome (see point one). That means there are actions to take, a follow up, or some deliverable. Otherwise, it's an e-mail. At the meeting, decide who will do what, when things are needed, and so on. And avoid, if at all possible, setting up another meeting, unless absolutely necessary. So there you have it. Whether it's on-premises or on the web, meetings are a necessary evil, and should be treated that way. Like politicians, you should have as few of them as are necessary to keep the roads paved and public libraries open.

    Read the article

  • First PC Build (Part 1)

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/08/05/157959.aspxA couple of months ago I made the decision to build myself a new computer. The intended use is gaming and for using the last real version of Photoshop. I was motivated by the poor state of console gaming and a simple desire to do something I haven’t done before – build a PC from the ground up. I’ve been using PCs for more than two decades. I’ve replaced a component hear and there, but for the last 10 years or so I’ve only used laptops. Therefore, this article will be written from the perspective of someone familiar with PCs, but completely new at building. I’m not an expert and this is not a definitive guide for building a PC, but I do hope that it encourages you to try it yourself. Component List Research There was a lot of research necessary, because building a PC is completely new to me, and I haven’t kept up with what’s out there. The first thing you want to do is nail down what your goals are. Your goals are going to be driven by what you want to do with your computer and personal choice. Don’t neglect the second one, because if you’re doing this for fun you want to get what you want. In my case, I focused on three things: performance, longevity, and aesthetics. The performance aspect is important for gaming and Photoshop. This will drive what components you get. For example, heavy gaming use is going to drive your choice of graphics card. Longevity is relevant to me, because I don’t want to be changing things out anytime soon for the next hot game. The consequence of performance and longevity is cost. Finally, aesthetics was my next consideration. I could have just built a box, but it wouldn’t have been nearly as fun for me. Aesthetics might not be important to you. They are for me. I also like gadgets and that played into at least one purchase for this build. I used PC Part Picker to put together my component list. I found it invaluable during the process and I’d recommend it to everyone. One caveat is that I wouldn’t trust the compatibility aspects. It does a pretty good job of not steering you wrong, but do your own research. The rest of it isn’t really sexy. I started out with what appealed to me and then I made changes and additions as I dived deep into researching each component and interaction I could find. The resources I used are innumerable. I used reviews, product descriptions, forum posts (praises and problems), et al. to assist me. I also asked friends into gaming what they thought about my component list. And when I got near the end I posted my list to the Reddit /r/buildapc forum. I cannot stress the value of extra sets of eyeballs and first hand experiences. Some of the resources I used: PC Part Picker Tom’s Hardware bit-tech Reddit Purchase PC Part Picker favors certain vendors. You should look at others too. In my case I found their favorites to be the best. My priorities were out-the-door price and shipping time. I knew that once I started getting parts I’d want to start building. Luckily, I timed it well and everything arrived within the span of a few days. Here are my opinions on the vendors I ended up using in alphabetical order. Amazon.com is a good, reliable choice. They have excellent customer service in my experience, and I knew I wouldn’t have trouble with them. However, shipping time is often a problem when you use their free shipping unless you order expensive items (I’ve found items over $100 ship quickly). 
Ultimately, though, their prices weren’t always the best, and their collection of sales tax in my state turned me off them. I did purchase my case from them. I ordered the mouse as well, but I cancelled after it was stuck four days in a “shipping soon” state. I purchased the mouse locally. Best Buy is not my favorite place to do business. There’s a lot of history with poor, uninterested sales representatives, and they used to have a lot of bad anti-consumer policies. Things are better now, but the bad taste is still in my mouth. I ended up purchasing the accessories from them, including the mouse (locally) and headphones. NCIX is a company I had never heard of before. It popped up as a recommendation for my CPU cooler on PC Part Picker. I didn’t do a lot of research on the company, because their policy of having you buy insurance for your orders turned me off. That policy makes it clear to me that the company considers me responsible for the shipment once it leaves their dock. That’s not right, and may run afoul of state laws. Regardless, they shipped my CPU cooler quickly and I didn’t have a problem. NewEgg.com is a well-known company. I had never done business with them, but I’m glad I did. They shipped quickly and provided good visibility over everything. The prices were also the best in most cases. My main complaint is that they have a lot of exchange-only return policies on components. To their credit, those policies are listed in the cart underneath each item. That visibility tells me that they’re not playing any shenanigans and made me comfortable dealing with that risk. The vast majority of what I ordered came from them. Coming Next In the next part I’ll tackle my build experience.

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this - my apologies for taking such a long time to write the next Part. Without further ado, here goes. This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration. So, Why Good? Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn’t matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That’s a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware / software bugs, physical data corruptions, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, human errors such as accidental deletion of online redo log files - it doesn’t matter - you can have that “Om - peace” look on your face and then you can failover to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not-so-imaginary conversation: IT Manager: Well, what’s the status? You: John is doing the trace analysis on the storage array. IT Manager: So? How long is that gonna take? You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>. IT Manager: So, no root cause yet? You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take. IT Manager: Darn it - the site is down! You: Not really … IT Manager: What do you mean? You: John is stuck, but Sreeni has already done a failover to the Data Guard standby. IT Manager: Whoa, whoa - wait! Failover means we lost some data, why did you do this without letting the Business group know? You: We didn’t lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production – we just failover. No data loss, and we are up and running in minutes. The Business guys don’t need to know. IT Manager: Wow! Are we great or what!! You: I guess … Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, “TANSTAAFL”, or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things, if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA. How Far Can We Go? Someone may say at this point - well, I can’t use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go? 
Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don’t like to hear - “It depends!” It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and the Oracle Database 11.2 High Availability Best Practices manual talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact of less than 5% for a busy OLTP application. Even if you can’t deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located thousands of miles away, here’s another nifty way to boost your HA. Have a local standby, configured SYNC. How local is “local”? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can failover. Very fast. In seconds. Wait - did I say “seconds”? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho’ that this time you won’t have to wait for another year for this.
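The point above about monitoring the database, especially under peak load, is easy to script. The following is a minimal sketch, not from this article: it reads the transport and apply lag that a standby reports in V$DATAGUARD_STATS. The ODP.NET provider choice, connection string, user, and service name are placeholders, and it assumes a standby you can actually query (for example, an Active Data Guard read-only standby).

using System;
using Oracle.DataAccess.Client; // ODP.NET; swap in the provider your environment uses

namespace DataGuardMonitoring
{
    // Prints the redo transport and apply lag the standby reports in V$DATAGUARD_STATS.
    // Connection details below are placeholders only.
    public class SyncLagCheck
    {
        public static void Main()
        {
            const string connectString = "User Id=monitor;Password=change_me;Data Source=standby_service";

            using (var conn = new OracleConnection(connectString))
            {
                conn.Open();
                const string sql = "SELECT name, value FROM v$dataguard_stats " +
                                   "WHERE name IN ('transport lag', 'apply lag')";

                using (var cmd = new OracleCommand(sql, conn))
                using (OracleDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // VALUE is an interval string such as '+00 00:00:00'; NULL means not yet computed.
                        string value = reader.IsDBNull(1) ? "n/a" : reader.GetString(1);
                        Console.WriteLine("{0,-15} {1}", reader.GetString(0), value);
                    }
                }
            }
        }
    }
}

Wired into a scheduled job or a dashboard, a check like this gives early warning when peak load starts pushing the SYNC transport lag up.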

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
    In this blog post I’ll show you how you can play back the IIS Logs in Visual Studio to automatically generate the web performance tests. You can also download the sample solution I am demo-ing in the blog post. Introduction Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production you could mine the information available in IIS logs to analyse the dense zones (most used pages) and performance test those pages rather than wasting time testing & tuning the least used pages in your application. What are IIS Logs To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits as well as dates, times and queries. If you are using IIS 7 and above you will find the log files in the following directory: C:\inetpub\logs\LogFiles\ Walkthrough 1. Download and install Log Parser from the Microsoft Download Centre. You should see the LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the IIS log files programmatically. By the way, if you haven’t used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details… 2. Create a new test project in Visual Studio. Let’s call it IISLogsToWebPerfTestDemo. 3. Delete the UnitTest1.cs class that gets created by default. Right click the solution and add a project of type class library, name it IISLogsToWebPerfTestEngine. Delete the default class Program.cs that gets created with the project. 4. Under the IISLogsToWebPerfTestEngine project add a reference to Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll LogParser also called MSUtil - c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll 5. Right click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs. The IISLogReader class queries the IIS logs using the log parser. 
using System; using System.Collections.Generic; using System.Text; using MSUtil; using LogQuery = MSUtil.LogQueryClassClass; using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass; using LogRecordSet = MSUtil.ILogRecordset; using Microsoft.VisualStudio.TestTools.WebTesting; using System.Diagnostics; namespace IisLogsToWebPerfTestEngine { // By making use of log parser it is possible to query the iis log using select queries public class IISLogReader { private string _iisLogPath; public IISLogReader(string iisLogPath) { _iisLogPath = iisLogPath; } public IEnumerable<WebTestRequest> GetRequests() { LogQuery logQuery = new LogQuery(); IISLogInputFormat iisInputFormat = new IISLogInputFormat(); // currently these columns give us sufficient information to construct the web test requests string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath; LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat); // Apply a bit of transformation while (!recordSet.atEnd()) { ILogRecord record = recordSet.getRecord(); if (record.getValueEx("cs-method").ToString() == "GET") { string server = record.getValueEx("s-ip").ToString(); string path = record.getValueEx("cs-uri-stem").ToString(); string querystring = record.getValueEx("cs-uri-query").ToString(); StringBuilder urlBuilder = new StringBuilder(); urlBuilder.Append("http://"); urlBuilder.Append(server); urlBuilder.Append(path); if (!String.IsNullOrEmpty(querystring)) { urlBuilder.Append("?"); urlBuilder.Append(querystring); } // You could make substitutions by introducing parameterized web tests. WebTestRequest request = new WebTestRequest(urlBuilder.ToString()); Debug.WriteLine(request.UrlWithQueryString); yield return request; } recordSet.moveNext(); } Console.WriteLine(" That's it! Closing the reader"); recordSet.close(); } } }   6. Connect the dots by adding the project reference ‘IisLogsToWebPerfTestEngine’ to ‘IisLogsToWebPerfTest’. Right click the ‘IisLogsToWebPerfTest’ project and add a new class ‘WebTest1Coded.cs’. The WebTest1Coded.cs class inherits from the WebTest class. By overriding the GetRequestEnumerator method we can inject the log file into the IISLogReader class, which uses Log Parser to query the log file and extract the web requests to generate the web test requests, which are yielded back for playback when the test is run. namespace IisLogsToWebPerfTest { using System; using System.Collections.Generic; using System.Text; using Microsoft.VisualStudio.TestTools.WebTesting; using Microsoft.VisualStudio.TestTools.WebTesting.Rules; using IisLogsToWebPerfTestEngine; // This class is a coded web performance test implementation, that simply passes // the path of the iis logs to the IisLogReader class which does the heavy // lifting of reading the contents of the log file and converting them to tests. // You could have multiple such classes that inherit from WebTest and implement // the GetRequestEnumerator method and pass different log files for different tests. public class WebTest1Coded : WebTest { public WebTest1Coded() { this.PreAuthenticate = true; } public override IEnumerator<WebTestRequest> GetRequestEnumerator() { // substitute the highlighted path with the path of the iis log file IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log"); foreach (WebTestRequest request in reader.GetRequests()) { yield return request; } } } }   7. It's time to fire the test off and see the iis log playback as a web performance test. 
From the Test menu, choose Test View. In the Test View window you should be able to see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution). 8. Optionally you can create a Load Test by keeping ‘WebTest1Coded’ as the base test. Conclusion You have just helped your testing team; you have now become the coolest developer in your organization! Jokes apart, log parser and web performance tests together allow you to save a lot of time by not having to worry about what to test or even worrying about how to record the test. If you haven’t already, download the solution from here. You can take this to the next level by using LogParser to extract the log files as part of an end-of-day batch to a database. See the usage trends over a longer term and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don’t forget to share … Keep RocKiNg!
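As a rough illustration of the "dense zones" idea from the introduction, the same MSUtil interop that IISLogReader relies on can also rank the most requested pages, so you know which ones deserve a web test first. This is a sketch and not part of the downloadable sample; the class name, the TOP 20 cut-off and the log path are arbitrary choices.

using System;
using MSUtil;
using LogQuery = MSUtil.LogQueryClassClass;
using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;
using LogRecordSet = MSUtil.ILogRecordset;

namespace IisLogsToWebPerfTestEngine
{
    // Ranks the most requested pages ("dense zones") in an IIS log so you know
    // which pages are worth turning into web performance tests first.
    public static class DenseZoneReport
    {
        public static void Print(string iisLogPath)
        {
            LogQuery logQuery = new LogQuery();
            IISLogInputFormat iisInputFormat = new IISLogInputFormat();

            // Aggregate hits per url-stem; TOP 20 is an arbitrary cut-off.
            string query = "SELECT TOP 20 cs-uri-stem, COUNT(*) AS Hits FROM " + iisLogPath +
                           " GROUP BY cs-uri-stem ORDER BY Hits DESC";

            LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat);
            while (!recordSet.atEnd())
            {
                ILogRecord record = recordSet.getRecord();
                Console.WriteLine("{0,6}  {1}", record.getValueEx("Hits"), record.getValueEx("cs-uri-stem"));
                recordSet.moveNext();
            }
            recordSet.close();
        }
    }
}

Running this against the same log you feed to WebTest1Coded gives you a quick hit-count table to decide which pages to parameterize and load test first.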

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term “On-Premise” to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents. Tools Overview Firstly let’s look at the Design-Time tools within JDeveloper for customizing and extending the artifacts of a Business Process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and BPM Studio. The SOA Composite Editor As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full XML source view (read-only), it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic. If you are customizing an existing Fusion Applications BPEL process then be aware that it does support MDS-based customization layers just like Page Composer, where different customizations are used based on the run-time context, like for a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously if you wish to fire different activities and tasks based on the user context then you should include switches to fork the flows in your custom BPEL process. Figure 1 – A BPEL process in Composite Editor The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the ‘Check for Updates’ help menu option. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this. Import the exported SAR .jar file into JDeveloper using the File menu, under the option ‘SOA Archive into SOA Project’. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role. Make the customizations and save the application project. Finally redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager. 
The Business Process Management (BPM) Suite In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for many of the programming skills involved. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organization changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release, and in the majority of cases the tooling remains as applicable as its developer-oriented sibling. At the current time business processes must be explicitly coded to support just one of these use-cases, either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager. Figure 2 – A BPM Process in JDeveloper BPM Suite. BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those for BPEL. The steps to deploy a custom BPM process are also essentially much the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such the SOA Composite Editor actually has support for both artifacts and even allows use of them together, such as calling a BPM process as a partnerlink from a BPEL process. For more details see the references below. Business Analyst Tooling In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that, without the high costs of an IT project, the system can be tuned to match changes to the business operation. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer. Figure 3 – Business Process Composer showing a CRM process flow. The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required, simply drag-drop-configure. The items in the business catalog are seeded by either Oracle (as a BPM Template) or added to by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forwards, especially for use in SaaS deployments where full design-time JDeveloper access is not available. 
Further Reading For BPEL – Fusion Applications Extensibility Guide – Section 12 For BPM – Fusion Applications Extensibility Guide – Section 7 The product-specific documentation and implementation guides for Fusion Applications Fusion Middleware Developers Guide for SOA Suite Modeling and Implementation Guide for Oracle Business Process Management User’s Guide for Oracle Business Process Composer Oracle University courses on BPM Suite and SOA Development

    Read the article

  • Delight and Excite

    - by Applications User Experience
    Mick McGee, CEO & President, EchoUser Editor’s Note: EchoUser is a User Experience design firm in San Francisco and a member of the Oracle Usability Advisory Board. Mick and his staff regularly consult on Oracle Applications UX projects. Being part of a user experience design firm, we have the luxury of working with a lot of great people across many great companies. We get to help people solve their problems. At least we used to. The basic design challenge is still the same; however, the goal is not necessarily to solve “problems” anymore; it is, “I want our products to delight and excite!” The question for us as UX professionals is how to design to those goals, and then how to assess them from a usability perspective. I’m not sure where I first heard “delight and excite” (A book? blog post? Facebook status? Steve Jobs quote?), but now I hear these listed as user experience goals all the time. In particular, somewhat paradoxically, I routinely hear them in enterprise software conversations. And when asking these same enterprise companies what will make the project successful, we very often hear, “Make it like Apple.” In past days, it was “make it like Yahoo” (or Amazon or Google), but now Apple is the common benchmark. Steve Jobs and Apple were not secrets, but with Jobs’ passing and Apple becoming the world’s most valuable company in the last year, the impact of great design and experience is suddenly very widespread. In particular, users’ expectations have gone way up. Being an enterprise company is no shield against the general expectations that users now have, for all products. Designing a “Minimum Viable Product” The user experience challenge has historically been, to echo the words of Eric Ries (author of Lean Startup), to create a “minimum viable product”: the proverbial, “make it good enough”. But, in our profession, the “minimum viable” part of that phrase has oftentimes, unfortunately, referred to the design and user experience. Technology typically dominated the focus of the biggest, most successful companies. Few have had the laser focus of Apple to also create and sell design and user experience alongside great technology. But now that Apple is the most valuable company in the world, copying their success is a common undertaking. Great design is now a premium offering that everyone wants, from the one-person startup to the largest companies, consumer and enterprise. This emerging business paradigm will have significant impact across the user experience design process and profession. One area that particularly interests me is, how are we going to evaluate these new emerging “delight and excite” experiences, which are further customized to each particular domain? How to Measure “Delight and Excite” Traditional usability measures of task completion rate, assists, time, and errors are still extremely useful in many situations; however, they are too blunt to offer much insight into emerging experiences. “Satisfaction” is usually assessed in user testing, in roughly equivalent importance to the above objective metrics. Various surveys and scales have provided ways to measure satisfying UX, with whatever questions they include. However, to meet the demands of new business goals and keep users at the center of design and development processes, we have to explore new methods to better capture custom-experience goals and emotion-driven user responses. 
We have had success assessing custom experiences, including “delight and excite”, by employing a variety of user testing methods that tend to combine formative and summative techniques (formative being focused more on identifying usability issues and ways to improve design, and summative focused more on metrics). Our most successful tool has been one we’ve been using for a long time, Magnitude Estimation Technique (MET). But it’s not necessarily about MET as a measure, rather how it is created. Caption: For one client, EchoUser did two rounds of testing.  Each test was a mix of performing representative tasks and gathering qualitative impressions. Each user participated in an in-person moderated 1-on-1 session for 1 hour, using a testing set-up where they held the phone. The primary goal was to identify usability issues and recommend design improvements. MET is based on a definition of the desired experience, which users will then use to rate items of interest (usually tasks in a usability test). In other words, a custom experience definition needs to be created. This can then be used to measure satisfaction in accomplishing tasks; “delight and excite”; or anything else from strategic goals, user demands, or elsewhere. For reference, our standard MET definition in usability testing is: “User experience is your perception of how easy to use, well designed and productive an interface is to complete tasks.” Articulating the User Experience We’ve helped construct experience definitions for several clients to better match their business goals. One example is a modification of the above that was needed for a company that makes medical-related products: “User experience is your perception of how easy to use, well-designed, productive and safe an interface is for conducting tasks. ‘Safe’ is how free an environment (including devices, software, facilities, people, etc.) is from danger, risk, and injury.” Another example is from a company that is pushing hard to incorporate “delight” into their enterprise business line: “User experience is your perception of a product’s ease of use and learning, satisfaction and delight in design, and ability to accomplish objectives.” I find the last one particularly compelling in that there is little that identifies the experience as being for a highly technical enterprise application. That definition could easily be applied to any number of consumer products. We have gone further than the above, including “sexy” and “cool” where decision-makers insisted they were part of the desired experience. We also applied it to completely different experiences where the “interface” was, for example, riding public transit, the “tasks” were train rides, and we followed the participants through the train-riding journey and rated various aspects accordingly: “A good public transportation experience is a cost-effective way of reliably, conveniently, and safely getting me to my intended destination on time.” To construct these definitions, we’ve employed both bottom-up and top-down approaches, depending on circumstances. For bottom-up, user inputs help dictate the terms that best fit the desired experience (usually by way of cluster and factor analysis). Top-down depends on strategic, visionary goals expressed by upper management that we then attempt to integrate into product development (e.g., “delight and excite”). We like a combination of both approaches to push the innovation envelope, but still be mindful of current user concerns. 
Hopefully the idea of crafting your own custom experience, and a way to measure it, can provide you with some ideas of how you can adapt your user experience needs to whatever company you are in. Whether product-development or service-oriented, nearly every company is ultimately providing a user experience. The Bottom Line Creating great experiences may have been popularized by Steve Jobs and Apple, but I’ll be honest, it’s a good feeling to be moving from “good enough” to “delight and excite,” despite the challenge that entails. In fact, it’s because of that challenge that we will expand what we do as UX professionals to help deliver and assess those experiences. I’m excited to see how we, Oracle, and the rest of the industry will live up to that challenge.

    Read the article

  • Screenshot Tour: Ubuntu Touch 14.04 on a Nexus 7

    - by Chris Hoffman
    Ubuntu 14.04 LTS will “form the basis of the first commercially available Ubuntu tablets,” according to Canonical. We installed Ubuntu Touch 14.04 on our own hardware to see what those tablets will be like. We don’t recommend installing this yourself, as it’s still not a polished, complete experience. We’re using “Ubuntu Touch” as shorthand here — apparently this project’s new name is “Ubuntu For Devices.” The Welcome Screen Ubuntu’s touch interface is all about edge swipes and hidden interface elements — it has a lot in common with Windows 8, actually. You’ll see the welcome screen when you boot up or unlock a Ubuntu tablet or phone. If you have new emails, text messages, or other information, it will appear on this screen along with the time and date. If you don’t, you’ll just see a message saying “No data sources available.” The Dash Swipe in from the right edge of the welcome screen to access the Dash, or home screen. This is actually very similar to the Dash on Ubuntu’s Unity desktop. This isn’t a surprise — Canonical wants the desktop and touch versions of Ubuntu to use the same code. In the future, the desktop and touch versions of Ubuntu will use the same version of Unity and Unity will adjust its interface depending on what type of device you’re using. Here you’ll find apps you have installed and apps available to install. Tap an installed app to launch it or tap an available app to view more details and install it. Tap the My apps or Available headings to view a complete list of apps you have installed or apps you can install. Tap the Search box at the top of the screen to start searching — this is how you’d search for new apps to install. As you’d expect, a touch keyboard appears when you tap in the Search field or any other text field. The Dash isn’t just for apps. Tap the Apps heading at the top of the screen and you’ll see hidden text appear — Music, Video, and Scopes. This hidden navigation is used throughout Ubuntu’s different apps and can be easy to miss at first. Swipe to the left or right to move between these screens. These screens are also similar to the different panels in Unity on the desktop. The Scopes section allows you to view different search scopes you have installed. These are used to search different sources when you start a search from the Dash. Search from the Music or Videos scopes to search for local media files on your device or media files online. For example, searching in the Music scope will show you music results from Grooveshark by default. Navigating Ubuntu Touch Swipe in from the left edge anywhere on the system to open the launcher, a bar with shortcuts to apps. This launcher is very similar to the launcher on the left of Ubuntu’s Unity desktop — that’s the whole idea, after all. Once you’ve opened an app, you can leave the app by swiping in from the left. The launcher will appear — keep moving your finger towards the right edge of the screen. This will swipe the current app off the screen, taking you back to the Dash. Once back on the Dash, you’ll see your open apps represented as thumbnails under Recent. Tap a thumbnail here to go back to a running app. To remove an app from here, long-press it and tap the X button that appears. Swipe in from the right edge in any app to quickly switch between recent apps. Swipe in from the right edge and hold your finger down to reveal an application switcher that shows all your recent apps and lets you choose between them. Swipe down from the top of the screen to access the indicator panel. 
Here you can connect to Wi-Fi networks, view upcoming events, control GPS and Bluetooth hardware, adjust sound settings, see incoming messages, and more. This panel is for quick access to hardware settings and notifications, just like the indicators on Ubuntu’s Unity desktop. The Apps System settings not included in the pull-down panel are available in the System Settings app. To access it, tap My apps on the Dash and tap System Settings, search for the System Settings app, or open the launcher bar and tap the settings icon. The settings here are a bit limited compared to other operating systems, but many of the important options are available here. You can add Evernote, Ubuntu One, Twitter, Facebook, and Google accounts from here. A free Ubuntu One account is mandatory for downloading and updating apps. A Google account can be used to sync contacts and calendar events. Some apps on Ubuntu are native apps, while many are web apps. For example, the Twitter, Gmail, Amazon, Facebook, and eBay apps included by default are all web apps that open each service’s mobile website as an app. Other applications, such as the Weather, Calendar, Dialer, Calculator, and Notes apps, are native applications. Theoretically, both types of apps will be able to scale to different screen resolutions. Ubuntu Touch and Ubuntu desktop may one day share the same apps, which will adapt to different display sizes and input methods. Like Windows 8 apps, Ubuntu apps hide interface elements by default, providing you with a full-screen view of the content. Swipe up from the bottom of an app’s screen to view its interface elements. For example, swiping up from the bottom of the Web Browser app reveals Back, Forward, and Refresh buttons, along with an address bar and Activity button so you can view current and recent web pages. Swipe up even more from the bottom and you’ll see a button hovering in the middle of the app. Tap the button and you’ll see many more settings. This is an overflow area for application options and functions that can’t fit on the navigation bar. The Terminal app has a few surprising Easter eggs in this panel, including a “Hack into the NSA” option. Tap it and the following text will appear in the terminal: That’s not very nice, now tracing your location . . . . . . . . . . . .Trace failed You got away this time, but don’t try again. We’d expect to see such Easter eggs disappear before Ubuntu Touch actually ships on real devices. Ubuntu Touch has come a long way, but it’s still not something you want to use today. For example, it doesn’t even have a built-in email client — you’ll have to use your email service’s mobile website. Few apps are available, and many of the ones that are are just mobile websites. It’s not a polished operating system intended for normal users yet — it’s more of a preview for developers and device manufacturers. If you really want to try it yourself, you can install it on a Wi-Fi Nexus 7 (2013), Nexus 10, or Nexus 4 device. Follow Ubuntu’s installation instructions here.

    Read the article

  • SharePoint logging to a list

    - by Norgean
    I recently worked in an environment with several servers. Locating the correct SharePoint log file for error messages, or development trace calls, is cumbersome. And once the solution hit the cloud, it got even worse, as we had no access to the log files at all. Obviously we are not the only ones with this problem, and the current trend seems to be to log to a list. This had become an off-hour project, so rather than do the sensible thing and find a ready-made solution, I decided to do it the hard way. So! Fire up Visual Studio, create yet another empty SharePoint solution, and start to think of some requirements. Easy on/off: I want to be able to turn list-logging on and off. Easy logging: For me, this means being able to use string.Format. Easy filtering: Let's have the possibility to add some filtering columns; category and severity, where severity can be "verbose", "warning" or "error". Easy on/off Well, that's easy. Create a new web feature. Add an event receiver, and create the list on activation of the feature. Tear the list down on de-activation. I chose not to create a new content type; I did not feel that it would give me anything extra. I based the list on the generic list - I think a better choice would have been the announcement type. Approximately: public void CreateLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListName);             if (list == null)             {                 var listGuid = web.Lists.Add(LogListName, "Logging for the masses", SPListTemplateType.GenericList);                 list = web.Lists[listGuid];                 list.Title = LogListTitle;                 list.Update();                 list.Fields.Add(Category, SPFieldType.Text, false);                 var stringColl = new StringCollection();                 stringColl.AddRange(new[]{Error, Information, Verbose});                 list.Fields.Add(Severity, SPFieldType.Choice, true, false, stringColl);                 ModifyDefaultView(list);             }         } Should be self explanatory, but: only create the list if it does not already exist (d'oh). Best practice: create it with a Url-friendly name, and, if necessary, give it a better title. ...because otherwise you'll have to look for a list with a name like "Simple_x0020_Log". I've added a couple of fields; a field for category, and a 'severity'. Both to make it easier to find relevant log messages. Notice that I don't have to call list.Update() after adding the fields - this would cause a nasty error (something along the lines of "List locked by another user"). The function for deleting the log is exactly as onerous as you'd expect:         public void DeleteLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListTitle);             if (list != null)             {                 list.Delete();             }         } So! "All" that remains is to log. Also known as adding items to a list. Lots of different methods with different signatures end up calling the same function. For example, LogVerbose(web, message) calls LogVerbose(web, null, message) which again calls another method which calls: private static void Log(SPWeb web, string category, string severity, string textformat, params object[] texts)         {             if (web != null)             {                 var list = web.Lists.TryGetList(LogListTitle);                 if (list != null)                 {                     var item = list.AddItem(); // NOTE! NOT list.Items.Add… just don't, mkay?                     
var text = string.Format(textformat, texts);                     if (text.Length > 255) // because the title field only holds so many chars. Sigh.                         text = text.Substring(0, 254);                     item[SPBuiltInFieldId.Title] = text;                     item[Degree] = severity;                     item[Category] = category;                     item.Update();                 }             } // omitted: Also log to SharePoint log.         } By adding a params parameter I can call it as if I was doing a Console.WriteLine: LogVerbose(web, "demo", "{0} {1}{2}", "hello", "world", '!'); Ok, that was a silly example, a better one might be: LogError(web, LogCategory, "Exception caught when updating {0}. exception: {1}", listItem.Title, ex); For performance reasons I use list.AddItem rather than list.Items.Add. For completeness' sake, let us include the "ModifyDefaultView" function that I deliberately skipped earlier.         private void ModifyDefaultView(SPList list)         {             // Add fields to default view             var defaultView = list.DefaultView;             var exists = defaultView.ViewFields.Cast<string>().Any(field => String.CompareOrdinal(field, Severity) == 0);               if (!exists)             {                 var field = list.Fields.GetFieldByInternalName(Severity);                 if (field != null)                     defaultView.ViewFields.Add(field);                 field = list.Fields.GetFieldByInternalName(Category);                 if (field != null)                     defaultView.ViewFields.Add(field);                 defaultView.Update();                   var sortDoc = new XmlDocument();                 sortDoc.LoadXml(string.Format("<Query>{0}</Query>", defaultView.Query));                 var orderBy = (XmlElement) sortDoc.SelectSingleNode("//OrderBy");                 if (orderBy != null && sortDoc.DocumentElement != null)                     sortDoc.DocumentElement.RemoveChild(orderBy);                 orderBy = sortDoc.CreateElement("OrderBy");                 sortDoc.DocumentElement.AppendChild(orderBy);                 field = list.Fields[SPBuiltInFieldId.Modified];                 var fieldRef = sortDoc.CreateElement("FieldRef");                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                   fieldRef = sortDoc.CreateElement("FieldRef");                 field = list.Fields[SPBuiltInFieldId.ID];                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                 defaultView.Query = sortDoc.DocumentElement.InnerXml;                 //defaultView.Query = "<OrderBy><FieldRef Name='Modified' Ascending='FALSE' /><FieldRef Name='ID' Ascending='FALSE' /></OrderBy>";                 defaultView.Update();             }         } First two lines are easy - see if the default view includes the "Severity" column. If it does - quit; our job here is done.Adding "severity" and "Category" to the view is not exactly rocket science. But then? Then we build the sort order query. Through XML. The lines are numerous, but boring. All to achieve the CAML query which is commented out. The major benefit of using the dom to build XML, is that you may get compile time errors for spelling mistakes. 
I say 'may', because although the compiler will not let you forget to close a tag, it will cheerfully let you spell "Name" as "Naem". Whichever you prefer, at the end of the day the view will sort by modified date and ID, both descending. I added the ID as there may be several items with the same time stamp. So! Simple logging to a list, with a sensible view, and with normal functionality for creating your own filters. I should probably have added some more views in code, ready-filtered for "only errors", "errors and warnings" etc. And it would be nice to block verbose logging completely, but I'm not happy with the alternatives. (yetanotherfeature or an admin page seem like overkill - perhaps just removing it as one of the choices, and not logging if it isn't there?) Before you comment - yes, try-catches have been removed for clarity. There is nothing worse than having a logging function that breaks your site!
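The post wires list creation to feature activation and deactivation but does not show the receiver itself, so here is a minimal sketch of what that wiring might look like for a web-scoped feature. The class name LogListManager (standing in for whichever class actually holds CreateLog and DeleteLog), the namespace and the GUID are placeholders, not taken from the post.

using System.Runtime.InteropServices;
using Microsoft.SharePoint;

namespace SimpleListLogging
{
    [Guid("7D3C1F7A-0000-0000-0000-000000000000")] // placeholder - Visual Studio generates a real one
    public class LogListFeatureReceiver : SPFeatureReceiver
    {
        // For a web-scoped feature, Feature.Parent is the SPWeb the feature is (de)activated on.
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            var web = properties.Feature.Parent as SPWeb;
            if (web != null)
            {
                // LogListManager is a stand-in for the class holding CreateLog/DeleteLog
                new LogListManager().CreateLog(web); // create the log list on activation
            }
        }

        public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
        {
            var web = properties.Feature.Parent as SPWeb;
            if (web != null)
            {
                new LogListManager().DeleteLog(web); // tear the log list down on deactivation
            }
        }
    }
}

With this in place, flipping the feature on or off in Site Settings gives you the "easy on/off" requirement without touching any code.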

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder: 1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers – each having their own skill set. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor. 2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of version and patch levels and thousands of individual databases? Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare. 
That leads us to our next question. 3. Does it satisfy your Security Officers and Compliance Auditors? Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs that keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise. The bottom line is, flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all’ approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there with a heterogeneous, common-denominator approach have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications and not by putting compliance considerations on the back burner. 4. Does it satisfy your CIO? Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation would flatten out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. At Oracle, we always emphasize the freedom of choosing a platform; hence Enterprise Manager’s DBaaS solution is platform-neutral. It can work on any operating system (that the agent is certified on), on Oracle’s hardware as well as 3rd-party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

< Previous Page | 189 190 191 192 193 194 195 196 197 198 199 200  | Next Page >