Search Results

Search found 2127 results on 86 pages for 'early bird'.


  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema.  If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.  Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business. Then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation. Oracle has deep knowledge of applications dating back to the early 1990s.  It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema. GSS Requirements Gathering GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.  Business Process Automation EBusiness is about maximizing operational efficiency. 
At the highest level, these 'operations' span all that you do as an organization.  The following figure illustrates some of these high-level business processes. Enterprise Business Processes Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place. ·            The same accurate and timely customer data will be provided to all your operational applications from the call center to the point of sale. ·            The same accurate and timely supplier data will be provided to all your operational applications from supply chain planning to procurement. ·            The same accurate and timely product information will be available to all your operational applications from demand chain planning to marketing. You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise. Global Single Schema Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations, and extensibility.  The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a services (DaaS) to business processes and enable operations in heterogeneous IT environments. ·         Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility.  o    This enables model extensions that reflect business entities unique to your organization's operations. ·         The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries. ·         It uses variable byte Unicode to support over 31 languages. ·         The schema encodes flexible date and flexible address formats for easy localizations. No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.   
Interconnected Business Entities

Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either a single entity they can master or separate, disconnected models for the various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer’s data in advance. I don’t know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest to start with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question as well as the meaning of data at hand, while the IT expert should be familiar with the structure of data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer’s SME (i.e. the analyst), and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of input data. I favor the customer’s decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation the data overview is performed. The logic behind this task is the same – again I show to the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks. 
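To make the derived-variable and outlier work mentioned above a little more concrete, here is a minimal sketch of the kind of feature a team might compute during data preparation. In the projects described in this post that work would normally be done in Transact-SQL against the customer's tables; the C# below is purely illustrative, and every type and field name in it is a hypothetical placeholder.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical transaction shape, for illustration only.
    public record Transaction(string CustomerId, decimal Amount);

    public static class DataPreparationSketch
    {
        // Computes one derived variable (amount relative to the customer's own average)
        // and a crude outlier flag (more than 3 standard deviations from the overall mean).
        public static IEnumerable<(Transaction Tx, double AmountToCustomerAvg, bool IsOutlier)>
            Prepare(IReadOnlyCollection<Transaction> transactions)
        {
            var avgByCustomer = transactions
                .GroupBy(t => t.CustomerId)
                .ToDictionary(g => g.Key, g => g.Average(t => (double)t.Amount));

            double mean = transactions.Average(t => (double)t.Amount);
            double stdDev = Math.Sqrt(transactions.Average(t => Math.Pow((double)t.Amount - mean, 2)));

            foreach (var tx in transactions)
            {
                double customerAvg = avgByCustomer[tx.CustomerId];
                double ratio = customerAvg > 0 ? (double)tx.Amount / customerAvg : 0.0;
                bool isOutlier = stdDev > 0 && Math.Abs((double)tx.Amount - mean) > 3 * stdDev;
                yield return (tx, ratio, isOutlier);
            }
        }
    }

A real project would of course compute tens of such variables and use more robust outlier tests; the point is only that this kind of preparation goes quickly once the SME has explained what the columns mean.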
The presence of the customer’s SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer’s SME. After the data preparation and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer’s personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers. Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project:
- The customer learns the basic patterns of frauds and fraud detection
- The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
- The customer’s analysts learn how to perform much more in-depth analyses than they ever thought possible
- The customer’s IT experts learn how to perform data extraction and preparation much more efficiently than they did before
- All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
- The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
- It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise
Typically, the project results in several important “side effects”:
- Improved data quality
- Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
- Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns
After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer’s employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
I usually expect to perform the following tasks:
- Establish the final infrastructure to measure the efficiency of the deployed models
- Deploy the models in additional scenarios:
  - Through reports
  - By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
- Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
- Create smart ETL applications that divert suspicious data for immediate or later inspection
I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well. Of course, for the actual project, I would repeat the data and model preparation as needed. It is virtually impossible to tell in advance how much time the deployment would take before we decide, together with the customer, what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days. The approximate timeline for the POC project is as follows:
- 1-2 days of training
- 2-3 days for data preparation and data overview
- 2 days for creating and evaluating the models
- 1 day for initial preparation of the continuous learning infrastructure
- 1 day for presentation of the results and discussion of further actions
Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one more hour on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time without the need for constant engagement with me.
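As an aside, the "DMX queries in OLTP applications" option above is easy to picture. The sketch below sends a singleton DMX prediction query to SQL Server Analysis Services through ADOMD.NET; the mining model name, its input columns and the connection string are hypothetical placeholders rather than anything from the project described here.

    using System;
    using Microsoft.AnalysisServices.AdomdClient; // ADOMD.NET client library for SSAS

    public static class FraudEarlyWarning
    {
        // Returns the predicted probability that a single transaction is fraudulent.
        // Model name, column names and connection string are illustrative only.
        public static double PredictFraudProbability(decimal amount, string channel)
        {
            const string connectionString = "Data Source=localhost;Catalog=FraudDetection";

            // Singleton DMX prediction query: the case is supplied inline and joined
            // against the trained mining model.
            const string dmx = @"
                SELECT PredictProbability([Is Fraud], TRUE) AS FraudProbability
                FROM [Fraud Detection Model]
                NATURAL PREDICTION JOIN
                (SELECT @Amount AS [Amount], @Channel AS [Channel]) AS t";

            using (var connection = new AdomdConnection(connectionString))
            using (var command = new AdomdCommand(dmx, connection))
            {
                connection.Open();
                command.Parameters.Add(new AdomdParameter("Amount", amount));
                command.Parameters.Add(new AdomdParameter("Channel", channel));

                // The singleton query returns one row with one column.
                return Convert.ToDouble(command.ExecuteScalar());
            }
        }
    }

An OLTP application could call something like this at transaction time and divert the case for inspection whenever the returned probability crosses an agreed threshold, which is exactly the early-warning and smart-ETL scenario listed above.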

    Read the article

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model and in the process introduced tiled_index and tiled_grid. This is part 2. tile_static memory You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, that threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array // each tile of threads has its own copy of locA, // shared among the threads of the tile tile_static float locA[16][16]; Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors. tile_barrier In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it, from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait. 15: // Given a tiled_index object named t_idx 16: t_idx.barrier.wait(); 17: // more code …in the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code in such a way which meant that there is a chance that not all threads could reach line 16, then the code above would be illegal. Tiled Matrix Multiplication Example – part 2 So now that we added to our understanding the concepts of tile_static and tile_barrier, let me obfuscate rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code. 
01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
02: {
03:   static const int TS = 16;
04:   array_view<const float,2> a(M, W, vA);
05:   array_view<const float,2> b(W, N, vB);
06:   array_view<writeonly<float>,2> c(M,N,vC);
07:   parallel_for_each(c.grid.tile< TS, TS >(),
08:     [=] (tiled_index< TS, TS> t_idx) restrict(direct3d)
09:   {
10:     int row = t_idx.local[0]; int col = t_idx.local[1];
11:     float sum = 0.0f;
12:     for (int i = 0; i < W; i += TS) {
13:       tile_static float locA[TS][TS], locB[TS][TS];
14:       locA[row][col] = a(t_idx.global[0], col + i);
15:       locB[row][col] = b(row + i, t_idx.global[1]);
16:       t_idx.barrier.wait();
17:       for (int k = 0; k < TS; k++)
18:         sum += locA[row][k] * locB[k][col];
19:       t_idx.barrier.wait();
20:     }
21:     c[t_idx.global] = sum;
22:   });
23: }
Notice that all the code up to line 9 is the same as per the changes we made in part 1 of tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference being that those lines use new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b), to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool. Why would I introduce this tiling complexity into my code? Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, ensure the number of threads we schedule are correctly divisible with the tile size, had to use a tiled_index instead of a normal index, and had to understand tile_barrier and to figure out where we need to use it, and double the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory. Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g.
algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you still have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest to make it 250 times faster. Also, algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Also note that in future releases, we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Setting useLegacyV2RuntimeActivationPolicy At Runtime

    - by Reed
Version 4.0 of the .NET Framework included a new CLR which is almost entirely backwards compatible with the 2.0 version of the CLR.  However, by default, mixed-mode assemblies targeting .NET 3.5sp1 and earlier will fail to load in a .NET 4 application.  Fixing this requires setting useLegacyV2RuntimeActivationPolicy in your app.Config for the application.  While there are many good reasons for this decision, there are times when this is extremely frustrating, especially when writing a library.  As such, there are (rare) times when it would be beneficial to set this in code, at runtime, as well as verify that it’s running correctly prior to receiving a FileLoadException. Typically, loading a pre-.NET 4 mixed mode assembly is handled simply by changing your app.Config file, and including the relevant attribute in the startup element:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <startup useLegacyV2RuntimeActivationPolicy="true">
        <supportedRuntime version="v4.0"/>
      </startup>
    </configuration>

This causes your application to run correctly, and load the older, mixed-mode assembly without issues. For full details on what’s happening here and why, I recommend reading Mark Miller’s detailed explanation of this attribute and the reasoning behind it. Before I show any code, let me say: I strongly recommend using the official approach of using app.config to set this policy. That being said, there are (rare) times when, for one reason or another, changing the application configuration file is less than ideal. While this is the supported approach to handling this issue, the CLR Hosting API includes a means of setting this programmatically via the ICLRRuntimeInfo interface.  Normally, this is used if you’re hosting the CLR in a native application in order to set this, at runtime, prior to loading the assemblies.  However, the F# Samples include a nice trick showing how to load this API and bind this policy, at runtime.  This was required in order to host the Managed DirectX API, which is built against an older version of the CLR. This is fairly easy to port to C#.
Instead of a direct port, I also added a little addition – by trapping the COM exception received if unable to bind (which will occur if the 2.0 CLR is already bound), I also allow a runtime check of whether this property was setup properly:

    public static class RuntimePolicyHelper
    {
        public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; }

        static RuntimePolicyHelper()
        {
            ICLRRuntimeInfo clrRuntimeInfo =
                (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject(
                    Guid.Empty,
                    typeof(ICLRRuntimeInfo).GUID);
            try
            {
                clrRuntimeInfo.BindAsLegacyV2Runtime();
                LegacyV2RuntimeEnabledSuccessfully = true;
            }
            catch (COMException)
            {
                // This occurs with an HRESULT meaning
                // "A different runtime was already bound to the legacy CLR version 2 activation policy."
                LegacyV2RuntimeEnabledSuccessfully = false;
            }
        }

        [ComImport]
        [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")]
        private interface ICLRRuntimeInfo
        {
            void xGetVersionString();
            void xGetRuntimeDirectory();
            void xIsLoaded();
            void xIsLoadable();
            void xLoadErrorString();
            void xLoadLibrary();
            void xGetProcAddress();
            void xGetInterface();
            void xSetDefaultStartupFlags();
            void xGetDefaultStartupFlags();

            [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
            void BindAsLegacyV2Runtime();
        }
    }

Using this, it’s possible to not only set this at runtime, but also verify, prior to loading your mixed mode assembly, whether this will succeed. In my case, this was quite useful – I am working on a library purely for internal use which uses a numerical package that is supplied with both a completely managed as well as a native solver.  The native solver uses a CLR 2 mixed-mode assembly, but is dramatically faster than the pure managed approach.  By checking RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully at runtime, I can decide whether to enable the native solver, and only do so if I successfully bound this policy. There are some tricks required here – To enable this sort of fallback behavior, you must make these checks in a type that doesn’t cause the mixed mode assembly to be loaded.  In my case, this forced me to encapsulate the library I was using entirely in a separate class, perform the check, then pass through the required calls to that class.  Otherwise, the library will load before the hosting process gets enabled, which in turn will fail. This code will also, of course, try to enable the runtime policy before the first time you use this class – which typically means just before the first time you check the boolean value.  As a result, checking this early on in the application is more likely to allow it to work. Finally, if you’re using a library, this has to be called prior to the 2.0 CLR loading.  This will cause it to fail if you try to use it to enable this policy in a plugin for most third party applications that don’t have their app.config setup properly, as they will likely have already loaded the 2.0 runtime. As an example, take a simple audio player.
The code below shows how this can be used to properly, at runtime, only use the “native” API if this will succeed, and fallback (or raise a nicer exception) if this will fail:

    public class AudioPlayer
    {
        private IAudioEngine audioEngine;

        public AudioPlayer()
        {
            if (RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
            {
                // This will load a CLR 2 mixed mode assembly
                this.audioEngine = new AudioEngineNative();
            }
            else
            {
                this.audioEngine = new AudioEngineManaged();
            }
        }

        public void Play(string filename)
        {
            this.audioEngine.Play(filename);
        }
    }

Now – the warning: This approach works, but I would be very hesitant to use it in public facing production code, especially for anything other than initializing your own application.  While this should work in a library, using it has a very nasty side effect: you change the runtime policy of the executing application in a way that is very hidden and non-obvious.
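One practical consequence of the "checking this early on in the application" advice above is to touch the helper in your entry point, before anything that might load the 2.0 runtime. The minimal sketch below illustrates that idea; the Program class and the log message are hypothetical additions, not part of the original article.

    using System;

    internal static class Program
    {
        private static void Main()
        {
            // Touch RuntimePolicyHelper as early as possible, before any code path
            // that could load a CLR 2 mixed-mode assembly (or the 2.0 runtime itself).
            if (!RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
            {
                Console.WriteLine(
                    "Could not bind the legacy V2 runtime activation policy; " +
                    "falling back to managed-only components.");
            }

            // ... continue with normal startup; components that need the mixed-mode
            // assembly should consult the flag before loading it, as AudioPlayer does above.
        }
    }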

    Read the article

  • Are Chromebooks the New Netbooks, and What Does That Mean?

    - by Chris Hoffman
    Netbooks — small, cheap, slow laptops — were once very popular. They fell out of favor — people bought them because they seemed cheap and portable, but the actual experience was lackluster. Most netbooks now sit unused. Windows netbooks have vanished from stores today, but there’s a new super-cheap laptop — the Chromebook. Chromebook sales numbers are impressive, but their usage statistics tell a different story. Are Chromebooks just the new netbook? The Problem With Netbooks Netbooks seemed appealing, especially in an age before tablets and lightweight ultrabooks. You could buy a netbook for $200 or so and have a portable device that let you get on the Internet. The name “netbook” spelled that out — it was a portable device for getting on the ‘net. They weren’t really that great. The original netbook was a lightweight Asus Eee PC that ran Linux alone and had a small amount of fast flash storage. Netbooks eventually ran heavier Windows XP operating systems — Windows Vista was out, but it was just too bloated to run on netbooks. Manufacturers added slow magnetic hard drives, bloatware, and even DVD drives! They couldn’t run most Windows software very well. The build quality was poor and their keyboards were tiny and cramped. People liked the idea of a lightweight device that let them get on the Internet and loved the cheap price, but the actual experience wasn’t great. Chromebook Sales Chromebook sales numbers seem surprisingly high. NPD reported that Chromebooks were 21% of all notebooks sold in the US in 2013. If you combine laptop and tablet sales into a single statistic, Chromebooks were 9.6% of all those devices sold. That’s 2/3 as many Chromebooks sold as iPads in the US! Of Amazon’s best-selling laptop computers, two of the top three are Chromebooks. These definitely look like successful products. Unlike netbooks, Chromebooks are taking off in a big way in the education market. Many schools are buying Chromebooks for their students instead of more expensive Windows laptops. They’re easier to manage and lock down than Windows laptops, but — more importantly for cash-strapped schools — they’re very cheap. Netbooks never had this sort of momentum in schools. Chromebook Usage Statistics Here’s where the rosy picture of Chromebooks starts to become more realistic. StatCounter’s browser usage statistics show how widely used different operating systems are. For example, Windows 7 has the highest share with 35.71% of web activity in April, 2014. The chart doesn’t even show Chrome OS at all, although there is an “Other” number near the bottom. Click the Download Data link to download a CSV file and we can view more detailed information. Chrome OS only accounted for 0.38% of web usage in April, 2014. Desktop Linux, which people often shrug at, accounted for 1.52% in the same month. To its credit, Chrome OS usage has increased. Chromebooks were widely mocked back in November, 2013 when the sales numbers came out. After all, they only accounted for 0.11% of web usage globally in November, 2013! But Chrome OS numbers have been improving: Nov, 2013: 0.11% Dec, 2013: 0.22% Jan, 2014: 0.31% Feb, 2014: 0.35% Mar, 2014: 0.36% Apr, 2014: 0.38% Chrome OS is climbing, but it’s definitely still in the “Other” category. It isn’t as high as we’d expect to see it with those types of sales numbers. Chromebooks vs. Netbooks Chromebooks are more limited devices than traditional PCs. You can do quite a few things, but you have to do it all using Chrome or Chrome apps. 
Most people won’t be enabling developer mode and installing a Linux desktop. You don’t have access to the powerful desktop software available for Windows and even Mac OS X. On the other hand, these Chromebooks are less compromised than netbooks in many ways. They come with a lightweight operating system designed for portable, mobile devices. They don’t come packed with any bloatware, like the bloatware you’ll find on competing Windows PCs and the original netbooks. They’re cheaper because the manufacturer doesn’t have to pay for a Windows license. There’s no need for antivirus software weighing the operating system down. They’re larger than the original netbooks, with many of them being 11.6-inches instead of the original 8-inch bodies many older netbooks came with. They have larger, more comfortable keyboards and fast solid-state storage. Really, Chromebooks are what netbooks wanted to be. People didn’t buy netbooks to use typical Windows software — they just wanted a lightweight PC. Of course, for many people, the real successor to netbooks is tablets. If all you want is a portable device to throw in a bag so you can get online, maybe a tablet is better. Where Does This Leave Chromebooks? So, are Chromebooks the new netbooks? It’s a bit early to answer that question. Chromebooks are definitely not out of the competition — their sales look good and their usage share is increasing. On the other hand, Chrome OS is still pretty far behind. They’re not catching fire like tablets did. Maybe netbooks were just before their time and Chromebooks were what they were always meant to be. Just as Microsoft’s Windows XP tablets failed, Windows XP netbooks also failed. Tablets took off with a more refined operating system on better hardware years later. “Netbooks” — or Chromebooks — are now taking off with a more purpose-built operating system on better hardware, too. It’s hard to count Chromebooks out because they provide a much better experience than netbooks ever did. If you’re one of the people who wants to use old Windows desktop apps on your portable laptop, you may think netbooks were better — but most people don’t want that. But maybe people either want a full desktop PC experience or a full mobile tablet experience. Is there a place for a laptop with a keyboard that can only view websites? We’ll have to wait and see. Image Credit: Kevin Jarret on Flickr, Clive Darra on Flickr, Sean Freese on Flickr

    Read the article

  • Mobile BI Comes of Age

    - by rich.clayton(at)oracle.com
One of the hot topics in the Business Intelligence industry is mobility.  More specifically the question is how business can be transformed by the iPhone and the iPad.  In June 2003, Gartner predicted that Mobile BI would be obsolete and that the technology was headed for the 'trough of disillusionment'.  I agreed with them at that time.  Many vendors like MicroStrategy and Business Objects jumped into the fray attempting to show how PDA's like Palm Pilots could be integrated with BI.  Their investments resulted in interesting demos with no commercial traction.  Why, because wireless networks and mobile operating systems were primitive, immature and slow. In my opinion, Apple's iOS has changed everything in Mobile BI.  Yes Blackberry, Android and Symbian and all the rest have their place in the market but I believe that increasingly consumers (not IT departments) influence BI decision making processes.  Consumers are choosing the iPhone and the iPad. The number of iPads I see in business meetings now is staggering.  Some use it for email and note taking and others are starting to use corporate applications.  The possibilities for Mobile BI are countless and I would expect to see iPads enterprise-wide over the next few years.   These new devices will provide just-in-time access to critical business information.
Front-line managers interacting with customers, suppliers, patients or citizens will have information literally at their fingertips. I've experimented with several mobile BI tools.  They look cool but like their Executive Information System (EIS) predecessors of the 1990's these tools lack a backbone and a plausible integration strategy.  EIS was a viral technology in the early 1990's.  Executives from every industry and job function were showcasing their dashboards to fellow co-workers and colleagues at the country club.  Just like the iPad, every senior manager wanted one.  EIS wasn't a device however, it was a software application.   EIS quickly faded into the software sunset as it lacked integration with corporate information systems.  BI servers  replaced EIS because the technology focused on the heavy data lifting of integrating, normalizing, aggregating and managing large, complex data volumes.  The devices are here to stay. The cute stand-alone mobile BI tools, not so much. If all you're looking to do is put Excel files on your iPad, there are plenty of free tools on the market.  You'll look cool at your next management meeting but after a few weeks, the cool factor will fade away and you'll be wondering how you will ever maintain it.  If however you want secure, consistent, reliable information on your iPad, you need an integration strategy and a way to model the data.  BI Server technologies like the Oracle BI Foundation is a market leading approach to tackle that issue. I liken the BI mobility frenzy to buying classic cars.  Classic Cars have two buying groups - teenagers and middle-age folks looking to tinker.  Teenagers look at the pin-stripes and the paint job while middle-agers (like me)  kick the tires a bit and look under the hood to check out the quality and reliability of the engine.  Mobile BI tools sure look sexy but don't go very far without an engine and a transmission or an integration strategy. The strategic question in Mobile BI is can these startups build a motor and transmission faster than Oracle can re-paint the car?  Oracle has a great engine and a transmission that connects to all enterprise information assets.  We're working on the new paint job and are excited about the possibilities.  Just as vertical integration worked in the automotive business, it too works in the technology industry.

    Read the article

  • Agile Testing Days 2012 – Day 3 – Agile or agile?

    - by Chris George
Another early start for my last Lean Coffee of the conference, and again it was not wasted. We had some really interesting discussions around how to determine what test automation is useful; around 'if agile is not faster, why do it?'; and a rather existential discussion on whether unicorns exist! First keynote of the day was entitled “Fast Feedback Teams” by Ola Ellnestam. Again this relates nicely to the releasing faster talk on day 2, and something that we are looking at and some teams are actively trying. Introducing the notion of feedback, Ola describes a game he wrote for his eldest child. It was a simple game where every time he clicked a button, it displayed “You’ve Won!”. He then changed it to be a Win-Lose-Win-Lose pattern and watched the feedback from his son who then twigged the pattern and got his younger brother to play, alternating turns… genius! (must do that with my children). The idea behind this was that you need that feedback loop to learn and progress. If you are not getting the feedback, you need to close that loop. An interesting point Ola made was to solve problems BEFORE writing software. It may be that you don’t have to write anything at all, perhaps it’s a communication/training issue? Perhaps the problem can be solved another way. Writing software, although it’s the business we are in, is expensive, and this should be taken into account. He again mentions frequent releases, and how they should be made as soon as stuff is ready to be released; don’t leave stuff on the shelf because it’s not earning you anything, money or data. I totally agree with this and it’s something that we will be aiming for moving forwards. “Exceptions, Assumptions and Ambiguity: Finding the truth behind the story” by David Evans started off very promisingly by making references to ‘Grim up North’, referring to the north of England. Not sure it was appreciated by most of the audience, but it made me laugh! David explained how there are always risks associated with exceptions, giving the example of a one-way road near where he lives, with an exception sign giving rights to coaches to go the wrong way. Therefore you could merrily swing around the corner of the one-way road straight into a coach! David showed the danger in making assumptions with lyrical quotes from Lola by The Kinks (“I’m glad I’m a man, and so is Lola”) and with a picture of a toilet flush that needed instructions to operate the full and half flush. With this particular flush, you pulled the handle all the way down to half flush, and half way down to full flush! Hmmm, a bit of a crappy user experience methinks! Then, through a clever use of a passage from the Jabberwocky, David went on to show how mis-translation/ambiguity can completely distort the original meaning of something, and this is a real enemy of software development. This was all helping to demonstrate that the term Story is often heavily overloaded in the Agile world, and should really be stripped back to what it is really for: stating a business problem, and offering a technical solution. Therefore a story could be worded as “In order to {make some improvement}, we will {do something}”. The first ‘in order to’ statement is stakeholder neutral, and states the problem through requesting an improvement to the software/process etc. The second part of the story is the verb, the doing bit. So to achieve the ‘improvement’, which is not currently true, we will do something to make this true in the future.
My PM is very interested in this, and he’s observed some of the problems of overloading stories, so I’m hoping between us we can use some of David’s suggestions to help clarify our stories better. The second keynote of the day (and our last) proved to be the most entertaining and exhausting of the conference for me: “The ongoing evolution of testing in agile development” by Scott Barber. I’ve never had the pleasure of seeing Scott before… OMG I would love to have even half of the energy he has! What struck me during this presentation was Scott’s explanation of how testing has become the role/job that it is (largely) today, and how this has led to the need for ‘methodologies’ to make dev and test work! The argument that we should be trying to converge the roles again is a very valid one, and one that a couple of the teams at work are actively pursuing with great results. Making developers as responsible for quality as testers is something that has been lost over the years, but something that we are now striving to achieve. The idea that we (testers) should be testing experts/specialists, not testing ‘union members’, supports this: the entire team works on all aspects of a feature/product, with the ‘specialists’ taking the lead and advising/coaching the others. This leads to better propagation of information around the team, a greater holistic understanding of the project, and it allows the team to continue functioning if some of its members are off sick, for example. Feeling somewhat drained from Scott’s keynote (but at the same time excited that a lot of the points he raised supported actions we are taking at work), I headed into my last presentation for Agile Testing Days 2012 before having to make my way to Tegel to catch the flight home. “Thinking and working agile in an unbending world” with Pete Walen was a talk I was not going to miss! Having spoken to Pete several times during the past few days, I was looking forward to hearing what he was going to say, and I was not disappointed. Pete started off by trying to separate the definitions of ‘Agile’ as in the methodology, and ‘agile’ as in the adjective, by pronouncing them the ‘English’ and ‘American’ ways. So Agile pronounced (Ajyle) and agile pronounced (ajul). There was much confusion around what the hell he was talking about, although I thought it was quite clear.
Agile – software development methodology
agile – marked by ready ability to move with quick easy grace; having a quick, resourceful and adaptable character
Anyway, that aside (although it provided a few laughs during the presentation), the point was that many teams that claim to be ‘Agile’ are not, in fact, ‘agile’ by nature. Implementing ‘Agile’ methodologies that are so prescriptive actually goes against the very nature of Agile development, where a team should anticipate, adapt and explore. Pete made a valid point that very few companies intentionally put up roadblocks to impede work, so if work is being blocked/delayed, why? This is where being agile as a team pays off, because the team can inspect what’s going on, explore options and adapt their processes. It is through experimentation (and that means trying and failing as well as trying and succeeding) that a team will improve and grow, leading to focussing on what really needs to be done to achieve X. So, that was it, the last talk of our conference.
I was gutted that we had to miss the closing keynote from Matt Heusser, as Matt was another person I had spoken to a few times during the conference, but the flight would not wait, and just as well we left when we did because the traffic was a nightmare! My Takeaway Triple from Day 3:
- Release often and release small – don’t leave stuff on the shelf
- Keep the meaning of the word ‘agile’ in mind when working in ‘Agile’
- Look at testing as more of a skill than a role

    Read the article

  • Thou shalt not put code on a pedestal - Code is a tool, no more, no less

    - by Ralf Westphal
“Write great code and everything else becomes easier” is what Paul Pagel believes in. That’s his version of an adage by Brian Marick he cites: “treat code as an end, not just a means.” And he concludes: “My post-Agile world is software craftsmanship.” I wonder if that’s really the way to go. Will “simply” writing great code lead the software industry into the light? He’s alluding to the philosopher Kant, who proposed that a human being should never be treated as a means, but always as an end. But should we transfer this ethical statement into the world of software? I doubt it.

Reason #1: Human beings are categorically different from code. They are autonomous entities who need to find a way of living happily together. To Kant it seemed this goal could only be reached if nobody (ab)used a human being for his/her purposes. Because using a human being, i.e. treating it as a means, would contradict the fundamental autonomy and freedom of human beings. People should hold up a symmetric view of their relationships: Since nobody wants to be (ab)used, nobody should (ab)use anybody else. If you want to be treated decently, with respect, in accordance with your own free will - which means as an end - then do the same to other people. Code is dead, it’s a product, it’s a tool for people to reach their goals. No company spends any money on code other than to save money or earn money in the long run. Code is not a puppy. Enterprises do not commission software development to just feel good in its company. Code is not a buddy. Code is a slave, if you will. A mechanical slave, a non-tangible robot. Code is a tool, is a tool. And if we start to treat it differently, if we elevate its status unduly… I guess that will contort our relationship in a counterproductive way. Please get me right: Just because something is “just a tool”, “just a product” does not mean we should not be careful while designing, building, using it. Right to the contrary. We should be very careful when writing code – but not for the code’s sake! We should be careful because we respect our customers, who are fellow human beings who should be treated as an end. If we are careless, neglectful, ignorant when producing code on their behalf, then we’re using them. Being sloppy means you’re caring more for yourself than for your customer. You’re then treating the customer as a means to fulfill some of your own needs. That’s plain unethical behavior.

Reason #2: The focus should always be on your purpose, not on any tool. But if code is treated as an end, then the focus is on the code. That might sound right, because where else should your focus be as a software developer? But, well, I’d say your focus should be on delivering value to your customer. Because in the end your customer does not care if you write a single line of code. She just wants her problem to be solved. Solving problems is the purpose of any contractor. Code must be treated just as a means, a tool we know how to handle very well. But if we’re really trying to be craftsmen then we should be conscious about exactly that and act ethically. That means we must never be so focused on our tool as to be unable to suggest better solutions to the problems of our customers than code.

I’m all with Paul when he urges us to “Write great code”. Sure, if you need to write code, then by all means do so. Write the best code you can think of – and then try to improve it.
Paul has all the best intentions when he signs on to Brian’s “treat code as an end” - but as we all know: “The road to hell is paved with best intentions” ;-) Yes, I can imagine a “hell of code focus”. In fact, I don’t need to imagine it, I’m seeing it quite often. Because code hell is wherever two developers stand together and are so immersed in talking about all sorts of coding tricks, design patterns, code smells, technologies, platforms, tools that they lose sight of the big picture. Talking about TDD or SOLID or refactoring is a sign of consciousness – relative to the “cowboy coders” view of the world. But from yet another point of view TDD, SOLID, and refactoring are just cures for ailments within a system. And I fear, if “Writing great code” is the only focus or the main focus of software development, then we as an industry lose the ability to see that. Focus draws a line around something, it defines a horizon for perceptions and thinking. So if we focus on code, our horizon ends where “the land of code” ends. I don’t think that should be our professional attitude.

So what about Software Craftsmanship as the next big thing after Agility? I think Software Craftsmanship has an important message for all software developers and beyond. But to make it the successor of the Agility movement seems to miss a point. Agility never claimed to solve all software development problems, I’d say. So to blame it for having missed out on certain aspects of it is wrong. If I had to summarize Agility in one word I’d say “Value”. Agility put value for the customer back in software development. Focus on delivering value early and often – that’s Agility’s mantra. All else follows from that. And I ask you: Is that obsolete? Is delivering value not hip anymore? No, sure not. That’s our very purpose as software developers. So how can Agility become obsolete and need to be replaced? We need to do away with this “either/or” thinking. It’s either Agility or Lean or Software Craftsmanship or whatnot. Instead we should start integrating concepts and movements. Think “both/and”. Think Agility plus Software Craftsmanship plus Lean plus whatnot. We don’t need to tear down anything from a pedestal and replace it with a new idol. Instead we should do away with pedestals and arrange whatever is helpful in a circle. Then we can turn to concepts and movements for whatever they are best at. After 10 years of Agility we should be able to identify what it was good at – and keep that. Keep Agility around and add whatever Agility was lacking or never concerned with. Add whatever is at the core of Software Craftsmanship. Add whatever is at the core of Lean etc. But don’t call out the age of Post-Agility. Because it had better never end. Because once we start to lose Agility’s core we’re losing focus on the customer.

    Read the article

  • Corsair Hackers Reboot

    It wasn't easy for me to attend, but it was absolutely worth going. The Linux User Group of Mauritius (LUGM) organised another get-together for any open source enthusiast here on the island. It is strangely named "Corsair Hackers Reboot", but it stands for a positive cause: "Corsair Hackers Reboot Event A collaborative activity involving LUGM, UoM Computer Club, Fortune Way Shopping Mall and several geeks from around the island, striving to put FOSS into homes & offices. The public is invited to discover and explore Free Software & Open Source." And it was a good opportunity for me and the kids to visit the east coast of Mauritius, too. Perfect timing It couldn't have been better... Why? Well, for two important reasons (in terms of IT): End of support for Microsoft Windows XP - 08.04.2014 Release of Ubuntu 14.04 Long Term Support - 17.04.2014 Funnily enough, those two IT dates weren't the initial reasons; we only put them together during the weeks of preparation. And therefore it was even more positive to promote the use of Linux and open source software in general to a broader audience. Getting there ... Thanks to the new motorway M3 and all the additional road work that has been completed recently, it was very simple to get across the island in a quick and relaxed manner. Compared to my trips in the early days of living in Mauritius (and riding a scooter) it was very smooth, and within less than an hour we hit Centrale de Flacq. Well, being in the city doesn't necessarily mean that one has arrived at the destination. But thanks to modern technology I had a quick look at Google Maps, and we finally managed to get a parking spot behind the huge bus terminal in Flacq. From there it was just a short walk to Fortune Way. The children were trying to count the number of buses... Well, lots and lots of buses - really impressive, actually. What was presented? There were different areas set up. Right at the entrance, one's attention was directly drawn towards the elevated hacker's stage. Similar to rock stars performing their gig, there was a bunch of computers, laptops and networking equipment to provide the right working conditions for the coding/programming challenge(s) on the one hand and for the pen-testing or system hacking competition on the other. Personally, I was very impressed that Nitin took care of the pen-testing competition. He only started with Linux in general, and Kali Linux specifically, about a year ago. Seeing his personal development from absolute newbie to a decent Linux system administrator within such a short period of time is really impressive. His passion for open source software has become his living. Next, moving clockwise, was the Kids' Corner with face-painting as the main attraction. Additionally, there were numerous paper printouts to colour, plus a decent workstation with the educational suite GCompris. Of course, my little ones were into that. They have known GCompris for a while, as they are allowed to use it on an IGEL thin client terminal here at home. To simplify my life, I set up GCompris as a full-screen guest session on the server, and they can pass the login screen without any further obstacles. And because it's a thin client hooked up to an XDMCP remote session, I don't have to worry about the hardware on their desk either. The next section was the main attraction of the event: BYOD - Bring Your Own Device Well, compared to the usual context of BYOD, the corsairs had a completely different intention. 
Here, you could bring your own laptop, and a team of knowledgeable experts - read: geeks and so on - offered to fully convert your system to any Linux distribution of your choice. And even though I came later, I was told that the USB pen drives had been in permanent use. From preparing the drives with the dd command, to launching a LiveCD session, to finally installing a fresh Linux system on bare metal. Most interestingly, I had done a similar job a couple of months ago, upgrading an existing Windows XP system to Xubuntu 13.10. So far, the female owner is very happy and enjoys her system almost every evening for shopping online, checking mail, and reading the latest news from the Anime world. Back to the Hackers event, Ish told me that they managed approximately 20 conversions during the day. Furthermore, Ajay and others gladly assisted some visitors with some tricky issues, and by the end of the day you could call it a success. While I was around, there was an elderly male visitor who got a full-fledged system conversion to a Linux system running completely in French. A little bit more towards the centre, it was Yasir's turn to demonstrate his Arduino hardware, which he had hooked up to an experimental electrical circuit board connected to an LCD matrix display. That's the real spirit of hacking, and he showed some minor adjustments on the fly while demoing the system. Also very interesting: there was a thermal sensor around. Personally, I think that platforms like the Arduino as well as the Raspberry Pi have great potential, at a very affordable price, to bring a better understanding of electronics as well as computer programming to a broader audience. It would be great to see more of those experiments during future activities. And last but not least, there were a small number of vendors. Amongst them was Emtel - once again as sponsor of the general internet connectivity - and another hardware supplier from the Riche Terre shopping mall. They had a good collection of Android-related gimmicks, like an autonomous web cam that can convert any TV with an HDMI connector into an online video chat system, given WiFi. It's actually kind of awesome to have a Skype or Google Hangouts video session on the big screen rather than on the laptop. Some pictures of the event LUGM: Great conversations on Linux, open source and free software during the Corsair Hackers Reboot LUGM: Educational workstation running the GCompris suite attracted the youngest attendees of the day. Of course, face painting had to be done prior to hacking... LUGM: Nadim demoing some Linux specifics to interested visitors. Everyone was pretty busy during the whole day LUGM: The hacking competition, here pen-testing a wireless connection and access point between multiple machines LUGM: Well-prepared workstations to be able to 'upgrade' visitors' machines to any Linux operating system Final thoughts Gratefully, during the preparations for the event I was invited to leave some comments and suggestions, and the team of the LUGM did a great job. The outdoor banner was an eye-catcher, the various flyers and posters for the event were clearly written, and as far as I understood from the quick chats I had with Ish, Nadim, Nitin, Ajay, and of course others, all were very happy about the event execution. Great job, LUGM! And I'm already looking forward to the next Corsair Hackers Reboot event ... Crossing fingers: Very soon and hopefully this year again :) Update: In the media The event had been announced in local media, too. 
L'Express: Salon informatique: Hacking Challenge à Flacq

    Read the article

  • REGISTER NOW! Oracle Hardware Sales Training: Hardware and Software - Engineered to Be Sold Together

    - by Cinzia Mascanzoni
    You can now register for Oracle’s EMEA Hardware Sales Training Roadshow: "Hardware and Software - Engineered to be Sold Together!" The objective of this one-day, face-to-face, free of charge training session is to share with you and your Oracle peers the latest information on Oracle’s products and solutions and to ensure that you are fully equipped to position and sell Oracle’s integrated stack. Please find agenda, schedule, details and registration information here. The EMEA Hardware Sales Training Roadshow is intended for Oracle Partners and Oracle Sales working together. Limited seats are available on a first-come-first-serve basis, so kindly register as early as possible to reserve your seat.

    Read the article

  • CodePlex Daily Summary for Monday, August 11, 2014

    CodePlex Daily Summary for Monday, August 11, 2014Popular ReleasesSpace Engineers Server Manager: SESM V1.15: V1.15 - Updated Quartz library - Correct a bug in the new mod managment - Added a warning if you have backup enabled on a server but no static map configuredAspose for Apache POI: Missing Features of Apache POI SS - v 1.2: Release contain the Missing Features in Apache POI SS SDK in comparison with Aspose.Cells What's New ? Following Examples: Create Pivot Charts Detect Merged Cells Sort Data Printing Workbooks Feedback and Suggestions Many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.AngularGo (SPA Project Template): AngularGo.VS2013.vsix: First ReleaseTouchmote: Touchmote 1.0 beta 13: Changes Less GPU usage Works together with other Xbox 360 controls Bug fixesPublic Key Infrastructure PowerShell module: PowerShell PKI Module v3.0: Important: I would like to hear more about what you are thinking about the project? I appreciate that you like it (2000 downloads over past 6 months), but may be you have to say something? What do you dislike in the module? Maybe you would love to see some new functionality? Tell, what you think! Installation guide:Use default installation path to install this module for current user only. To install this module for all users — enable "Install for all users" check-box in installation UI ...Modern UI for WPF: Modern UI 1.0.6: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE LinkGroup.GroupName renamed to GroupKey NEW FEATURES Improved rendering on high DPI screens, including support for per-monitor DPI awareness available in Windows 8.1 (see also Per-monitor DPI awareness) New ModernProgressRing control with 8 builtin styles New LinkCommands.NavigateLink routed command New Visual Studio project templates 'Modern UI WPF App' and 'Modern UI W...ClosedXML - The easy way to OpenXML: ClosedXML 0.74.0: Multiple thread safe improvements including AdjustToContents XLHelper XLColor_Static IntergerExtensions.ToStringLookup Exception now thrown when saving a workbook with no sheets, instead of creating a corrupt workbook Fix for hyperlinks with non-ASCII Characters Added basic workbook protection Fix for error thrown, when a spreadsheet contained comments and images Fix to Trim function Fix Invalid operation Exception thrown when the formula functions MAX, MIN, and AVG referenc...SEToolbox: SEToolbox 01.042.019 Release 1: Added RadioAntenna broadcast name to ship name detail. Added two additional columns for Asteroid material generation for Asteroid Fields. Added Mass and Block number columns to main display. Added Ellipsis to some columns on main display to reduce name confusion. Added correct SE version number in file when saving. Re-added in reattaching Motor when drag/dropping or importing ships (KeenSH have added RotorEntityId back in after removing it months ago). Added option to export and r...jQuery List DragSort: jQuery List DragSort 0.5.2: Fixed scrollContainer removing deprecated use of $.browser so should now work with latest version of jQuery. 
Added the ability to return false in dragEnd to revert sort order Project changes Added nuget package for dragsort https://www.nuget.org/packages/dragsort Converted repository from SVN to MercurialBraintree Client Library: Braintree 2.32.0: Allow credit card verification options to be passed outside of the nonce for PaymentMethod.create Allow billingaddress parameters and billingaddress_id to be passed outside of the nonce for PaymentMethod.create Add Subscriptions to paypal accounts Add PaymentMethod.update Add failonduplicatepaymentmethod option to PaymentMethod.create Add support for dispute webhooksThe Mario Kart 8 App: V1.0.2.1: First Codeplex release. WINDOWS INSTALLER ONLYAspose Java for Docx4j: Aspose.Words vs Docx4j - v 1.0: Release contain the Code Comparison for Features in Docx4j SDK and Aspose.Words What's New ?Following Examples: Accessing Document Properties Add Bookmarks Convert to Formats Delete Bookmarks Working with Comments Feedback and Suggestions Many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.File System Security PowerShell Module: NTFSSecurity 2.4.1: Add-Access and Remove-Access now take multiple accoutsYourSqlDba: YourSqlDba 5.2.1.: This version improves alert message that comes a while after you install the script. First it says to get it from YourSqlDba.CodePlex.com If you don't want to update now, just-rerun the script from your installed version. To get actual version running just execute install.PrintVersionInfo. . You can go to source code / history and click on change set 72957 to see changes in the script.Manipulator: Manipulator: manipulatorXNB filetype plugin for Paint.NET: Paint.NET XNB plugin v0.4.0.0: CHANGELOG Reverted old incomplete changes. Updated library for compatibility with Paint .NET 4. Updated project to NET 4.5. Updated version to 0.4.0.0. INSTALLATION INSTRUCTIONS Extract the ZIP file to your Paint.NET\FileTypes folder.EdiFabric: Release 4.1: Changed MessageContextWix# (WixSharp) - managed interface for WiX: Release 1.0.0.0: Release 1.0.0.0 Custom UI Custom MSI Dialog Custom CLR Dialog External UIMath.NET Numerics: Math.NET Numerics v3.2.0: Linear Algebra: Vector.Map2 (map2 in F#), storage-optimized Linear Algebra: fix RemoveColumn/Row early index bound check (was not strict enough) Statistics: Entropy ~Jeff Mastry Interpolation: use Array.BinarySearch instead of local implementation ~Candy Chiu Resources: fix a corrupted exception message string Portable Build: support .Net 4.0 as well by using profile 328 instead of 344. 
.Net 3.5: F# extensions now support .Net 3.5 as well .Net 3.5: NuGet package now contains pro...babelua: 1.6.5.1: V1.6.5.1 - 2014.8.7New feature: Formatting code; Stability improvement: fix a bug that pop up error "System.Net.WebResponse EndGetResponse";New ProjectsDouDou: a little project.Dynamic MVC: Dynamically generate views from your model objects for a data centric MVC application.EasyDb - Simple Data Access: EasyDb is a simple library for data access that allows you to write less code.ExpressToAbroad: just go!!!!!Full Silverlight Web Video/Voice Conferencing: The Goal of this project is to provide complete Open Source (Voice/Video Chatting Client/Server) Modules Using SilverlightGaia: Gaia is an app for Windows plataform, Gaia is like Siri and Google Now or Betty but Gaia use only text commands.pxctest: pxctestSTACS: Career Management System for MIT by Team "STACS"StrongWorld: StrongWorld.WebSuiteXevas Tools: Xevas is a professional coders group of 'Nimbuzz'. We make all tools for worldwide users of nimbuzz at free of cost.

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.
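
    As a side note on the DACPAC mechanism mentioned above: outside of Visual Studio, a DACPAC produced by SSDT can also be deployed programmatically through the DacFx API. The following C# sketch is only an illustration of that idea; the path, connection string and database name are placeholders, and neither SSDT nor SQL Source Control requires you to write this code.

        // Hedged sketch: deploying an SSDT-built DACPAC with the DacFx API (Microsoft.SqlServer.Dac).
        // All file paths, connection strings and database names below are placeholders.
        using Microsoft.SqlServer.Dac;

        class DacpacDeployer
        {
            static void Main()
            {
                var services = new DacServices(@"Data Source=(local);Integrated Security=true");

                using (var package = DacPackage.Load(@"C:\build\MyDatabase.dacpac"))
                {
                    var options = new DacDeployOptions
                    {
                        // Refuse schema changes the comparison engine cannot apply without dropping data -
                        // exactly the situation where hand-written migration scripts become necessary.
                        BlockOnPossibleDataLoss = true
                    };

                    services.Deploy(package, "MyDatabase", upgradeExisting: true, options: options);
                }
            }
        }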

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has not only observed a transformation in the way IT systems support corporate projects but the role project portfolio management (PPM) plays in the enterprise. “15 years ago project management was the domain of project management office (PMO),” Mahmud recalls of earlier days. “But over the course of the past decade, we've seen it transform into a mission critical enterprise discipline, that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive you have direct and complete visibility into what’s kind of going on in the organization—at a level of detail that you're going to consume that information.” Now serving as Oracle’s vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle’s Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive. Profit: What do you feel are the market dynamics that are changing project management today? Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies for project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world’s workforce is going to be mobile. That’s one billion people. So the question is not if you're going to go mobile, it’s how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented. Profit: What is the role of project management in this new landscape? Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit centered to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That’s a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can’t keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, “Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?” This line of questioning can provide early warning that work and strategy are out of alignment; finding the gap between what the business needs to do and the actual performance scorecard. That’s low-hanging fruit for any executive looking to increase efficiency and save money. 
But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want make sure key projects align with corporate strategy, but also the ability to drill down into daily activity and smaller projects to make sure they line up as well. Keeping employees from working on tasks—even for a few hours—that don’t line up with corporate goals will, in many ways, become a competitive differentiator. Profit: How do all of these market challenges and shifting trends impact Oracle’s Primavera solutions and meeting customers’ needs? Mahmud: For Primavera, it’s a transformation from being a project management application to a PPM system in the enterprise. Also making that system a mission-critical application by connecting to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle’s Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what’s going on in the organization at a level that you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have that visibility that you didn’t have before from a project execution perspective. Profit: What new strategies and tools are being implemented to create a more efficient workplace for users? Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn’t make it so—you have to get people to want to participate in the system. This can’t be mandated down from the top. It simply doesn’t work that way. A truly adoptable solution is one that makes it super easy for all types users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter weight, targeted interfaces such as iOS applications, and smartphones interfaces such as for iPhone and Android platform. Profit: How does this translate into the development of Oracle’s Primavera solutions? Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices, and when married with Oracle Spatial capabilities, users can get a geographical view of what’s going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to status work via E-mail. This functionality leverages the fact that users are in E-mail system throughout the day and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live. 
Do it in such a way that it’s non-intrusive, do it in such a way that it’s easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • 5 Lessons learnt in localization / multi language support in WPF

    - by MarkPearl
    For the last few months I have been secretly working away at the second version of an application that we initially released a few years ago. It’s called MaxCut and it is a free panel/cut optimizer for the woodwork, glass and metal industry. One of the motivations for writing MaxCut was to get an end to end experience in developing an application for general consumption. From the early days of v1 of MaxCut I would get the odd email thanking me for the software and then listing a few suggestions on how to improve it. Two of the most dominant suggestions that we received were… Support for imperial measurements (the original program only supported the metric system) Multi language support (we had someone who volunteered to translate the program into Japanese for us). I am not going to dive into the Imperial to Metric support in todays blog post, but I would like to cover a few brief lessons we learned in adding support for multi-language functionality in the software. I have sectioned them below under different lessons. Lesson 1 – Build multi-language support in from the start So the first lesson I learnt was if you know you are going to do multi language support – build it in from the very beginning! One of the power points of WPF/Silverlight is data binding in XAML and so while it wasn’t to painful to retro fit multi language support into the programing, it was still time consuming and a bit tedious to go through mounds and mounds of views and would have been a minor job to have implemented this while the form was being designed. Lesson 2 – Accommodate for varying word lengths using Grids The next lesson was a little harder to learn and was learnt a bit further down the road in the development cycle. We developed everything in English, assuming that other languages would have similar character length words for equivalent meanings… don’t!. A word that is short in your language may be of varying character lengths in other languages. Some language like Dutch and German allow for concatenation of nouns which has the potential to create really long words. We picked up a few places where our views had been structured incorrectly so that if a word was to long it would get clipped off or cut out. To get around this we began using the WPF grid extensively with column widths that would automatically expand if they needed to. Generally speaking the grid replacement got round this hurdle, and if in future you have a choice between a stack panel or a grid – think twice before going for the easier option… often the grid will be a bit more work to setup, but will be more flexible. Lesson 3 – Separate the separators Our initial run through moving the words to a resource dictionary led us to make what I thought was one potential mistake. If we had a label like the following… “length : “ In the resource dictionary we put it as a single entry. This is fine until you start using a word more than once. For instance in our scenario we used the word “length’ frequently. with different variations of the word with grammar and separators included in the resource we ended up having what I would consider a bloated dictionary. When we removed the separators from the words and put them as their own resources we saw a dramatic reduction in dictionary size… so something that looked like this… “length : “ “length. 
“ “length?” Was reduced to… “length” “:” “?” “.” While this may not seem like a reduction at first glance, consider that the separators “:?.” are used everywhere and suddenly you see a real reduction in bloat. Lesson 4 – Centralize the Language Dictionary This lesson was learnt at the very end of the project, after we already had a release candidate out in the wild. Because our translations would be done on a volunteer basis and remotely, we wanted it to be really simple for someone to translate our program into another language. As a common design practice we had tiered the application so that we had a business logic layer, a UI layer, etc. The problem was that in several of these layers we had resource files specific to that layer. What this resulted in was us having multiple resource files that we would need to send to our translators. To add to our problems, some of the wordings were duplicated in different resource files, which would result in additional frustration for our translators as they felt they were duplicating work. Eventually the workaround was to make a separate project in VS2010 with just the language translations. We then exposed the dictionary as public within this project and referenced it from the other projects within the solution. This solved our problem, as now we had a central dictionary and could remove any duplications. Lesson 5 – Make a dummy translation file to test that you haven’t missed anything The final lesson learnt about multi-language support in WPF: when checking whether you have forgotten to translate anything in the inline code, make a test resource file with dummy data. Ideally you want the data for each word to be identical. In our instance we made one which had all the resource key values pointing to a value of “test”. This allowed us to point the language file to our test resource file and very quickly browse through the program to see if we had missed any linking. The alternative to this approach is to have two language files and swap between the two while running the program to make sure that you haven’t missed anything, but the downside of the dual-language-file approach is that it is much harder to spot a mistake if everything is different – almost like playing Where’s Wally / Waldo. It is much easier to spot variance in uniformity – meaning when you put the “test” keyword for everything, anything that didn’t say “test” stuck out like a sore thumb. So these are my top five lessons learnt on implementing multi-language support in WPF. Feel free to make any suggestions in the comments section if you feel something is more important than one of these or if I got it wrong!
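
    To make Lessons 3 and 4 a little more concrete, here is a minimal C# sketch of the idea: one central resource assembly holds all the words, separators live under their own keys, and labels are composed at runtime. The base name "MaxCut.Languages.Strings" and the key names are assumptions made for illustration, not the actual MaxCut resource layout.

        // Minimal sketch of a centralized language dictionary with separators stored separately.
        // The resource base name and keys are invented for illustration.
        using System.Globalization;
        using System.Reflection;
        using System.Resources;

        public static class Labels
        {
            // Lesson 4: a single ResourceManager exposed from one shared "language" project.
            private static readonly ResourceManager Resources =
                new ResourceManager("MaxCut.Languages.Strings", Assembly.GetExecutingAssembly());

            // Lesson 3: "Length", "Colon" and "QuestionMark" are separate keys,
            // so the word "length" is stored and translated exactly once.
            public static string LengthPrompt(CultureInfo culture)
            {
                return Resources.GetString("Length", culture)
                     + Resources.GetString("Colon", culture) + " ";
            }

            public static string LengthQuestion(CultureInfo culture)
            {
                return Resources.GetString("Length", culture)
                     + Resources.GetString("QuestionMark", culture);
            }
        }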

    Read the article

  • I Clobbered a Leopard with a Window Last Night

    - by D'Arcy Lussier
    I’ve had my 15” Mac Book Pro for a little over a year now, and its hands-down the best laptop I’ve ever owned…hardware wise. And I tried, I really really tried, to like OSX. I even bought Parallels so I could run Windows 7 and all my development tools while still trying to live in an OSX world. But in the end, I missed Windows too much. There were just too many shortcomings with OSX that kept me from being productive. For one thing, Office for Mac is *not* Office for Windows. The applications are written by different teams, and Excel on the Mac is just different enough to be painful. The VM experience was adequate, but my MBP would heat up like crazy when running it and the experience trying to get Windows apps to interact with an OSX file system was awkward. And I found I was in the VM more than I thought I’d be. iMovie is not as easy to use for doing simple movie editing as Windows Movie Maker. There’s no free blog editing software for OSX that’s on par with Windows Live Writer. And really, all I was using OSX for was Twitter (which I can use a Windows client for) and web browsing (also something Windows can provide obviously). So I had to ask myself – why am I forcing myself to use an operating system I don’t like, on a laptop that can support Windows 7? And so I paved my MBP and am happily running Windows 7 on it…and its fantastic! All the good stuff with the hardware is still there with the goodness of Win 7. Happy happy. I did run into some snags doing this though, and that’s really what this blog post is about – things to be aware of if you want to install Win 7 directly on your MBP metal. First, Ensure You Have Your Original Mac Install Disk This was a warning my buddy Dylan, who’s been running Win 7 on his MBP for a while now, gave me early on. The reason you need that original disk is that the hardware drivers you need are all located there. Apparently you can’t easily download them, so make sure you have them ahead of time. Second, Forget BootCamp The only reason you need BootCamp is if you still want the option to boot into OSX. If you don’t, then you don’t need BootCamp. In fact, you don’t even need BootCamp to install Win 7. What you *will* need though is a DVD with Win 7 burnt on it. Apple doesn’t support bootable USB drives. Well, actually they do for Mac Book Airs which don’t come with optical drives…but to get it working you’ll need to edit a system file of BootCamp so your make of MBP is included in an XML document, and even then you *still* are using BootCamp meaning you’ll be making an OSX partition. So don’t worry about BootCamp, just burn a Windows 7 disc, put it into the DVD drive, and restart your MBP. Third, Know The Secret Commands So after putting in the Windows 7 DVD and restarting your MBP, you’ll want to hold down the ‘C’ key during boot up. This tells the MBP that it should boot from the DVD drive instead of the hard drive. Interestingly, it appears you don’t have to do this if its the Mac OSX install disc (more on that in a second), but regardless – hold down C and Windows will start the install process. Next up is the partition process. You’ll notice that there’s a partition called ETI or something like that. This has to do with the drive format that Apple uses and how they partition their system drives. What I did – I blew it away! At first I didn’t, but I was told I couldn’t install Windows on the remaining space due to the different drive format. Blowing away the ETI partition (and all other partitions) allowed me to continue the Windows install. 
*REMEMBER –  No warranty is provided or implied, just telling you what I did and how I got it to work. Ok, so now Windows is installed and I’m rebooting. Everything looks good, but I need drivers! So I put in the OSX install DVD and run the BootCamp assistant which installs all the Windows drivers I need. Fantastic! Oh, I need to restart – no problem. OH NO, PROBLEM! I left the OSX install DVD in the drive and now the MBP wants to boot from the drive and install OSX! I’m not holding down the C key, what the heck?! Ok, well there must be a way to eject this disk…hmm…no physical button on the side…the eject button doesn’t seem to work on the keyboard…no little pin hole to insert something to force the disc out…well what the…?! It turns out, if you want to eject a disc at boot up, you need (and I kid you not) to plug a mouse into the laptop and hold down the right-click button while its booting. This ejected the disc for me. Seriously. Finally, Things You Should Be Aware Of Once you have Windows up and running there’s a few things you need to be aware of, mainly new keyboard shortcuts. For instance, on the Mac keyboard there is no Home, End, PageUp or PageDown. There’s also no obvious way to do something like select large amounts of text (like you would by holding Shift-Home at the end of a line of text for instance). So here’s some shortcuts you need to know: Home – fn + left arrow End – fn + right arrow Select a line of text as you would with the Home key – Shift + fn + left arrow Select a line of text as you would with the End key – Shift + fn + right arrow Page Up – fn + up arrow Page Down – fn + down arrow Also, you’ll notice that the awesome Mac track pad doesn’t respond to taps as clicks. No fear, this is just a setting that needs to be altered in the BootCamp control panel (that controls the Mac Hardware-specific settings within Windows, you can access it easily from the system tray icon) One other thing, battery life seems a bit lower than with OSX, but then again I’m also doing more than Twitter or web browsing on this thing now. Conclusion My laptop runs awesome now that I have Windows 7 on there. It’s obviously up to individual taste, but for me I just didn’t see benefits to living in an OSX world when everything I needed lived in Windows. And also, I finally am back to an operating system that doesn’t require me to eject a USB drive before physically removing it! It’s 2012 folks, how has this not been fixed?! D

    Read the article

  • (Unity)Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when i was creating my rivers. I would set the River's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here's 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used: This code here creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and it's 4 vertices public DTileMap (int size_x, int size_y) { this.size_x = size_x; this.size_y = size_y; //Initialize Map_Data Array of Tile Objects map_data = new Tile[size_x, size_y]; for (int j = 0; j < size_y; j++) { for (int i = 0; i < size_x; i++) { map_data [i, j] = new Tile (); map_data[i,j].coordinate.x = (int)i; map_data[i,j].coordinate.y = (int)j; map_data[i,j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -j * GTileMap.TileMap.tileSize); map_data[i,j].vertices[1] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[3] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); } } This code sets the river tiles to height 0 foreach (Tile t in map_data) { if (t.realType == "Water") { t.vertices[0].y = 0f; t.vertices[1].y = 0f; t.vertices[2].y = 0f; t.vertices[3].y = 0f; } } And below is the code to generate the actual graphics from the data: public void BuildMesh () { DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z); int numTiles = size_x * size_z; int numTris = numTiles * 2; int vsize_x = size_x + 1; int vsize_z = size_z + 1; int numVerts = vsize_x * vsize_z; // Generate the mesh data Vector3[] vertices = new Vector3[ numVerts ]; Vector3[] normals = new Vector3[numVerts]; Vector2[] uv = new Vector2[numVerts]; int[] triangles = new int[ numTris * 3 ]; int x, z; for (z=0; z < vsize_z; z++) { for (x=0; x < vsize_x; x++) { normals [z * vsize_x + x] = Vector3.up; uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z); } } for (z=0; z < vsize_z; z+=1) { for (x=0; x < vsize_x; x+=1) { if (x == vsize_x - 1 && z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3]; } else if (z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2]; } else if (x == vsize_x - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1]; } else { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0]; vertices [z * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [1]; vertices [(z+1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2]; vertices [(z+1) * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [3]; } } } } for (z=0; 
z < size_z; z++) { for (x=0; x < size_x; x++) { int squareIndex = z * size_x + x; int triOffset = squareIndex * 6; triangles [triOffset + 0] = z * vsize_x + x + 0; triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0; triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 3] = z * vsize_x + x + 0; triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 4] = z * vsize_x + x + 1; } } // Create a new Mesh and populate with the data Mesh mesh = new Mesh (); mesh.vertices = vertices; mesh.triangles = triangles; mesh.normals = normals; mesh.uv = uv; // Assign our mesh to our filter/renderer/collider MeshFilter mesh_filter = GetComponent<MeshFilter> (); MeshCollider mesh_collider = GetComponent<MeshCollider> (); mesh_filter.mesh = mesh; mesh_collider.sharedMesh = mesh; calculateMeshTangents (mesh); BuildTexture (map); } If this looks familiar to you, its because i got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.
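
    (For anyone trying to reproduce the flip: a small, hypothetical debug helper like the one below - not part of the original code - can log the data-layer corner of a tile next to an unflipped mapping of the same tile, which makes the effect of the negated z coordinate (-j * tileSize) visible. It assumes the DTileMap/GTileMap names from the snippets above.)

        // Hypothetical debugging sketch, assuming the DTileMap/GTileMap types shown above.
        using UnityEngine;

        public static class TileDebug
        {
            public static void DumpTile(int x, int z)
            {
                var tile = DTileMap.DTileMap.map_data[x, z];
                float tileSize = GTileMap.TileMap.tileSize;

                // Corner 0 as created in the data layer (note the -j * tileSize used above).
                Debug.Log("tile (" + x + "," + z + ") data corner 0: " + tile.vertices[0]);

                // The same corner if world z grew in the same direction as j (no negation).
                Vector3 unflipped = new Vector3(x * tileSize, tile.Height, z * tileSize);
                Debug.Log("tile (" + x + "," + z + ") unflipped corner 0: " + unflipped);
            }
        }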

    Read the article

  • Benefits of Behavior Driven Development

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/07/26/benefits-of-behavior-driven-development.aspxContinuing my previous article on BDD, I wanted to point out some benefits of BDD and since BDD is an extension of Test Driven Development (TDD), you get those as well. I’ll add another article on some possible downsides of this approach. There are many articles about the benefits of TDD and they apply to BDD. I’ve pointed out some here and copied some of the main points for each article, but there are many more including the book The Art of Unit Testing by Roy Osherove. http://geekswithblogs.net/leesblog/archive/2008/04/30/the-benefits-of-test-driven-development.aspx (Lee Brandt) Stability Accountability Design Ability Separated Concerns Progress Indicator http://tddftw.com/benefits-of-tdd/ Help maintainers understand the intention behind the code Bring validation and proper data handling concerns to the forefront. Writing the tests first is fun. Better APIs come from writing testable code. TDD will make you a better developer. http://www.slideshare.net/dhelper/benefit-from-unit-testing-in-the-real-world (from Typemock). Take a look at the slides, especially the extra time required for TDD (slide 10) and the next one of the bugs avoided using TDD (slide 11). Less bugs (slide 11) about testing and development (13) Increase confidence in code (14) Fearlessly change your code (14) Document Requirements (14) also see http://visualstudiomagazine.com/articles/2013/06/01/roc-rocks.aspx Discover usability issues early (14) All these points and articles are great and there are many more. The following are my additions to the benefits of BDD from using it in real projects for my company. July 2013 on MSDN - Behavior-Driven Design with SpecFlow Scott Allen did a very informative TDD and MVC module, but to me he is doing BDDCompile and Execute Requirements in Microsoft .NET ~ Video from TechEd 2012 Communication I was working through a complicated task that the decision tree kept growing. After writing out the Given, When, Then of the scenario, I was able tell QA what I had worked through for their initial test cases. They were able to add from there. It is also useful to use this language with other developers, managers, or clients to help make informed decisions on if it meets the requirements or if it can simplified to save time (money). Thinking through solutions, before starting to code This was the biggest benefit to me. I like to jump into coding to figure out the problem. Many times I don't understand my path well enough and have to do some parts over. A past supervisor told me several times during reviews that I need to get better at seeing "the forest for the trees". When I sit down and write out the behavior that I need to implement, I force myself to think things out further and catch scenarios before they get to QA. A co-worker that is new to BDD and we’ve been using it in our new project for the last 6 months, said “It really clarifies things”. It took him awhile to understand it all, but now he’s seeing the value of this approach (yes there are some downsides, but that is a different issue). Developers’ Confidence This is huge for me. With tests in place, my confidence grows that I won’t break code that I’m not directly changing. In the past, I’ve worked on projects with out tests and we would frequently find regression bugs (or worse the users would find them). That isn’t fun. 
We don’t catch all problems with the tests, but when QA catches one, I can write a test to make sure it doesn’t happen again. It’s also good for releasing code, telling your manager that it’s good to go. As time goes on and the code gets older, how confident are you that checking in code won’t break something somewhere else? Merging code - pre release confidence If you’re merging code a lot, it’s nice to have the tests to help ensure you didn’t merge incorrectly. Interrupted work I had a task that I started and planned out, then was interrupted for a month because of different priorities. When I started it up again and un-shelved my changes, I had the BDD specs and they helped me remember what I had figured out and what was left to do. It would have been much more difficult without the specs and tests. Testing and verifying complicated scenarios Sometimes in the UI there are scenarios that get tricky, because there are a lot of steps involved (click here to open the dialog, enter the information, make sure it’s valid, when I click cancel it should do {x}, when I click ok it should close and do {y}, then do this, etc.…). With BDD I can avoid some of the mouse clicking: define the scenarios and have them re-run quickly, without using a mouse. UI testing is still needed, but this helps a bunch. The same can be true for tricky server logic. Documentation of Assumptions and Specifications The BDD spec tests (Jasmine or SpecFlow or another tool) also work as documentation and show what the original developer was trying to accomplish. It’s not a separate Word document, so developers will keep it up to date instead of letting it become obsolete. What happens if you leave the project (consulting, new job, etc.) with no specs or at least good comments in the code? Sometimes I think of a new scenario, so I add a failing spec and continue in the same stream of thought (and don’t forget it because it was on a piece of paper or in a notepad). Then later I can come back, handle it, and have it documented. Jasmine tests and JavaScript –> help deal with the non-typed system I like JavaScript, but I also dislike working with JavaScript. I miss C# telling me if a property doesn’t actually exist at build time. I like the idea of TypeScript and hope to use it more in the future. I also use KnockoutJs, which has observables that need to be called with a trailing (), since the observable is a function. It’s hard to remember when to use () or not, and the Jasmine specs/tests help ensure the correct usage.   This should give you an idea of the benefits that I see in using the BDD approach. I’m sure there are more. It takes a lot of practice, investment and experimentation to figure out how to approach this and to get comfortable with it. I agree with Scott Allen in the video I linked above: “Remember that TDD can take some practice. So if you're not doing test-driven design right now? You can start and practice and get better. And you'll reach a point where you'll never want to get back.”
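
    For readers new to the Given/When/Then style mentioned above, here is a hedged sketch of what a SpecFlow binding class can look like in C#. The scenario wording, step names and the simple in-memory state are invented for illustration; they are not taken from the project described in this article.

        // Hedged sketch of SpecFlow step bindings for a Given/When/Then scenario.
        // Corresponding feature text (for reference):
        //   Given the edit dialog is open
        //   When I click cancel
        //   Then the dialog is closed and no changes are saved
        using TechTalk.SpecFlow;

        [Binding]
        public class EditDialogSteps
        {
            private bool _dialogOpen;
            private bool _changesSaved;

            [Given(@"the edit dialog is open")]
            public void GivenTheEditDialogIsOpen()
            {
                _dialogOpen = true;      // stand-in for launching the real view model
                _changesSaved = false;
            }

            [When(@"I click cancel")]
            public void WhenIClickCancel()
            {
                _dialogOpen = false;     // stand-in for invoking the cancel command
            }

            [Then(@"the dialog is closed and no changes are saved")]
            public void ThenTheDialogIsClosedAndNoChangesAreSaved()
            {
                if (_dialogOpen || _changesSaved)
                    throw new System.Exception("Cancel should close the dialog without saving.");
            }
        }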

    Read the article

  • Different Not Automatically Implies Better

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/11/05/154556.aspxRecently I was digging deeper why some WCF hosted workflow application did consume quite a lot of memory although it did basically only load a xaml workflow. The first tool of choice is Process Explorer or even better Process Hacker (has more options and the best feature copy&paste does work). The three most important numbers of a process with regards to memory are Working Set, Private Working Set and Private Bytes. Working set is the currently consumed physical memory (parts can be shared between processes e.g. loaded dlls which are read only) Private Working Set is the physical memory needed by this process which is not shareable Private Bytes is the number of non shareable which is only visible in the current process (e.g. all new, malloc, VirtualAlloc calls do create private bytes) When you have a bigger workflow it can consume under 64 bit easily 500MB for a 1-2 MB xaml file. This does not look very scalable. Under 64 bit the issue is excessive private bytes consumption and not the managed heap. The picture is quite different for 32 bit which looks a bit strange but it seems that the hosted VB compiler is a lot less memory hungry under 32 bit. I did try to repro the issue with a medium sized xaml file (400KB) which does contain 1000 variables and 1000 if which can be represented by C# code like this: string Var1; string Var2; ... string Var1000; if (!String.IsNullOrEmpty(Var1) ) { Console.WriteLine(“Var1”); } if (!String.IsNullOrEmpty(Var2) ) { Console.WriteLine(“Var2”); } ....   Since WF is based on VB.NET expressions you are bound to the hosted VB.NET compiler which does result in (x64) 140 MB of private bytes which is ca. 140 KB for each if clause which is quite a lot if you think about the actually present functionality. But there is hope. .NET 4.5 does allow now C# expressions for WF which is a major step forward for all C# lovers. I did create some simple patcher to “cross compile” my xaml to C# expressions. Lets look at the result: C# Expressions VB Expressions x86 x86 On my home machine I have only 32 bit which gives you quite exactly half of the memory consumption under 64 bit. C# expressions are 10 times more memory hungry than VB.NET expressions! I wanted to do more with less memory but instead it did consume a magnitude more memory. That is surprising to say the least. The workflow does initialize in about the same time under x64 and x86 where the VB code does it in 2s whereas the C# version needs 18s. Also nearly ten times slower. That is a too high price to pay for any bigger sized xaml workflow to convert from VB.NET to C# expressions. If I do reduce the number of expressions to 500 then it does need 400MB which is about half of the memory. It seems that the cost per if does rise linear with the number of total expressions in a xaml workflow.  Expression Language Cost per IF Startup Time C# 1000 Ifs x64 1,5 MB 18s C# 500 Ifs x64 750 KB 9s VB 1000 Ifs x64 140 KB 2s VB 500 Ifs x64 70 KB 1s Now we can directly compare two MS implementations. It is clear that the VB.NET compiler uses the same underlying structure but it has much higher offset compared to the highly inefficient C# expression compiler. I have filed a connect bug here with a harsher wording about recent advances in memory consumption. The funniest thing is that one MS employee did give an Azure AppFabric demo around early 2011 which was so slow that he needed to investigate with xperf. 
He was after startup time and the call stacks with regards to VB.NET expression compilation were remarkably similar. In fact I only found this post by googling for parts of my call stacks. … “C# expressions will be coming soon to WF, and that will have different performance characteristics than VB” … What did he know Jan 2011 what I did no know until today? ;-). He knew that C# expression will come but that they will not be automatically have better footprint. It is about time to fix that. In its current state C# expressions are not usable for bigger workflows. That also explains the headline for today. You can cheat startup time by prestarting workflows so that the demo looks nice and snappy but it does hurt scalability a lot since you do need much more memory than necessary. I did find the stacks by enabling virtual allocation tracking within XPerf which is still the best tool out there. But first you need to look at your process to check where the memory is hiding: For the C# Expression compiler you do not need xperf. You can directly dump the managed heap and check with a profiler of your choice. But if the allocations are happening on the Private Data ( VirtualAlloc ) you can find it with xperf. There is a nice video on channel 9 explaining VirtualAlloc tracking it in greater detail. If your data allocations are on the Heap it does mean that the C/C++ runtime did create a heap for you where all malloc, new calls do allocate from it. You can enable heap tracing with xperf and full call stack support as well which is doable via xperf like it is shown also on channel 9. Or you can use WPRUI directly: To make “Heap Usage” it work you need to set for your executable the tracing flags (before you start it). For example devenv.exe HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe DWORD TracingFlags 1 Do not forget to disable it after you did complete profiling the process or it will impact the startup time quite a lot. You can with xperf attach directly to a running process and collect heap allocation information from a gone wild process. Very handy if you need to find out what a process was doing which has arrived in a funny state. “VirtualAlloc usage” does work without explicitly enabling stuff for a specific process and is always on machine wide. I had issues on my Windows 7 machines with the call stack collection and the latest Windows 8.1 Performance Toolkit. I was told that WPA from Windows 8.0 should work fine but I do not want to downgrade.
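As a convenience, here is a small C# sketch of the TracingFlags step described above, using the .NET registry API instead of regedit. This is an illustration only: devenv.exe is just the example executable from the post, it must run elevated because it writes under HKLM, and the value should be removed again after profiling, as noted above.

using System;
using Microsoft.Win32;

class HeapTracingFlag
{
    // Key path from the post: Image File Execution Options\<exe name>, DWORD TracingFlags = 1.
    const string KeyFormat =
        @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\{0}";

    // Enable heap tracing for an executable, e.g. "devenv.exe". Requires elevation.
    static void Enable(string exeName)
    {
        Registry.SetValue(string.Format(KeyFormat, exeName), "TracingFlags", 1, RegistryValueKind.DWord);
        Console.WriteLine("TracingFlags set for {0}; remember to remove it after profiling.", exeName);
    }

    // Remove the value again once profiling is done, so startup time is not impacted.
    static void Disable(string exeName)
    {
        string subKey = string.Format(@"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\{0}", exeName);
        using (var key = Registry.LocalMachine.OpenSubKey(subKey, writable: true))
        {
            if (key != null)
            {
                key.DeleteValue("TracingFlags", throwOnMissingValue: false);
            }
        }
        Console.WriteLine("TracingFlags removed for {0}.", exeName);
    }

    static void Main(string[] args)
    {
        if (args.Length == 2 && args[0] == "off") Disable(args[1]);
        else if (args.Length == 1) Enable(args[0]);
        else Console.WriteLine("Usage: HeapTracingFlag <exe name> | HeapTracingFlag off <exe name>");
    }
}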

    Read the article

  • I'm a contract developer and I think I'm about to get screwed [closed]

    - by kagaku
    I do contract development on the side. You could say that I'm a contract developer? Considering I've only ever had one client I'd say that's not exactly the truth - more like I took a side job and needed some extra cash. It started out as a "rebuild our website and we'll pay you $10k" type project. Once that was complete (a bit over schedule, but certainly not over budget), the company hired me on as a "long term support" contractor. The contract is to go from March of this year, expiring on December 31st of this year - 10 months. Over which a payment is to be paid on the 30th of each month for a set amount. I've been fulfilling my end of the contract on all points - doing server maintenence, application and database changes, doing huge rush changes and pretty much just going above and beyond. Currently I'm in the middle of development of an iPhone mobile application (PhoneGap based) which is nearing completion (probably 3-4 weeks from submission). It has not been all peaches and flowers though. Each and every month when my paycheck comes due, there always seems to be an issue of sorts. These issues did not occur during the initial project, only during the support contract. The actual contract states that my check should be mailed out on the 30th of the month. I have received my check on time approximately once (on time being about 2-3 days within the 30th). I've received my paycheck as late as the 15th of the next month - over two weeks late. I've put up with it because I need the paycheck. There have been promises and promises of "we'll send it out on time next time! I promise" - only to receive it just as late the next month. When I ask about payment they give me a vibe like "why are you only worried about money?" - unfortunately I don't have the luxury of not worrying about money. The last straw was with my August payment, which should have been mailed on August 30th. I received it on September 12th. The reason for the delay? "USPS is delaying it man! we sent it out on the 1st!" is the reason I got. When I finally got the check in the mail, the postage on the envelope was marked September 10th - the date it was run through the postage machine. I've been outright lied to, at this point. I carry on working, because again - I need a paycheck. I orchestrated the move of our application to a new server, developed a bunch of new changes and continued work on the iPhone app. All told I probably went over my hourly allotment (I'm paid for 40 hours a month, I probably put in at least 50). On Saturday, the 1st, I gave the main contact at the company (a company of 3, by the way - this is not some big corporation) a ring and filled him in on the status of my work for the past two weeks. Unusually I hadn't heard from him since the middle of September. His response was "oh... well, that is nice and uh.. good job. well, we've been talking within the company about things and we've certainly got some decisions ahead of us..." - not verbatim but you get the idea (I hope?). I got out of this conversation that the site is not doing very well (which it's not) and they're considering pulling the plug. Crap, this contract is going to end early - there goes Christmas! Fine, that's alright, no problem. I'll get paid for the last months work and call it a day. Unfortunately I still haven't gotten last months check, and I'm getting dicked around now. "Oh.. we had problems transferring funds, we'll try and mail it out tomorrow" and "I left a VM with the finance guy, but I can't get ahold of him". 
So I'm getting the feeling I'm not getting paid for all the work I put in for September. This is obviously breach of contract, and I am pissed. Thinking irrationally, I considered changing all their passwords and holding their stuff hostage. Before I think it through (by the way, I am NOT going to do this, realized it would probably get me in trouble), I go and try some passwords for our various accounts. Google Apps? Oh, I'm no longer administrator here. Godaddy? Whoops, invalid password. Disqus? Nope, invalid password here too. Google Adsense / Analytics? Invalid password. Dedicated server account manager? Invalid password. Now, I have the servers root password - I just built the box last week and haven't had a chance to send the guy the root password. Wasn't in a rush, I manage the server and they never touch it. Now all of a sudden all the passwords except this one are changed; the writing is on the wall - I am out. Here's the conundrum. I have the root password, they do not. If I give them this password all the leverage I have is gone, out the door and out of my hands. During this argument of why am I not getting paid the guy sends me an email saying "oh by the way, what's the root username and password to the server?". Considering he knows absolutely nothing, I gave him an "admin" account which really has almost no rights. I still have exclusive access to the server, I just don't know where to go. I can hold their data hostage, but I'm almost positive this is the wrong thing to do. I'd consider it blackmail, regardless of whether or not I have gotten paid yet. I can "break" something on the server and give them the whole "well, if you were paying me I could fix it!" spiel. This works from a "well he's not holding their stuff hostage" point of view, but what stops them from hiring some one else to just fix the issue at hand? For all I know the guys nephew is a "l33t hax0r" and can figure it out for free. I can give in, document as much as I can and take him to small claims court. This is breach of contract, I'm not getting paid. I have a case, right? ???? Does anyone have any experience in this? What can I do? What are my options? I'm broke, I can't afford a lawyer and I can barely afford not getting this paycheck. My wife doesn't work (I work two jobs so she doesn't have to work - we have a 1 year old) and is already looking at getting a part time job to cover the bills. Long term we'll be fine, but this has pissed me off beyond belief! Help me out, I'm about to get screwed.

    Read the article

  • Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when I was creating my rivers. I would set the River's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here's 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used: This code here creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and it's 4 vertices public DTileMap (int size_x, int size_y) { this.size_x = size_x; this.size_y = size_y; //Initialize Map_Data Array of Tile Objects map_data = new Tile[size_x, size_y]; for (int j = 0; j < size_y; j++) { for (int i = 0; i < size_x; i++) { map_data [i, j] = new Tile (); map_data[i,j].coordinate.x = (int)i; map_data[i,j].coordinate.y = (int)j; map_data[i,j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -j * GTileMap.TileMap.tileSize); map_data[i,j].vertices[1] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[3] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); } } This code sets the river tiles to height 0 foreach (Tile t in map_data) { if (t.realType == "Water") { t.vertices[0].y = 0f; t.vertices[1].y = 0f; t.vertices[2].y = 0f; t.vertices[3].y = 0f; } } And below is the code to generate the actual graphics from the data: public void BuildMesh () { DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z); int numTiles = size_x * size_z; int numTris = numTiles * 2; int vsize_x = size_x + 1; int vsize_z = size_z + 1; int numVerts = vsize_x * vsize_z; // Generate the mesh data Vector3[] vertices = new Vector3[ numVerts ]; Vector3[] normals = new Vector3[numVerts]; Vector2[] uv = new Vector2[numVerts]; int[] triangles = new int[ numTris * 3 ]; int x, z; for (z=0; z < vsize_z; z++) { for (x=0; x < vsize_x; x++) { normals [z * vsize_x + x] = Vector3.up; uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z); } } for (z=0; z < vsize_z; z+=1) { for (x=0; x < vsize_x; x+=1) { if (x == vsize_x - 1 && z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3]; } else if (z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2]; } else if (x == vsize_x - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1]; } else { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0]; vertices [z * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [1]; vertices [(z+1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2]; vertices [(z+1) * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [3]; } } } } for (z=0; 
z < size_z; z++) { for (x=0; x < size_x; x++) { int squareIndex = z * size_x + x; int triOffset = squareIndex * 6; triangles [triOffset + 0] = z * vsize_x + x + 0; triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0; triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 3] = z * vsize_x + x + 0; triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 4] = z * vsize_x + x + 1; } } // Create a new Mesh and populate with the data Mesh mesh = new Mesh (); mesh.vertices = vertices; mesh.triangles = triangles; mesh.normals = normals; mesh.uv = uv; // Assign our mesh to our filter/renderer/collider MeshFilter mesh_filter = GetComponent<MeshFilter> (); MeshCollider mesh_collider = GetComponent<MeshCollider> (); mesh_filter.mesh = mesh; mesh_collider.sharedMesh = mesh; calculateMeshTangents (mesh); BuildTexture (map); } If this looks familiar to you, its because i got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.
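Not an answer, just a hedged sketch of one way to narrow this down: compute every tile corner through a single helper, so the data structure and BuildMesh cannot disagree about which way +z runs. The names below (TileCorners, the corner ordering) are invented and illustrative, and the triangle winding in BuildMesh would still need to match whatever convention is picked; the point is only that negating j on the data side while iterating z upward on the mesh side is a plausible source of the 180-degree mirroring.

using UnityEngine;

public static class TileCorners
{
    // Single source of truth for where tile (i, j) puts its four corners.
    // cornerIndex: 0 = (i, j), 1 = (i+1, j), 2 = (i, j+1), 3 = (i+1, j+1).
    // Using +j consistently (or -j consistently) on both the data side and the
    // mesh side keeps the tile grid and the rendered terrain in the same orientation.
    public static Vector3 Get(int i, int j, float height, float tileSize, int cornerIndex)
    {
        int dx = cornerIndex % 2; // 0 or 1: west/east edge of the tile
        int dz = cornerIndex / 2; // 0 or 1: south/north edge of the tile
        return new Vector3((i + dx) * tileSize, height, (j + dz) * tileSize);
    }
}

// Usage sketch: both DTileMap and BuildMesh would call the same helper, e.g.
// map_data[i, j].vertices[c] = TileCorners.Get(i, j, map_data[i, j].Height, GTileMap.TileMap.tileSize, c);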

    Read the article

  • What's new in EJB 3.2 ? - Java EE 7 chugging along!

    - by arungupta
    EJB 3.1 added a whole ton of features for simplicity and ease-of-use such as @Singleton, @Asynchronous, @Schedule, Portable JNDI name, EJBContainer.createEJBContainer, EJB 3.1 Lite, and many others. As part of Java EE 7, EJB 3.2 (JSR 345) is making progress and this blog will provide highlights from the work done so far. This release has been particularly kept small but include several minor improvements and tweaks for usability. More features in EJB.Lite Asynchronous session bean Non-persistent EJB Timer service This also means these features can be used in embeddable EJB container and there by improving testability of your application. Pruning - The following features were made Proposed Optional in Java EE 6 and are now made optional. EJB 2.1 and earlier Entity Bean Component Contract for CMP and BMP Client View of an EJB 2.1 and earlier Entity Bean EJB QL: Query Language for CMP Query Methods JAX-RPC-based Web Service Endpoints and Client View The optional features are moved to a separate document and as a result EJB specification is now split into Core and Optional documents. This allows the specification to be more readable and better organized. Updates and Improvements Transactional lifecycle callbacks in Stateful Session Beans, only for CMT. In EJB 3.1, the transaction context for lifecyle callback methods (@PostConstruct, @PreDestroy, @PostActivate, @PrePassivate) are defined as shown. @PostConstruct @PreDestroy @PrePassivate @PostActivate Stateless Unspecified Unspecified N/A N/A Stateful Unspecified Unspecified Unspecified Unspecified Singleton Bean's transaction management type Bean's transaction management type N/A N/A In EJB 3.2, stateful session bean lifecycle callback methods can opt-in to be transactional. These methods are then executed in a transaction context as shown. @PostConstruct @PreDestroy @PrePassivate @PostActivate Stateless Unspecified Unspecified N/A N/A Stateful Bean's transaction management type Bean's transaction management type Bean's transaction management type Bean's transaction management type Singleton Bean's transaction management type Bean's transaction management type N/A N/A For example, the following stateful session bean require a new transaction to be started for @PostConstruct and @PreDestroy lifecycle callback methods. @Statefulpublic class HelloBean {   @PersistenceContext(type=PersistenceContextType.EXTENDED)   private EntityManager em;    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)   @PostConstruct   public void init() {        myEntity = em.find(...);   }   @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)    @PostConstruct    public void destroy() {        em.flush();    }} Notice, by default the lifecycle callback methods are not transactional for backwards compatibility. They need to be explicitly opt-in to be made transactional. Opt-out of passivation for stateful session bean - If your stateful session bean needs to stick around or it has non-serializable field then the bean can be opt-out of passivation as shown. @Stateful(passivationCapable=false)public class HelloBean {    private NonSerializableType ref = ... . . .} Simplified the rules to define all local/remote views of the bean. For example, if the bean is defined as: @Statelesspublic class Bean implements Foo, Bar {    . . .} where Foo and Bar have no annotations of their own, then Foo and Bar are exposed as local views of the bean. The bean may be explicitly marked @Local as @Local@Statelesspublic class Bean implements Foo, Bar {    . . 
.} then this is the same behavior as explained above, i.e. Foo and Bar are local views. If the bean is marked @Remote as: @Remote@Statelesspublic class Bean implements Foo, Bar {    . . .} then Foo and Bar are remote views. If an interface is marked @Local or @Remote then each interface need to be explicitly marked explicitly to be exposed as a view. For example: @Remotepublic interface Foo { . . . }@Statelesspublic class Bean implements Foo, Bar {    . . .} only exposes one remote interface Foo. Section 4.9.7 from the specification provide more details about this feature. TimerService.getAllTimers is a newly added convenience API that returns all timers in the same bean. This is only for displaying the list of timers as the timer can only be canceled by its owner. Removed restriction to obtain the current class loader, and allow to use java.io package. This is handy if you want to do file access within your beans. JMS 2.0 alignment - A standard list of activation-config properties is now defined destinationLookup connectionFactoryLookup clientId subscriptionName shareSubscriptions Tons of other clarifications through out the spec. Appendix A provide a comprehensive list of changes since EJB 3.1. ThreadContext in Singleton is guaranteed to be thread-safe. Embeddable container implement Autocloseable. A complete replay of Enterprise JavaBeans Today and Tomorrow from JavaOne 2012 can be seen here (click on CON4654_mp4_4654_001 in Media). The specification is still evolving so the actual property or method names or their actual behavior may be different from the currently proposed ones. Are there any improvements that you'd like to see in EJB 3.2 ? The EJB 3.2 Expert Group would love to hear your feedback. An Early Draft of the specification is available. The latest version of the specification can always be downloaded from here. Java EE 7 Specification Status EJB Specification Project JIRA of EJB Specification JSR Expert Group Discussion Archive These features will start showing up in GlassFish 4 Promoted Builds soon.

    Read the article

  • Are you reporting Visual Studio 2012 issues to Microsoft correctly?

    - by Tarun Arora
    Issues you may run into while using Visual Studio need to be reported to the Microsoft Product Team via the Microsoft connect site. The Microsoft team then tries to reproduce the issue using the details provided by you. If the information you provide isn’t sufficient to reproduce the issue the team tries to contact you for specifics, this not only increases the cycle time to resolution but the lack of communication also results in issues not being resolved. So, when I report an issue one part of me tells me to include as much detail about the issue as I can clubbing screen shots, repo steps, system information, visual studio version information,… the other half tells me this is so time consuming, leave it for now and come back to fill all these details later. Reporting a bug but not including the supporting information is an invitation to excuses like …     Microsoft has absolutely changed this experience for VS 2012. The Microsoft Visual Studio Feedback tool is designed to simplify the process of providing feedback and reporting issues to Microsoft that you may encounter while using Microsoft Visual Studio 2012. Note – The Microsoft Visual Studio 2012 Feedback client currently only works for VS 2012 and not any other versions of Visual Studio. Setting up the Microsoft Visual Studio 2012 Feedback client Open Visual Studio, from the Tools menu select Extension and Updates. In the Extension and Updates window, click Online from the left pane and search using the text ‘feedback’, download and Install Microsoft Visual Studio 2012 Feedback Tool by following the instructions from the wizard. Note - Restarting Visual Studio after the install is a must! How to report a bug for Visual Studio 2012? Click on the Help menu and choose Report a Bug You should see an icon Microsoft Visual Studio 2012 Feedback Tool come up in the system tray icon area You’ll need to accept the Privacy statement. You have the option of reporting the feedback as private or public. Microsoft works with several Partners, MVP’s and Vendors who get access to early bits of Microsoft products for valuation. This is where it becomes essential to report the feedback privately. I would choose the Public option otherwise. After all if it’s out there in the public, others can discover and add to it easily. You now have the option to report a new issue or add to an existing issue. Should you choose to add to an existing issue you should have the feedback ID of the issue available. This can be obtained from the Microsoft Connect site. For now I am going to focus on reporting a new feedback privately. Filling out the feedback details You will notice that VsInfo.xml and DxDiagOutput.txt are automatically attached as you enter this screen (more on that later).  Feedback Type Choose the feedback type from (Performance, Hang, Crash, Other) Note – The record button will only be enabled once you have enabled once you have chosen the feedback type, Bug-repro recording is not available for Windows Server 2008.     Effective Title and Description Enter a title that helps us differentiate the bug when it appears in a list, so that we can group it with any related bugs, assign it to a developer more effectively, and resolve it more quickly. Example: Imagine that you are submitting a bug because you tried to install Service Pack 1 and got a message that Visual Studio is not installed even though it is. Helpful:  Installed Visual Studio version not detected during Service Pack 1 setup. Not helpful:  Service Pack 1 problem. 
Tip: Write the problem description first, and then distil it to create a title. Example Description: Helpful: When I run Service Pack 1 Setup, I get the message "No Visual Studio version is detected" even though I have Visual Studio 2010 Ultimate and Visual C++ 2010 Express installed on my machine. Even though I uninstalled both editions, and then first reinstalled Ultimate and then Express, I still get the message. Record: Becoming a first class citizen Often a repro report is invaluable to describe and decipher the issue. Please use this feature to send actionable feedback. The record repro feature works differently depending on the feedback type you selected. Please find below details for each recording option. You can start recording simply by selecting a feedback type, and clicking on the “Record” button. When "Performance" is the bug type: When the Microsoft Visual Studio trace recorder starts, perform the actions that show the performance problem you want to report and then click on the "Stop Recording" button as soon as you experience the performance problem. Because the tool optimizes trace collection, you can run it for as long as it takes to show the problem, up to two hours. Note that, you need to stop recording as soon as the performance issue occurs, because the tool captures only the last couple minutes of your actions to optimize the trace collection. After you stop the recording, the tool takes up to two minutes to assemble the data and attach an ETLTrace.zip file to your bug report. The data includes information about Windows events and the Visual Studio code path. Note that, running the Microsoft Visual Studio trace recorder requires elevated user privilege. When "Crash" is the bug type: When the dialog box appears, select the running Visual Studio instance for which you want to show the steps that cause a crash. When the crash occurs, click on the "Stop Record" button. After you do this, two files are attached to your bug report - an AutomaticCrashDump.zip file that contains information about the crash and a ReproSteps.zip file that shows the repro steps. Repro steps are captured by Windows Problem Steps Recorder. Note that, you can pause the recording, and resume later, or for a specific step, you can add additional comments. When "Hang" is the bug type: The process for recording the steps that cause a hang resembles the one for crashes. The difference is, you can even collect a dump file after the VS hangs; start the VSFT either from the system tray or by starting a new instance of VS, select "Hang" as feedback type and click on the "Record" button. You will be prompted which VS to collect dump about, select the VS instance that hanged. VSFT collects a dump file regarding the hang, called MiniDump.zip, and attaches to your bug report. When "Other" is the bug type: When the problem step recorder starts, perform the actions that show the issue you want to report and then choose the "Stop” button. You can pause the recording, and resume later, or for a specific step, you can add additional comments. Once you’re done, ReproSteps.zip is added to your bug report. Pre-attached files It is essential for Microsoft to know what version of the the product are you currently using and what is the current configuration of your system. Note – The total size of all attachments in a bug report cannot exceed 2 GB, and every uncompressed attachment must be smaller than 512 MB. 
We recommend that you assemble all of your attachments, compress them together into a .zip file, and then attach the .zip file.
Taking a screenshot
Associate a screenshot by clicking the Take screenshot button, then choose either the entire desktop, a specific monitor (useful if you are working in a multi-monitor configuration), or the specific window in question.
And finally … click Submit
If you need further help, more details can be found here. You can view your feedback online by using the following URL: https://connect.microsoft.com/VisualStudio/SearchResults.aspx?SearchQuery=<feedbackId>
Happy bug logging!
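Related to the attachment limits above, here is a minimal C# sketch (assuming .NET 4.5 and a reference to System.IO.Compression.FileSystem) for zipping a folder of collected repro files before attaching them; the paths are hypothetical.

using System;
using System.IO;
using System.IO.Compression;

class PackAttachments
{
    static void Main()
    {
        // Hypothetical locations - point these at wherever you collected screenshots, traces, dumps, etc.
        string attachmentFolder = @"C:\Temp\VsFeedback\Attachments";
        string zipPath = @"C:\Temp\VsFeedback\Attachments.zip";

        if (File.Exists(zipPath))
        {
            File.Delete(zipPath);
        }

        // Compress everything into a single .zip, as recommended above.
        ZipFile.CreateFromDirectory(attachmentFolder, zipPath, CompressionLevel.Optimal, includeBaseDirectory: false);

        // Keep the limits in mind: each uncompressed attachment under 512 MB, total under 2 GB.
        Console.WriteLine("Created {0} ({1} MB)", zipPath, new FileInfo(zipPath).Length / (1024 * 1024));
    }
}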

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
    This blog explained some of the key features of JPA 2.1 earlier. Since then, Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) provides more details about this new feature in JPA 2.1. Schema Generation refers to the generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema, depending upon the credentials and authorization of the user. This helps in prototyping your application, where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. It is also useful in environments that require provisioning a database on demand, e.g. in a cloud. This feature allows your JPA domain object model to be generated directly in a database. The generated schema may need to be tuned for the actual production environment. This use case is supported by allowing the schema generation to be written to DDL scripts, which can then be further tuned by a DBA.
The following set of properties, in persistence.xml or specified during EntityManagerFactory creation, controls the behaviour of schema generation (Property Name - Purpose - Values):
javax.persistence.schema-generation-action - Controls the action to be taken by the persistence provider - "none", "create", "drop-and-create", "drop"
javax.persistence.schema-generation-target - Controls whether the schema is to be created in the database, whether DDL scripts are to be created, or both - "database", "scripts", "database-and-scripts"
javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target - Controls target locations for writing of scripts. Writers are pre-configured for the persistence provider. Need to be specified only if scripts are to be generated. - java.io.Writer (e.g. MyWriter.class) or URL strings
javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source - Specifies locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider. - java.io.Reader (e.g. MyReader.class) or URL strings
javax.persistence.sql-load-script-source - Specifies location of the SQL bulk load script. - java.io.Reader (e.g. MyReader.class) or URL string
javax.persistence.schema-generation-connection - JDBC connection to be used for schema generation
javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version - Needed if scripts are to be generated and there is no connection to the target database. Values are those obtained from JDBC DatabaseMetaData.
javax.persistence.create-database-schemas - Whether the persistence provider needs to create a schema in addition to creating database objects such as tables, sequences, constraints, etc. - "true", "false"
Section 11.2 in the JPA 2.1 specification defines the annotations used for the schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, and @JoinColumn are used to define the generated schema. Several layers of defaulting may be involved. For example, the table name is defaulted from the entity name, and the entity name (which can be specified explicitly as well) is defaulted from the class name. However, annotations may be used to override or customize the values. The following entity class: @Entity public class Employee {    @Id private int id;    private String name;     . . .     
@ManyToOne     private Department dept; } is generated in the database with the following attributes: Maps to EMPLOYEE table in default schema "id" field is mapped to ID column as primary key "name" is mapped to NAME column with a default VARCHAR(255). The length of this field can be easily tuned using @Column. @ManyToOne is mapped to DEPT_ID foreign key column. Can be customized using JOIN_COLUMN. In addition to these properties, couple of new annotations are added to JPA 2.1: @Index - An index for the primary key is generated by default in a database. This new annotation will allow to define additional indexes, over a single or multiple columns, for a better performance. This is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example: @Table(indexes = {@Index(columnList="NAME"), @Index(columnList="DEPT_ID DESC")})@Entity public class Employee {    . . .} The generated table will have a default index on the primary key. In addition, two new indexes are defined on the NAME column (default ascending) and the foreign key that maps to the department in descending order. @ForeignKey - It is used to define foreign key constraint or to otherwise override or disable the persistence provider's default foreign key definition. Can be specified as part of JoinColumn(s), MapKeyJoinColumn(s), PrimaryKeyJoinColumn(s). For example: @Entity public class Employee {    @Id private int id;    private String name;    @ManyToOne    @JoinColumn(foreignKey=@ForeignKey(foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))    private Manager manager;     . . . } In this entity, the employee's manager is mapped by MANAGER_ID column in the MANAGER table. The value of foreignKeyDefinition would be a database specific string. A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. JPA 2.1 Expert Group has released Early Draft 2 of the specification. Section 9.4 and 11.2 provide all details about Schema Generation. The latest javadocs can be obtained from here. And the JPA EG would appreciate feedback.
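To show how the properties listed above fit together, here is a sketch of a persistence.xml fragment that writes DDL scripts instead of touching the database. The property names and values are the ones from the table above, which come from the JPA 2.1 draft and may differ in the final specification; the unit name and script locations are hypothetical.

<persistence-unit name="employee-pu">
  <properties>
    <!-- Drop and re-create the database artifacts -->
    <property name="javax.persistence.schema-generation-action" value="drop-and-create"/>
    <!-- Write DDL scripts only; do not touch the database -->
    <property name="javax.persistence.schema-generation-target" value="scripts"/>
    <!-- Hypothetical script locations, to be tuned later by a DBA -->
    <property name="javax.persistence.ddl-create-script-target" value="file:///tmp/create-employee.sql"/>
    <property name="javax.persistence.ddl-drop-script-target" value="file:///tmp/drop-employee.sql"/>
    <!-- Optional: seed data once the tables exist -->
    <property name="javax.persistence.sql-load-script-source" value="META-INF/load-employee.sql"/>
  </properties>
</persistence-unit>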

    Read the article

  • Making Thunar the default file browser without hiding the desktop icons

    - by Manu
    I really dislike Ubuntu's default file browser, nautilus, and decided to opt for a lighter alternative (Thunar or Xfe). I've used the following script to change the default to Thunar, but now all my icons are gone from the desktop ! The files are still there, in /home/myid/Desktop, but they do not appear. Is there a way to show them, or is this a consequence of removing nautilus as the default file browser ? Can I modify the following script* in order to keep the icons ? *copied from https://help.ubuntu.com/...: ## Originally written by aysiu from the Ubuntu Forums ## This is GPL'ed code ## So improve it and re-release it ## Define portion to make Thunar the default if that appears to be the appropriate action makethunardefault() { ## I went with --no-install-recommends because ## I didn't want to bring in a whole lot of junk, ## and Jaunty installs recommended packages by default. echo -e "\nMaking sure Thunar is installed\n" sudo apt-get update && sudo apt-get install thunar --no-install-recommends ## Does it make sense to change to the directory? ## Or should all the individual commands just reference the full path? echo -e "\nChanging to application launcher directory\n" cd /usr/share/applications echo -e "\nMaking backup directory\n" ## Does it make sense to create an entire backup directory? ## Should each file just be backed up in place? sudo mkdir nonautilusplease echo -e "\nModifying folder handler launcher\n" sudo cp nautilus-folder-handler.desktop nonautilusplease/ ## Here I'm using two separate sed commands ## Is there a way to string them together to have one ## sed command make two replacements in a single file? sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-folder-handler.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-folder-handler.desktop echo -e "\nModifying browser launcher\n" sudo cp nautilus-browser.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop --browser/thunar/g' nautilus-browser.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-browser.desktop echo -e "\nModifying computer icon launcher\n" sudo cp nautilus-computer.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-computer.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-computer.desktop echo -e "\nModifying home icon launcher\n" sudo cp nautilus-home.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-home.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-home.desktop echo -e "\nModifying general Nautilus launcher\n" sudo cp nautilus.desktop nonautilusplease/ sudo sed -i -n 's/Exec=nautilus/Exec=thunar/g' nautilus.desktop ## This last bit I'm not sure should be included ## See, the only thing that doesn't change to the ## new Thunar default is clicking the files on the desktop, ## because Nautilus is managing the desktop (so technically ## it's not launching a new process when you double-click ## an icon there). ## So this kills the desktop management of icons completely ## Making the desktop pretty useless... would it be better ## to keep Nautilus there instead of nothing? Or go so far ## as to have Xfce manage the desktop in Gnome? echo -e "\nChanging base Nautilus launcher\n" sudo dpkg-divert --divert /usr/bin/nautilus.old --rename /usr/bin/nautilus && sudo ln -s /usr/bin/thunar /usr/bin/nautilus echo -e "\nRemoving Nautilus as desktop manager\n" killall nautilus echo -e "\nThunar is now the default file manager. 
To return Nautilus to the default, run this script again.\n" } restorenautilusdefault() { echo -e "\nChanging to application launcher directory\n" cd /usr/share/applications echo -e "\nRestoring backup files\n" sudo cp nonautilusplease/nautilus-folder-handler.desktop . sudo cp nonautilusplease/nautilus-browser.desktop . sudo cp nonautilusplease/nautilus-computer.desktop . sudo cp nonautilusplease/nautilus-home.desktop . sudo cp nonautilusplease/nautilus.desktop . echo -e "\nRemoving backup folder\n" sudo rm -r nonautilusplease echo -e "\nRestoring Nautilus launcher\n" sudo rm /usr/bin/nautilus && sudo dpkg-divert --rename --remove /usr/bin/nautilus echo -e "\nMaking Nautilus manage the desktop again\n" nautilus --no-default-window & ## The only change that isn't undone is the installation of Thunar ## Should Thunar be removed? Or just kept in? ## Don't want to load the script with too many questions? } ## Make sure that we exit if any commands do not complete successfully. ## Thanks to nanotube for this little snippet of code from the early ## versions of UbuntuZilla set -o errexit trap 'echo "Previous command did not complete successfully. Exiting."' ERR ## This is the main code ## Is it necessary to put an elseif in here? Or is ## redundant, since the directory pretty much ## either exists or it doesn't? ## Is there a better way to keep track of whether ## the script has been run before? if [[ -e /usr/share/applications/nonautilusplease ]]; then restorenautilusdefault else makethunardefault fi;

    Read the article

  • Skipping scheduled self-tests and predicting drive EOL

    - by Steve Madsen
    For a few weeks now, smartd has been reporting that it is skipping some of its scheduled self-tests on the weekends: Apr 24 18:29:32 calvin smartd[4758]: Device: /dev/sda, skip scheduled Offline Immediate Test; 40% remaining of current Self-Test. Apr 24 18:29:33 calvin smartd[4758]: Device: /dev/sdb, skip scheduled Offline Immediate Test; 50% remaining of current Self-Test. The drives in this RAID-1 array are set to run an offline test four times a day, a short self-test at 2am every day, and a long self-test on Saturdays at 2am. For some reason, it looks like the long self-test is taking longer, causing the other scheduled tests to be skipped. First question: is this a sign of likely drive failure? Then today, smartd reported that a self-test failed. Here is the output of smartctl -a /dev/sdb: smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.8 family Device Model: ST3250823AS Serial Number: 3ND1GNBC Firmware Version: 3.03 User Capacity: 250,059,350,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 7 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Sun Apr 25 13:15:34 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 84) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 047 039 006 Pre-fail Always - 168450357 3 Spin_Up_Time 0x0003 098 098 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 33 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 9 7 Seek_Error_Rate 0x000f 087 060 030 Pre-fail Always - 654745480 9 Power_On_Hours 0x0032 055 055 000 Old_age Always - 40141 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 51 194 Temperature_Celsius 0x0022 037 062 000 Old_age Always - 37 (0 17 0 0) 195 Hardware_ECC_Recovered 0x001a 047 039 000 Old_age Always - 168450357 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 TA_Increase_Count 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 40131 - # 2 Extended offline Completed: read failure 30% 40129 379795511 # 3 Short offline Completed without error 00% 40084 - # 4 Short offline Completed without error 00% 40060 - # 5 Short offline Completed without error 00% 40036 - # 6 Short offline Completed without error 00% 40013 - # 7 Short offline Completed without error 00% 39990 - # 8 Extended offline Completed without error 00% 39977 - # 9 Short offline Completed without error 00% 39919 - #10 Short offline Completed without error 00% 39895 - #11 Short offline Completed without error 00% 39872 - #12 Short offline Completed without error 00% 39848 - #13 Short offline Completed without error 00% 39824 - #14 Short offline Completed without error 00% 39801 - #15 Extended offline Completed without error 00% 39789 - #16 Short offline Completed without error 00% 39754 - #17 Short offline Completed without error 00% 39732 - #18 Short offline Completed without error 00% 39707 - #19 Short offline Completed without error 00% 39683 - #20 Short offline Completed without error 00% 39660 - #21 Short offline Completed without error 00% 39636 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Given that this drive is about 4.5 years old, I am probably tempting fate by keeping it in service. SMART doesn't seem to get much respect as a reliable way to predict drive failure. What else can I use to get an early indication of drive failure?
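For context, the schedule described above (offline test four times a day, short self-test at 2am daily, long self-test on Saturdays at 2am) corresponds to a smartd.conf -s regex roughly like the sketch below. This is illustrative only, not the asker's actual configuration; the fields of the test specifier are Type/MM/DD/d/HH.

# /etc/smartd.conf (sketch only)
# -a      monitor all SMART attributes and log changes
# -o on   enable automatic offline data collection
# -S on   enable attribute autosave
# -s ...  self-test schedule; test specifier fields are Type/MM/DD/d/HH
#         O/../.././(02|08|14|20)  offline test four times a day
#         S/../.././02             short self-test daily at 02:00
#         L/../../6/02             long self-test on Saturdays (day-of-week 6) at 02:00
/dev/sda -a -o on -S on -s (O/../.././(02|08|14|20)|S/../.././02|L/../../6/02) -m root
/dev/sdb -a -o on -S on -s (O/../.././(02|08|14|20)|S/../.././02|L/../../6/02) -m root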

    Read the article
