Search Results

Search found 19844 results on 794 pages for 'software obsolescence'.


  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note: Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, and Oracle/non-Oracle platforms; it is not a mandatory requirement to use Exadata as the base platform.

    Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers towards a private database cloud model include constant pressure to reduce IT cost and complexity, along with the need to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment. Exadata is already a proven, popular, best-in-class consolidation platform, and we now see more and more customers evolving from an Exadata-based platform into an agile, self-service-driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry's most comprehensive and integrated solution to transform a typical siloed environment into an enterprise-class database cloud with self service, rapid elasticity and pay-per-use capabilities.

    In today's post, I'll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are drawn from a recent DBaaS implementation in a real customer engagement:

    1. Project Planning - The first step involves defining the scope of the implementation, mapping functional requirements and objectives to use cases, defining high availability, network and security requirements, and delivering the project plan. In a cloud project you plan around technology, business and processes all together, so make sure you engage your actual end users and stakeholders early in the project, right from the scoping and planning stage.

    2. Set up your EM 12c Cloud Control site - Once project plan approval and sign-off from stakeholders is achieved, refer to the EM 12c Install guide. Some important tips for the site setup phase:
       - Review the new EM 12c Sizing paper before you get started with the install.
       - Select the Cloud, Chargeback and Trending, and Exadata plug-ins for deployment during the install.
       - Refer to the EM 12c Administrator's guide for High Availability, Security, and Network/Firewall best practices and options.
       - Your management and managed infrastructure should not be combined, i.e. the EM 12c repository should not be hosted on the same Exadata where the target database cloud is to be set up.

    3. Set up roles and users - Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR) and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu -> Security option. For Self Service/SSA users, custom role(s) based on EM_SSA_USER should be created, and the EM_USER and PUBLIC roles should be revoked during SSA user account creation.

    4. Configure the Software Library - The Cloud Administrator logs in and configures the Software Library via the Enterprise menu -> Provisioning and Patching option; the storage location is an OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the 'local store'.

    5. Set up Self Update - Self Update is one of the most innovative new features in the EM 12c framework. Self Update can be accessed via the Setup -> Extensibility option by the Super Administrator and is the unified delivery mechanism for all new and updated entities (agent software, plug-ins, connectors, gold images, provisioning bundles, etc.) in EM 12c.

    6. Deploy agents on all compute nodes and discover Exadata targets - Refer to the Exadata discovery cookbook for a detailed walkthrough to ensure successful discovery of Exadata targets.

    7. Configure privilege delegation settings - This step involves deployment of the privilege setting template on all the nodes by the Super Administrator via the Setup menu -> Security option, with the option to define whether to use sudo or PowerBroker for all provisioning and patching operations.

    8. Provision Grid Infrastructure with RAC Database on the compute nodes - Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, Grid Infrastructure and RAC Database software is already deployed on the compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request them from the cloud.

    9. Customize the Create Database deployment procedure - The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure will be used during service template creation in the next step. This is an important step, so make sure you have locked all the required variables marked as locked ('Y') in this table.

    10. Set up the Self Service Portal - This step involves setting up zones, user quotas, service templates and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have the option to customize the SSA login page via the steps documented in the EM 12c Cloud Administrator's guide.

    11. Final checks - Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model; SSA administrators should be familiar with the Self Service Portal setup pieces, EM 12c database lifecycle management capabilities and the overall EM 12c monitoring framework.

    12. GO LIVE - Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata-based database cloud.

    Congratulations! You just delivered a successful database cloud implementation project!

    In future posts, we will cover these additional useful topics around the database cloud:
    - DBaaS implementation tips and tricks - right from setup to self service to managing the cloud lifecycle
    - 'How to' enable copies of real production databases in DBaaS with rapid provisioning in the database cloud
    - A case study of a customer who recently achieved success with their transformational journey from a traditional siloed environment to an Exadata-based database cloud using Enterprise Manager 12c

    More Information:
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Exadata Discovery Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • PHP convert latin1 to utf8 Persian txt

    - by root
    I'm working on a web-based PHP app that uses a MySQL Server database. The database uses the Latin1 character set, and Persian text doesn't display properly. The database is also used by a Windows application, which shows and saves Persian text correctly. I don't want to change the charset, because the Windows software works with that charset. Question: how can I convert latin1 to utf8 for display, and utf8 back to latin1 for saving, from my web-based PHP app - or otherwise use Persian/Arabic text with a latin1-charset database without problems? Note: one of my texts is ???? ??????; when it is saved from my Windows-based software it is stored as ÇÍãÏ ÑÍãÇäí, and it still shows as ???? ?????? in my old Windows-based software. Image: the database, charsets, collation and the Windows-based software.
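    For what it's worth, the ÇÍãÏ ÑÍãÇäí sample looks like Windows-1256 (Arabic/Persian) bytes being read back as latin1, in which case the text can usually be recovered by round-tripping the raw bytes between the two encodings. Below is a minimal sketch of that idea in C# (the question itself is about PHP, where iconv() or mb_convert_encoding() play the same role); the Windows-1256 assumption is a guess based on the sample, not something stated in the question.

        using System;
        using System.Text;

        class MojibakeRoundTrip
        {
            static void Main()
            {
                // On .NET Core / .NET 5+, windows-1256 needs the System.Text.Encoding.CodePages
                // package plus Encoding.RegisterProvider(CodePagesEncodingProvider.Instance).
                var latin1 = Encoding.GetEncoding("ISO-8859-1");    // what the column claims to hold
                var cp1256 = Encoding.GetEncoding("windows-1256");  // what the Windows app is assumed to write

                // Reading: take the mojibake handed back by the latin1 connection,
                // recover the raw bytes, and decode them as Windows-1256.
                string stored = "ÇÍãÏ ÑÍãÇäí";                      // sample value from the question
                string persian = cp1256.GetString(latin1.GetBytes(stored));
                Console.WriteLine(persian);

                // Saving goes the other way: encode as Windows-1256 bytes and
                // reinterpret them as latin1 so the column keeps the byte values
                // the Windows software expects.
                string toStore = latin1.GetString(cp1256.GetBytes(persian));
                Console.WriteLine(toStore == stored);               // True
            }
        }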

    Read the article

  • What's the environment variable for the path to the desktop?

    - by Scott Langham
    I'm writing a Windows batch file and want to copy something to the desktop. I think I can use this: %UserProfile%\Desktop\ However, I'm thinking that's probably only going to work on an English OS. Is there a way I can do this in a batch file that will work on any internationalized version?

    UPDATE: I tried the following batch file:

        REG QUERY "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Desktop
        FOR /F "usebackq tokens=3 skip=4" %%i in (`REG QUERY "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Desktop`) DO SET DESKTOPDIR=%%i
        FOR /F "usebackq delims=" %%i in (`ECHO %DESKTOPDIR%`) DO SET DESKTOPDIR=%%i
        ECHO %DESKTOPDIR%

    And got this output:

        S:\REG QUERY "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Desktop
        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders
            Desktop    REG_EXPAND_SZ    %USERPROFILE%\Desktop
        S:\FOR /F "usebackq tokens=3 skip=4" %i in (`REG QUERY "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Desktop`) DO SET DESKTOPDIR=%i
        S:\FOR /F "usebackq delims=" %i in (`ECHO ECHO is on.`) DO SET DESKTOPDIR=%i
        S:\SET DESKTOPDIR=ECHO is on.
        S:\ECHO ECHO is on.
        ECHO is on.
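    As a side note, outside of pure batch the supported way to locate the desktop folder, independent of the OS language, is the Windows shell's known-folder API, which is what the User Shell Folders registry key reflects. A minimal C# sketch of that approach, purely for illustration and not part of the original question:

        using System;

        class DesktopPath
        {
            static void Main()
            {
                // Resolves the per-user Desktop folder regardless of OS language,
                // via the shell API rather than by parsing the registry.
                string desktop = Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory);
                Console.WriteLine(desktop);
            }
        }

    Environment.GetFolderPath goes through the shell folder API under the hood, so it returns the localized (and possibly redirected) desktop path directly.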

    Read the article

  • What's My Problem? What's Your Problem?

    - by Jacek Ziabicki
    Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations. When we release software, we include installation programs and procedures that are designed for use by our clients - to build a production environment and a limited number of testing, training and development environments.

    Different Objectives

    Your priorities when building an environment for client use vs. building a demo environment are very different. In a production environment, security, stability, and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments.

    This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right - so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software. If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need the expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each app has the right data so a business process initiated in one application can continue in the next. And you will need to check that each application has the correct version and patch level for the integration to work.

    When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, the requests look like this:

    - We need a dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data?
    - We are running a hands-on workshop next month, and we'll need 15 instances of the application X environment so each student can have a separate server for the exercises.
    - We cannot connect to our data center from the client site - the client's security policy won't allow our VPN to go through - so we need a portable environment that we can bring with us.
    - Our consultants need to be able to work at the hotel, at the airport, and on the airplane, so we really want an environment that can run on a laptop.
    - The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now.

    We have seen all of these scenarios and more. Here you would be much better served by a generic installation that would be easy to clone.

    Welcome to the Wonder Machine

    The reason I started this blog is to share a particular design of a demo environment, a special way to install software, that can address the above requirements, even for integrated setups. This design was created by a team at Oracle Utilities Global Business Unit, and we are using this setup for most of our demo environments. In a bout of modesty we called it the Wonder Machine. Over the next few posts - think of it as a novel in parts - I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid down the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification - it is under active development!

    I am hoping this blog will be of interest to two groups of readers: the users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and be able to reconfigure them to their needs - and people who need to build environments for demonstration, testing, training and development and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!

    Read the article

  • Do you know Best Practices and Design Patterns for Adobe Air/Flex Applications?

    - by Julian
    I'm going to write an application with the Air/Flex framework. I'm looking for best practices and general design patterns for designing software, especially in Air/Flex. I have experience with this framework but have never had the pleasure of writing a piece of software from scratch. For instance: I stumbled across lots of software written in Air/Flex with a nearly infinite number of global vars :-) Most of the software I saw was not object-oriented. How can I wrap the asynchronous method calls nicely? I'm familiar with the general design patterns by Gamma (the Gang of Four). I'm looking more for advice on designing good quality software with Adobe Air/Flex.

    Read the article

  • Are there any Application Server Frameworks for languages/platforms other than JavaEE and .NET?

    - by Jonas
    I'm a CS student and have little experience with the enterprise software industry. When I read about enterprise software platforms, I mostly read about these two:

    - Java Enterprise Edition (JavaEE)
    - .NET and Windows Communication Foundation (WCF)

    By "enterprise software platforms" I mean frameworks and application servers that support the same characteristics as J2EE and WCF have:

    [JavaEE] provide functionality to deploy fault-tolerant, distributed, multi-tier Java software, based largely on modular components running on an application server.

    WCF is designed in accordance with service oriented architecture principles to support distributed computing where services are consumed by consumers. Clients can consume multiple services and services can be consumed by multiple clients. Services are loosely coupled to each other.

    Are there any alternatives to these two "enterprise software platforms"? Aren't any other programming languages used to a significant extent in this problem area? I.e., why aren't there any popular application servers for C++/Qt?

    Read the article

  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

    The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

    1. Analyzing the requirements to find the Entry Points and their signatures
    2. Designing the functionality to be executed when those Entry Points get triggered
    3. Implementing the functionality according to the design, aka coding

    I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought. Here's my definition: to design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution, that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

    And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). I also consider calling external APIs to be logic. It's equally basic. It's what code needs to do in order to deliver some functionality or quality. Logic is what actually does what needs to be done by software. Transformations are either done through expressions or API calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do).

    But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn't it? What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why to me functional design is so important. It's at the core of software development.

    To sum this up: functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It's just a theory with no experiments to prove it. But how to do functional design?

    An example of functional design

    Let's assume a program to de-duplicate strings.
    The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

        C:\>deduplicate "a, b, a, c, d, b, e, c, a"
        a, b, c, d, e

    …or by clicking on a GUI button. This leads to the Entry Point function getting called. It's the program's main function in case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable. What then happens is a three step process:

    1. Transform the input data from the user into a request.
    2. Call the request handler.
    3. Transform the output of the request handler into a tangible result for the user.

    Or to phrase it a bit more generally:

    1. Accept input.
    2. Transform input into output.
    3. Present output.

    This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP).

    Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

    1. Accept string list from command line
    2. De-duplicate
    3. Present de-duplicated strings on standard output

    And this concrete list of processing steps can easily be transformed into code:

        static void Main(string[] args) {
            var input = Accept_string_list(args);
            var output = Deduplicate(input);
            Present_deduplicated_string_list(output);
        }

    Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

        private static string Accept_string_list(string[] args) {
            return args[0];
        }

        private static void Present_deduplicated_string_list(
            string[] output) {
            var line = string.Join(", ", output);
            Console.WriteLine(line);
        }

    Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call.

    And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

    De-duplicate
    1. Parse the input string into a true list of strings.
    2. Register each string in a dictionary/map/set. That way duplicates get cast away.
    3. Transform the data structure into a list of unique strings.

    Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
    But now, after this refinement, the implementation of each step is easy again:

        private static string[] Parse_string_list(string input) {
            return input.Split(',')
                        .Select(s => s.Trim())
                        .ToArray();
        }

        private static Dictionary<string, object> Compile_unique_strings(string[] strings) {
            return strings.Aggregate(
                new Dictionary<string, object>(),
                (agg, s) => { agg[s] = null; return agg; });
        }

        private static string[] Serialize_unique_strings(
            Dictionary<string, object> dict) {
            return dict.Keys.ToArray();
        }

    With these three additional functions Main() now looks like this:

        static void Main(string[] args) {
            var input = Accept_string_list(args);
            var strings = Parse_string_list(input);
            var dict = Compile_unique_strings(strings);
            var output = Serialize_unique_strings(dict);
            Present_deduplicated_string_list(output);
        }

    I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

    1. Accept string list from command line
    2. Parse the input string into a true list of strings.
    3. Register each string in a dictionary/map/set. That way duplicates get cast away.
    4. Transform the data structure into a list of unique strings.
    5. Present de-duplicated strings on standard output

    You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too.

    So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

    Introducing Flow Design

    Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

    In its simplest form a process can be written as a bullet point list of steps, e.g.

    - Get data from user
    - Output result to user
    - Transform data
    - Parse data
    - Map result for output

    Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next, etc.?

    1. Get data from user
    2. Parse data
    3. Transform data
    4. Map result for output
    5. Output result to user

    That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion?

    My recommendation: for any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
    For more information on this approach, which I call “Informed TDD”, read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more in line with the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with.

    So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example, such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are. That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

    Visualizing processes

    The building block of the functional design notation is a functional unit. I mostly draw it like this: something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow.

    Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: “Programming is about detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling it.

    To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers.

    Using functional units as building blocks, functional design processes can be depicted very easily. Here's the initial process for the example problem: for each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
    Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

    Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

    The Principle of Mutual Oblivion

    A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel.

    Functional units don't know their “deployment context”. They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

    By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] the nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
    Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

    How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from its environment. My bet is: if nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”? So my rule is: wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology: never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware.

    Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle.

    Translating functional units into code

    Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

    1. Translate an input port to a function.
    2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

    The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

    Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”.

    In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: any number of (word) data items can flow from the functional unit for each input data item. It might be none, or one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
    So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

        void filter_stop_words(
            string word,
            Action<string> onNoStopWord) {
            if (...check if not a stop word...)
                onNoStopWord(word);
        }

    If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

    Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

        class Filter {
            public void filter_stop_words(
                string word) {
                if (...check if not a stop word...)
                    onNoStopWord(word);
            }

            public event Action<string> onNoStopWord;
        }

    When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

    Another example of 1D functional design

    Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

        string WordWrap(string text, int maxLineLength) {...}

    That's not an Entry Point, since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts, each fitting in a line.

    Flow Design

    Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

    - Words need to be assembled into lines
    - Words need to be extracted from the input text
    - The resulting lines need to be assembled into the output text
    - Words too long to fit in a line need to be split

    Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle “average words” (words not longer than a line). Here's the Flow Design for this increment: the first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see, because thereby the domain is clearly represented in the design. The requirements are talking about words and lines, and here they are. But note the asterisk! It's not outside the brackets but inside.
    That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail.

    Does any processing step require further refinement? I don't think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

    Implementation

    The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

    Flow Design - Increment 2

    So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

    Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just “phase in” the remaining processing step: since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

    Implementation - Increment 2

    A trivial implementation, to check the assumption that this works, does not do anything to split long words. The input is just passed on. Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution:[4] how does that solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach.

    Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so, because his solution does not cover all the “word wrap situations” the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design.
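    The implementations of the two increments appear only as images in the original post and are not reproduced in this excerpt. Purely as an illustration of the translation rules, here is a minimal C# sketch of what the flow could look like when each functional unit becomes one function, including the trivially “phased in” long-word step of increment 2. The function names and the line-assembling logic are my own reconstruction under the stated design, not the post's actual code:

        using System;
        using System.Collections.Generic;

        class WordWrapSketch
        {
            // The flow: each functional unit from the design becomes one function call.
            static string WordWrap(string text, int maxLineLength)
            {
                var words = Extract_words(text);
                words = Split_long_words(words, maxLineLength); // increment 2, phased in between the existing steps
                var lines = Assemble_lines(words, maxLineLength);
                return Assemble_text(lines);
            }

            static string[] Extract_words(string text)
            {
                return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
            }

            // Trivial increment 2 implementation: just pass the words on unchanged.
            // Neither Extract_words nor Assemble_lines has to be touched for the "detour".
            static string[] Split_long_words(string[] words, int maxLineLength)
            {
                return words;
            }

            static string[] Assemble_lines(string[] words, int maxLineLength)
            {
                var lines = new List<string>();
                var line = "";
                foreach (var word in words)
                {
                    var candidate = line.Length == 0 ? word : line + " " + word;
                    if (candidate.Length <= maxLineLength)
                        line = candidate;
                    else
                    {
                        if (line.Length > 0) lines.Add(line);
                        line = word; // increment 1 assumes no word is longer than a line
                    }
                }
                if (line.Length > 0) lines.Add(line);
                return lines.ToArray();
            }

            static string Assemble_text(string[] lines)
            {
                return string.Join("\n", lines);
            }

            static void Main()
            {
                Console.WriteLine(WordWrap("the quick brown fox jumps over the lazy dog", 10));
            }
        }

    Run with a maximum line length of 10, the sketch produces the lines "the quick", "brown fox", "jumps over", "the lazy" and "dog"; the real increment 2 would replace the pass-through Split_long_words() with actual splitting without touching the rest of the flow.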
    But don't take my word for it. Try Flow Design on larger problems and compare for yourself. What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

    In closing

    I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

    For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

    PS: If you like what you read, consider getting my ebook “The Incremental Architect's Napkin”. It's where I compile all the articles in this series for easier reading.

    [1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

    [2] Image taken from www.physioweb.org

    [3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no brainer.

    [4] Taken from his blog post “The Craftsman 62, The Dark Path”.

    Read the article

  • LTO(4) tape shelf life estimation?

    - by emilp
    LTO tapes, Maxell in this case, are often marketed as having 30 years or more of shelf life when stored under "optimal conditions". Is there a way to get a good estimate of the shelf life given parameters such as relative humidity, temperature, etc.? Obsolescence of the tapes aside, is there a way of determining the impact on shelf life of any deviation from the optimal? In other words, how many years are lost when storing the tapes, say, 1 degree above the specified range? Regards, Emil

    Read the article

  • How Exactly Is One Linux OS “Based On” Another Linux OS?

    - by Jason Fitzpatrick
    When reviewing different flavors of Linux, you'll frequently come across phrases like “Ubuntu is based on Debian”, but what exactly does that mean?

    Today's Question & Answer session comes to us courtesy of SuperUser - a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader PLPiper is trying to get a handle on how Linux variants work:

    I've been looking through quite a number of Linux distros recently to get an idea of what's around, and one phrase that keeps coming up is that “[this OS] is based on [another OS]”. For example: Fedora is based on Red Hat, Ubuntu is based on Debian, Linux Mint is based on Ubuntu. For someone coming from a Mac environment I understand how “OS X is based on Darwin”, however when I look at Linux distros, I find myself asking “Aren't they all based on Linux..?” In this context, what exactly does it mean for one Linux OS to be based on another Linux OS?

    So, what exactly does it mean when we talk about one version of Linux being based off another version?

    The Answer

    SuperUser contributor kostix offers a solid overview of the whole system:

    Linux is a kernel - a (complex) piece of software which works with the hardware and exports a certain Application Programming Interface (API), and binary conventions on how to precisely use it (Application Binary Interface, ABI), available to the “user-space” applications.

    Debian, RedHat and others are operating systems - complete software environments which consist of the kernel and a set of user-space programs which make the computer useful as they perform sensible tasks (sending/receiving mail, allowing you to browse the Internet, driving a robot, etc).

    Now each such OS, while providing mostly the same software (there are not so many free mail server programs or Internet browsers or desktop environments, for example), differs in its approach to doing this and also in its stated goals and release cycles. Quite typically these OSes are called “distributions”. This is, IMO, a somewhat wrong term stemming from the fact that you're technically able to build all the required software by hand and install it on a target machine, so these OSes distribute the packaged software so you either don't need to build it (Debian, RedHat) or they facilitate such building (Gentoo). They also usually provide an installer which helps to install the OS onto a target machine.

    Making and supporting an OS is a very complicated task requiring a complex and intricate infrastructure (upload queues, build servers, a bug tracker, archive servers, mailing list software, etc.) and staff. This obviously raises a high barrier for creating a new, from-scratch OS. For instance, Debian provides ca. 37k packages for some five hardware architectures - go figure how much work is put into supporting this stuff.

    Still, if someone thinks they need to create a new OS for whatever reason, it may be a good idea to use an existing foundation to build on. And this is exactly where OSes based on other OSes come into existence. For instance, Ubuntu builds upon Debian by just importing most packages from it and repackaging only a small subset of them, plus packaging their own, providing their own artwork, default settings, documentation, etc.

    Note that there are variations to this “based on” thing. For instance, Debian fosters the creation of “pure blends” of itself: distributions which use Debian rather directly, and just add a bunch of packages and other stuff only useful for rather small groups of users such as those working in education or medicine or the music industry, etc. Another twist is that not all these OSes are based on Linux. For instance, Debian also provides FreeBSD and Hurd kernels. They have quite tiny user groups, but anyway.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • How To Quickly Reboot Directly from Windows 7 to XP, Vista, or Ubuntu

    - by The Geek
    One of the biggest annoyances with a dual-boot system is having to wait for your PC to reboot to select the operating system you want to switch to, but there's a simple piece of software that can make this process easier.

    This guest article was written by Ryan Dozier from the Doztech tech blog.

    With a small piece of software called iReboot we can skip the above step altogether and instantly reboot into the operating system we want right from Windows. Their description says:

    “Instead of pressing restart, waiting for Windows to shut down, waiting for your BIOS to post, then selecting the operating system you want to boot into (within the bootloader time-limit!); you just select that entry from iReboot and let it do the rest!”

    Don't worry about iReboot reconfiguring your bootloader or any dual boot configuration you have. iReboot will only boot the selected operating system once and go back to your default settings.

    Using iReboot

    iReboot is quick and easy to install. Just download it (link below), run through the setup and select the default configuration. iReboot will automatically figure out what operating systems you have installed and appear in the taskbar. Go over to the taskbar, right click on the iReboot icon and select which operating system you want to reboot into. This method will add a check mark on the operating system you want to boot into. On your next reboot the system will automatically load your choice and skip the Windows Boot Manager. If you want to reboot automatically, just select “Reboot on Selection” in the iReboot menu.

    To be even more productive, you can install iReboot into each Windows operating system to quickly access the others with a few simple clicks.

    iReboot does not work in Linux, so you will have to reboot manually. Then wait for the Windows Boot Manager to load and select your operating system.

    Conclusion

    iReboot works on Windows XP, Windows Vista, and Windows 7, as well as 64-bit versions of these operating systems. Unfortunately iReboot is only available for Windows, but you can still use its functionality in Windows to quickly boot up your Linux machine. A simple reboot in Linux will take you back to the Windows Boot Manager.

    Download iReboot from neosmart.net

    Editor's note: We've not personally tested this software over at How-To Geek, but Neosmart, the author of the software, generally makes quality stuff. Still, you might want to test it out on a test machine first. If you've got any experience with this software, please be sure to let your fellow readers know in the comments.

    Read the article

  • Beginner Geek: Scan Files for Viruses Before Using Them

    - by Mysticgeek
    To help avoid getting your computer infected by malicious software, it's a good idea to scan files before executing them. Today we take a look at a couple of options that will let you scan files easily from your desktop.

    Scan a File with Your Antivirus Software

    Most antivirus software will put an option in the context menu so you can scan individual files. After downloading a file or email attachment, simply right-click the file and select the option to scan with your antivirus software. If you want to scan more than one file at a time, hold down the Ctrl key while you click each file you want to scan. Then right-click and select to scan with your antivirus software. Here is our favorite antivirus app, Microsoft Security Essentials, scanning a couple of files. If a virus is found, your antivirus app will delete it or put it in quarantine so it cannot infect your system.

    Using VirusTotal Uploader

    If you want to be very thorough and want a second opinion (actually 41), then you might want to check out the VirusTotal Uploader. This handy app will scan your files with 41 different antivirus engines online. After installing VirusTotal Uploader, right-click the file, go to Send To, then VirusTotal. Alternately, you can launch VirusTotal Uploader and get and upload the file from there. It will send the file to VirusTotal.com, scan it with 41 different antivirus engines and show you the results. If you don't want to install the Uploader, you can go to the VirusTotal site and upload a file there to scan.

    We've noticed that occasionally there will be a false positive detected on files we know are clean. Sometimes the definition database of an anti-malware app isn't current, or an obscure antivirus app will find something questionable. If that is the case, use your best judgment when viewing the results.

    Conclusion

    Most antivirus apps today have real-time scanning and should be able to detect possible infections before you're able to execute them. However, if they don't, or when in doubt, following these tips can save you a lot of headaches in the long run. If you use a lot of different flash drives throughout the day, check out our article on how to scan a thumb drive for viruses from the AutoPlay dialog.

    Download Microsoft Security Essentials
    Download VirusTotal Uploader
    VirusTotal Website

    Read the article

  • Free tools versus paid tools.

    - by Dennis Vroegop
    We live in a strange world. Information should be free. Tools should be free. Software should be free (and I mean free as in free beer, not as in free speech). Of course, since I make my living (and pay my mortgage) by writing software, I tend to disagree. Or rather: I want to get paid for the things I do in the daytime. Next to that I also spend time on projects I feel are valuable for the community, which I do for free. The reason I can do that is because I get paid enough in the daytime to afford that time. It gives me a good feeling, I help others and it's fun to do. But the baseline is: I get paid to write software. I am sure this goes for a lot of other developers. We get paid for what we do during the daytime and spend our free time giving back. So why does everyone always make a fuss when a company suddenly starts to charge for software? To me, this seems like a very reasonable decision. Companies need money: they have staff to pay, buildings to rent, coffee to buy, etc. None of this comes free, so it makes sense that they charge their customers for the things they produce. I know there's a very big open-source market out there, where companies give away (parts of) their software and get revenue out of the services they provide. But this doesn't work if your product doesn't need services. If you build a great tool that is very easy to use, and you give it away for free, you won't get any money by selling services that no user of your tool really needs. So what do you do? You charge money for your tool. It's either that or stop developing the tool and turn to other, more profitable projects. Like it or not, that's simple economics at work. You have something other people want, so you charge them for it.

    This week it was announced that what I believe is the most used tool for .NET developers (besides Visual Studio, of course), namely Red Gate's .NET Reflector, will stop being a free tool. They will charge you $35 for the next version. Suddenly Twitter was on fire and everyone was mad about it. But why? The tool is downloaded by so many developers that it must be valuable to them. I know of no serious .NET developer who hasn't got it on his or her machine. So apparently the tool gives them something they need. So why do they expect it to be free? There are developers out there maintaining and extending the tool, building new and better versions of it. And the price? $35 doesn't seem like much. If I think of the time the tool has saved me, the 35 dollars would be earned back in a day. If by spending this amount of money I can rely on great software that helps me do my job better and faster, I have no problem spending it. I know that there is a great team behind it (the Red Gate tools are a must-have when developing SQL systems, for instance), and I believe they are within their rights to charge for it. So... there you have it. This is, of course, my opinion. You may think otherwise. Please let me know in the comments what you think! Technorati tags: redgate, reflector, opensource

    Read the article

  • IPgallery banks on Solaris SPARC

    - by Frederic Pariente
    IPgallery is a global supplier of converged legacy and Next Generation Networks (NGN) products and solutions, including core network components and cloud-based Value Added Services (VAS) for voice, video and data sessions. IPgallery enables network operators and service providers to offer advanced converged voice, chat, video/content services and rich unified social communications in combined legacy (fixed/mobile), Over-the-Top (OTT) and Social Community (SC) environments for home and business customers. Technically speaking, this offer is a scalable and robust telco solution enabling operators to offer new services while controlling operating expenses (OPEX). In its solutions, IPgallery leverages the following Oracle components: Oracle Solaris, Netra T4 and SPARC T4, in order to provide a competitive and scalable solution without the price tag often associated with high-end systems.

    Oracle Solaris Binary Application Guarantee
    A unique feature of Oracle Solaris is the guaranteed binary compatibility between releases of the Solaris OS. That means that if a binary application runs on Solaris 2.6 or later, it will run on the latest release of Oracle Solaris. IPgallery developed their application on Solaris 9 and Solaris 10 and now runs it on Solaris 11, without any code modification or rebuild. The Solaris Binary Application Guarantee helps IPgallery protect their long-term investment in the development, training and maintenance of their applications.

    Oracle Solaris Image Packaging System (IPS)
    IPS is a new repository-based package management system that comes with Oracle Solaris 11. It provides a framework for complete software life-cycle management, such as installation, upgrade and removal of software packages. IPgallery leverages this new packaging system to speed up and simplify software installation for the R&D and production environments. Notably, they use IPS to deliver Solaris Studio 12.3 packages as part of the rapid installation process of R&D environments, and during the production software deployment phase they ensure software package integrity using the built-in verification feature. Solaris IPS thus improves IPgallery's time-to-market with faster, more reliable software installation and deployment in production environments.

    Extreme Network Performance
    IPgallery saw a huge improvement in application performance, both in CPU and I/O, when running on the SPARC T4 architecture compared to UltraSPARC T2 servers. The same application (with the same activation environment) consumes 40%-50% of the CPU on T2, while it consumes only 10% of the CPU on T4. The testing environment comprised Softswitch (call management), TappS (Telecom Application Server) and a Billing Server running on the same machine and initiating various services at a capacity of 1000 CAPS (Call Attempts Per Second). In addition, tests showed a huge improvement in the performance of the TCP/IP stack, which reduces network-layer processing and, in the end, call-attempt latency. Finally, there is a huge improvement in file system and disk I/O operations; they ran all tests with maximum logging capability and it didn't influence any benchmark values.

    "Due to the huge improvements in performance and capacity using the T4-1 architecture, IPgallery has engineered the solution with less hardware. This means instead of deploying the solution on six T2-based machines, we will deploy on 2 redundant machines while utilizing Oracle Solaris Zones and Oracle VM for higher availability and virtualization." (Shimon Lichter, VP R&D, IPgallery)

    In conclusion, using the unique combination of Oracle Solaris and SPARC technologies, IPgallery is able to offer solutions with much lower TCO, while providing a higher level of service capacity, scalability and resiliency. This low-OPEX solution enables the operator, the end customer, to deliver a high-quality service while maintaining high profitability.
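    As a rough illustration of the IPS life-cycle described above, the install-and-verify flow on Oracle Solaris 11 looks something like the sketch below. This is a hedged example only: the package name is a made-up placeholder, not one taken from IPgallery's environment.

        # Hedged sketch; "developer/example-tools" is a hypothetical package name.
        pkg publisher                        # list the IPS repositories the system is configured to use
        pkg install developer/example-tools  # install the package and its dependencies from the repository
        pkg verify developer/example-tools   # validate the installed files against the repository metadata
        pkg update                           # apply any available package updates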

    Read the article

  • DotNetNuke 7.0 Only Weeks Away!

    - by sbwalker
    The software industry moves at a lightning pace, and it is only through constant focus and continuous investment that a software product can remain both stable and relevant over the long term. As we approach the 10 Year Anniversary of the DotNetNuke platform, it seems only fitting that we are on the verge of announcing yet another significant product milestone. DotNetNuke 7.0 is just around the corner and represents a bold step forward for our Content Management Platform, including substantial business productivity enhancements, investments in web platform relevance, and a significant overhaul and modernization of the user interface and user experience. It has been five months since I posted the announcement that the next major version of the platform was going to be DotNetNuke 7.0.  This announcement created tremendous excitement and anticipation in the DotNetNuke community, as major version increments have always been utilized as an opportunity  to introduce revolutionary new product features and capabilities. After months of intense product development, the finish line is finally in sight. With that, I am pleased to announce that we released a Release Candidate (RC) of DotNetNuke 7.0 yesterday. You can download the RC from our project page on Codeplex. A Release Candidate represents a software version which is very near to “release” quality. So although we will not be officially endorsing the RC for production use, or providing an official upgrade path, it does represent a significant milestone in our software development efforts ( if you are looking for a more detailed explanation of our software release terminology, I would encourage you to read the blog written by Co-Founder, Joe Brinkman titled "What's In A Name?" ). Modernizing a software platform does have its share of challenges from a backward compatibility perspective and, as usual, we are taking great care in ensuring a seamless upgrade path for our customers. In order to remain relevant and progressive, you need to be aware that DotNetNuke 7.0 has adopted a new set of baseline infrastructure requirements including ASP.NET 4.0.  As a result we are encouraging all major stakeholders in the ecosystem ( module developers, designers, partners, customers, etc... ) to take the opportunity to install the RC in their own local environments. This is the last opportunity to let us know about any final issues which may need to be addressed prior to final release. Mark your calendars now… the expected public release date (RTM) for DotNetNuke 7.0 will be Wednesday, November 28th. On a side note, we expect to release a 6.2.5 Maintenance version today. This release contains some high priority product quality improvements as well as security patches for some vulnerabilities reported through our standard ecosystem channels. As a result we will be encouraging all of our customers to upgrade to the 6.2.5 release as soon as it is available. I hope everyone is as excited as I am about the upcoming DotNetNuke 7.0 release. Please take the opportunity over the next week to put the new platform through its paces. Remember, only through our collective efforts can we ensure that this release has the greatest market impact of any DotNetNuke release to date.

    Read the article

  • Do you want to be an ALM Consultant?

    - by Martin Hinshelwood
    Northwest Cadence is looking for our next great consultant! At Northwest Cadence, we have created a work environment that emphasizes excellence, integrity, and out-of-the-box thinking. Our customers have high expectations (rightfully so) and we wouldn't have it any other way! Northwest Cadence has some of the most exciting customers I have ever worked with, and even though I have only been here just over a month I have already:
    - Provided training/consulting for 3 government departments
    - Created and taught courseware for delivering Scrum to teams within a high profile multinational company
    - Started presenting Microsoft's ALM Engagement Program
    So if you are interested in helping companies build better software more efficiently, then enquire at [email protected]

    Application Lifecycle Management (ALM) Consultant
    An ALM Consultant with a minimum of 8 years of relevant experience with Application Lifecycle Management, Visual Studio (including Visual Studio Team System) and software design is needed. Must provide thought leadership on best practices for enterprise architecture, understand the Microsoft technology solution stack, and have a thorough understanding of enterprise application integration. The ALM Practice Lead will play a central role in designing and implementing the overall ALM Practice strategy, including creating, updating, and delivering ALM courseware and consultancy engagements. This person will also provide project support, deliverables, and quality solutions on Visual Studio Team System that exceed client expectations. Engagements will vary and will involve providing expert training, consulting, mentoring, formulating technical strategies and policies and acting as a "trusted advisor" to customers and internal teams. A sound sense of business and technical strategy is required. Strong interpersonal skills as well as solid strategic thinking are key. The ideal candidate will be capable of envisioning the solution based on the early client requirements, communicating the vision to both technical and business stakeholders, leading teams through implementation, as well as training, mentoring, and hands-on software development. The ideal candidate will demonstrate successful use of both agile and formal software development methods, enterprise application patterns, and effective leadership on prior projects.

    Job Requirements
    Minimum Education: Bachelor's Degree (computer science, engineering, or math preferred).
    Locale / Travel: The Practice Lead position requires an estimated 50% travel, most of which will be in the continental US (a valid national passport must be maintained). This is a full-time position and will be based in the Kirkland office.
    Preferred Education: Master's Degree in Information Technology or Software Engineering; premium Microsoft certifications on .NET (MCSD) or MCPD or relevant experience; Microsoft Certified Trainer (MCT) or relevant experience.
    Minimum Experience and Skills: 7+ years of experience with business information systems integration or custom business application design and development in a professional technology consulting, corporate MIS or software development environment.
    Essential Duties & Responsibilities:
    - Provide training, consulting, and mentoring to organizations on topics that include Visual Studio Team System and ALM.
    - Create content, including labs and demonstrations, to be delivered as training classes by Northwest Cadence employees.
    - Lead development teams through the complete ALM and/or Visual Studio Team System solution.
    - Be able to communicate in detail how a solution will integrate into the larger technical problem space for large, complex enterprises.
    - Define technical solution requirements.
    - Provide guidance to the customer and project team with respect to technical feasibility, complexity, and level of effort required to deliver a custom solution.
    - Ensure that the solution is designed, developed and deployed in accordance with the agreed-upon development work plan.
    - Create and deliver weekly status reports of training and/or consulting progress.
    Engagement Responsibilities:
    - Have a strong desire to provide thought leadership related to technology and to help grow the business.
    - Work effectively and professionally with employees at all levels of a customer's organization.
    - Have strong verbal and written communication skills.
    - Have effective presentation, organizational and planning skills.
    - Have effective interpersonal skills and the ability to work in a team environment.
    Enquire at [email protected]

    Read the article

  • Exalytics and Oracle Business Intelligence Enterprise Edition (OBIEE) Partner Workshop

    - by mseika
    Workshop Description
    Oracle Fusion Middleware 11g is the #1 application infrastructure foundation. It enables enterprises to create and run agile and intelligent business applications and maximize IT efficiency by exploiting modern hardware and software architectures. Oracle Exalytics Business Intelligence Machine is the world's first engineered system specifically designed to deliver high performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers unmatched speed, visualizations and scalability for Business Intelligence and Enterprise Performance Management applications. This FREE hands-on partner workshop highlights both the hardware and software components that are engineered to work together to deliver Oracle Exalytics: an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle's proven Business Intelligence Foundation with enhanced visualization capabilities and performance optimizations. This workshop will provide hands-on experience with Oracle's latest engineered system. Topics covered will include the TimesTen In-Memory Database and the new Summary Advisor for Exalytics, the technical details (including mobile features) of the latest release of visualization enhancements for OBIEE, and technical updates on Essbase. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics. If you are a BI or Data Warehouse architect, developer or consultant, you don't want to miss this 3-day workshop. Register Now!

    Presentations
    - Exalytics Architectural Overview
    - Upgrade and Lifecycle Management
    - TimesTen for Exalytics
    - Summary Advisor Utility
    - Essbase and EPM System on Exalytics
    - Dashboard and Analysis Interactions
    - OBIEE 11.1.1.6 Features and Advanced Topics

    Lab Outline
    The labs showcase Oracle Exalytics core components and functionality and provide expertise in the new features and updates in Oracle Business Intelligence 11.1.1.6 from prior releases. The hands-on activities are based on an Oracle VirtualBox image with software and training samples pre-installed.
    - Lab Environment Setup
    - Creating and Working with Oracle TimesTen In-Memory Database
    - Running Summary Advisor Utility
    - Working with Exalytics Visualization Features – Dashboard and Analysis Interactions

    Audience
    - Oracle Partners
    - BI and EPM Application Developers and Implementers
    - System Integrators and Solution Consultants
    - Data Warehouse Developers
    - Enterprise Architects

    Prerequisites
    - Experience and understanding of OBIEE 11g is required
    - Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or the BIEE 11g Introduction Workshop is highly recommended
    - Good understanding of data warehousing and data modeling for reporting and analysis purposes
    - Strong experience with database technologies preferred

    Equipment Requirements
    This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements:
    Hardware
    - Minimum 8GB RAM
    - 60 GB free space (includes staging)
    - USB 2.0 port (at least one available)
    - It is strongly recommended that you bring a mouse. You will be working in a development environment and using the mouse heavily.
    Software
    - One of the following operating systems: 64-bit Windows host/laptop OS, or a 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.)
    - Internet Explorer 7.x/8.x or Firefox 3.5.x
    - WINRAR or 7zip utility to unzip workshop files: downloadable from http://www.win-rar.com/download.html and http://www.7zip.com/
    - Oracle VirtualBox 4.0.2 or higher, downloadable from http://www.virtualbox.org/wiki/Downloads
    - CPU virtualization mode needs to be enabled. We will provide guidance on the day of the workshop.
    Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment.

    Schedule
    This workshop is 3 days. Times vary by country!
    - 9:00am: Sign-in and technical setup
    - 9:30am: Workshop starts
    - 5:00pm: Workshop ends
    Oracle Exalytics and Business Intelligence (OBIEE) Workshop, December 11-13, 2012: Oracle BVP, Birmingham, UK. Register Here. Questions? Send email to: [email protected] Oracle Platform Technologies Enablement Services

    Read the article

  • Eclipse no longer useful

    - by dgood1
    When I got Eclipse from the Ubuntu Software Center, it was good and worked fine. I could work on Java projects without problems. This week I was required to add ADT and tried the adt-bundle, assuming it had everything I needed, seeing that the SDK had more steps. So now I can create Android apps using the adt-bundle, but when I tried to work on my Java projects again I discovered:
    - I can't run my Java projects: I get "The selection cannot be launched. And there are no recent launches."
    - I also believe Eclipse no longer recognizes them as Java programs, because everything is in black and white instead of the usual green/blue/red/black highlighting for comments, variables and Strings.
    - I can't make new projects of ANYTHING unless I use the adt-bundle; New Project only offers CVS (whatever that is).
    - My perspectives seem limited. I remember having more choices, and now I'm limited to [Java], Resource, CVS Repository, Debug, and Team Sync. I was told I could use perspectives to swap between Android and Java development. Even after installing ADT using "Install new Software", nothing.
    - I can't uninstall/purge/remove Eclipse via the terminal. I tried removing it and then reinstalling it via the Ubuntu Software Center, with no result other than its temporary removal.
    - (Possibly unrelated) A large number of repositories are not found when updating Eclipse. (See Step 8 in Summary of what I did...)
    Although, on checking the versions and installation history, I confirmed Android and Java are installed; Eclipse probably just doesn't know they are there. Eclipse Indigo: Version: 3.7.2, Build id: I20110613-1736.
    Summary of what I did before and during the problem: Downloaded the adt-bundle. Attempted the instructions from my teacher (Install new Software); this failed, but other than an annoying "can't find repository" during each update, there was no damage to report, and it was later fixed. Ran the "eclipse" executable from the adt-bundle. Updated Eclipse (after the restart, I noticed the problem). NOTE: other than window arrangement, I had no customizations. Played around with the Window preferences and project properties; restored the default settings after no results. Tried "apt-get purge eclipse"; it couldn't find Eclipse, so nothing happened. Used the Software Center: no results. Tried swapping workspaces (a different folder, a deeper folder, renaming); all return the same problem. Deleted the adt-bundle (browsed the folders, then deleted it) and installed the ADT SDK only; I can't find any changes other than some disk space usage, and of course I can't make Android apps until I unzip the bundle again. Went to Window > Preferences > Install/Update > Available Software Sites, checked as many repositories as possible, then updated. Still nothing. I'm about to give uninstalling it a second try, because I think my last action is just taking up space. But I'll wait until tomorrow, in case an answer here helps. Any thoughts?

    Read the article

  • GoldenGate 12c - MySQL Active-Active Replication Setup

    - by Jinyu Wang-Oracle
    Active-active (also called Master-Master or Bi-Directional) replication captures data changes from two or more systems and replicates the changes to synchronize the data. Active-active replication is often needed for high availability, load balancing and scale-out purposes. Oracle GoldenGate is known as one of the first and best replication tools for handling active-active replication. As of Oracle GoldenGate 12c, it provides the following (refer to Oracle GoldenGate 12.1.2 Documentation - Configuring Oracle GoldenGate for Active-Active High Availability for more information):
    - Robust loop-back prevention
    - Comprehensive conflict detection and resolution support
    - Heterogeneous support across different database versions and operating systems
    Oracle GoldenGate supports active-active configurations for DB2 on z/OS, LUW, and IBM i, MySQL, Oracle, SQL/MX, SQL Server, Sybase, and Teradata. However, the setup is different from database to database. In this example, I will show you how to set up active-active data replication between two MySQL database instances. The example sets up active-active replication between a MySQL 5.5 and a MySQL 5.6 instance and is configured as follows:

    MySQL 5.5 (Manager Port: 15105)

    Extract
        EXTRACT demoex01
        SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
        DBOPTIONS CONNECTIONPORT 3305
        DBOPTIONS HOST oraclelinux6.localdomain
        SOURCEDB test USERID root, PASSWORD mysql
        EXTTRAIL ./dirdat/extract/de
        TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
        REPORTROLLOVER AT 05:30 ON saturday
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Pump
        EXTRACT demopm01
        RMTHOST localhost, MGRPORT 15106, COMPRESS, TIMEOUT 30
        RMTTRAIL ./dirdat/replicat/ps
        PASSTHRU
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Replicat
        replicat demorp01
        setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
        dboptions host oraclelinux6.localdomain, connectionport 3305
        targetdb test, userid root, password mysql
        sourcedefs ./dirdat/replicat/democust.def
        discardfile ./dirrpt/demprp01.dsc, purge
        REPERROR (DEFAULT, ABEND)
        REPERROR (1062, IGNORE)
        map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, region_code="region code");
        map test.TCUSTORD, target test.TCUSTORD;

    MySQL 5.6 (Manager Port: 15106)

    Replicat
        replicat demorp01
        setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
        dboptions host oraclelinux6.localdomain, connectionport 3306
        targetdb test, userid root, password mysql
        --assumetargetdefs
        sourcedefs ./dirdat/replicat/democust.def
        discardfile ./dirrpt/demprp01.dsc, purge
        map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, "region code"=region_code);
        map test.TCUSTORD, target test.TCUSTORD;

    Extract
        EXTRACT demoex01
        SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
        DBOPTIONS CONNECTIONPORT 3306
        DBOPTIONS HOST oraclelinux6.localdomain
        SOURCEDB test USERID root, PASSWORD mysql
        EXTTRAIL ./dirdat/extract/de
        TRANLOGOPTIONS ALTLOGDEST "/usr/local/mysql56/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Pump
        EXTRACT demopm01
        RMTHOST localhost, MGRPORT 15105, COMPRESS, TIMEOUT 30
        RMTTRAIL ./dirdat/replicat/ps
        PASSTHRU
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    The setup parameters are quite self-explanatory. The key part of the setup is avoiding replication data looping. Oracle GoldenGate for MySQL uses the information in the replication checkpoint table to identify the transactions applied by the replicats, and thus avoids extracting those transactions again with the Oracle GoldenGate extracts. The relevant setting in the extract on the MySQL 5.5 instance is shown below.

        TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl

    Setting up active-active replication is often more complicated than this and requires additional considerations, which I will elaborate on in follow-up discussions.
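    For completeness, here is a rough sketch (not part of the original setup above; it simply reuses the same group names, trail paths and credentials, and the exact steps may vary by GoldenGate version and environment) of how the capture, pump and delivery processes and the checkpoint table might be registered in GGSCI on the MySQL 5.5 host:

        -- hedged illustration only; adjust names, paths and credentials to your environment
        GGSCI> DBLOGIN SOURCEDB test, USERID root, PASSWORD mysql
        GGSCI> ADD CHECKPOINTTABLE test.checkpoint_tbl
        GGSCI> ADD EXTRACT demoex01, TRANLOG, BEGIN NOW
        GGSCI> ADD EXTTRAIL ./dirdat/extract/de, EXTRACT demoex01
        GGSCI> ADD EXTRACT demopm01, EXTTRAILSOURCE ./dirdat/extract/de
        GGSCI> ADD RMTTRAIL ./dirdat/replicat/ps, EXTRACT demopm01
        GGSCI> ADD REPLICAT demorp01, EXTTRAIL ./dirdat/replicat/ps, CHECKPOINTTABLE test.checkpoint_tbl
        GGSCI> START EXTRACT demoex01
        GGSCI> START EXTRACT demopm01
        GGSCI> START REPLICAT demorp01

    The mirror-image commands would run on the MySQL 5.6 host, so that each side both captures its own changes and applies the changes arriving from the other side.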

    Read the article

  • Deliberate Practice

    - by Jeff Foster
    It's easy to assume, as software engineers, that there is little need to "practice" writing code. After all, we write code all day long! Just by writing a little each day, we're constantly learning and getting better, right? Unfortunately, that's just not true. Of course, developers do improve with experience. Each time we encounter a problem we're more likely to avoid it next time. If we're in a team that deploys software early and often, we hone and improve the deployment process each time we practice it. However, not all practice makes perfect. To develop true expertise requires a particular type of practice, deliberate practice, the only goal of which is to make us better programmers. Everyday software development has other constraints and goals, not least the pressure to deliver. We rarely get the chance in the course of a "sprint" to experiment with potential solutions that are outside our current comfort zone. However, if we believe that software is a craft, then it's our duty to strive continuously to raise the standard of software development. This requires specific and sustained efforts to get better at something we currently can't do well (from Harvard Business Review, July/August 2007). One interesting way to introduce deliberate practice sustainably is the code kata. The term kata derives from martial arts and refers to a set of movements practiced either solo or in pairs. One of the better-known examples is the Bowling Game kata by Bob Martin, the goal of which is simply to write some code to do the scoring for 10-pin bowling. It sounds too easy, right? What could we possibly learn from such a simple example? Trust me, though, that it's not as simple as five minutes of typing and a solution. Of course, we can reach a solution in a short time, but the important thing about code katas is that we explore each technique fully and in a controlled way. We tackle the same problem multiple times, using different techniques and making different decisions, understanding the ramifications of each one, and exploring edge cases. The short feedback loop optimizes opportunities to learn. Another good example is Conway's Game of Life. It's a simple problem to solve, but try solving it in a functional style. If you're used to mutability, solving the problem without mutating state will push you outside of your comfort zone. Similarly, if you try to solve it with the focus of "tell-don't-ask", how will the responsibilities of each object change? As software engineers, we don't get enough opportunities to explore new ideas. In the middle of a development cycle, we can't suddenly start experimenting on the team's code base. Code katas offer an opportunity to explore new techniques in a safe environment. If you're still skeptical, my challenge to you is simply to try it out. Convince a willing colleague to pair with you and work through a kata or two. It only takes an hour and I'm willing to bet you learn a few new things each time. The next step is to make it a sustainable team practice. Start with an hour every Friday afternoon (after all, who wants to commit code to production just before they leave for the weekend?) for a month and see how that works out. Finally, consider signing up for the Global Day of Code Retreat. It's like a daylong code kata; it's on December 8th and there's probably an event in your area!
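    To make the Bowling Game kata mentioned above concrete, here is a minimal sketch of the kind of scorer a kata session might converge on. This is purely illustrative: the class and method names are my own, it assumes a complete game with any bonus rolls recorded, and it is not Bob Martin's reference solution.

        // Illustrative 10-pin bowling scorer (assumes a complete, valid game).
        using System.Collections.Generic;

        public class BowlingGame
        {
            private readonly List<int> rolls = new List<int>();

            public void Roll(int pins)
            {
                rolls.Add(pins);
            }

            public int Score()
            {
                int score = 0;
                int i = 0; // index of the first roll of the current frame
                for (int frame = 0; frame < 10; frame++)
                {
                    if (rolls[i] == 10) // strike: 10 plus the next two rolls
                    {
                        score += 10 + rolls[i + 1] + rolls[i + 2];
                        i += 1;
                    }
                    else if (rolls[i] + rolls[i + 1] == 10) // spare: 10 plus the next roll
                    {
                        score += 10 + rolls[i + 2];
                        i += 2;
                    }
                    else // open frame: just the pins knocked down
                    {
                        score += rolls[i] + rolls[i + 1];
                        i += 2;
                    }
                }
                return score;
            }
        }

    Two quick checks drive out most of the logic: a perfect game (twelve strikes) should score 300, and a game of all 5s (twenty-one rolls) should score 150. The point of the kata is not this particular solution but revisiting the problem with different techniques each time.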

    Read the article

  • Planning development when academic research is involved

    - by Another Anonymous User
    Dear fellow programmers, how do you do "software planning" when academic research is involved? And, on a side note, how do you convince your boss that writing software is not like building a house and is more like writing a novel? The gory details are below. I am in charge of a small dev team working in a research lab. We started developing software with the purpose of going public one day (i.e. selling it and making money off it). Such software depends on, amongst other things, at least two independent research lines: that is, there are at least two Ph.D. candidates who will, hopefully, one day come out with a working implementation of what we need. The main software also depends on other, more concrete resources that we as developers can take care of: graphics rendering, soft-body deformation, etc. My boss asked me to write the specifications, requirements AND a bloody GANTT chart of the entire project. Faced with the fact that I don't have a clue about the research part, and that such research is fundamental for the software, he said "make assumptions." For clarity: he is a professor whose Ph.D. students should come up with the research we need. And he comes from a strictly engineering background: plan everything first, write down specifications, and only then write code, because that "is the last part". What I am doing now: I broke the product down into features; each 'feature' is, de facto, a separate product; each feature is built on top of the previous one; once a feature (A) has a working prototype the team can start working on the next feature (B), while QA is being done for A (if money allows, more people can be brought in, etc.); features that depend on research will come last: by then, hopefully, the research part will be completed (when is still a big question). Also, I set the team to use SCRUM for the development of 'version 1.0', due in a few months. This deadline could be set based on reasonable assumptions: we listed all required features, we counted our availability, and we gave a reasonable estimate. So my questions, again, are: How do I make my boss happy while at the same time getting something out the door? How do I write specifications for something we, the developers, have no clue whether it's possible to do or not? (We still haven't decided which libraries to use for some tasks; we'll do so when we need to.) How do I get the requirements for that, given that there are as yet no clients nor investors, just lots of interest and promises? How do I get peace in the world? I am sure at least one of my questions will be answered :) PS: I am writing this anonymously since a potential investor might backfire if this is discovered. I hope you'll understand. However, I must say I do not like this mentality of 'hiding the truth': this program will likely benefit many, and not being able to talk openly about it (with my name and my reputation attached) feels like censorship. But alas, I care more about your suggestions now.

    Read the article

  • Suppliers for revision-controlled hardware, long-life motherboards?

    - by jacobsee
    Has anyone had good experience with suppliers of industrial computers, specifically 'long-life, revision-controlled' motherboards? I've found a couple of likely candidates, including ITOX, BCM, and DuroPC, but haven't been able to find much in the way of independent reviews. I'm currently using off-the-shelf motherboards for an industrial data-acquisition system and am trying to eliminate the problem of rapid turnover/obsolescence of motherboards.

    Read the article

  • jQuery, jQuery UI, and Dual Licensed Plugins (Dual Licensing)

    - by John Hartsock
    OK, I have read many posts regarding dual licensing using the MIT and GPL licenses, but I'm still curious, as the wording seems to be inclusive. Many of the dual licenses state that the software is licensed under "MIT AND GPL". The "AND" is what confuses me. It seems to me that the word "AND" in the terms means you will be licensing the product under both licenses. Most of the posts here on Stack Overflow state that you can license the software under one "OR" the other. jQuery specifically states "OR", whereas jQuery UI specifically states "AND". Another instance of the "AND" would be jqGrid. I'm not a lawyer, but it seems to me that a legal interpretation of this would be that use of the software means you are using the software under both licenses. Has anyone who has contacted a lawyer gotten clarification or a definitive answer as to what is true? Can you use dual-licensed software products that state "AND" in the terms of agreement under either license? EDITED: Guys, here is specifically what I'm talking about. On jquery.org/license you see the following stated: You may use any jQuery project under the terms of either the MIT License or the GNU General Public License (GPL) Version 2. But in the header of the jQuery and jQuery UI libraries you see this: * Dual licensed under the MIT and GPL licenses. * http://docs.jquery.com/License The site says MIT or GPL, but the license statement in the software says MIT and GPL.

    Read the article

  • Navigating through a sea of hype

    - by wouldLikeACrystalBall
    This is a vague, open question, so if you have no interest in these, please leave now. A few years ago it seemed everyone thought the death of desktop software was imminent. Web applications were the future. Everyone would move to cloud-based software-as-a-service systems, and developing applications for specific end-user platforms like Windows would soon become something of a ghetto. Joel's "How Microsoft Lost the API War" was but one of many such pieces sounding the death knell for this way of software development. Flash-forward to 2010, and the hype is all around mobile devices, particularly the iPhone. Software-as-a-Service vendors--even small ones such as YCombinator startups--go out of their way to build custom applications for the iPhone and other smart phone devices; applications that can be quite sophisticated, that run only on specific hardware and software architectures and are thus inherently incompatible. Now some of you are probably thinking, "Well, only the decline of desktop software was predicted; mobile devices aren't desktops." But the term was used by those predicting its demise to mean laptops also, and really any platform capable of running a browser. What was promised was a world where HTML and related standards would supplant native applications and their inherent difficulties. We would all code to the browser, not the OS. But here we are in 2010 with the AppStore bulging and development for the iPad just revving up. A few days ago, I saw someone on Hacker News claim that the future of computing was entirely in small, portable devices. Apparently the future is underpowered, requires dexterous thumbs and induces near-sightedness. How do those who so vehemently asserted one thing now assert the opposite with equal vehemence, without making even the slightest admission of error? And further, how are we as developers supposed to sift through all of this? I bought into the whole web-standards utopianism that was in vogue back in '06-'07 and now feel like it was a mistake. Is there some formula one can apply rather than a mere appeal to experience?

    Read the article

  • I'd like to rebuild my web server without web management software; what knowledge, skills, and tools will I require? [closed]

    - by Joe Zeng
    I've been using Webmin for the web server that runs my personal website and a host of other websites for a while now, and I feel like I should be able to manage my web server more directly. I haven't even touched Webmin for the past year or so, and I feel like it has too much functionality to click through whenever I want to access it or create a new subdomain or database on my site. I want to try something lighter and more wholly manageable, now that I'm more comfortable using ssh and command-line tools. I've decided that I'm going to try using Django as a framework, but obviously that's only part of the picture. What sort of knowledge will I require?

    Read the article

  • RegistryKey ValueCount/SubKeyCount wrong

    - by Mark J Miller
    I am trying to query the following registry key values:

        HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\SharedMemoryOn
        HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\SuperSocketNetLib\ProtocolOrder

    But depending on which machine I run the program on, the query returns null. When I debug on my local machine and inspect ValueCount for:

        HKLM\SOFTWARE\Microsoft\MSSQLServer\Client
        HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\SuperSocketNetLib

    the count is 0 and OpenSubKey returns null. I am a domain admin, I am in the local administrators group, and I have added the following to my app.manifest:

        <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />

    Any idea why?

        private static void ValidateSqlClientSettings()
        {
            Console.WriteLine("\r\n/////////////// LOCAL SQL CLIENT PROTOCOLS ////////////////");
            RegistryKey keyHKLM = Registry.LocalMachine;

            ///TODO: nullreferenceexception - connect to remote machine and find out why
            RegistryKey sqlClientKey = keyHKLM.OpenSubKey(@"SOFTWARE\Microsoft\MSSQLServer\Client");
            if (sqlClientKey == null)
            {
                WriteLine2Console(@"WARNING: unable to read registry key '{0}\SOFTWARE\Microsoft\MSSQLServer\Client'", ConsoleColor.Yellow);
            }

            var cliKeyNames = from k in sqlClientKey.GetSubKeyNames()
                              where k == "SuperSocketNetLib"
                              select k;

            ///TODO: find out why these values are always missing (even if I can see them in regedit)
            Console.Write("Shared Memory Disabled (cliconfg): ");
            if (Convert.ToBoolean(sqlClientKey.GetValue("SharedMemoryOn")))
                WriteLine2Console("FAILED", ConsoleColor.Red);
            else if (sqlClientKey.GetValue("SharedMemoryOn") == null)
                WriteLine2Console(String.Format("WARNING - unable to read '{0}\\SharedMemoryOn'", sqlClientKey.Name), ConsoleColor.Yellow);
            else
                WriteLine2Console("PASS", ConsoleColor.Green);

            Console.Write("Client Protocol Order (cliconfg - tcp first): ");
            foreach (string cliKey in cliKeyNames)
            {
                RegistryKey subKey = sqlClientKey.OpenSubKey(cliKey);
                object order = subKey.GetValue("ProtocolOrder");
                if (order != null && order.ToString().StartsWith("tcp") == false)
                {
                    WriteLine2Console("FAILED", ConsoleColor.Red);
                }
                else if (order == null)
                {
                    WriteLine2Console(String.Format("WARNING - unable to read '{0}\\ProtocolOrder'", subKey.Name), ConsoleColor.Yellow);
                }
                else
                {
                    WriteLine2Console("PASS", ConsoleColor.Green);
                }
                subKey.Close();
            }
            sqlClientKey.Close();
            keyHKLM.Close();
        }
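    One thing worth ruling out on the machines where these reads come back null (this is not from the original question, just a hedged suggestion, and it only applies if the process runs as 32-bit on 64-bit Windows) is WOW64 registry redirection, which silently points HKLM\SOFTWARE reads at the Wow6432Node branch. On .NET 4 or later, a quick check against the explicit 64-bit view looks roughly like this:

        // Hedged diagnostic sketch (assumes .NET 4+): open the 64-bit registry view explicitly.
        // If the key and values show up here but not via Registry.LocalMachine, the original
        // reads were being redirected to the 32-bit (Wow6432Node) branch.
        using System;
        using Microsoft.Win32;

        class RegistryViewCheck
        {
            static void Main()
            {
                using (RegistryKey hklm64 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64))
                using (RegistryKey client = hklm64.OpenSubKey(@"SOFTWARE\Microsoft\MSSQLServer\Client"))
                {
                    object sharedMemoryOn = (client == null) ? null : client.GetValue("SharedMemoryOn");
                    Console.WriteLine("SharedMemoryOn (64-bit view): {0}", sharedMemoryOn ?? "<not found>");
                }
            }
        }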

    Read the article

< Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >