Search Results

Search found 2040 results on 82 pages for 'platforms'.


  • ArchBeat Link-o-Rama for 2012-06-28

    - by Bob Rhubart
    Oracle Magazine Technologist of the Year Awards to honor architects at #OOW12
    Seven of the ten categories in this year's Oracle Magazine Technologist of the Year Awards are designated to celebrate architects. The winners will be honored at Oracle OpenWorld -- and showered with adulation from their colleagues. Nominations for these awards close on Tuesday, July 17, so make sure you submit your nominations right away.

    Oracle E-Business Suite 12 Certified on Additional Linux Platforms (Oracle E-Business Suite Technology)
    Oracle E-Business Suite Release 12 (12.1.1 and higher) is now certified on the following additional Linux x86/x86-64 operating systems: Oracle Linux 6 (32-bit), Red Hat Enterprise Linux 6 (32-bit), Red Hat Enterprise Linux 6 (64-bit), and Novell SUSE Linux Enterprise Server (SLES) version 11 (64-bit).

    FairScheduling Conventions in Hadoop (The Data Warehouse Insider)
    "If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go," says Dan McClary. Learn how in his technical post.

    SOA Learning Library (SOA & BPM Partner Community Blog)
    The Oracle Learning Library offers a vast collection of e-learning resources covering a mind-boggling array of products and topics. And it's all free—if you have an Oracle.com membership. And if you don't, that's free, too. Could this be any easier?

    Oracle Fusion Middleware Security: LibOVD: when and how | Andre Correa
    Fusion Middleware A-Team blogger Andre Correa offers some background on LibOVD and shares technical tips for its use.

    Virtual Developer Day: Oracle Fusion Development
    Yes, it's called "Developer Day," but there's plenty for architects, too. This free event includes hands-on labs, live Q&A with product experts, and a dizzying amount of technical information about Oracle ADF and Fusion Development -- all without having to pack a bag or worry about getting stuck in a seat between two professional wrestlers. Tuesday, July 10, 2012: 9:00 a.m. – 1:00 p.m. PT / 11:00 a.m. – 3:00 p.m. CT / 12:00 p.m. – 4:00 p.m. ET / 1:00 p.m. – 5:00 p.m. BRT

    Thought for the Day
    "Computers allow you to make more mistakes faster than any other invention in human history with the possible exception of handguns and tequila." — Mitch Ratcliffe
    Source: SoftwareQuotes.com

    Read the article

  • Configuring Full-Text Search for pdf and docx files

    - by Lukasz Kurylo
    I think it was in May that I created a little filters module based on Full-Text Search. I configured my dev machine, then did the same for two testing servers – one in our company, for internal testing before we deployed to the client, and then the client's testing server. Until last week this build was still on the testing server, and we finally got feedback that we could deploy it to the production one. I'll only say that I lost half a day because I had not correctly remembered what I did to configure FTS on the previous servers, and I had no notes. I foolishly trusted my memory. Lesson learned.
    For future reference, here are the steps to configure FTS for searching in *.pdf and *.docx files (and, by the way, other Office files like *.xlsx):
    1. From the page (link), download and install the *.pdf IFilter for FTS.
    2. Add the path of the folder where you installed the plugin to the PATH global system variable. The default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin
    3. From the page (link), download FilterPackx64.exe and install it.
    4. From SSMS, execute the following procedures:
    - sp_fulltext_service 'load_os_resources', 1
    - sp_fulltext_service 'verify_signature', 0
    5. Restart the server.
    6. Check whether the plugins are visible:
    - select document_type, path from sys.fulltext_document_types where document_type = '.pdf'
    - select document_type, path from sys.fulltext_document_types where document_type = '.docx'
    7. If we see a result, we can assume that everything is OK*.
    8. Now we can create a catalog for FTS and indexes on the appropriate columns.
    *I lost a lot of hours trying to find out why the *.pdf plugin wasn't indexing any files in the database, even though a row for it was present in the sys.fulltext_document_types table. After deeper investigation I found that the *.pdf files actually were being indexed – at least the EOF sign was added to the index for each file, and nothing more. In the end the problem was that I had forgotten to append \bin to the plugin path in the PATH variable.
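    Step 6 is worth scripting so the check can be repeated on every server (and not left to memory again). Below is a minimal sketch that runs the same catalog query from C# using System.Data.SqlClient; the connection string is a placeholder you would adapt to your own environment.

        using System;
        using System.Data.SqlClient;

        class FtsFilterCheck
        {
            static void Main()
            {
                // Placeholder connection string - point it at the server you just configured.
                const string connStr = @"Server=.\SQLEXPRESS;Database=MyDatabase;Integrated Security=true";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "SELECT document_type, path FROM sys.fulltext_document_types " +
                    "WHERE document_type IN ('.pdf', '.docx')", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        // One row per registered IFilter; a missing '.pdf' row usually means
                        // the Adobe IFilter (or its \bin entry in PATH) is not set up yet.
                        while (reader.Read())
                            Console.WriteLine("{0} -> {1}", reader["document_type"], reader["path"]);
                    }
                }
            }
        }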

    Read the article

  • Mobile Apps: An Ongoing Revolution

    - by Steve Walker
    A guest post from Suhas Uliyar, VP Mobile Strategy, Product Management, Oracle
    The rise of smartphone apps has proved transformational for businesses, increasing the productivity of employees while simultaneously creating some seriously cool end user experiences. But this is a revolution that is only just beginning. Over the next few years, apps will change everything about the way enterprises work as well as overhauling the experiences of customers.
    The spark for this revolution is simplicity. Simplicity has already proved important for the front-end of apps, which are now often as compelling and intuitive as consumer apps. Businesses will encourage this trend, both to further increase employee productivity and to attract 'digital natives' (as employees and customers). With the variety of front-end development tools available already, this should be a simple mission for developers to accomplish – but front-end simplicity alone is not enough for the enterprise mobile revolution. Without the right content even the most user-friendly app is useless. Yet when it comes to integrating apps with 'back-end' systems to enable this content, developers often face a complex, costly and time-consuming task. Then there is security: how can developers strike a balance between complying with enterprise security policies and keeping the user experience simple? Complexity has acted as a brake on innovation, with integration and security compliance swallowing enterprise resources. This is why the simplification of integration, security and scalability is so important: it frees time and money for revolutionary innovation.
    The key is to put in place a complete and unified SOA integration platform that runs across the entire enterprise and enables organizations to easily integrate and connect applications across IT environments. The platform must also be capable of abstracting apps from the underlying OS and enabling a 'write-once, run-anywhere' capability for mobile devices - essential for BYOD environments and integrating third-party apps. Mobile Back-end-as-a-Service can also be very important in streamlining back-end integration. Mobile services offered through the cloud can simplify mobile application development with a standard approach to dealing with complex server-side programming and integration issues. This allows the business to innovate at its own pace while providing developers with a choice of tools to speed development and integration.
    Finally, there is security, which must be handled in a way that encourages users to make the most of their mobile devices and applications. As mobile users, we want convenience, and that is why we generally approve of businesses that adopt BYOD policies. Enterprises can safely encourage BYOD as they can separate, protect, and wipe corporate applications by installing a secure 'container' around corporate applications on any mobile device. BYOD management also means users' personal applications and data can be kept separate from the enterprise information – giving them the confidence they need to embrace the use of their devices for corporate apps.
    Enterprises that place mobility at the heart of what they do will fundamentally transform their businesses and leap ahead of the competition. As businesses take to mobile platforms that simplify integration, security and scalability, we will see a blossoming of innovation that will drive new levels of user convenience and create new ways of working that we are only beginning to imagine.

    Read the article

  • Cloud Computing: Start with the problem

    - by BuckWoody
    At one point in my life I would build my own computing system for home use. I wanted a particular video card, a certain set of drives, and a lot of memory. Not only could I not find those things in a vendor’s pre-built computer, but those were more expensive – by a lot. As time moved on and the computing industry matured, I actually find that I can buy a vendor’s system as cheaply – and in some cases far more cheaply – than I can build it myself.   This paradigm holds true for almost any product, even clothing and furniture. And it’s also held true for software… Mostly. If you need an office productivity package, you simply buy one or use open-sourced software for that. There’s really no need to write your own Word Processor – it’s kind of been done a thousand times over. Even if you need a full system for customer relationship management or other needs, you simply buy one. But there is no “cloud solution in a box”.  Sure, if you’re after “Software as a Service” – type solutions, like being able to process video (Windows Azure Media Services) or running a Pig or Hive job in Hadoop (Hadoop on Windows Azure) you can simply use one of those, or if you just want to deploy a Virtual Machine (Windows Azure Virtual Machines) you can get that, but if you’re looking for a solution to a problem your organization has, you may need to mix Software, Infrastructure, and perhaps even Platforms (such as Windows Azure Computing) to solve the issue. It’s all about starting from the problem-end first. We’ve become so accustomed to looking for a box of software that will solve the problem, that we often start with the solution and try to fit it to the problem, rather than the other way around.  When I talk with my fellow architects at other companies, one of the hardest things to get them to do is to ignore the technology for a moment and describe what the issues are. It’s interesting to monitor the conversation and watch how many times we deviate from the problem into the solution. So, in your work today, try a little experiment: watch how many times you go after a problem by starting with the solution. Tomorrow, make a conscious effort to reverse that. You might be surprised at the results.

    Read the article

  • Running C++ AMP kernels on the CPU

    - by Daniel Moth
    One of the FAQs we receive is whether C++ AMP can be used to target the CPU. For targeting multi-core we have a technology we released with VS2010 called PPL, which has had enhancements for VS 11 – that is what you should be using! FYI, it also has a Linux implementation via Intel's TBB which conforms to the same interface. When you choose to use C++ AMP, you choose to take advantage of massively parallel hardware, through accelerators like the GPU. Having said that, you can always use the accelerator class to check if you are running on a system where there is no hardware with a DirectX 11 driver, and decide what alternative code path you wish to follow. In fact, if you do nothing in code and the runtime does not find DX11 hardware to run your code on, it will choose the WARP accelerator, which will run your code on the CPU, taking advantage of multi-core and SSE2 (depending on the CPU capabilities, WARP also uses SSE3 and SSE4.1 – it does not currently use AVX, and on such systems you hopefully have a DX11 GPU anyway).
    A few things to know about WARP:
    - It is our fallback CPU solution, not intended as a primary target of C++ AMP.
    - WARP stands for Windows Advanced Rasterization Platform and you can read old info on this MSDN page on WARP.
    - What is new in the Windows 8 Developer Preview is that WARP now supports DirectCompute, which is what C++ AMP builds on.
    - It is not currently clear if we will have a CPU fallback solution for non-Windows 8 platforms when we ship.
    - When you create a WARP accelerator, its is_emulated property returns true.
    - WARP does not currently support double precision.
    BTW, when we refer to WARP, we refer to the accelerator described above. If we use lower case "warp", that refers to a bunch of threads that run concurrently in lock step and share the same instruction. In the VS 11 Developer Preview, the size of a warp in our Ref emulator is 4 – Ref is another emulator that runs on the CPU, but it is extremely slow and not intended for production, just for debugging. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • What should filenames and URLs of images contain for SEO benefit?

    - by Baumr
    We know that good site architecture usually looks like this:
    example-company.com/
    example-company.com/about/
    example-company.com/contact/
    example-company.com/products/
    example-company.com/products/category/
    example-company.com/products/category/productname/
    Now, when it comes to Google Image search, it is clear that the img alt tag, filename/URL, and surrounding text (captions, headings, paragraphs) have an effect on ranking. I want to ask about the filename of the images that we should use (e.g. product-photo.jpg). ...but first about the URL: Often web developers stick all images in a single folder in the root: example-company.com/img/ — and I have stopped doing that. (I don't want to get into it, but basically, it seems more semantic for images which make up part of the content at each sub-directory.) However, when all images appear in a folder, I feel that their filename needs to reflect what they are a bit more than usual, for example: example-company.com/img/example-company-productname-category.jpg It's a longer filename than just product.png, but as long as it's relevant, I see no problem with regards to SEO (unless you're keyword stuffing), and it could even help rank for keywords: "example company" "productname" "category" So no questions there. But what about when we have placed images in the site architecture we outlined at the beginning? In other words, what if image URL paths look like this: example-company.com/products/category/productname/productname.jpg My question is, should the URL be kept short like above and only have the "productname" (and some descriptive keywords) as part of its filename? Or should it also include the "example-company" and "category"? Like so: example-company.com/products/category/productname/example-company-category-productname.jpg That seems much longer, and redundant when we look at the URL, but here are a few considerations. Images are often downloaded onto computers, and, to the average user, they lose their original URL and thus — it isn't clear where they came from. Also, some social networks, forums, and other platforms leave the filename intact when uploaded. (Many others rewrite it, for example, Pinterest and Facebook.) Another consideration: will this really help (even if ever so slightly) rank in Google Image Search, or at least inform Google that the product is something specific to the "example-company"? For example, what if this product can only be bought at this store and is the flagship product? In addition to an abundance of internal links to this product page, would having the "example company" name and "category" help it appear in "example company" searches? In other words, is less more?

    Read the article

  • Is there a pedagogical game engine?

    - by K.G.
    I'm looking for a book, website, or other resource that gives modern 3D game engines the same treatment as Operating Systems: Design and Implementation gave operating systems. I have read Jason Gregory's Game Engine Architecture, which I enjoyed. However, by intent the author treated components of the architecture as atomic units, whereas what I'm interested in is the plumbing between those units that makes a coherent whole out of ideally loosely coupled parts. In books such as these, one usually reads that "that's academic," but that's the point! I have also read Julian Gold's Object-oriented Game Development, which likewise was good, but I feel is beginning to show its age. Since even mobile platforms these days are multicore and have fast video memory, those kinds of things (concurrency, display item buffering) would ideally be covered. There are other resources, such as the Doom 3 source code, which is highly instructive for its being a shipped product. The problem with those is as follows:

    float Q_rsqrt( float number )
    {
        long i;
        float x2, y;
        const float threehalfs = 1.5F;

        x2 = number * 0.5F;
        y  = number;
        i  = * ( long * ) &y;                        // evil floating point bit level hacking
        i  = 0x5f3759df - ( i >> 1 );                // what the f***?
        y  = * ( float * ) &i;
        y  = y * ( threehalfs - ( x2 * y * y ) );    // 1st iteration
        // y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed

        return y;
    }

    To wit, while brilliant, this kind of source requires more enlightenment than I can usually muster upon first read. In summary, here's my white whale:
    - For an adult reader with experience in programming. I wish I could save all the trees killed by every. Single. Game Programming book ever devoting the first two chapters to "Now just what is a variable anyway?"
    - In C or C++, very preferably C++. Languages that are more concise are fantastic for teaching, except for when what you want to learn is how to cope with a verbose language. There is also the benefit of the guardrails that C++ doesn't provide, such as garbage collection.
    - Platform agnostic. I'm sincerely afraid that this book is out there and it's Visual C++/DirectX oriented. I'm a Linux guy, and I'd do what it takes, but I would very much like to be able to use OpenGL.
    Thanks for everything! Before anyone gets on my case about it, Fast inverse square root was from Quake III Arena, not Doom 3!

    Read the article

  • TechEd 2012: Windows 8 And Metro

    - by Tim Murphy
    Windows 8 is here (or at least very close) and that was the main feature of this morning's keynote. Antoine LeBlond started off by apologizing to the IT professionals since he planned on showing code. I'm not sure if IT Pros are that easily confused or why you would need such a disclaimer. Developers do real work, IT Pros just play with toys (just kidding). The highlights of the Windows 8 keynote for me started with some of the UI design elements that I had not seen when I was shown one of the Build tablets. Specifically I liked the AppBar features that we have become used to with Windows Phone and some of the gesture features. Even though they have been available on other platforms before, I think Microsoft really got them right. Two other great features of Windows 8 that they demonstrated were the Hyper-V capabilities and the ability to run Windows 8 anywhere from a USB key. My jaw dropped through the floor seeing a feature-rich OS boot off of a thumb drive. WOW! I also can't wait to get rid of dual booting just to run Hyper-V images when developing. The morning continued with a session on Metro XAML development with Tim Heuer. While it included a lot of great XAML Metro demos, I was pleasantly surprised by some of the things I found out about Visual Studio 2012. Finding out that Blend is now integrated with VS2012, after working with them as separate applications, was an encouraging start. Moving on to Metro, he introduced the nugget that WinRT is async everywhere. How deep this model goes will be an interesting thing to find out as I learn more about developing for the platform. Thankfully he followed that up with a couple of new keywords, await and async, that eliminate a lot of plumbing that has been required in the past for asynchronous transactions. Tim also related that since the Metro framework is relatively small and most apps will use a significant amount of it, the entire surface is referenced by default. This is a contrast to adding namespaces and assemblies one after another as we normally do. This was such a power-packed session that I can't detail it all here, so here is the teaser list:
    - New icons in VS2012 for extension methods
    - Emulator/simulator testing features for gestures
    - Portable class libraries
    - XAML no longer managed code
    - And so much more ...
    del.icio.us Tags: Windows 8,Metro,Tim Heuer,XAML,Windows Phone,Hyper-V,Antoine LeBlond,TechEd,TechEd 2012,Visual Studio 2012,Visual Studio
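    For readers who have not seen the two keywords in action, here is a hedged sketch of the pattern described above; HttpClient is used only as a convenient example of an awaitable .NET 4.5 API, not something taken from the session itself.

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class AsyncDemo
        {
            // 'async' marks the method; 'await' yields control until the task completes,
            // so no manual callbacks or Begin/End plumbing is needed.
            static async Task<int> GetPageLengthAsync(string url)
            {
                using (var client = new HttpClient())
                {
                    string body = await client.GetStringAsync(url);
                    return body.Length;
                }
            }

            static void Main()
            {
                // Blocking on .Result is fine for a console sketch; UI code would await instead.
                int length = GetPageLengthAsync("http://www.microsoft.com").Result;
                Console.WriteLine("Downloaded {0} characters.", length);
            }
        }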

    Read the article

  • Is Cygwin or Windows Command Prompt preferable for getting a consistent terminal experience for development?

    - by Paul Hazen
    The question: Which is better, installing cygwin or one of its cousins on all my windows machines to have a consistent terminal experience across all my development machines, or becoming well trained in the skill of mentally switching from linux terminal to windows command prompt? Systems I use: OSX Lion on a Macbook Air Windows 8 on a desktop Windows 7 on the same desktop Fedora 16 on the same desktop What I'm trying to accomplish Configure an entirely consistent (or consistent enough) terminal experience across all my machines. "enough" in this context is clearly subjective. Please be clear in your answer why the configuration you suggest is consistent enough. One more thing to keep in mind: While I do write a lot of code intended to run on Windows (actually code that runs on Windows Phone which necessitates a windows machine), I also write a lot of Java code, and prefer to do so in vim. I test a local repo in Java on my windows machine, and push to another test machine running ubuntu later in the development stage. When I push to the ubuntu machine, I'm exclusively in terminal, since I'm accessing it via SSH. Summary, with more accurate question: Is there a good way to accomplish what I'm trying to do, or is it better to get accustomed to remembering different commands based on the system I'm on? Which (if either) is considered "best practice" by the development community? Alternatively, for a consistent development experience, would it be better to write all my code SSHed into another machine, and move things to windows for compile / build only when I needed to? That seems like too much work... but could be a solution. Update: While there are insightful responses below, I have yet to hear an answer that talks about why any given solution is superior. Cygwin/GnuWin32 is certainly a way to accomplish a similar experience on all platforms, but since I'm just learning all things command line, I don't want to set myself up to do a lot of relearning/unlearning in the future. Cygwin/GnuWin32 has its peculiarities I would imagine, and being aware of how that set up works on Windows is a learning curve. Additionally, using Cygwin/GnuWin32 robs me of learning the benefits of PowerShell. As a newcomer to working in a command line, which path should I choose to minimize having to relearn/unlearn things in the future? or as my first paragraph poses: [is it better to use Cygwin] ...or [become] well trained in the skill of mentally switching from linux terminal to windows command prompt?

    Read the article

  • 4.8M wasn't enough so we went for 5.055M tpmc with Unbreakable Enterprise Kernel r2 :-)

    - by wcoekaer
    We released a new set of benchmarks today. One is an updated tpc-c from a few months ago where we had just over 4.8M tpmc at $0.98 and we just updated it to go to 5.05M and $0.89. The other one is related to Java Middleware performance. You can find the press release here. Now, I don't want to talk about the actual relevance of the benchmark numbers, as I am not in the benchmark team. I want to talk about why these numbers and these efforts, unrelated to what they mean to your workload, matter to customers. The actual benchmark effort is a very big, long, expensive undertaking where many groups work together as a big virtual team. Having the virtual team be within a single company of course helps tremendously... We already start with a very big server setup with tons of storage, many disks, lots of ram, lots of cpu's, cores, threads, large database setups. Getting the whole setup going to start tuning, by itself, is no easy task, but then the real fun starts with tuning the system for optimal performance -and- stability. A benchmark is not just revving an engine at high rpm, it's actually hitting the circuit. The tests require long runs, require surviving availability tests, such as surviving crashes -and- recovery under load. In the TPC-C example, the x4800 system had 4TB ram, 160 threads (8 sockets, hyperthreaded, 10 cores/socket), tons of storage attached, tons of luns visible to the OS. flash storage, non flash storage... many things at high scale that all have to be perfectly synchronized. During this process, we find bugs, we fix bugs, we find performance issues, we fix performance issues, we find interesting potential features to investigate for the future, we start new development projects for future releases and all this goes back into the products. As more and more customers, for Oracle Linux, are running larger and larger, faster and faster, more mission critical, higher available databases..., these things are just absolutely critical. Unrelated to what anyone's specific opinion is about tpc-c or tpc-h or specjenterprise etc, there is a ton of effort that the customer benefits from. All this work makes Oracle Linux and/or Oracle Solaris better platforms. Whether it's faster, more stable, more scalable, more resilient. It helps. Another point that I always like to re-iterate around UEK and UEK2 : we have our kernel source git repository online. Complete changelog of the mainline kernel, and our changes, easy to pull, easy to dissect, easy to know what went in when, why and where. No need to go log into a website and manually click through pages to hopefully discover changes or patches. No need to untar 2 tar balls and run a diff.

    Read the article

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP.NET, Windows Forms, WPF and Silverlight, plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities.
    Unified XAML Product Strategy: Share Code, Get More Controls
    In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications.
    Track Users' Behaviors
    New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze users' behaviors to ensure the experience you want to deliver.
    NetAdvantage for Windows Forms: New Office® 2010 Ribbon and Application Menu 2010
    Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as the Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment.
    Create Faster Web 2.0 Experiences with NetAdvantage for ASP.NET
    Infragistics continues to push the envelope to deliver the fastest ASP.NET WebForms controls available on the market. Our lightning fast ASP.NET grids are now enhanced with XPS/PDF Exporting and Summary Rows. This release also includes support for jQuery Templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls, allowing you to quickly cut down overall page size.
    Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience
    NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data.
    Mapping Made Easy
    10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spatial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like Choropleth shading, and scale panes allowing users to zoom in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs.
http://www.infragistics.com

    Read the article

  • 3 Reasons You Need To Know Something About Every Technology

    - by Tim Murphy
    I make my living as a consultant and a general technologist. I credit my success to the fact that I have never been afraid to pick up any product, language or platform needed to get the job done. While Microsoft technologies are my mainstay, I have done work on mainframe and UNIX platforms and have worked with a wide variety of database engines. Each one has its use, and most times it is less expensive to find a way to communicate with an existing system than to replace it. So what are the main benefits of expending the effort to learn a new technology?
    - New ways to solve problems
    - Accelerated development
    - The ability to advise clients and get new business opportunities
    By new technology I mean ones that you haven't had experience with before. They don't have to be the ones that just came out yesterday. As they say, those who do not learn from history are bound to repeat it. If you can learn something from an older technology, it can be just as valuable as the shiny new one. Either way, when you add another tool to your kit you get a new view on each problem you face. This makes it easier to create a sound solution. The next thing you can learn from working with different products and techniques is how to solve problems more efficiently. Many times, if you are working with a new language, you will find that there are specific design patterns that are commonly used with it. These can usually be applied with most languages. You just need to be exposed to them. The last point is about helping your clients and helping yourself. If you can get in on technologies early you will have an advantage over your competition in the market. You will also be able to honestly advise your client on why they should or should not go with a new product. Being able to compare products and their features is always an ability that stakeholders appreciate. You don't need to learn every detail of a product. Learn enough to function and get an idea of how to use the technology. Keep eating those technology Wheaties and you will be ready to go the distance in any project. del.icio.us Tags: Technology,technologists,technology generalist,Software Architecture

    Read the article

  • Manic Monday - More OpenWorld Solaris Sessions: Developers, Cloud, Customer Insights, Hardware Optimization

    - by Larry Wake
    We're overflowing with Monday sessions; literally more than one person can take in. Learn more about what's new in Oracle Solaris Studio, hear about the latest x86 and SPARC hardware optimizations, get some insights on cloud deployment strategies, and find out from your peers what they're doing with Oracle Solaris. If you're an OpenWorld attendee, go to Schedule Builder to guarantee your space in any session or lab. See yesterday's blog post and the "Focus on Oracle Solaris" guide for even more sessions.
    Monday, October 1st:
    10:45 AM - Maximizing Your SPARC T4 Oracle Solaris Application Performance (CON6382, Marriott Marquis - Golden Gate C3)
    Hear how customers and commercial software partners have reached peak performance on SPARC T4 servers and engineered systems with Oracle Solaris Studio and its latest tools for analyzing, reporting, and improving runtime performance: autoparallelizing, high-performance compilers; Performance Analyzer (used to find performance hotspots); Thread Analyzer (to expose data races and deadlocks); and Code Analyzer (used to discover latent memory corruption issues).
    10:45 AM - Cloud Formation: Implementing IaaS in Practice with Oracle Solaris (CON8787, Moscone South 302)
    Decisions, decisions--at the same time, we've got a session that covers why Oracle Solaris is the ideal OS for public or private clouds, IaaS or PaaS, with built-in features for elastic infrastructure, unrivaled security, superfast installation and deployment, nonstop availability, and crystal-clear observability. This session will include a customer study on how Oracle Solaris is used in the cloud today to implement the Oracle stack.
    12:15 PM - Customer Insight: Oracle Solaris on Oracle Exadata, Oracle Exalogic, and SPARC SuperCluster (CON8760, Moscone South 270)
    Hear from customers what benefits they have realized from using the Oracle stack on Oracle Exadata and Oracle's SPARC SuperCluster and from using Oracle Solaris on those engineered systems, taking advantage of built-in lightweight OS virtualization (Zones), enterprise reliability and scale, and other key features.
    1:45 PM - Case Study: Mobile Tornado Uses Oracle Technology for Better RAS and TCO (CON4281, Moscone West 2005)
    Mobile Tornado develops and markets instant communication platforms, replacing traditional radio networks with cellular networks. Its critical concern is uptime. Find out how they've used Oracle Solaris, Netra SPARC T4, and Oracle Solaris Cluster, including Oracle Solaris ZFS and Zones, for their Oracle Database deployments to improve reliability and drive down cost.
    3:15 PM - Technical Panel: Developing High Performance Applications on Oracle Solaris (CON7196, Marriott Marquis - Golden Gate C2)
    Engineers from the Oracle Solaris, Oracle Database, and Oracle Tuxedo development teams, and Oracle ISV Engineering discuss how they develop high-performance enterprise applications that take advantage of Oracle's SPARC and x86 servers, with Oracle Solaris Studio and new Oracle Solaris 11 features. Topics will include developer tools, parallel frameworks, best practices, and methodologies, as well as insights and case studies on parallelizing and optimizing application performance on Oracle Solaris. Bring your best questions!
    3:15 PM - x86 Power Management with Oracle Solaris: Current State, Opportunities, and Future (CON6271, Moscone West 2012)
    Another option for this time slot: learn how Intel Xeon and Oracle Solaris work together to reduce server power consumption. This presentation addresses some of the recent power management improvements in Oracle Solaris, opportunities to further improve energy efficiency, and some future directions for Oracle Solaris power management.

    Read the article

  • Saddling your mountain lion with JDeveloper

    - by Blueberry Coder
    Last October, Apple released Java Update 2012-006. This patch brought the Apple-provided JDK for OS X Lion v10.7 and OS X Mountain Lion v10.8 to version 1.6.0_37. At the same time, it disabled the Apple Java plugins and removed the Java Preferences panel that enabled users to manage the various Java releases on their computer. On the Windows and Linux platforms, JDeveloper 11g R1 has been certified to run on Java 7 since patch set 5. This is not the case on OS X. ( The above is not a typo. Apple's OS for personal computers is now known as OS X; the « Mac » prefix has been dropped with the 10.8 release. And it's pronounced « Oh-Ess-Ten », by the way. Yes, I am a nitpicker. I know... ) Please note JDeveloper 11g R2 is not certified either. On any platform. It will generally work, but there are known issues with ADF Mobile. Personally, I would recommend waiting for 12c before going to JDK 7. Now, suppose you have installed Oracle's JDK 7 on your Mac. JDeveloper will not run on it. It will not even install. Susan and I discovered this the hard way while setting up the ADF Mobile hands-on lab we ran at the UKOUG 2012 conference. The lab was a great success nevertheless, attracting nearly a hundred delegates. It was great to see the interest ADF Mobile already generates, especially among PL/SQL Developers and DBAs. But what did we do to make it work? While Java Update 2012-006 removed the Java Preferences panel, it left in place OS X's command-line Java infrastructure. Thus, it is possible to invoke the Apple JDK 6 to start the JDeveloper installer. Suppose your user is named « Fred », and that the JDeveloper installer is on your desktop. You can execute the following command in a terminal window (on a single line) to start the installer:
    /usr/libexec/java_home --version 1.6.0 --exec java -jar /Users/Fred/Desktop/jdevstudio11116install.jar
    The JDeveloper installer, being provided a valid JDK reference, will set up the IDE and embedded WebLogic Server instance accordingly. Clever engineering at its finest!

    Read the article

  • Oracle Database In-Memory

    - by Mike.Hallett(at)Oracle-BI&EPM
    Larry Ellison unveiled the next major milestone in database technology, Oracle Database In-Memory, on June 10, 2014. Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported. This option will accelerate database performance by orders of magnitude for analytics, data warehousing, and reporting while also speeding up online transaction processing (OLTP). It allows any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes.
    Benefits:
    - Fast ad-hoc analytics without the need to pre-create indexes
    - Completely transparent to existing applications
    - Faster mixed workload OLTP
    - No database size limit
    - Industrial strength availability and security
    - Robustness and maturity of Oracle Database 12c
    To find out more, see Oracle Database In-Memory and the Comment from Rittman Mead on Oracle In-Memory Option Launch ... and I will let you know how this unfolds in regards to advantages for OBI11g and Exalytics and Big Data over the coming months.

    Read the article

  • Real-Time Multi-User Gaming Platform

    - by Victor Engel
    I asked this question at Stack Overflow but was told it's more appropriate here, so I'm posting it again here. I'm considering developing a real-time multi-user game, and I want to gather some information about possibilities before I do some real development. I've thought about how best to ask the question, and for simplicity, the best way that occurred to me was to make an analogy to the field (or playground) game darebase. In the field game of darebase, there are two or more bases. To start, there is one team on each base. The game is a fancy game of tag. When two people meet out in the field, the person who left his base most recently timewise captures the other person. They then return to that person's base. Play continues until everyone is part of the same team. So, analogizing this to an online computer game, let's suppose there are an indefinite number of bases. When a person starts up the game, he has a team that is located at, for example, his current GPS coordinates. It could be a virtual world, but for sake of argument, let's suppose the virtual world corresponds to the player's actual GPS coordinates. The game software then consults the database to see where the closest other base is that is online, and the two teams play their game of virtual tag. Note that the user of the other base could have a different base than the one run by the current user as the closest base to him, in which case, he would be in two simultaneous battles, one with each base. When they go offline, the state of their players is saved on a server somewhere. Game logic calls for the players to have some automaton-logic of some sort, so they can fend for themselves in a limited way using basic rules, until their user goes online again. The user doesn't control the players' movements directly, but issues general directives that influence the players' movement logic. I think this analogy is good enough to frame my question. What sort of platforms are available to develop this sort of game? I've been looking at smartfoxserver, but I'm not convinced yet that it is the best option or even that it will work at all. One possibility, of course, would be to roll out my own web server, but I'd rather not do that if there is an existing service out there already that I could tap into. I will be developing for iOS devices at first. So any suggestions would be greatly appreciated. I think I need to establish the architecture first before proceeding with this project. Note that darbase is not the game I intend to implement, but, upon reflection, that might not be a bad idea either.

    Read the article

  • What could be the best way to generalize data from Facebook and Twitter?

    - by Sjaak van der Heide
    I am not sure if this is the best subsite to ask this question, but I'm pretty sure it doesn't fit on the normal or Facebook SO page... I've been asked to make a general API for connecting to several Social Media platforms (at the moment Facebook and Twitter). I have already realised both of them separately, meaning I retrieve the data I need from both Facebook and Twitter and hold the data in its own data class. In my case, a list of FacebookTimelineItems and a list of TwitterTimelineItems. Now the hard part is taking the parts that are used in both (username, id, message and such) and making one general class that is eventually passed on to who/whatever sent the call to my API. These are two pics of the data classes I have:
    http://imageshack.us/photo/my-images/703/facebookdata.png/
    http://imageshack.us/photo/my-images/204/twitterdata.png/
    Probably not 100% correct, but it gives an idea of what it looks like. Now I've had several ideas about how to generalize the two, which is harder than I thought at first:
    1. Create an interface (TimelineItem) and let the other classes extend that one. This way I'll always be sure I have a class that contains at least the basic info I need. The downside is that deserializing the JSON seems to be a nightmare.
    2. Use the two data classes I have and combine them into a new class afterwards, then pass that one back to whoever requested it. This would probably work, but I get the idea it's not the best way to tackle this problem, and it is pretty dodgy IF I get it working.
    3. Or, in case the other two are nearly impossible: keep the two separated in the front end, and go sit in the corner crying because I've just figured out you can't lump together Facebook and Twitter...
    Note: I don't have to make the front end part (view), I just make sure the Model is nicely filled with data :) I hope I placed this in the right section; if I didn't, I apologise and would like to know where I should go with my question. Thanks in advance for any replies/ideas/opinions on this.
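    As a rough illustration of what option 1 could look like in C# (a sketch only; the property names are assumptions, since the real fields live in the linked screenshots):

        using System;
        using System.Collections.Generic;

        // Shared shape for anything that can appear in the combined timeline.
        public interface ITimelineItem
        {
            string Id { get; }
            string UserName { get; }
            string Message { get; }
            DateTime Created { get; }
        }

        // The platform-specific classes stay intact and simply expose the common fields.
        public class FacebookTimelineItem : ITimelineItem
        {
            public string Id { get; set; }
            public string UserName { get; set; }
            public string Message { get; set; }
            public DateTime Created { get; set; }
            public string PictureUrl { get; set; }   // Facebook-only extra (hypothetical)
        }

        public class TwitterTimelineItem : ITimelineItem
        {
            public string Id { get; set; }
            public string UserName { get; set; }
            public string Message { get; set; }
            public DateTime Created { get; set; }
            public int RetweetCount { get; set; }    // Twitter-only extra (hypothetical)
        }

        public class TimelineService
        {
            // Merge both sources into one chronologically ordered feed.
            public IEnumerable<ITimelineItem> Merge(
                IEnumerable<FacebookTimelineItem> facebook,
                IEnumerable<TwitterTimelineItem> twitter)
            {
                var all = new List<ITimelineItem>();
                all.AddRange(facebook);                              // IEnumerable<T> covariance
                all.AddRange(twitter);
                all.Sort((a, b) => b.Created.CompareTo(a.Created));  // newest first
                return all;
            }
        }

    One design note: deserializing the JSON into the concrete classes first and only exposing them through the interface afterwards sidesteps the "deserialize into an interface" nightmare mentioned in option 1.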

    Read the article

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4 based systems, so it's probably a good time to review what I consider to be the best practices.
    - Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release, (b) every release we improve the generated code, so you will see things get better, and (c) old releases cannot know about new hardware.
    - Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better, as this will add within-file inlining.
    - Always generate debug information, using -g. This allows the tools to attribute information to lines of source. This is particularly important when profiling an application.
    - The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on these architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion.
    - Crossfile optimisation (-xipo) can be very useful - particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as favourite compiler optimisations, then this is mine!
    - Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. But what makes this optimisation really useful is that codes that are dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback.
    - The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best performing binary, but with a few caveats. It assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating.
    - SPARC64 processors, T3, and T4 implement floating point multiply-accumulate instructions. These can substantially improve floating point performance. To generate them the compiler needs the flag -fma=fused and also needs an architecture that supports the instruction (at least -xarch=sparcfmaf).
    The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.

    Read the article

  • Big Companies Influence Retail in 2010

    - by David Dorf
    From a retail industry perspective, 2010 will go down as the year mobile went mainstream, the economy recovered from the crash, and Facebook surpassed Google as the most influential online property. While the economy certainly had the biggest impact on the retail industry, a few big companies also exerted influence. Here's a rundown and a look back at 2010: Apple -- Steve Jobs and company continued to lead the mobile pack. Consumers are using their iPhones to shop, retailers are using the iPod Touch for mobile checkout, and both are embracing the iPad as the next wave of technology. The Next Technology from Apple Mobile Platforms in Retail Apple Stores, Touch2Systems, and the iPad Google -- Not to be outdone, Google's Android platform grew faster than Apple's, plus they support QRCodes natively and will probably beat Apple to NFC. Google Checkout, Product Search, and Boutiques.com continue to impact the e-commerce scene. Google Leverages Like.com Facebook -- While the movie The Social Network certainly made Facebook a household name, Connect, Places, and seeing the "like" button all over the Web really pushed Facebook everywhere. 2010 set the foundations for f-commerce. Facebook Participatory Promotions Crowd Savers What's the value of a Facebook fan? Step Aside Google Leveraging Social Networks for Retail Social Shopping at Nine West Groupon -- This newcomer executed on a simple concept flawlessly, making them the fasted company to reach $1B in revenue. (See cool chart from Silicon Alley Insider.) Google's offer of $5-6B wasn't enough, so now they are raising an additional $1B in funding, presumably to buy-up all the copycats across the globe. Changing the Way We Shop Amazon -- As if leading the e-commerce charge wasn't enough, Amazon shook things up with their purchase of Woot and release of their Price Checker mobile app. They continue to push boundaries with Kindle, and don't seem worried about the iPad at all. You Can't Win on Price Amazon Looks at Your Social Graph eBay -- Acquiring Skype didn't exactly work out, but eBay's purchase of PayPal and RedLaser are driving the company forward. They are still a major force. Bump the Bill Oracle, SAP, HP, IBM, and Cisco left their marks on the retail industry as well with various acquisitions and CxO shake-ups. We'll just have to wait and see what 2011 brings next.

    Read the article

  • Goodbye, Spreadsheets and Hello Modern ERP

    - by Christine Randle
    By: Steve Cox, Vice President, Oracle Accelerate for Midsize Companies
    Signs of the resurging economy continue to sprout, with green shoots rising across different sectors and industries. With the economy on the rebound, businesses are increasing their investment in technology to keep up with growth and evolving demands; as proof, Gartner recently increased its worldwide IT spending forecast for 2012 to $3.6 trillion, anticipating a 3 percent increase from 2011 spending.
    One of the segments most reliant on technology to catapult growth is midsize companies – established businesses leveraging every competitive efficiency and advantage to compete with much larger enterprises. We find that to compete against the big guys, they need to create an internal technology infrastructure to fuel that growth. Goodbye, spreadsheets and hello modern ERP.
    While many businesses postponed upgrading or replacing financial and HR management systems during the recession, now some have started dusting off RFPs and revisiting technology options. Years ago, midsize organizations used spreadsheet-based systems and processes to manage employees, customers, partners, products and revenue. We've found that as companies scale up, they are apt to avoid heavily customizing their existing systems, and instead are more prone to standardize on a modern, enterprise-class ERP system.
    Modern ERP platforms enable growing companies to immediately address the most pressing challenges – accounting, talent management, customer retention, et al. Midsize companies implement these systems and processes to help them earn more, go public or expand globally.
    And today, choice is a primary factor when selecting an ERP solution. Businesses have more deployment options now than ever before, depending on their unique structures and needs. Whether the preference is on demand, cloud, hosted or on premise, a modular, scalable deployment is available to meet the need.
    With modern ERP systems, businesses that once struggled to do more with fewer resources have access to the same quality tools as larger competitors. By adopting top-tier ERP systems tailored to individual business needs, midsize companies can support business operations while creating an enterprise system that seamlessly scales up to fuel future growth. Meaning that the ERP decision your company makes today will have legs to serve your business for years to come.

    Read the article

  • Sync Google Calendar with SharePoint Calendar

    - by dataintegration
    The ADO.NET Providers for Google and SharePoint make it easy to retrieve and update data in both Google's web services and SharePoint. This article shows how the SQL interface to data makes it easy to build applications that need to move data from one source to another. The application described here is a demo Windows application that synchronizes calendar events between Google and SharePoint, but the RSSBus Providers can be used to achieve integrations on both the .NET and the Java platforms, including more sophisticated features like full automation.
    Getting the Events
    Step 1: Google accounts can have several calendars. Obtain a list of a user's Google Calendars by issuing a query to the Calendars table. For example: SELECT * FROM Calendars
    Step 2: In order to get a list of the events from a given Google Calendar, issue a query to the CalendarEvents table while specifying the CalendarId from the Calendars table. The resulting events can be further filtered by using the StartDateTime or EndDateTime columns. For example: SELECT * FROM CalendarEvents WHERE (CalendarId = '[email protected]') AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')
    Step 3: SharePoint stores data in Lists. There are various types of lists, e.g., document lists and calendar lists. A SharePoint account can have several lists of the same type. To find all the calendar lists in SharePoint, use the ListLists stored procedure and inspect the BaseTemplate column.
    Step 4: The SharePoint data provider models each SharePoint list as a table. Get the events in a particular calendar by querying the table with the same name as the list. The events may be filtered further by specifying the EventDate or EndDate columns. For example: SELECT * FROM Calendar WHERE (EventDate >= '1/1/2012') AND (EventDate <= '2/1/2012')
    Synchronizing the Events
    Synchronizing the events is a simple process. Once the events from Google and SharePoint are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete events as needed.
    Pre-Built Demo Application
    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and ADO.NET Provider for SharePoint V2, and will expire in 2013.
    Source Code
    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the SharePoint ADO.NET Data Provider V2, which can be obtained here.
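    As a rough sketch of how the SQL interface surfaces in .NET code, the snippet below runs the Step 2 query through the generic ADO.NET classes. The provider invariant name, connection string, and result column name are placeholders, not the providers' documented values; the real providers also ship strongly typed connection and command classes, and the generic DbProviderFactory route is used here only to avoid guessing their names.

        using System;
        using System.Data.Common;

        class CalendarSync
        {
            static void Main()
            {
                // Placeholder invariant name and connection string - substitute the values
                // documented with the Google ADO.NET Data Provider you have installed.
                DbProviderFactory factory = DbProviderFactories.GetFactory("RSSBus.Google");
                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = "User=...;Password=...;";
                    conn.Open();

                    DbCommand cmd = conn.CreateCommand();
                    cmd.CommandText =
                        "SELECT * FROM CalendarEvents " +
                        "WHERE (CalendarId = '[email protected]') " +
                        "AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')";

                    using (DbDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader["Summary"]); // column name assumed for illustration
                    }
                }
            }
        }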

    Read the article

  • Traditional POS is Dead

    - by David Dorf
    Traditional POS is dead -- I've heard that one before. Here's an excerpt from Joe Skorupa's blog over at RIS where he relayed ten trends that were presented at NRF. "7. Mobile POS signals death of traditional POS. Shoppers don't love self-checkout, but they prefer it to long queues or dealing with associates. Fixed POS is expensive and bulky. Mobile POS frees floor space for other purposes and converts associates from being cashiers to being sales assistants that provide new levels of customer service and incremental basket sales. In addition to unplugging the POS, new alternatives are starting to take hold - thin client, POS as a service, and replacing POS software with e-commerce platforms." I'll grant that in some situations for some retailers there might be an opportunity to ditch the traditional POS, but for the majority of retailers that's just not practical. Take it from a guy that had to wake up at 3am after every Thanksgiving to monitor POS systems across the US on Black Friday. If a retailer's website goes down on Black Friday, they will take a significant hit. If a retailer's chain-wide POS system goes down on Black Friday, that retailer will cease to exist. Mobile POS works great for Apple because the majority of purchases are one or two big-ticket items that don't involve cash. There's still a traditional POS in every store to fall back on (it's just hidden). Try this at home: choose your favorite e-commerce site and add an item to the cart while timing how long it takes. Now multiply that by 15 to represent the 15 items you might buy at a store like Target. The user interface isn't optimized for bulk purchases, and that's how it should be. The webstore and POS are designed for different purposes. Self-checkout is a great addition to POS and so is mobile checkout. But they add capabilities to POS, not replace it. Centralized architectures, even those based in the cloud, are quite viable as long as there's resiliency in the registers. You cannot assume perfect access to the network, so a POS must always be able to sell regardless of connectivity. Clearly the different selling channels should be sharing common functionality. Things like calculating tax, accepting coupons, and processing electronic payments can be shared, usually through a service-oriented architecture. This lowers costs and provides greater consistency, both of which help retailers. On paper these technologies look really good and we should continue to push boundaries, but I'm not ready to call the patient dead just yet.

    Read the article

  • Best Frameworks/libraries/engines for 2D multiplayer C# Webbased RPG

    - by Thirlan
    Title is a mouthful but important, because I'm looking to meet specific criteria and it's complex enough that I need a lot of help in finding what I'm looking for. I really want people's suggestions because I trust them a lot more than anything else, so I just need to clearly define what it is I want heh : P
    1. Game is a 2D RPG. Think of Secret of Mana.
    2. Game is online multiplayer, but not MMO sized.
    3. Game must be webbased! I'm looking to the future and want to hit as many platforms as possible. I'm leaning to WebGL because of this, but still looking around.
    4. Since the users are seeing the game through the web browser, the front end should be mainly responsible for drawing, taking input, and some basic checking such as preliminary collision detection. This is important because it means the game engine is NOT on the client's machine. The server should be responsible for the game engine and all the calculations. This means the server is doing all the work and the client is mostly a dumb terminal.
    5. Server language is C#.
    6. I'm looking for fast project execution, so I want to use as many pre-existing tools as possible. This makes sense because I'm making a game here, not an engine. I'm not creating some new revolutionary graphics or pushing physics engines to the next level. Preference for commercially supported tools.
    7. For game-mechanics reasons, and because of points 4 and 5, I don't think I can use existing 2D RPG engines. I've seen them out there and I fear that if I try to use them they will have too many restrictions, but I will be happy to hear suggestions.
    So all this means I need a game engine on the server, or maybe just a physics engine, and then I need another engine/library to draw everything the server sends to the client in the web browser. Maybe this is how 50% of games on the web work and there are plenty of frameworks that support this! I actually don't know : ( but my gut tells me that most web games are single player, with 90% of the game running on the client. So... any suggestions?
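    For what it's worth, a bare-bones C# sketch of the server-authoritative split described in points 4 and 5 might look like the following: clients send raw input, the server owns the simulation, and each tick it produces the state the browser should draw. The type names and the fixed tick rate are illustrative assumptions, not part of any particular framework.

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    // Hypothetical server-authoritative loop: browsers only send input and draw what
    // the server returns; all game rules and collision live on the server.
    public record PlayerInput(string PlayerId, float MoveX, float MoveY);
    public record PlayerState(string PlayerId, float X, float Y);

    public class GameServer
    {
        private readonly ConcurrentQueue<PlayerInput> _inbox = new();
        private readonly Dictionary<string, PlayerState> _world = new();
        private const float Speed = 3f;

        // Called by the network layer (e.g. a WebSocket handler) for each client message.
        public void Receive(PlayerInput input) => _inbox.Enqueue(input);

        // Called at a fixed tick rate (e.g. 20 Hz) by the host process.
        public IReadOnlyCollection<PlayerState> Tick()
        {
            while (_inbox.TryDequeue(out PlayerInput input))
            {
                PlayerState current = _world.TryGetValue(input.PlayerId, out PlayerState s)
                    ? s
                    : new PlayerState(input.PlayerId, 0, 0);

                // Authoritative rules go here: clamping, collision detection, validation.
                PlayerState next = current with
                {
                    X = current.X + input.MoveX * Speed,
                    Y = current.Y + input.MoveY * Speed
                };
                _world[input.PlayerId] = next;
            }

            // Serialize this snapshot and broadcast it to every connected browser for drawing.
            return _world.Values;
        }
    }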

    Read the article

  • Switch from back-end to front-end programming: I'm out of my comfort zone, should I switch back?

    - by ripper234
    I've been a backend developer for a long time, and I really swim in that field. C++/C#/Java, databases, NoSQL, caching - I feel very much at ease around these platforms/concepts. In the past few years, I started to taste end-to-end web programming, and recently I decided to take a job offer in a front-end team developing a large, complex product. I wanted to break out of my comfort zone and become more of an "all-around developer". Problem is, I'm getting more and more convinced I don't like it. Things I like about backend programming that I'm missing in frontend work:
    - More interesting problems - when I compare designing a server that handles massive data to adding another form to a page or changing the validation logic, I find the former a lot more interesting.
    - Refactoring, refactoring, refactoring - I am addicted to Visual Studio with ReSharper, or IntelliJ. I feel very comfortable writing code as it goes without investing too much thought, because I know that with a few clicks I can refactor it into beautiful code. To my knowledge, this doesn't exist at all in JavaScript.
    - IntelliSense and navigation - I hate looking at a bunch of JS code without instantly being able to know what it does. In VS/IntelliJ I can summon the documentation, navigate to the code, climb up inheritance hierarchies ... life is sweet.
    - Auto-completion - just hit Ctrl-Space on an object to see what you can do with it.
    - Easier to test - with almost any backend feature, I can use TDD to capture the requirements, see a bunch of failing tests, then implement, knowing that if the tests pass I did my job well. With frontend, while tests can help a bit, I find that most of the testing is still manual - fire up that browser and verify the site didn't break. I miss that feeling of "a green CI means everything is well with the world."
    Now, I've only seriously practiced frontend development for about two months, so this might seem premature ... but I'm getting a nagging feeling that I should abandon this quest and return to my comfort zone, because, well, it's so comfy and fun. Another point worth mentioning in this context is that while I am learning some frontend tools, a lot of what I'm learning is our company's specific infrastructure, which I'm not sure will be very useful later on in my career. Any suggestions or tips? Do you think I should give frontend programming "a proper chance" of at least six to twelve months before calling it quits? Could all my pains be growing pains that will magically disappear as I get more experienced? Or is gaining this perspective valuable enough, even if I plan to do more "backend stuff" later on, that it's worth grinding my teeth and continuing with my learning?

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >