Search Results

Search found 5543 results on 222 pages for 'legacy terms'.

Page 32/222 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • What are some options and methods to link a contact form on WordPress to an existing form processing script?

    - by eirlymeyer
    I'm searching for the best way to link the output data from a WordPress contact form plugin to the existing MySQL database where contact form submissions are processed. Scenario: a new site (Site A) is being developed with a contact form. Site B (the old site) uses a contact form script to process leads through an existing legacy database and a ColdFusion application. The goal is for Site A's new contact form to feed the same existing processes; Site A is to become the new Site B.

    Read the article

  • Good, simple reasons for having multiple environments

    - by smp7d
    Throughout my career I have worked at companies that had a collection of different environments for different purposes. We always had, more or less, a desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using. When I started at my current company I found that 90% of the apps were either developed on a desktop environment against production data sources or developed directly on the production server, depending on the platform. I wasn't fazed, because I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process. We slowly started to turn the philosophy around and pretty soon most of the apps could be run in a desktop, test or production environment. Not too long after that, staging came along as well. Now most of our developers see the benefit of this methodology and defend it vigilantly. However, we have a number of legacy apps that never got migrated, and a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized despite the considerable planning we put into it. Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup, and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function in what I consider a "normal" way only if you have the right people on the project and the time to set up the proper environments. I'd love to present a complete argument, but management really has no time or interest in hearing me out until there is a critical issue. I can't really articulate the benefits simply, as it has always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers with no development experience behind this idea. Are there any good resources/literature on the topic?

    Read the article

  • Google I/O 2012 - Introducing the Google Drive SDK

    In this talk, we will introduce a number of major new features and platforms to the Google Drive SDK. We will discuss what we feel is a revolution in the way developers write collaborative applications. We will also announce a new API to make managing files in Google Drive even easier for developers, replacing some legacy APIs in the process. For all I/O 2012 sessions, go to developers.google.com. (From: GoogleDevelopers, 46:28)

    Read the article

  • Next Post...

    - by James Michael Hare
    The next post on the concurrent collections will be next Monday.  I'm a little behind from my Topeka trip earlier this week, so sorry about the delay! Also, I was thinking about starting a C++ Little Wonders series as well.  Would anyone have an interest in that topic?  I primarily use C# in my development work, but there is still a lot of legacy C++ I work on as well and could share some tips & tricks.

    Read the article

  • Rules of Holes #4: Do You Have the BIG Picture?

    - by ArnieRowland
    Some folks decry the concept of being in a 'Hole'. For them, there is no such thing as 'Technical Debt', no such thing as maintaining weak and wobbly legacy code, no such thing as bad designs, no such thing as under-skilled or poorly performing co-workers, no such thing as 'fighting fires', or no such thing as management that doesn't share the corporate vision. They just go to work and do their job, keep their head down, and do whatever is required. Mostly. Until the day they are swallowed by the...(read more)

    Read the article

  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off-the-shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to migrate only some of the widgets from the legacy system, based on date of last sale activity. Later on a new requirement came down: certain people in the company, most of them outside of the widget development context, want to search all widgets. The search results screen has 3 pieces of data: a GUID, a human-readable id that is searchable, and a brief description (which may need to be searchable in the future). In the widget details there will be multiple screens, and these screens align very well along SOA / bounded-context lines - a screen for marketing data, a screen for sales history, etc. UML ahead! I am probably using the wrong kind of arrows here, so please forgive me. The current solution - which is not in production yet - is roughly this: both systems are queried and a controller merges the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire; 15 search results typically run about 60k of unintelligible SOAP-wrapped XML. So I would prefer to avoid querying this system directly. These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen. I proposed an alternative: maintain a separate, dedicated search store populated from those events. However, I am being told that 'adding another database' will create more maintenance down the road. I believe this to be false, as I recently had to add a relatively simple feature that took several hours longer than anticipated because of the merging code. I want to get a feel for which approach is more maintainable in the long run. I personally have not had the burden of maintaining any large system, and I want something more than my gut. Specifically, I'd like to know whether having more, specialized physical databases is more or less maintainable than having fewer, larger physical databases.
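
    A rough Java sketch of the proposed alternative, assuming (as described above) that the published events already carry the GUID, the human-readable id and the description; all class, table and column names here are hypothetical. An event handler projects those three fields into a small dedicated search table, so the search screen queries one store instead of merging results from two live systems.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        // Denormalizes the three search fields out of the integration events into one small table.
        public class WidgetSearchProjection {

            private final Connection db; // connection to the dedicated search database

            public WidgetSearchProjection(Connection db) {
                this.db = db;
            }

            // Called for every widget event either system publishes; update first, insert if the row is new.
            public void onWidgetEvent(String guid, String humanId, String description) throws SQLException {
                try (PreparedStatement update = db.prepareStatement(
                        "UPDATE widget_search SET human_id = ?, description = ? WHERE guid = ?")) {
                    update.setString(1, humanId);
                    update.setString(2, description);
                    update.setString(3, guid);
                    if (update.executeUpdate() == 0) {
                        try (PreparedStatement insert = db.prepareStatement(
                                "INSERT INTO widget_search (guid, human_id, description) VALUES (?, ?, ?)")) {
                            insert.setString(1, guid);
                            insert.setString(2, humanId);
                            insert.setString(3, description);
                            insert.executeUpdate();
                        }
                    }
                }
            }

            // The search screen then needs only one query against one store, no merging code.
            public ResultSet search(String term) throws SQLException {
                PreparedStatement query = db.prepareStatement(
                        "SELECT guid, human_id, description FROM widget_search WHERE human_id LIKE ?");
                query.setString(1, "%" + term + "%");
                return query.executeQuery();
            }
        }

    The maintainability trade-off then becomes concrete: keeping this small projection in sync versus keeping the result-merging controller working against two live systems.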

    Read the article

  • Tech Mahindra Applications Consolidation Project

    - by Javier Puerta
    “With Oracle’s end-to-end hardware and software solutions, we seamlessly migrated 22 applications from the legacy platform to the new platform in just seven weeks. Thanks to Oracle, we gained an integrated view of enterprisewide data across 49 locations and increased storage capacity by 25%, enabling us to improve service delivery and support our revenue-growth target.” - Ved Prakash Nirbhya, CIO, Tech Mahindra Limited Read full story details here

    Read the article

  • Cover Feature: "United Development"

    Developers need solutions, and there's no shortage of language and technology choices. Whether you're making development choices for applications that connect with legacy mainframe systems or new Web 2.0-enabled applications, standards and integration are key. Read about the standards-based tools and development solutions from Oracle that integrate your business processes.

    Read the article

  • .NET – ArrayList hidden gem

    - by nmgomes
    From time to time I end up finding really old hidden gems, and a few days ago I found another one:
      IList System.Collections.ArrayList.ReadOnly(IList list)
    This amazing method has been available since the beginning (.NET 1.0). I always complain about the limited support for read-only lists and collections, and I have no clue how I missed this one. For those of you who have to maintain and extend legacy applications prior to ASP.NET 2.0 SP2, this could be a very useful find.

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread could update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human or updating the entry based on the current state of the entry. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). In the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry, a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be.
    (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective). If the read is not correct, it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting one is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
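
    To make the optimistic and pessimistic approaches concrete, here is a minimal Java sketch using a plain ConcurrentMap as a stand-in for a cache (this is not the Coherence NamedCache API, and the key and increment logic are made up). The optimistic path explicitly states its assumption about the old value and retries when the read turns out to be stale; the pessimistic path locks before reading so the read stays coherent until the write.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.concurrent.locks.ReentrantLock;

        public class ReadConsistencySketch {

            // Stand-in for a distributed cache; the real thing would be a Coherence NamedCache.
            private static final ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();

            // Process-local lock used for illustration; a distributed cache would lock the entry itself.
            private static final ReentrantLock entryLock = new ReentrantLock();

            // Optimistic: read, compute, then commit only if nobody changed the entry in the meantime.
            static void optimisticIncrement(String key) {
                while (true) {
                    Integer old = cache.get(key);                  // possibly a stale read
                    int updated = (old == null ? 0 : old) + 1;
                    boolean committed = (old == null)
                            ? cache.putIfAbsent(key, updated) == null
                            : cache.replace(key, old, updated);    // states the assumption about the old value
                    if (committed) {
                        return;
                    }
                    // The assumption was wrong (the read was stale); loop and retry with the current value.
                }
            }

            // Pessimistic: lock first, so no other thread can mutate the entry between read and write.
            static void pessimisticIncrement(String key) {
                entryLock.lock();
                try {
                    Integer old = cache.get(key);                  // a coherent read while the lock is held
                    cache.put(key, (old == null ? 0 : old) + 1);
                } finally {
                    entryLock.unlock();
                }
            }

            public static void main(String[] args) {
                optimisticIncrement("widget-42");
                pessimisticIncrement("widget-42");
                System.out.println(cache.get("widget-42")); // prints 2
            }
        }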

    Read the article

  • Tutorial: Why Use GRUB2? Good Question! (part 3)

    As we come to the end of Akkana Peck's excellent series on mastering GRUB2, it's not clear what advantages it has over legacy GRUB, or even good old LILO. It seems it's gone backwards. In today's installment we learn how to translate some common and mysterious error messages, and how to manage a multi-boot system with GRUB2.

    Read the article

  • New Exadata Customer Cases

    - by Javier Puerta
    New reference stories available for Exadata:
    - Procter & Gamble Completes Point-of-Sale Data Queries up to 30 Times Faster, Reduces IT Costs, and Improves Insight with Engineered Data Warehouse Solution
    - ZLM Verzekeringen Improves Customer Service with Integrated Back-Office Environment on Exadata
    - KyivStar, JSC Reduces Storage Volumes to 15% of Its Legacy Environment and Increases System Productivity by 500% with High-Performance IT Infrastructure
    - GfK Group Retail and Technology Ensures Successful Growth with Exadata Consolidation

    Read the article

  • Communication Between Different Technologies in a Distributed Application

    - by sjtaheri
    I have to incorporate several legacy applications and services into a network-distributed application. The existing services and applications are written using different languages and technologies, including Java, C#/.NET and C++, all running on MS Windows machines. Now I'm wondering about the communication mechanism between them. What is the simple and standard way? Thanks! P.S. Communications include simple message sending and remote method invocations.

    Read the article

  • How to set a Skype shortcut for opening an existing instance?

    - by Koffeehaus
    I've been using Linux for about two years now, but due to my Windows legacy I like keeping shortcuts on my desktop instead of dockies, panel shortcuts, etc. If Skype is already running, pressing the shortcut starts a new instance rather than bringing up the existing one. This is kinda cool, as you can have two accounts running. But I only have one. So, my question is whether it is possible to tweak Skype into opening the already running instance when the shortcut is pressed?

    Read the article

  • What parts of functionality should be refactored into a directive?

    - by Sprottenwels
    I am creating an application from legacy code using AngularJS. I wonder which parts of my code should be moved into a directive. For example, I had thought of moving a table which is used multiple times across the application into a directive. The tables differ in headings and size. Is it worth the effort, or even good practice, to turn such things into their own directives, or should I create each table individually?

    Read the article

  • Welcome Xsigo Partners

    - by Cinzia Mascanzoni
    Oracle recently achieved LEC for Xsigo Systems, Inc. and the migration of legacy partners is underway. Welcome Kits were distributed to partners providing details on how to join Oracle PartnerNetwork and next steps to jump start their business with Oracle. To find out more about the transition of Xsigo Partners, view the recently updated FAQ here.

    Read the article

  • RouterOS on Hyper-V (v3/2012) - any way to get it working?

    - by TomTom
    Trying to set up a small VPN endpoint to connect into a remote Hyper-V cluster using RouterOS. Has anyone got it working on Hyper-V with the latest builds of RouterOS? It seems the legacy network adapter is not supported anymore either (or is just broken). The platform is a Windows Server 2012 RC. This is not a high-performance setup - the RouterOS box won't do the routing for more than the backend administrative access, and the only real traffic we will see there is when ISO images for new operating systems are uploaded. Otherwise we will possibly have RDP traffic as well as web/HTTP traffic, but this is internal only (dashboards, some control panels). The server has no public business, so the price of non-virtualized network cards is OK for me. After hooking it up, ping just does not work. After some time I can see the entry in Windows (arp -a on the command line), so I know that the Hyper-V side is set up properly. Just no packets arrive. I have turned off all protection on Hyper-V (or rather: not turned it on), so no MAC spoofing protection etc. in the Advanced page for the legacy adapters. Unless I can get it to work I will have to resort to using a Windows install as the router/VPN endpoint, which introduces another OS into the fabric (we run all our routers on MikroTik hardware so far, which is why I want this one to be RouterOS too). And no, putting hardware there is NOT an option - the cost would be significant.

    Read the article

  • Connectivity with SQL Server Express 2008 r2 and SQL Server 2000 on same machine

    - by Jim R
    At first glance this may seem a duplicate of Installing both SQL Server 2000 and SQL Server 2008 on the same machine, but it is not. I have SQL Server 2000 and SQL Server 2008 R2 installed on the same machine and working fine. My problem lies with connecting to the 2008 R2 server from a remote machine. My connectivity needs to be TCP. The legacy installation of SQL 2000 uses the default port of 1433. The named instance is by default configured to use 'Shared Memory' and is working fine. When I configured the 2008 R2 server to use 1433 (I did not think that through) the service refused to start because 1433 was already in use by the legacy SQL 2000 default instance. Doh! What I want to do is have both servers available simultaneously via TCP. Both servers need not be on the same port, but if I cannot run them on the same port, how do I configure the clients? Is there not some kind of proxy available that can monitor the 1433 port and pass the request through to the correct SQL instance by name? Is this capability built into SQL Server already? Thanks, Jim
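
    One straightforward option is to give the 2008 R2 named instance its own fixed TCP port and point each client at the right instance explicitly. Below is a hedged illustration in Java using the Microsoft JDBC driver; the host name, port 1444 and database names are assumptions, and .NET connection strings can express the same thing with the "Server=host,port" syntax. Alternatively, if the SQL Server Browser service is running, clients can connect by instance name (host\instance) and the Browser resolves the port for them.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        // Requires the Microsoft JDBC driver for SQL Server on the classpath.
        public class DualInstanceConnections {

            // Legacy SQL Server 2000 default instance, still on the standard port.
            private static final String LEGACY_URL =
                    "jdbc:sqlserver://dbhost:1433;databaseName=LegacyDb";

            // SQL Server 2008 R2 named instance, configured to listen on its own fixed port (1444 is an assumption).
            private static final String NEW_URL =
                    "jdbc:sqlserver://dbhost:1444;databaseName=NewDb";

            public static void main(String[] args) throws SQLException {
                try (Connection legacy = DriverManager.getConnection(LEGACY_URL, "appUser", "secret");
                     Connection current = DriverManager.getConnection(NEW_URL, "appUser", "secret")) {
                    // Both instances are reachable simultaneously, each on its own port.
                    System.out.println(legacy.getMetaData().getDatabaseProductVersion());
                    System.out.println(current.getMetaData().getDatabaseProductVersion());
                }
            }
        }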

    Read the article

  • Boot ISO image from GRUB4DOS on EFI machines

    - by Vladimir Tikhomirov
    I failed to load an ISO image (non-distro) from GRUB2 on a USB stick, but found a way to boot GRUB4DOS first and then load the image from there. However, it doesn't work all the time, and the question is WHY it doesn't. Environment and loading process: an EFI machine, a USB stick, the bootable ISO, GRUB2 and GRUB4DOS (the last three on the USB stick). Boot chain: USB - EFI loader - GRUB2 - GRUB4DOS - ISO image. Configuration files: to boot GRUB4DOS I use this from grub.cfg:
      menuentry "image.iso" {
        linux /syslinux/grub.exe --config-file="/menu.lst"
      }
    My menu.lst is here:
      timeout 20
      default 0
      title image.iso
      find --set-root --ignore-floppies --ignore-cd //image.iso
      map --heads=0 --sectors-per-track=0 //image.iso (hd32)
      map --hook
      chainloader (hd32)
    This works perfectly on legacy (BIOS) machines. However, when I get to GRUB4DOS I don't see the menu with image.iso, only the GRUB command line. That means my menu.lst didn't load. Why is that? Background and ideas: I suspect GRUB4DOS doesn't recognize my USB stick as a device. I tried the find command and got (hd0,0), (hd0,1), (hd0,2), (rd). When I tried to set root to any of these devices I didn't see a FAT file system, as I did on legacy machines. The root device is (hd0,0), which has an NTFS file system, so it should be the Windows partition. EFI machines support only GRUB2, so I can't boot GRUB4DOS straight away. Please don't suggest anything like the entry below, because my image doesn't have a kernel (imagine loading HDAT2 or Hiren's BootCD, for example):
      menuentry "Blancco Blancco5.iso" {
        set isofile="/image.iso"
        loopback loop $isofile
        set root=(loop)
        linux /isolinux/vmlinuz isofile=$isofile splash quiet
        initrd /isolinux/initrd
      }

    Read the article

  • Set default MySQL connect charset for PHP (in RHEL)?

    - by Martijn Heemels
    We're running a hundred or so legacy PHP websites on an older server which runs Gentoo Linux. When these sites were built, latin1 was still the common charset, both in PHP and MySQL. To make sure those older sites used latin1 by default, while still allowing newer sites to use utf8 (our current standard), we set the default connect charset in php.ini:
      mysql.connect_charset = latin1
      mysqli.connect_charset = latin1
      pdo_mysql.connect_charset = latin1
    Specific, more modern sites could override this in their bootstrapping code with:
      <?php mysql_set_charset("utf8", $dsn );
    ...and all was well. Now the server is overloaded and we're no longer with that hoster, so we're moving all these sites to a faster server at our standard hoster, which uses RHEL 5 as its OS of choice. In setting up this new server I discovered to my surprise that the *.connect_charset directives are a Gentoo-specific patch to PHP, and RHEL's version of PHP doesn't recognize them! Now how do I set PHP to connect to MySQL with the latin1 charset? I thought about setting a default in my.cnf, but would prefer not to force every app and client to default to latin1. Our policy is to use utf8, and we'd like to restrict the exception to PHP only. Also, converting every legacy site to properly use utf8 is not doable, since many are of the "touch 'em and you break 'em" kind. We simply don't have the time to go fix them all. How would I set a default mysql/mysqli/pdo_mysql connection charset to latin1 for PHP, while still allowing individual scripts to override it to utf8 with mysql_set_charset()?

    Read the article
