Search Results

Search found 3638 results on 146 pages for 'major'.


  • Orchestrating the Virtual Enterprise, Part II

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's CSO & Vice President, SCM Product Strategy

    Almost everyone has ordered from Amazon.com at one time or another. Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver an order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, spanning thousands of trading partners, require a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive, custom solution to tackle this problem, because at the time no product solution was available. Now, other companies that want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).

    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it can improve quality, speed, flexibility, and consistency without requiring an organ transplant of its highly complex legacy systems. Many retailers face the challenge of dealing with brick-and-mortar, Web, and reseller channels. These all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick-and-mortar retail operation added an online business, it turned to Oracle Fusion DOO to bring these entities together.

    Disturbing the Peace with Acquisitions

    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business, as Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.

    The Flip Side of Fulfillment

    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how do you manage complex supply in a flexible way when there are multiple trading parties involved? How do you manage the supply to suppliers? How do you manage critical components that need to merge in a tier-two or tier-three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.

    The Clear Front Runner

    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application that embraces the best of what's required in this area. And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Jon Chorley, Chief Sustainability Officer & Vice President, SCM Product Strategy, Oracle Corporation

    Read the article

  • Oracle Tutor: Installing Is Not Implementing, or Why CIOs should care about End User Adoption

    - by emily.chorba(at)oracle.com
    Eighteen months ago I showed the Tutor and UPK Productive Day One overview to a CIO friend of mine. He works in a manufacturing business which had recently been purchased by a global conglomerate. He had a major implementation coming up, but said that the corporate team would be coming in to handle the project. I asked about their end user training approach, but it was unclear to him at the time. We were in touch over the course of the implementation project. The major activities were data conversion, how-to workshops, General Ledger realignment, and report definition. The message was "Here's how we do it at corporate, and here's how you are going to do it." In short, it was an application software installation. The corporate team had experience and confidence, and the effort through go-live was smooth. Some weeks after cutover, problems with customer orders began to surface. Orders could not be fulfilled in a timely fashion. The problem got worse, and the corporate emergency team was called in. After many days of analysis, the issue was tracked down and resolved, but by then there were weeks of backorders, and their customer base was impacted in a significant way. It took three months of constant handholding of customers by the sales force for goodwill to be reestablished, and this itself diminished a new product sales push. I learned of these results in a recent conversation with the CIO. I asked him what the solution to the problem was, and he replied that it was twofold. The first component was a lack of understanding by customer service reps about how a particular data item in order entry was to be filled in, resulting in discrepant order data. The second component was that product planners were using this data, along with data from other sources, to fill in a spreadsheet based on the abandoned system. This spreadsheet was the primary input for planning data. The result of these two inaccuracies was that key parts were not being ordered to effectively meet demand, and the lead time for finished goods was pushed out by weeks. I reminded him about the Productive Day One approach and its focus on methodology and tools for end user training. A more collaborative solution workshop would have identified proper applications use in the new environment. Using UPK to document correct transaction entry would have provided effective guidelines to the CSRs for data entry. Using Oracle Tutor to document the manual tasks would have eliminated the use of an out-of-date spreadsheet. As we talked this over, he said, "I wish I knew when I started what I know now." Effective end user adoption is the most critical and most overlooked success factor in applications implementations. When the switch is thrown at go-live, employees need to know how to use the new systems to do their jobs. Their jobs are made up of manual steps and systems steps which must be performed in the right order for the implementing organization to operate smoothly. Use Tutor to document the manual policies and procedures, use UPK to document the systems tasks, and develop this documentation in conjunction with a solution workshop. This is the path to effective end user training material for a smooth implementation.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Chuck Jones, Product Manager, Oracle Tutor and BPM

    Read the article

  • Windows Phone 7 Series - Tools and Resources

    - by TechTwaddle
    Unless you've been living in the caves of Lascaux for the past couple of days, you probably know what's happening in the world of Windows Phone. Microsoft unveiled the developer tools required to develop applications and games for Windows Phone 7 at MIX10 a couple of days back. Silverlight and XNA are the major frameworks -- no big surprise there. And the best news of all is that all the development tools are free! So if you are planning to develop apps for Windows Phone 7, read on. The first place, or more appropriately hub, for you is the Windows Phone Developer Portal. It has most of the information you need to get started. There is a ton of information available in other places too; in this post I have put all the information that I found useful in one place, and I'll keep updating it as and when I find new stuff.

    Setting up the development environment

    1. Install the Windows Phone Developer Tools CTP (Community Technology Preview). This installs Visual Studio 2010 Express, Silverlight, the XNA framework, and the emulator for Windows Phone 7, along with a few support tools.
    2. Expression Blend 4 for Windows Phone:
       - Install Expression Blend 4 beta
       - Install Expression Blend Add-in Preview for Windows Phone
       - Install Expression Blend SDK Preview for Windows Phone

    Installing the above tools should set your machine up for development. I installed the tools on my Windows Vista SP1 machine and the process went smoothly, without running into any major hitch. Note that the tools won't install on Windows XP -- read the release notes of the CTP.

    Resources and Documentation

    1. Microsoft Windows Phone 7 Series Developer Training Kit
    2. Programming Windows Phone 7 Series by Charles Petzold. Contains a few chapters only, but gives a good preview.
    3. MSDN documentation for Windows Phone 7 development
    4. A sample chapter from Learning Windows Phone Programming [PDF] by Yochay Kiriaty and Jaime Rodriguez. The complete book will be available at a later time.
    5. The Windows Phone 7 Developer Forum, where you can ask about questions and problems you run into, and the experts are there to help you.
    6. For Silverlight visit silverlight.net, and for XNA game development the XNA Creators Club is the place to go; also make sure you follow Michael Klucher's and Shawn Hargreaves' blogs.
    7. And finally the MIX'10 website. Most of the sessions will be available for download later (some are already available). Click on the Windows Phone tag to get all the session details and downloads.

    If you are completely new to Silverlight and XNA (like me), and C# makes some sense to you, then I suggest you go through the Developer Training Kit. It gives you a good start and ramps you up pretty quickly.

    Read the article

  • SQL SERVER – Select the Most Optimal Backup Methods for Server

    - by pinaldave
    Backup and Restore are very interesting concepts, and one should be very familiar with them when dealing with a production database. One never knows when a natural disaster or user error will surface, and the first thing everybody wants is to get back to a point in time when things were all fine. In this article I have attempted to answer a few of the common questions related to backup methodology.

    How to Select a SQL Server Backup Type

    In order to select a proper SQL Server backup type, a SQL Server administrator needs to clearly understand the differences between the major backup types. Since a picture is worth a thousand words, let me offer it to you below.

    Select a Recovery Model First

    The very first question that you should ask yourself is: Can I afford to lose at least a little (15 min, 1 hour, 1 day) worth of data? Resist the temptation to save it all, as that comes with overhead -- the majority of businesses outside finance can actually afford to lose a bit of data. If your answer is "yes, I can afford to lose some data", select the SIMPLE (default) recovery model in the properties of your database; otherwise you need to select the FULL recovery model. The additional advantage of the Full recovery model is that it allows you to restore the data to a specific point in time rather than only to the last backup time, as in the Simple recovery model, but that exceeds the scope of this article.

    Backups in the SIMPLE Recovery Model

    In the SIMPLE recovery model you can choose to do just Full backups, or Full + Differential.

    Full Backup: This is the simplest type of backup. It contains all the information needed to restore the database and should be your first choice. It is often sufficient for small databases, but note that it makes a big impact on the performance of your database.

    Full + Differential Backup: After a Full backup, a Differential backup picks up all of the changes since the last Full backup. This means that if you made Full, Diff, Diff backups, the last Diff backup contains all of the changes and you don't need the previous Differential backup. A Differential backup is obviously smaller and carries less performance overhead.

    Backups in the FULL Recovery Model

    In the FULL recovery model you can select Full + Transaction Log, or Full + Differential + Transaction Log backups. You have to create Transaction Log backups because that is when the log is truncated; otherwise your Transaction Log will grow uncontrollably.

    Full + Transaction Log Backup: You always need to perform a Full backup first, then a series of Transaction Log backups. Note that (in contrast to Differential) you need ALL transaction log backups since the last Full or Diff backup to properly restore. Transaction log backups have the smallest performance overhead and can be performed often.

    Full + Differential + Transaction Log Backup: If you want to ease the performance overhead on your server, you can replace some of the Full backups in the previous scenario with Differentials. Your restore scenario would start from the Full, then the last Differential, then all of the remaining transaction log backups.

    Typical Backup Scenarios

    You may say, "Well, it is all nice -- give me the examples now." As you may already know, my favorite SQL backup software is SQLBackupAndFTP. If you go to the Advanced Backup Schedule form in this program and click the "Load a typical backup plan..." link, it will give you scenarios that I think are quite common -- see the image below.

    The Simplest Way to Schedule SQL Backups

    I hate to repeat myself, but backup scheduling in SQL Agent leaves a lot to be desired. I do not know a simpler way to schedule your SQL Server backups than in SQLBackupAndFTP -- see the image below. The whole backup schedule with compression, encryption and upload to a Network Folder / HDD / NAS Drive / FTP / Dropbox / Google Drive / Amazon S3 takes just a few minutes -- see my previous post for the review.

    Final Words

    This post offered an explanation of the major backup types only. For more complicated scenarios, or to research other options, go to MSDN as usual.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
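
    As an illustration of the patterns described above, here is a minimal T-SQL sketch of a Full + Differential + Transaction Log sequence (the database name and file paths are placeholders):

        -- Weekly full backup: the baseline every restore starts from
        BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full.bak';

        -- Daily differential: everything changed since the last full backup
        BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_diff.bak' WITH DIFFERENTIAL;

        -- Frequent log backups (FULL recovery model only): enable point-in-time restore
        BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log.trn';

    A restore would then apply the full backup, the most recent differential, and every log backup taken after it, in order.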

    Read the article

  • SQL SERVER – Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

    - by pinaldave
    Let me paint a picture of everyday life for you.  Let's say you and your wife both have address books for your groups of friends.  There is definitely overlap between them, so that you both have the addresses of your mutual friends, there are addresses that only you know, and some only she knows.  They also might be organized differently.  You might list your friend under "J" for "Joe" or even under "W" for "Work," while she might list him under "S" for "Joe Smith" or under your name because he is your friend.  If you happened to trade, neither of you would be able to find anything! This is where data management becomes very important.  If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them.  You would also make sure that poor Joe doesn't get entered twice, under "J" and under "S." This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists.  Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules. I'm sure that my readers can figure out where I am going with this.  What is SQL Server but a computerized way to organize data?  And Microsoft is making it easier and easier to get all your "addresses" into one place.  In SQL Server 2008 R2 they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version. MDM was hailed as a major improvement for business intelligence.  You might not think that an organizational system is terribly exciting, but think about the kind of "address books" a company might have.  Many companies hold lots of important information, like addresses, credit card numbers, purchase history, and so much more.  Organizing all this efficiently, so that customers are well cared for and properly billed (only once, not never or multiple times!), is a major part of business intelligence. MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and placed in one database, so that employees don't have to search high and low and waste their time. MDM also has operational MDM functions.  This is not a redundancy.  Operational MDM means that when one employee updates one bit of information in the database -- for example, entering a new address for a customer -- that address is updated throughout the system, so that all departments will have the correct information. Another cool thing about MDS is that it includes the Master Data Services Configuration Manager, which is exactly what it sounds like.  It is a built-in "helper" that lets you set up your database quickly, easily, and with the correct configuration.  While talking about cool features, I can't skip over the add-in for Excel.  This allows you to link certain data to Excel files for easier sharing and uploading. In summary, I want to emphasize that the scariest part of the database world is slowly disappearing.  Everyone knows that a database -- one consolidated area for all your data -- is a good idea, but the idea of setting one up is daunting.  SQL Server is making data management easier and easier with features like Master Data Services (MDS).
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Master Data Services, MDM

    Read the article

  • C# 5 Async, Part 2: Asynchrony Today

    - by Reed
    The .NET Framework has always supported asynchronous operations.  However, several different mechanisms for supporting asynchrony exist throughout the framework.  While there are at least three separate asynchronous patterns used throughout the framework, only the latest is directly usable with the new Visual Studio Async CTP.  Before delving into details of the new features, I will talk about existing asynchronous code and demonstrate how to adapt it for use with the new pattern.

    The first asynchronous pattern used in the .NET Framework was the Asynchronous Programming Model (APM).  This pattern is based around callbacks.  A method is used to start the operation, typically named BeginSomeOperation.  This method is passed a callback defined as an AsyncCallback, and returns an object that implements IAsyncResult.  Later, the IAsyncResult is used in a call to a method named EndSomeOperation, which blocks until completion and returns the value normally returned directly by the synchronous version of the operation.  Often, EndSomeOperation is called from within the callback that was passed in, which allows you to write code that never blocks. While this pattern works perfectly to prevent blocking, it can make for quite confusing code, and can be difficult to implement.  For example, the sample code provided for FileStream's BeginRead/EndRead methods is not simple to understand.  In addition, implementing your own asynchronous methods requires creating an entire class just to implement the IAsyncResult.

    Given the complexity of the APM, other options were introduced in later versions of the framework.  The next major pattern introduced was the Event-based Asynchronous Pattern (EAP).  This provides a simpler pattern for asynchronous operations.  It works by providing a method typically named SomeOperationAsync, which signals its completion via an event typically named SomeOperationCompleted. The EAP provides a simpler model for asynchronous programming.  It is much easier to understand and use, and far simpler to implement.  Instead of requiring a custom class and callbacks, the standard event mechanism in C# is used directly.  For example, the WebClient class uses this extensively.  A method is used, such as DownloadDataAsync, and the results are returned via the DownloadDataCompleted event. While the EAP is far simpler to understand and use than the APM, it is still not ideal.  By separating your code into method calls and event handlers, the logic of your program gets more complex.  It also typically loses the ability to block until the result is received, which is often useful.  Blocking usually requires writing that code by hand, which is error prone and adds complexity.

    As a result, .NET 4 introduced a third major pattern for asynchronous programming.  The Task<T> class introduced a new, simpler concept for asynchrony.  Task and Task<T> effectively represent an operation that will complete at some point in the future.  This is a perfect model for thinking about asynchronous code, and is the preferred model for all new code going forward.  Task and Task<T> provide all of the advantages of both the APM and the EAP models -- you have the ability to block on results (via Task.Wait() or Task<T>.Result), and you can stay completely asynchronous via the use of Task continuations.  In addition, the Task class provides a new model for task composition and error and cancellation handling.  This is a far superior option to the previous asynchronous patterns.

    The Visual Studio Async CTP extends the Task-based asynchronous model, allowing it to be used in a much simpler manner.  However, it requires the use of Task and Task<T> for all operations.
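
    As a concrete illustration of that adaptation, here is a minimal sketch (the file name and buffer size are mine) that wraps FileStream's APM-style BeginRead/EndRead pair into a Task<int> using TaskFactory.FromAsync, so the operation can participate in the Task-based model:

        using System;
        using System.IO;
        using System.Threading.Tasks;

        class ApmToTask
        {
            // Pairs the APM Begin/End methods into a single Task<int>
            static Task<int> ReadAsTask(FileStream stream, byte[] buffer)
            {
                return Task<int>.Factory.FromAsync(
                    stream.BeginRead, stream.EndRead,
                    buffer, 0, buffer.Length, null);
            }

            static void Main()
            {
                var buffer = new byte[4096];
                using (var stream = new FileStream("data.bin", FileMode.Open))
                {
                    Task<int> read = ReadAsTask(stream, buffer);
                    // Stay asynchronous with a continuation...
                    read.ContinueWith(t => Console.WriteLine("Read {0} bytes", t.Result));
                    // ...or block on the result when that is acceptable
                    read.Wait();
                }
            }
        }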

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Columnar, Graph and Spatial Database – Day 14 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of Key-Value Pair Databases and Document Databases in the Big Data story. In this article we will understand the role of Columnar, Graph and Spatial Databases in supporting the Big Data story. Here is where a few examples of operational databases fit in the series: Relational Databases (the day before yesterday's post), NoSQL Databases (the day before yesterday's post), Key-Value Pair Databases (yesterday's post), Document Databases (yesterday's post), Columnar Databases (today's post), Graph Databases (today's post), Spatial Databases (today's post).

    Columnar Databases

    A Relational Database is a row-store, or row-oriented, database. Columnar databases are column-oriented, or column-store, databases. As we discussed earlier, in Big Data we have different kinds of data and we need to store those different kinds of data in the database. With a columnar database this is very easy to do, as we can just add a new column to the columnar database. HBase is one of the most popular columnar databases. It uses the Hadoop file system and MapReduce for its core data storage. However, remember this is not a good solution for every application; it is particularly good for databases where high-volume incremental data is gathered and processed.

    Graph Databases

    For highly interconnected data, a Graph Database is suitable. This kind of database has a node-relationship structure; nodes and relationships contain Key-Value Pairs where data is stored. The major advantage of this database is that it supports faster navigation among the various relationships. For example, Facebook uses a graph database to list and demonstrate the various relationships between users. Neo4J is one of the most popular open source graph databases. One of the major disadvantages of a Graph Database is that it is not possible to self-reference (self-joins, in RDBMS terms); there are real-world scenarios where this is required, and a graph database does not support it.

    Spatial Databases

    We all use Foursquare, Google+, as well as Facebook Check-ins for location-aware check-ins. All the location-aware applications figure out the position of the phone with the help of the Global Positioning System (GPS). Think about it: so many different users at different locations in the world, all checking in together. Additionally, the applications are now feature-rich, and users are demanding more and more information from them -- for example movies, coffee shops, or places to see. They all run with the help of Spatial Databases. Spatial data is standardized by the Open Geospatial Consortium, known as OGC. Spatial data helps answer many interesting questions, like the distance between two locations, the area of interesting places, etc. When you think of it, it is very clear that handling spatial data and returning meaningful results is one big task when there are millions of users moving dynamically from one place to another and requesting various spatial information. The PostGIS/OpenGIS suite is a very popular spatial database. It runs as a layer implemented on the RDBMS PostgreSQL. This makes it unique, as it offers the best of both worlds.

    Tomorrow

    In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem -- Hive.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Production Access Denied! Who caused this rule anyways?

    - by Matt Watson
    One of the biggest challenges for most developers is getting access to production servers. In smaller dev teams of fewer than about five people, everyone usually has access. Then you hire developer #6, he messes something up in production... and now nobody has access. That is how it always starts in small dev teams. I think just about every rule of life there is gets created this way: one person messes it up for the rest of us, and rules are then put in place to try and prevent it from happening again. Breaking the rules is in our nature. In this case it is for good cause and a necessity: to support our applications and troubleshoot problems as they arise.

    So how do developers typically break the rules? Some create their own method to collect log files off servers so they can see them. Expensive log management programs can collect log files, but log files alone are not enough. Centralizing where important errors are logged to is common. Some lucky developers are given production server access by the IT operations team out of necessity. Wait. That's not fair to all developers and knowingly breaks the company rule! When customers complain or the system is down, the rules go out the window. Commonly, lead developers get production access because they are ultimately responsible for supporting the application and may be the only people who know how to fix it. The problem with only giving lead developers production access is that it doesn't scale from a support standpoint. Those key employees become the go-to people for solving application problems, but they also become a bottleneck. They end up spending up to half of their time every day helping resolve application defects, performance problems, or whatever the fire of the day is. This is actually the last thing you want your lead developers doing. They should be working on something more strategic, like major enhancements to the product. Having production access can actually be a curse if you are the guy stuck hunting down log files all day.

    Application defects are good tasks for junior developers. They can usually handle figuring out simple application problems. But nothing is worse than being a junior developer who can't figure out those problems while the backlog of them grows and grows. Some of them require production server access to verify a deployment was done correctly, verify config settings, view log files, or maybe just restart an application. Since the junior developers don't have access, they end up bugging the developers who do have access, or they track down a system admin to help. It can take hours or days to see server information that would take seconds or minutes if they had access of their own. It is very frustrating to the developer trying to solve the problem, to the system admin being forced to help, and most importantly to your customers, who are not happy about the situation. This process is terribly inefficient.

    Production database access is also important for solving application problems, but presents a lot of risk if developers are given access. They could see data they shouldn't. They could write queries that accidentally update data, delete data, or merely select every record from every table and bring your database to its knees. Since most of the applications we create are data-driven, it can be very difficult to track down application bugs without access to the production databases.

    Besides it being against the rules, why don't all developers have access? Most of the time it comes down to security, change control, lack of training, and other valid reasons. Developers have been known to tinker with different settings to try and solve a problem, and in the process forget what they changed and make the problem worse. So it is a double-edged sword: don't give them access and fixing bugs is more difficult, or give them access and risk having more bugs or major outages created!

    Matt Watson, Founder & CEO, Stackify -- Agile Support for Agile Developers

    Read the article

  • Comparing Isis, Google, and Paypal

    - by David Dorf
    Back in 2010 I was sure NFC would make great strides, but here we are two years later and NFC doesn't seem to be sticking. The obvious reason is the chicken-and-egg problem.  Retailers don't want to install the terminals until the phones support NFC, and vice versa. So consumers continue to sit on the sidelines waiting for either side to blink and make the necessary investment.  In the meantime, EMV is looking for a way to sneak into the US with the help of the card brands. There are currently three major solutions battling in the marketplace.  All three know that replacing the mag-stripe alone is not sufficient to move consumers.  Long-term, it's the offers and loyalty programs combined with tendering that make NFC attractive. NFC solutions cross lots of barriers, so a strong partner system is required.  The solutions need to include the carriers, card brands, banks, handset manufacturers, POS terminals, and most of all lots of merchants.  Lots of coordination is necessary to make the solution seamless to the consumer.

    Google Wallet

    Google's problem has always been that only the Nexus phone has an NFC chip that supports its wallet.  There are a couple of additional phones out there now, but adoption is still slow.  Google acquired Zavers a while back to incorporate digital coupons, but the bulk of its users continue to be non-NFC.  Google has taken an open approach by not specifying particular payment brands.  Google is piloting in San Francisco and New York, supporting both MasterCard PayPass and stored value. I suppose the other card brands may eventually follow.  There's no cost for consumers or merchants -- Google will make money via targeted ads.

    Isis

    Not long after Google announced its wallet, AT&T, Verizon, and T-Mobile announced a joint venture called Isis.  They are in the unique position of owning the SIM in the phones they issue.  At first it seemed Isis was a vehicle for the carriers to compete with the existing card brands, but Isis later switched to a generic wallet that supports the major card brands.  Isis reportedly charges issuers a $5 fee per customer per year.  Isis will pilot this summer in Salt Lake City and Austin.

    PayPal

    PayPal, the clear winner in the online payment space beyond traditional credit cards, is trying to move into physical stores.  After negotiations with Google to provide a wallet broke off, PayPal decided to avoid NFC altogether, at least for now, and focus on payments without any physical card or phone.  By avoiding NFC, consumers don't need an NFC-enabled phone and merchants don't need a new reader.  Consumers must enter their phone number and PIN in the merchant's existing device, or they can enter their PIN in the PayPal inStore app running on their phone and then show the merchant a unique barcode which authorizes payment. PayPal is free for consumers and charges a fee for merchants.  It's not clear, at least to me, how PayPal handles fraudulent transactions and whether the consumer is protected.

    The wildcard is, of course, Apple.  Their mobile technologies set the standard, so incorporating NFC chips would certainly accelerate adoption of many payment solutions.  Their announcement today of the iOS Passbook is a step in the right direction, but it stops short of handling payments. For those retailers that have invested in modern terminals, it seems the best strategy is to support all the emerging solutions and let the consumers choose the winner.

    Read the article

  • Selecting the correct installer to install Oracle Weblogic Server

    - by PratikS -- Oracle
    Whenever we start learning about a software product, the first step is to get the software installer and install it. Before we start with "How to install Oracle WebLogic Server?", let's understand the different kinds of installers available for Oracle WebLogic Server and select the correct one. There are three different kinds of WebLogic Server installers:

    1) Package Installer: If you have never installed Oracle WebLogic Server and this is the first time you are installing it, then what you need is Oracle WebLogic Server's Package Installer. There are two different kinds of Package Installers:
       a) Generic Package Installer: It does not include a Java runtime (when using the Generic Package Installer, it is a prerequisite that a supported JDK is already installed). If you want to install WebLogic Server with a 64-bit JVM, you have to use the Generic Package Installer. It is platform independent and can be used to install WebLogic Server on any supported 32-bit or 64-bit platform.
       b) OS-specific Package Installer: As the name suggests, this installer is platform specific. It is meant for installation with a 32-bit JVM only. Both the Sun and JRockit 32-bit JDKs come bundled with the OS-specific Package Installer, so there is no need to install a JDK in advance.

    2) Development-only and supplemental installers: If you have no plans to use Oracle WebLogic Server in production and need a simple installer for testing purposes only, then use this installer. Download the zip distribution, unzip it, and it's ready to use.

    3) Upgrade Installer: An Upgrade Installer is used to upgrade an Oracle WebLogic Server installation from one minor version to a higher minor version. There are no installers available to upgrade an Oracle WebLogic Server installation from one major version to another, though a Domain Upgrade is always available.

    Note: the following are the different versions of Oracle WebLogic Server in ascending order (excluding versions before WLS 9.2): WLS 9.2.x, WLS 10.0.x, WLS 10.3.x, WLS 12.1.x -- where "x" denotes the minor version, and 9.2, 10.0, 10.3 and 12.1 are the major versions. So you may use the Upgrade Installer to upgrade from WLS 10.3.1 to 10.3.6, or from 10.0.1 to 10.0.2, etc.

    Important links to refer to: Oracle WebLogic Server Documentation, Supported Configurations, Installation Guide for Oracle WebLogic Server

    Read the article

  • Get to Know a Candidate (3 of 25): Virgil Goode – Constitution Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama". This is not a post about whom I am voting for. Information sourced from Wikipedia.

    Meet Virgil Goode of the Constitution Party

    Goode served as a Republican member of the United States House of Representatives from 1997 to 2009, representing the 5th congressional district of Virginia. Goode was born in Richmond, Virginia, the son of Alice Clara (née Besecker) and Virgil Hamlin Goode. He has spent most of his life in Rocky Mount. Goode graduated with a B.A. from the University of Richmond (Phi Beta Kappa) and with a J.D. from the University of Virginia School of Law. He is also a member of Lambda Chi Alpha Fraternity and served in the Army National Guard from 1969 to 1975. Goode grew up as a Democrat. He entered politics soon after graduating from law school. At the age of 27, he won a special election to the state Senate from a Southside district, as an independent, after the death of the Democratic incumbent. One of his major campaign focuses at the time was advocacy for the Equal Rights Amendment. Soon after being elected, he joined the Democrats. Goode wore his party ties very loosely. He became famous for his support of the tobacco industry, expressing his fear that "his elderly mother would be denied 'the one last pleasure' of smoking a cigarette on her hospital deathbed." He was an ardent defender of gun rights while being an enthusiastic supporter of L. Douglas Wilder, who later became the first elected black governor in the history of the United States. At the Democratic Party's state political convention in 1985, Goode nominated Wilder for lieutenant governor. However, while governor, Wilder cracked down on the sale of guns in the state. After the 1995 elections resulted in a 20-20 split between Democrats and Republicans in the State Senate, Goode seriously considered voting with the Republicans on organizing the chamber. Had he done so, the State Senate would have been under Republican control for the first time since Reconstruction (the Republicans ultimately won control outright in 1999). Goode's actions at the time "forced his party to share power with Republican lawmakers in the state legislature," which further upset the Democratic Party.

    Goode is on the ballot in CA, FL, ID, IA, LA, MI, MN, MS, MI, NJ, NM, NY, NV, ND, OH, SC, SD, TN, UT, VA, WA, WI, WY. He is a write-in candidate in CA, CT, DC, GA, IL, IN, ME, MD, MA, MO, NC, TX, VT, WV.

    Constitution Party

    This party was founded as the "U.S. Taxpayers' Party" and considers itself conservative. The party's platform is predicated on the principles of the nation's founding documents. The party puts a large focus on immigration, calling for stricter penalties for illegal immigrants and a moratorium on legal immigration until all federal subsidies to immigrants are discontinued. The party absorbed the American Independent Party, originally founded for George Wallace's 1968 presidential campaign. The American Independent Party of California has been an affiliate of the Constitution Party since its founding; however, current party leadership is disputed and the issue is in court to resolve this conflict. The Constitution Party has some substantial support from the Christian Right and in 2010 achieved major party status in Colorado.

    Learn more about Virgil Goode and the Constitution Party on Wikipedia.

    Read the article

  • In hindsight, is basing XAML on XML a mistake or a good approach?

    - by romkyns
    XAML is essentially a subset of XML. One of the main benefits of basing XAML on XML is said to be that it can be parsed with existing tools. And it can, to a large degree, although the (syntactically non-trivial) attribute values will stay in text form and require further parsing. There are two major alternatives to describing a GUI in an XML-derived language. One is to do what WinForms did, and describe it in real code. There are numerous problems with this, though it's not completely advantage-free (a question to compare XAML to this approach). The other major alternative is to design a completely new syntax specifically tailored for the task at hand. This is generally known as a domain-specific language. So, in hindsight, and as a lesson for future generations, was it a good idea to base XAML on XML, or would it have been better as a custom-designed domain-specific language? If we were designing an even better UI framework, should we pick XML or a custom DSL? Since it's much easier to think positively about the status quo, especially one that is quite liked by the community, I'll give some example reasons why building on top of XML might be considered a mistake.

    Basing a language on XML has one thing going for it: it's much easier to parse (the core parser is already available), requires much, much less design work, and alternative parsers are also much easier to write for third-party developers. But the resulting language can be unsatisfying in various ways. It is rather verbose. If you change the type of something, you need to change it in the closing tag. It has very poor support for comments; it's impossible to comment out an attribute. There are limitations placed on the content of attributes by XML. The markup extensions have to be built "on top" of the XML syntax, not integrated deeply and nicely into it. And, my personal favourite: if you set something via an attribute, you use completely different syntax than if you set the exact same thing as a content property.

    It's also said that since everyone knows XML, XAML requires less learning. Strictly speaking this is true, but learning the syntax is a tiny fraction of the time spent learning a new UI framework; it's the framework's concepts that make the curve steep. Besides, the idiosyncrasies of an XML-based language might actually add to the "needs learning" basket. Are these disadvantages outweighed by the ease of parsing? Should the next cool framework continue the tradition, or invest the time to design an awesome DSL that can't be parsed by existing tools and whose syntax needs to be learned by everyone?

    P.S. Not everyone confuses XAML and WPF, but some do. XAML is the XML-like thing. WPF is the framework with support for bindings, theming, hardware acceleration and a whole lot of other cool stuff.
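
    To illustrate that last point about attribute versus content property syntax, here is a minimal WPF-flavored sketch of the same Background value set both ways:

        <!-- Attribute syntax: terse, but limited to what a string can express -->
        <Button Content="Click me" Background="Red" />

        <!-- Property element syntax: the same property in a completely different form -->
        <Button Content="Click me">
            <Button.Background>
                <SolidColorBrush Color="Red" />
            </Button.Background>
        </Button>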

    Read the article

  • Top Three Reasons to Move to the Cloud Before Your Next Upgrade

    - by yaldahhakim
    1) Reduced Cost - During major upgrades, most organizations typically need to replace or invest in extra hardware and other IT resources to support the upgrade. With the Cloud, this can become more of an op-ex discussion. The flexibility and scalability of the cloud also allow new business solutions to be set up more quickly, with the ability to scale IT resources to closely map to changing business requirements. This enables more and faster innovation, because you are spending money to focus on core business initiatives instead of setting up complex environments.

    2) Reduced Risk - This is especially true when you are working with a cloud provider that possesses substantial in-house expertise. Oracle Managed Cloud Services has been hosting and managing customers' business applications for over a decade and has helped hundreds of customers upgrade and adopt new technologies faster and better. Customers have access to over 15,000 Oracle experts in operations centers around the world who can work around the clock, and who have direct access to Oracle Development to optimize our customers' upgrade experience.

    3) Reduced Downtime - Whether a customer is looking to upgrade their E-Business Suite, PeopleSoft, JD Edwards, or Fusion applications, we've developed standardized best practices and tools across the technology stack to accelerate the upgrade and migration, with substantially reduced timelines and risk. And because the process is repeatable, customers stay more current on the latest releases, continuously taking advantage of the newest innovations -- without the headache. By leveraging Oracle's economies of scale and expertise, you can sleep better at night knowing that your next major application upgrade is taken care of. Check out the video of this Managed Cloud Services customer to learn more about their experience.

    Read the article

  • Building Enterprise Smartphone Apps – Part 1: Why Build Smartphone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group.  Feel free to leave feedback.

    Intro

    Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed on mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging and can be used in the future.

    Why Build Enterprise Smartphone Applications

    So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage. Your mobile sales force is a key candidate for leveraging smartphone apps.  They can visit clients in their retail locations and place orders on site. It is a more personal approach which can gain you customer loyalty.  A salesperson may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer. You may also need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices.  This can save costs on both hardware and telecommunication contracts. Delivery verification is another area that has historically been the domain of specialized devices but can now be accomplished with smartphones.  This also reduces costs, because the same device is used for communicating with the driver and for other operations.  Add to that the navigation capability of smartphones and you can see how the return on investment increases. Executives are always on the go. They spend most of their time in meetings and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports. I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks. These are all valid reasons to build smartphone applications.  In the next installment I will discuss platforms and features.

    del.icio.us Tags: Smartphones, Enterprise Smartphone Apps, Architecture

    Read the article

  • Web application development platform recommendation

    - by TK.Maxi
    Hi all. I did a year's worth of Pascal, Visual Basic and C++ 15 years ago, so suffice it to say that I'm a complete n00b & lamer when it comes to this. I really do hope that this question doesn't get canned, but if it does, please be so kind as to point me in the direction of where it should be posted. I have an idea, like so many others, for a web app. I don't necessarily have the capital to outsource the development of the app right now, and I probably wouldn't want to, since non-disclosure agreements can be expensive to enforce, especially in this day and age of intercontinental outsourcing. I need the app to be usable on any mobile device (eventually), primarily on the major mobile platforms at first, on the web (pc/mac/*ix) obviously, on mobile web browsers like Opera Mobile, etc. I envisage the app interacting with the major social networks like fb, orkut, msn im, twitter, et al, in a way where friends are messaged and/or a message is posted to the user's wall. Geo-location functionality is a plus, considering the service/app can be location sensitive in two ways: 1. the immediate location of the user, 2. the desired location of the user. I'd like to incorporate OpenID sign-on, and on the flip side, the service will require that people (service providers) list their specialities/specialisations/interests/areas of expertise, so that matches to user requests can be made by the service, while users' requests are posted into the web universe. I've probably described a glut of apps out there, but I'd appreciate feedback on the sort of platform that I should look at using, be it hosted on something like Google's App Engine, or written in Android-friendly code, or whatever. I'm a firm believer in herd mentality, especially at the start of a project that I have very little experience in. The more opinions, the merrier! I can't get very much more specific, since that would give the idea away. Thanks for your time, and I look forward to hearing from the wise and experienced and the fresh and innovative alike. Thanks

    Read the article

  • Pseudocode: a clear definition?

    - by Cian E
    The following code is an example of what I think would qualify as pseudocode, since it does not execute in any language but the logic is correct.

        string checkRubric(gpa, major)
            bool brake = false
            num lastRange
            num rangeCounter
            string assignment = "unassigned"
            array bus['business'] = array('person a' => array(0, 2.9), 'person b' => array(3, 4))
            array cis['computer science'] = array('person c' => array(0, 2.9), 'person d' => array(3, 4))
            array lib['english'] = array('person e' => array(0, 4))
            array rubric = array(bus, cis, lib)
            foreach (rubric as fieldAr)
                foreach (fieldAr as field => advisorAr)
                    if (major == field)
                        foreach (advisorAr as advisor => gpaRangeAr)
                            rangeCounter = 0
                            foreach (gpaRangeAr as gpaValue)
                                if (rangeCounter < 1)
                                    lastRange = gpaValue
                                else if (gpa >= lastRange && gpa <= gpaValue)
                                    assignment = advisor
                                    brake = true
                                    break
                                endif
                                rangeCounter++
                            endforeach
                            if (brake == true)
                                break
                            endif
                        endforeach
                        if (brake == true)
                            break
                        endif
                    endif
                endforeach
                if (brake == true)
                    break
                endif
            endforeach
            return assignment

    For the past couple of weeks I've been trying to come up with a clear definition of what pseudocode actually is. Is it relative to the programmer, or is there an actual clear-cut syntax? I say pseudocode is any code that does not execute -- how about you? Thanks (links on this subject welcome)

    Read the article

  • People not respecting good practices at workplace

    - by VexXtreme
    Hi. There are some major issues in my company regarding practices, procedures and methodologies. First of all, we're a small firm and there are only 3-4 developers, one of whom is our boss, who isn't really a programmer; he just chimes in now and then and tries to code some simple things. The biggest problems are:

    Major cowboy coding and lack of methodologies. I've tried explaining to everyone the benefits of TDD and unit testing, but I only got weird looks, as if I were talking nonsense. Even the boss gave me a reaction along the lines of "why do we need that? it's just unnecessary overhead and a waste of time". Nobody uses design patterns. I have to tell people not to write business logic in code-behind, I have to remind them not to hardcode concrete implementations and dependencies into classes, et cetera. I often feel like a nazi because of this, and people think I'm enforcing unnecessary policies and use of design patterns.

    The biggest problem of all is that people don't even respect common-sense security policies. I've noticed that college students who work on tech support use our continuous integration and source control server as a dump to store their music, videos, series they download from torrents, and so on. You can imagine the horror when I realized that most of the partition reserved for source control backups was used by entire seasons of TV series and movies. Our development server isn't even connected to a UPS or surge protection. It's just plugged straight into the wall outlet. I asked the boss to buy surge protection, but he said it's unnecessary.

    All in all, I like working here because the atmosphere is very relaxed, the money is good, and we're all like a family (so don't advise me to quit), but I simply don't know how to explain to people that they need to stick to some standards and good practices in the IT industry and that they can't behave so irresponsibly. Thanks for the advice

    Read the article

  • Experience migrating legacy Cobol/PL1 to Java

    - by MadMurf
    ORIGINAL Q: I'm wondering if anyone has had experience of migrating a large Cobol/PL1 codebase to Java? How automated was the process and how maintainable was the output? How did the move from transactional to OO work out? Any lessons learned along the way, or resources/white papers that may be of benefit, would be appreciated.

    EDIT 7/7: Certainly the NACA approach is interesting; the ability to continue making your BAU changes to the COBOL code right up to the point of releasing the Java version has merit for any organization. The case for procedural Java laid out the same way as the COBOL -- to give the coders a sense of comfort while familiarizing themselves with the Java language -- is a valid one for a large organisation with a large code base. As @Didier points out, the $3mil annual saving gives scope for generous padding on any BAU changes going forward, to refactor the code on an ongoing basis. As he puts it, if you care about your people you find a way to keep them happy while gradually challenging them. The problem as I see it with the suggestion from @duffymo -- to best try and really understand the problem at its roots and re-express it as an object-oriented system -- is that if you have any BAU changes ongoing, then during the long project lifetime of coding your new OO system you end up coding & testing changes on the double. That is a major benefit of the NACA approach. I've had some experience of migrating client-server applications to a web implementation, and this was one of the major issues we encountered: constantly shifting requirements due to BAU changes. It made PM & scheduling a real challenge. Thanks to @hhafez, whose experience is nicely put as "similar but slightly different" and who has had a reasonably satisfactory experience of an automatic code migration from Ada to Java. Thanks @Didier for contributing; I'm still studying your approach and if I have any Qs I'll drop you a line.

    Read the article

  • Eclipse PDE - Plug-in, Feature, and Product Versioning

    - by Michael
    I am quite confused about the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large Eclipse workspace. I have made API changes to Java code residing in an existing plug-in, which thus requires an increase of the major part of the version identifier. This plug-in serves as a dependency of a given feature, and the feature is in turn included in a product. From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper segment should be increased on the containing plug-in itself. However, how does a major version change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)? For example, assume we have the typical "Hello World" setup as follows:

    Plug-in: com.example.helloworld, version 1.0.0
    Feature: com.example.helloworld.feature, version 1.0.0
    Product: com.example.helloworld.product, version 1.0.0

    If I were to make an API change in the plug-in, this would require a version update to 2.0.0. What would then be the version of the feature -- 1.1.0? The same question applies at the product level as well (e.g., if the feature is 1.1.0 or 2.0.0, what is the product version number)? I'm sure this is quite the newbie question, so I apologize for wasting anyone's time and effort. I have searched for this type of content, but all I am finding are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been about developing feature patches, and it does not touch on the versioning aspect as much as I would prefer. I am coming into an Eclipse RCP / PDE environment for the first time and need to learn the proper way and/or best practices for making such versioning updates and how best to reflect them throughout other dependent projects in the workspace.
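
    For reference, a minimal sketch of where these numbers live, using the hypothetical identifiers from the question (the .qualifier suffix and the version range on Require-Bundle are illustrative PDE conventions, not a prescription):

        META-INF/MANIFEST.MF of the plug-in, after the API change:

            Bundle-SymbolicName: com.example.helloworld
            Bundle-Version: 2.0.0.qualifier

        A dependent bundle can restrict which major versions it accepts:

            Require-Bundle: com.example.helloworld;bundle-version="[2.0.0,3.0.0)"

        feature.xml of the containing feature (version="0.0.0" on the plugin
        element tells the PDE build to substitute the plug-in's actual version):

            <feature id="com.example.helloworld.feature" version="1.1.0.qualifier">
                <plugin id="com.example.helloworld" version="0.0.0" unpack="false"/>
            </feature>

    Whether the feature then becomes 1.1.0 or 2.0.0 is exactly the judgment call the question asks about.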

    Read the article

  • How do you dynamically allocate a contiguous 3D array in C?

    - by Derek
    In C, I want to loop through an array in this order:

        for (int z = 0; z < NZ; z++)
            for (int x = 0; x < NX; x++)
                for (int y = 0; y < NY; y++)
                    arr3D[x][y][z] = 100;

    How do I create this array in such a way that arr3D[0][1][0] comes right before arr3D[0][2][0] in memory? I can get an initialization to work that gives me "z-major" ordering, but I really want a y-major ordering for this 3D array. This is the code I have been trying to use:

        char *space;
        char ***arr3D;
        int y, z;

        space = malloc(X_DIM * Y_DIM * Z_DIM * sizeof(char));
        arr3D = malloc(Z_DIM * sizeof(char **));
        for (z = 0; z < Z_DIM; z++) {
            arr3D[z] = malloc(Y_DIM * sizeof(char *));
            for (y = 0; y < Y_DIM; y++) {
                arr3D[z][y] = space + (z * (X_DIM * Y_DIM) + y * X_DIM);
            }
        }
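
    A minimal sketch of one way to get the ordering the question asks for, assuming that plain index arithmetic is acceptable in place of triple-subscript syntax (the dimension sizes here are illustrative). The key point is that with a char*** the last subscript is always the contiguous one, so arr3D[x][y][z] can never make y the fastest-varying index; a macro over one flat allocation sidesteps that:

        #include <stdio.h>
        #include <stdlib.h>

        #define NX 4
        #define NY 5
        #define NZ 3

        /* y has stride 1, x has stride NY, z has stride NX*NY, so
           (x, y, z) and (x, y+1, z) are adjacent in memory. */
        #define IDX(x, y, z) \
            ((size_t)(z) * NX * NY + (size_t)(x) * NY + (size_t)(y))

        int main(void) {
            char *space = malloc((size_t)NX * NY * NZ);
            if (space == NULL)
                return 1;

            /* The question's loop order now walks memory sequentially. */
            for (int z = 0; z < NZ; z++)
                for (int x = 0; x < NX; x++)
                    for (int y = 0; y < NY; y++)
                        space[IDX(x, y, z)] = 100;

            /* (0,1,0) and (0,2,0) are adjacent, as required. */
            printf("%zu %zu\n", IDX(0, 1, 0), IDX(0, 2, 0)); /* prints: 1 2 */

            free(space);
            return 0;
        }

    If bracket syntax is a hard requirement, the same layout works with the pointer-array approach by subscripting in [z][x][y] order instead, i.e. pointing arr3D[z][x] at space + z*NX*NY + x*NY, since the last subscript of a pointer-of-pointers array is necessarily the stride-1 one.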

    Read the article

  • What are the attack vectors for passwords sent over http?

    - by KevinM
    I am trying to convince a customer to pay for SSL for a web site that requires login. I want to make sure I correctly understand the major scenarios in which someone can see the passwords being sent. My understanding is that anyone at any of the hops along the way can use a packet analyzer to view what is being sent. This seems to require that the hacker (or their malware/botnet) be on the same subnet as one of the hops the packet takes to arrive at its destination. Is that right? Assuming some flavor of this subnet requirement holds true, do I need to worry about all the hops or just the first one? The first one I can obviously worry about on a public Wi-Fi network, since anyone could be listening in. Should I be worried about what's going on in the subnets that packets travel across beyond that? I don't know a ton about network traffic, but I would assume it flows through the data centers of major carriers, where there aren't a lot of juicy attack vectors, but please correct me if I am wrong. Are there other vectors to worry about besides someone listening with a packet analyzer? I am a networking and security noob, so please feel free to set me straight if I am using the wrong terminology in any of this.

    Read the article

  • Regexp that matches user-agents of end-user browsers but NOT crawlers with >90% accuracy

    - by knorv
    I'm trying to construct a regexp that will evaluate to true for the User-Agent headers of "browsers navigated by humans", but false for bots. Needless to say the matching will not be exact, but if it gets things right in, say, 90% of cases, that is more than good enough. My approach so far is to target the User-Agent strings of the five major desktop browsers (MSIE, Firefox, Chrome, Safari, Opera). Specifically, I want the regexp NOT to match if the user-agent is a bot (Googlebot, msnbot, etc.). Currently I'm using the following regexp, which appears to achieve the desired precision:

        ^(Mozilla.*(Gecko|KHTML|MSIE|Presto|Trident)|Opera).*$

    I've observed a small number of false negatives, which are mostly mobile browsers. The exceptions all match:

        (BlackBerry|HTC|LG|MOT|Nokia|NOKIAN|PLAYSTATION|PSP|SAMSUNG|SonyEricsson)

    My question is: given the desired accuracy level, how would you improve the regexp? Can you think of any major false positives or false negatives for the given regexp? Please note that the question is specifically about regexp-based User-Agent matching. There are a bunch of other approaches to solving this problem, but those are out of the scope of this question.
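
    A small harness for experimenting with the two patterns quoted above; this is a hypothetical sketch, with the mobile exceptions OR-ed in as an extra accept list, which is one plausible reading of how they are meant to combine:

        import java.util.regex.Pattern;

        public class UaMatch {
            // The question's main desktop-browser pattern.
            static final Pattern BROWSER = Pattern.compile(
                "^(Mozilla.*(Gecko|KHTML|MSIE|Presto|Trident)|Opera).*$");

            // The question's mobile false negatives, used as extra accepts.
            static final Pattern MOBILE = Pattern.compile(
                "(BlackBerry|HTC|LG|MOT|Nokia|NOKIAN|PLAYSTATION|PSP"
                + "|SAMSUNG|SonyEricsson)");

            static boolean looksHuman(String ua) {
                return BROWSER.matcher(ua).matches() || MOBILE.matcher(ua).find();
            }

            public static void main(String[] args) {
                // true: a desktop WebKit browser advertising an engine token
                System.out.println(looksHuman(
                    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.1 "
                    + "(KHTML, like Gecko) Chrome/14.0.835.202 Safari/535.1"));
                // false: a crawler that advertises no browser engine
                System.out.println(looksHuman(
                    "Googlebot/2.1 (+http://www.google.com/bot.html)"));
            }
        }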

    Read the article

  • How can I improve this search usability?

    - by Craig Whitley
    This is my first real programming attempt, and it has some major flaws. It's a learning project, and I'm currently re-writing the entire thing as my PHP is really messy. At the same time, though, I really want to get an idea of how I can improve the site's actual usability and accessibility, so I know how to implement it correctly. The website is basically a comparison website for game-server hosting. As I mentioned, it's a learning project and I don't actually expect any revenue from it. At the moment there is only test data in it, so in the game input box select either 'Battlefield Bad Company 2' or 'Call of Duty 4: Modern Warfare' and ignore the actual search results: http://www.laglessfrag.com. I wasn't really sure how to work the search functionality. Basically, when you click a game in the drop-down box, an Ajax request finds all the locations available for that game in the database. After selecting the country, another Ajax request finds all the cities available for the game in that country, which gives me the two unique identifiers I need to create the search results. One major and fundamental flaw is that without JavaScript enabled, the site ceases to function. I'll overcome that in the next re-write, but without the Ajax functionality stopping the user from 'going wrong', how can I implement a search that requires two fields without creating extra steps on new pages after form submissions (see the sketch below)? I'm also no designer, so my whole layout and CSS are a bit rubbish, but this was mainly a learning project, as I'm interested in applications/programming rather than design. It's also slow, as it's on shared hosting, but if I can get it to work correctly I'm not averse to chucking a bit of money at it for faster hosting, maybe a bit of advertising, and seeing where it goes (if anywhere!). Any info appreciated.
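
    One common answer, sketched hypothetically below (this is not the site's actual markup), is progressive enhancement: a single form that submits back to itself, with the server re-rendering the dependent select once the first choice is known. JavaScript, when available, merely replaces those round-trips with Ajax calls:

        <!-- search.php (hypothetical): works with plain form submissions. -->
        <form method="get" action="search.php">
          <select name="game">
            <option value="">Choose a game...</option>
            <option value="bfbc2">Battlefield Bad Company 2</option>
            <option value="cod4">Call of Duty 4: Modern Warfare</option>
          </select>

          <!-- The server fills these options in only once ?game= is set;
               with JavaScript enabled, an Ajax call populates them instead. -->
          <select name="country">
            <option value="">Choose a location...</option>
          </select>

          <input type="submit" value="Search">
        </form>

    A user without JavaScript presses Search once per narrowing step, but never leaves the one page, so no extra pages are introduced.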

    Read the article

  • E-Commerce Security: Only Credit Card Fields Encrypted?!

    - by bizarreunprofessionalanddangerous
    I'd like your opinions on how a major bricks-and-mortar company is running the security for its shopping web site. After a recent update, when you are logged into your shopping account, the session is no longer secured: no 'https', no browser 'lock'. All the personal contact info, the shopping history -- and, if I'm not mistaken, password submission and changes -- are being sent unencrypted. There is a small frame around the credit card fields that is https, with a little notice: "Our website is secure. Our website uses frames and because of this the secure icon will not appear in your browser." On top of this, the most prominent login fields on the site are broken and haven't been fixed for a week or longer (giving the distinct impression they have no clue what's going on and can't be trusted with anything). Now, is it just me, or is this simply incomprehensible for a billion-dollar company with a significant shopping site in the year 2010? No lock. "We use frames" (maybe they forgot "Best viewed in IE4"). Customers are complaining, as you can see from their FAQ "explaining" why you aren't seeing https. I'm getting nowhere trying to convince customer service that they REALLY need to do something about this, and am about to head for the CEO. But I just want to make sure this is as BIZARRE and unprofessional and dangerous a situation as I think it is. (I'm trying to visualize what their web technical team consists of. I'm getting: A) some customer service reps who were given a three-hour training course on web site maintenance; B) a 14-year-old boy in his bedroom masquerading as a major technical services company; C) a guy in a hut in a jungle with an e-commerce book from 1996.)

    Read the article

  • Starting with NHibernate

    - by George
    I'm having major difficulties getting started with NHibernate. Main problems: Where should my hbm.xml files reside? I created a Mappings folder, but I received the error "Could not find xxx.hbm.xml file." I tried to load the specific class directly with cfg.AddClass(typeof(xxx)), but it still gives me the same error (the files are marked as embedded resources). I'm also having major problems connecting. I stopped trying to use the cfg xml file and tried a more direct approach with a library I have here:

        Configuration cfg = new Configuration();
        cfg.AddClass(typeof(Tag)); // <-- throws here
        ISessionFactory sessions = cfg.BuildSessionFactory();

        AgnosticConnectionHandler agch = new AgnosticConnectionHandler(
            "xxx", "xxx", "geo_biblio", "localhost", 5432,
            DatabaseInstance.PostgreSQL);

        ISession sessao = sessions.OpenSession(agch.GetConnection);
        ITransaction tx = sessao.BeginTransaction();

        Tag tag1 = new Tag();
        tag1.NomeTag = "Teste Tag NHibernate!!!";
        sessao.Save(tag1);

        tx.Commit();
        sessao.Close();

    Any tips for me? I'm getting the exception at the cfg.AddClass call marked above and am still not sure what to do. Any help is appreciated. Thanks
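
    For reference, the usual cause of that error is resource naming: AddClass(typeof(Tag)) looks for an embedded resource whose manifest name is the type's full name plus ".hbm.xml", and moving the file into a Mappings folder prefixes the folder name onto the resource name, so the lookup fails. A minimal sketch of the common workaround, assuming the mapping files are embedded in the same assembly as Tag:

        using NHibernate;
        using NHibernate.Cfg;

        // AddAssembly() loads every *.hbm.xml embedded resource in the
        // assembly, whatever folder it sits in, so a Mappings folder is fine.
        Configuration cfg = new Configuration();
        cfg.AddAssembly(typeof(Tag).Assembly);
        ISessionFactory factory = cfg.BuildSessionFactory();

    Alternatively, keep AddClass() and rename or relocate the file so its manifest resource name exactly matches the type's full name plus ".hbm.xml".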

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >