Search Results

Search found 1366 results on 55 pages for 'complexity'.


  • Backing up my Windows Home Server to the Cloud…

    - by eddraper
    Ok, here's my scenario: Windows Home Server with a little over 3TB of storage. This includes many years of our home network's PC backups, music, videos, etcetera. I'd like to get a backup off-site, and the existing APIs and apps such as CloudBerry Labs' WHS Backup service make it easy. Now, all it comes down to is the vendor and the cost of the actual storage. So, I thought I'd take a lazy Saturday morning, do some research on this and get the ball rolling. What I discovered stunned me… First off, the pricing for just about everything was loaded with complexity. I learned that it wasn't just about storage… it was about network usage, requests, sites, replication, and on and on. I really don't see this as rocket science. I have a disk image. I want to put it in the cloud. I'm not going to be using it but once daily for incremental backups. Sounds like a common scenario. Yes, if "things get real" and my server goes down, I will need to bring down a lot of data and utilize a fair amount of vendor infrastructure. However, this may never happen. Offsite storage is an insurance policy. The complexity of the cost structures, perhaps by design, creates an environment where it's incredibly hard to model bottom-line costs and compare vendors' all-up pricing. As it is a "lazy Saturday morning," I'm not in the mood for such antics, so I decided to shirk that endeavor entirely. Instead, I simply fired up calc.exe and put together a simple arithmetic model based on price per GB. I shuddered at the results. Certainly something was wrong… did I misplace a decimal point? Then I discovered CloudBerry's own calculator. Nope, I hadn't misplaced those decimals after all. Check it out (pricing based on 3174 GB):
    Amazon S3: $398.00 per month, $4761 per year
    Azure: $396.75 per month, $4761 per year
    Google: $380.88 per month, $4570.56 per year
    Conclusion: Rampant crack smoking at vendors. Seriously. Out. Of. Their. Minds. Now, to Amazon's credit, vision, and outright common sense, they had one offering which directly addresses my scenario:
    Amazon Glacier: $31.74 per month, $380.88 per year
    Hmmm… it's on the table. Let's see what it would cost to just buy some drives and an enclosure and cart them over to a friend's house.
    2 x 2TB drives from NewEgg.com: $199.99
    Enclosure: $39.99
    Total: $239.98
    Carting data back and forth to a friend's within walking distance: pain
    Leaving the drive unplugged at the friend's: $0 for electricity
    Possible data loss: no way I can come and go every day.
    I think I'll think on this a bit more…
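    For what it's worth, the arithmetic behind those numbers is easy to reproduce. A back-of-the-envelope sketch in Python; the per-GB rates are illustrative assumptions chosen only to land near the figures quoted above, not anyone's published price list:
        SIZE_GB = 3174
        rates_per_gb_month = {
            "standard object storage": 0.125,  # assumed rate, roughly what yields the ~$397/month above
            "cold/archive storage":    0.010,  # assumed rate, roughly what yields the ~$31.74/month above
        }
        for tier, rate in rates_per_gb_month.items():
            monthly = SIZE_GB * rate
            print("%s: $%.2f/month, $%.2f/year" % (tier, monthly, monthly * 12))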

    Read the article

  • Unleash the Power of Cryptography on SPARC T4

    - by B.Koch
    by Rob Ludeman Oracle's SPARC T4 systems are architected to deliver enhanced value for customers via the inclusion of many integrated features. One of the best examples of this approach is the on-chip cryptographic support that delivers wire-speed encryption capabilities without any impact on application performance. The Evolution of SPARC Encryption SPARC T-Series systems have a long history of providing this capability, dating back to the release of the first T2000 systems, which featured support for on-chip RSA encryption directly in the UltraSPARC T1 processor. Successive generations have built on this approach by adding support for additional encryption ciphers that are tightly coupled with the Oracle Solaris 10 and Solaris 11 encryption framework. While earlier versions of this technology were implemented using co-processors, the SPARC T4 was redesigned with new crypto instructions to eliminate some of the performance overhead associated with the former approach, resulting in much higher performance for encrypted workloads. The Superiority of the SPARC T4 Approach to Crypto As companies continue to engage in more and more e-commerce, the need to provide greater degrees of security for these transactions is more critical than ever before. Traditional methods of securing data in transit have a number of drawbacks that are addressed by the SPARC T4 cryptographic approach. 1. Performance degradation – cryptography is highly compute-intensive, and therefore there is a significant cost when using other architectures without embedded crypto functionality. This performance penalty impacts the entire system, slowing down the performance of web servers (SSL), for example, and potentially bogging down the speed of other business applications. The SPARC T4 processor enables customers to deliver high levels of security to internal and external customers while not impacting the overall SLAs of their IT environment. 2. Added cost – one way to avoid performance degradation is to add cryptographic accelerator cards or external offload engines to other systems. While these solutions provide a brute-force mechanism to avoid the problem of slower system performance, they usually come at an added cost. Customers looking to encrypt datacenter traffic without the overhead and expenditure of extra hardware can rely on SPARC T4 systems to deliver the performance necessary without the need to purchase other hardware or add-on cards. 3. Higher complexity – adding cryptographic cards or leveraging load balancers to perform encryption tasks results in added complexity from a management standpoint. With SPARC T4, the encryption support and the framework built into Solaris 10 and 11 mean that administrators generally don't need to spend extra cycles determining how to perform cryptographic functions. In fact, many of the instructions are built in and require no user intervention to be utilized. For example, for OpenSSL on Solaris 11, SPARC T4 crypto is available directly via a new built-in OpenSSL 1.0 engine, called the "t4 engine." For a deeper technical dive into the new instructions included in SPARC T4, consult Dan Anderson's blog. Conclusion In summary, SPARC T4 systems offer customers much more value for applications than just increased performance.
The integration of key virtualization technologies, embedded encryption, and a true Enterprise Operating System, Oracle Solaris, provides direct business benefits that supersede the commodity approach to data center computing. SPARC T4 removes the roadblocks to secure computing by offering integrated crypto accelerators that can save IT organizations operating costs while delivering higher levels of performance and meeting objectives around compliance. For more on the SPARC T4 family of products, go here.
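    If you want a rough feel for how compute-intensive bulk encryption is on a given host (point 1 above), a quick benchmark is easy to write. The sketch below is Python with the third-party cryptography package, purely for illustration; it is not Oracle tooling, and on a T4 the hardware support is reached through the Solaris Cryptographic Framework and the OpenSSL "t4" engine mentioned above rather than anything like this:
        import os, time
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)
        aead = AESGCM(key)
        payload = os.urandom(1024 * 1024)   # 1 MiB of random data

        start = time.perf_counter()
        for _ in range(100):
            aead.encrypt(os.urandom(12), payload, None)   # fresh nonce per message
        elapsed = time.perf_counter() - start
        print("~%.1f MiB/s AES-256-GCM in software on this host" % (100 / elapsed))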

    Read the article

  • Measuring Code Quality

    - by DotNetBlues
    Several months back, I was tasked with measuring the quality of code in my organization. Foolishly, I said, "No problem." I figured that Visual Studio has a built-in code metrics tool (Analyze -> Calculate Code Metrics) and that would be a fine place to start. I was right, but also very wrong. Visual Studio calculates five primary metrics: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The first two are figured at the method level, the next two at (primarily) the class level, and the last is a simple count. The first question any reasonable person should ask is "Which one do I look at first?" The first question any manager is going to ask is, "What one number tells me about the whole application?" My answer to both, in a way, was "Maintainability Index." Why? Because each of the other numbers represents one element of quality, while MI is a composite number that includes Cyclomatic Complexity. I'd be lying if I said no consideration was given to the fact that it's abstract enough that it's harder for some surly developer (I've been known to resemble that remark) to start arguing why high coupling or deep inheritance is no big deal or how complex requirements are to blame for complex code. I should also note that I don't think there is one magic-bullet metric that will tell you objectively how good a code base is. There are a ton of different metrics out there, and each one was created with a specific purpose in mind and has a pet theory behind it. When you've got a group of developers who aren't accustomed to measuring code quality, picking a 0-100 scale, non-controversial metric that can be easily generated by tools you already own really isn't a bad place to start. That sort of answers the question a developer would ask, but what about the management question: how do you dashboard this stuff when Visual Studio doesn't roll up the numbers to the solution level? Since VS does roll up the MI to the project level, I thought I could just figure out what sort of weighting Microsoft used to roll method scores up to the class level and then to the namespace and project levels. I was a bit surprised by the answer: there is no weighting. That means that a class with one 1300-line method (which will score a 0 MI) and one empty constructor (which will score a 100 MI) will have an overall MI of a respectable 50. Throw in a couple of DTOs that are nothing more than getters and setters (which tend to score 95 or better) and the project ends up looking really, really healthy. The next poor bastard who has to work on the application is probably not going to be singing the praises of its maintainability, though. For the record, that 1300-line method isn't hypothetical, either. So, what does one do with that? Well, I decided to weight the average by the Lines of Code per method. For our example above, the formula for the class's MI becomes ((1300 * 0) + (1 * 100)) / 1301 = 0.077, rounded to 0. Sounds about right. Continue the pattern for namespace, project, solution, and even multi-solution application MI scores. This can be done relatively easily by using the "export to Excel" button and running a quick formula against the data. On the short list of follow-up questions would be, "How do I improve my application's score?" That's an answer for another time, though.
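    For anyone who wants to reproduce the roll-up, here is a minimal sketch of that LOC-weighted average in Python, assuming the per-method (MI, LOC) pairs have already been pulled from the exported Excel data (the input format is just an assumption for illustration):
        def weighted_mi(method_metrics):
            """Each item is a (maintainability_index, lines_of_code) pair for one method."""
            total_loc = sum(loc for _, loc in method_metrics)
            if total_loc == 0:
                return 100  # nothing to maintain
            return sum(mi * loc for mi, loc in method_metrics) / total_loc

        # The example from the post: a 1300-line method scoring 0 plus an empty
        # constructor scoring 100 rolls up to ~0.08 instead of a misleading 50.
        print(round(weighted_mi([(0, 1300), (100, 1)]), 2))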

    Read the article

  • C# 5 Async, Part 2: Asynchrony Today

    - by Reed
    The .NET Framework has always supported asynchronous operations.  However, different mechanisms for supporting them exist throughout the framework.  While there are at least three separate asynchronous patterns used throughout the framework, only the latest is directly usable with the new Visual Studio Async CTP.  Before delving into details on the new features, I will talk about existing asynchronous code and demonstrate how to adapt it for use with the new pattern. The first asynchronous pattern used in the .NET Framework was the Asynchronous Programming Model (APM).  This pattern was based around callbacks.  A method is used to start the operation; it is typically named BeginSomeOperation.  This method is passed a callback defined as an AsyncCallback, and returns an object that implements IAsyncResult.  Later, the IAsyncResult is used in a call to a method named EndSomeOperation, which blocks until completion and returns the value normally returned directly by the synchronous version of the operation.  Often, EndSomeOperation would be called from the callback that was passed in, which allows you to write code that never blocks. While this pattern works perfectly well to prevent blocking, it can lead to quite confusing code and be difficult to implement.  For example, the sample code provided for FileStream’s BeginRead/EndRead methods is not simple to understand.  In addition, implementing your own asynchronous methods requires creating an entire class just to implement IAsyncResult. Given the complexity of the APM, other options were introduced in later versions of the framework.  The next major pattern introduced was the Event-based Asynchronous Pattern (EAP).  This provides a simpler pattern for asynchronous operations.  It works by providing a method typically named SomeOperationAsync, which signals its completion via an event typically named SomeOperationCompleted. The EAP provides a simpler model for asynchronous programming.  It is much easier to understand and use, and far simpler to implement.  Instead of requiring a custom class and callbacks, the standard event mechanism in C# is used directly.  For example, the WebClient class uses this extensively.  A method is used, such as DownloadDataAsync, and the results are returned via the DownloadDataCompleted event. While the EAP is far simpler to understand and use than the APM, it is still not ideal.  By separating your code into method calls and event handlers, the logic of your program gets more complex.  It also typically loses the ability to block until the result is received, which is often useful.  Blocking often requires writing the blocking code by hand, which is error prone and adds complexity. As a result, .NET 4 introduced a third major pattern for asynchronous programming.  The Task<T> class introduced a new, simpler concept for asynchrony.  Task and Task<T> effectively represent an operation that will complete at some point in the future.  This is a perfect model for thinking about asynchronous code, and is the preferred model for all new code going forward.  Task and Task<T> provide all of the advantages of both the APM and the EAP models – you have the ability to block on results (via Task.Wait() or Task<T>.Result), and you can stay completely asynchronous via the use of Task continuations.  In addition, the Task class provides a new model for task composition and error and cancellation handling.  This is a far superior option to the previous asynchronous patterns. 
The Visual Studio Async CTP extends the Task based asynchronous model, allowing it to be used in a much simpler manner.  However, it requires the use of Task and Task<T> for all operations.
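    For readers outside .NET, the Task idea itself is language-neutral: a value that stands for "an operation that will complete at some point in the future", which you can either block on or attach continuations to. As a loose analogy only (this is Python's standard concurrent.futures, not the .NET API, and the URL is just a placeholder):
        from concurrent.futures import ThreadPoolExecutor
        import urllib.request

        def download(url):
            with urllib.request.urlopen(url) as resp:   # blocking I/O runs on a worker thread
                return resp.read()

        with ThreadPoolExecutor() as pool:
            future = pool.submit(download, "https://example.com/")        # start the operation
            future.add_done_callback(lambda f: print(len(f.result())))    # continuation-style consumption
            data = future.result()                                        # or block, like Task.Wait()/Result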

    Read the article

  • ArchBeat Link-o-Rama Top 10 for September 9-15, 2012

    - by Bob Rhubart
    The Top 10 most-viewed items shared on the OTN ArchBeat Facebook page for the week of September 9-15, 2012. 15 Lessons from 15 Years as a Software Architect | Ingo Rammer In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years. Attend OTN Architect Day – by Architects, for Architects – October 25 You won't need 3D glasses to take in these live presentations (8 sessions, two tracks) on Cloud computing, SOA, and engineered systems. And the ticket price is: Zero. Nothing. Absolutely free. Register now for Oracle Technology Network Architect Day in Los Angeles. Thursday October 25, 2012, 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions—inside and outside the enterprise." Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things." Oracle IAM 11g R2 docs are now available "One of the great things about the new doc set is the inclusion of ePub files," says Fusion Middleware A-Team blogger Chris Johnson. "This means that if you have an iPad you can load up the doc library onto that and read the docs on the couch." Setting up a local Yum Server using the Exalogic ZFS Storage Appliance | Donald A concise technical post from the man named Donald. What's New in Oracle VM VirtualBox 4.2? | The Fat Bloke Sings "One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more vm's," says The Fat Bloke. "Some of our users have large libraries of vm's of various vintages, whilst others have groups of vm's that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends." The new VirtualBox release, a year in the making, addresses the needs of these users, he explains. Configuring Oracle Business Intelligence 11g MDS XML Source Control Management with Git Version Control | Christian Screen Oracle ACE Christian Screen developed this tutorial for those interested in learning how to configure the Oracle Business Intelligence 11g (11.1.1.6) metadata repository for development using the new MDS XML source control management functionality. Identity and Access Management at Oracle OpenWorld 2012 | Brian Eidelman Fusion Middleware A-Team blogger Brian Eidelman highlights three Oracle OpenWorld sessions that will put Identity and Access Management in the spotlight, and shares a link to the "Focus On: Identity Management" document, a comprehensive listing of OpenWorld activities also dealing with IM. 
Starting and stopping WebLogic automatically using Upstart | Chris Johnson "In Ubuntu, RedHat and Oracle Linux there's a new flavor of init called Upstart that all the kids are using," says Oracle Fusion Middleware A-Team member Chris Johnson. "It's the new hotness when it comes to making programs into daemons and wiring them to start and stop at appropriate times." Thought for the Day "The purpose of software engineering is to control complexity, not to create it." — Pamela Zave Source: SoftwareQuotes.com

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background One of the more common issues we've been seeing in the field is the growing difficulty of optimizing the performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors, which present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware. On top of this increasing complexity, we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are, in essence, recognizing that current processors will be running a wide mix of workloads: some designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior. We're very aware of these challenges in Solaris, and have been working to provide the best out-of-the-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers both to address issues caused by contention over shared hardware resources and to explicitly take advantage of features such as T4's dynamic threading. What it is The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it compete for resources with all the consumers. In the case of a T4 based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher priority threads first: it will also provide them with more exclusive access to hardware resources if they are available. How does it work? Using the previous example in Solaris 11, all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime. 
If the system is fully committed, the scheduler will put all the available CPUs to work. Best practices If you're an application developer, we encourage you to look into assigning the right priorities for the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-the-box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.

    Read the article

  • Improving the running time of Breadth First Search and Adjacency List creation

    - by user45957
    We are given an array of integers where all elements are between 0 and 9. We have to start from the first position and reach the end in the minimum number of moves, where from an index i we can move one position back or forward (i.e. to i-1 or i+1) or jump to any index having the same value as index i. Time limit: 1 second. Max input size: 100000. I have tried to solve this problem with a single-source shortest path approach using Breadth First Search, and though BFS itself is O(V+E) and runs in time, the adjacency list creation takes O(n^2) time, so the overall complexity becomes O(n^2). Is there any way I can decrease the time complexity of the adjacency list creation, or is there a better and more efficient way of solving the problem? (One common optimization is sketched after the code.)
    #include <iostream>
    #include <string>
    #include <vector>
    #include <queue>
    #include <cstdlib>
    using namespace std;

    int main() {
        vector<int> v;
        string str;
        vector<int> sets[10];               // positions of each digit 0-9
        cin >> str;
        int in;
        for (int i = 0; i < (int)str.length(); i++) {
            in = str[i] - '0';
            v.push_back(in);
            sets[in].push_back(i);
        }
        int n = v.size();
        if (n == 1)         { cout << "0\n"; return 0; }
        if (v[0] == v[n-1]) { cout << "1\n"; return 0; }

        // adjacency list: i-1, i+1 and every non-adjacent index with the same digit
        vector<int> adj[100001];
        for (int i = 0; i < 10; i++) {
            for (int j = 0; j < (int)sets[i].size(); j++) {
                if (sets[i][j] > 0)     adj[sets[i][j]].push_back(sets[i][j] - 1);
                if (sets[i][j] < n - 1) adj[sets[i][j]].push_back(sets[i][j] + 1);
                for (int k = j + 1; k < (int)sets[i].size(); k++) {
                    if (abs(sets[i][j] - sets[i][k]) != 1) {
                        adj[sets[i][j]].push_back(sets[i][k]);
                        adj[sets[i][k]].push_back(sets[i][j]);
                    }
                }
            }
        }

        // standard BFS from index 0
        queue<int> q;
        q.push(0);
        int dist[100001];
        bool visited[100001] = {false};
        dist[0] = 0;
        visited[0] = true;
        while (!q.empty()) {
            int dq = q.front(); q.pop();
            for (int i = 0; i < (int)adj[dq].size(); i++) {
                if (visited[adj[dq][i]] == false) {
                    dist[adj[dq][i]] = dist[dq] + 1;
                    visited[adj[dq][i]] = true;
                    q.push(adj[dq][i]);
                }
            }
        }
        cout << dist[n-1] << "\n";
        return 0;
    }
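    One common way to avoid materializing the O(n^2) same-digit edges at all (sketched here in Python for brevity, with the input passed as a plain string rather than read from cin) is to keep a bucket of positions per digit and empty the bucket the first time any node of that digit is expanded. Every position is then enqueued at most once, so the whole search runs in O(n):
        from collections import deque

        def min_moves(s):
            n = len(s)
            buckets = {}                     # digit -> positions still waiting to be visited
            for i, ch in enumerate(s):
                buckets.setdefault(ch, []).append(i)

            dist = [-1] * n
            dist[0] = 0
            q = deque([0])
            while q:
                u = q.popleft()
                if u == n - 1:
                    return dist[u]
                # neighbours: every same-digit index (taken once, then discarded), plus u-1 and u+1
                for v in buckets.pop(s[u], []) + [u - 1, u + 1]:
                    if 0 <= v < n and dist[v] == -1:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return dist[n - 1]

        print(min_moves("01234567890"))   # 1: first and last digits match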

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 1/ 2

    - by Sanjeevio
    Financial institutions view compliance as a regulatory burden that incurs a high initial capital outlay and recurring costs. By its very nature, regulation takes a prescriptive, common-for-all approach to managing financial and non-financial risk. Needless to say, mere compliance with regulation will no longer lead to sustainable differentiation. Genuine competitive advantage will stem from being able to cope with the innovation demands of the present economic environment while meeting the compliance goals of regulatory mandates in a faster and more cost-efficient manner. Let's first take a look at the key factors that are limiting the pursuit of the above goal. Regulatory requirements are growing, driven in part by revisions to existing mandates in line with the cross-border, pan-geographic nature of financial value chains today, and more so by the frequent systemic failures that have destabilized the financial markets and the global economy over the last decade. In addition to the increase in regulation, financial institutions are faced with pressures of regulatory overlap and regulatory conflict. Regulatory overlap arises primarily from two things: firstly, the blurring of boundaries between lines-of-business with complex organizational structures, and secondly, the varying requirements of jurisdictional directives across geographic boundaries, e.g. a securities firm with operations in the US and EU would be subject to different "Know-Your-Customer" (KYC) requirements under the PATRIOT Act in the US and MiFID in the EU. Another consequence and concomitant of regulatory change is regulatory conflict, which again arises primarily from two things: firstly, the diametrically opposite priorities of lines-of-business, and secondly, the tension that regulatory requirements create between shareholders' interest in tighter due diligence and customers' concerns about privacy. For instance, Customer Due Diligence (CDD) under KYC requires eliciting detailed information from customers to prevent illegal activities such as money laundering, terrorist financing or identity theft. While new customers are still more likely to comply with such stringent background checks at the time of account opening, existing customers baulk at such practices as a breach of trust and privacy. As mentioned earlier, regulatory compliance addresses both financial and non-financial risks. Operational risk is a non-financial risk that stems from business execution and spans people, processes, systems and information. Operational risk arising from financial processes in particular transcends other sources of such risk. Let's look at the factors underpinning the operational risk of financial processes. The rapid pace of innovation and geographic expansion of financial institutions has resulted in the proliferation and ad-hoc evolution of back-office, mid-office and front-office processes. This has had two serious implications for the operational risk of financial processes: · Inconsistency of processes across lines-of-business, customer channels and product/service offerings. This makes it harder for the risk function to enforce a standardized risk methodology and in turn makes breaches harder to detect. · The proliferation of processes, coupled with increasingly frequent change cycles, has resulted in accidental breaches and increased vulnerability to regulatory inadequacies. 
In summary, regulatory growth (including overlap and conflict), coupled with process proliferation and inconsistency, is driving process compliance complexity. In my next post I will address the implications of this process complexity for financial institutions and outline the role of BPM in lowering specific aspects of the operational risk of financial processes.

    Read the article

  • The Healthy Tension That Mobility Creates

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps In my previous post, I talked about the value of the mobile revolution for businesses and workers. Now let me put on a different hat and view the world from the IT department and the IT leader's viewpoint. The IT leader has different concerns – around privacy, the potential liability of information leakage, and intellectual property protection. These concerns and the leader's goals create a healthy tension with the users. For example, effective device management becomes a must-have for the IT leader, especially if you look at the Android ecosystem as an example. There are benefits to the Android strategy, but there are also drawbacks, such as the lack of uniformity – in device management, in operating systems, and in the application taxonomy and capabilities. If you compare Android to iOS, Apple's operating system, iOS is more unified, more streamlined, and easier to manage. In either case, this is where mobile device management in the cloud makes good sense. I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service, and I predict it's going to be key for our customers. A New Focus for IT Departments So where does that leave the IT departments? I think their future is in governance, which is a more strategic play than a tactical one. Device management is tactical and it's the "now" topic. But the mobile phenomenon, if you will, is going to drive significant change in terms of how IT plans, hosts, and deploys enterprise applications. For example, opening up enterprise applications for mobile users presents some challenges unless you deploy more complicated network topologies, such as virtual private networks and threat protection technology. If you really want employees to be mobile you need to remove those kinds of barriers. But I don't think IT departments want to wrestle with exposing their private enterprise data centers and being responsible for hosted business applications – applications that, in a sense, they're making vulnerable to the public world. This opens up a significant need and a significant driver for cloud applications. However, it's not just about taking away the complexity – it's also about taking away the responsibility. Why should every business have to carry the responsibility and figure out all the nuts and bolts of how to protect themselves in this public, mobile world? When you use apps in the cloud, either your vendor or your hosting partner should have figured all that out. They need to assure the business that they are adhering to all sorts of security and compliance regulations so users can be connected and have access to information anywhere, anytime. More Ideas and Better Service What's more interesting is the world of possibilities that the connected, cloud-based world enables. I believe that the one-size-fits-all, uber-best-practices, lowest-common-denominator capabilities will go away. IT will now be able to solve very specific business challenges for the different corporate functions it serves. In this new world, IT will play a key role in enabling different organizations within a company to be best in class and deliver greater value to the line-of-business managers. IT will actually help to differentiate. The net result is a more agile workforce and business, because each department is getting work done its own way.

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked with setting up retail operations for an electronic goods, daily consumables, or luxury brand, etc. It is very likely you will be faced with the following questions: What are the essential business capabilities that you must have in place? What are the essential business activities underpinning each of the business capabilities identified in Step 1? What is the set of steps that you need to perform to execute each of the business activities identified in Step 2? Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short run and in the long run. In the short term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long term, as you grow your operations organically or through M&A, partnerships and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations. "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here) Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so: a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following: Increased Time and Cost, due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch and incurring otherwise avoidable mistakes and errors by ignoring the experience of others; and Sub-optimal Process Efficiency, due to a narrow, isolated view of processes that ignores process inter-dependencies, i.e. optimizing the parts but not the whole, resulting in a lack of transparency and inter-departmental finger-pointing. Embracing ARTS standards as a blueprint for establishing, managing or streamlining your retail operations can benefit you in the following ways: Improved Time-to-Market, from parity with industry best-practice processes such as ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and from lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems; and Lower Operating Costs, by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of from a narrow, silo-ed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization. While parity with industry standards such as the ARTS business process model does not by itself create differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry. My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a good known state. Scripts like this fail for hundreds of reasons -- the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to roll back fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level. Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks -- much like exception handlers -- around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience. Scenario 1: You want to get a system into a good known state; it's already in a good known state; you quickly realise there is nothing to do. Scenario 2: You want to get the system into a good known state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is and why. Scenario 3: You want to get the system into a good known state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it again. Scenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof. I mentioned two technologies in the title -- MSI and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to roll back to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well it will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states. 
The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on Operational guidance as your Configuration remit.What I wanted to say was that I have always been for atomic, transactional-based configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
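    A minimal "make it so" step might look like the sketch below (Python here only because it is compact; the scripts discussed above would more likely be PowerShell or MSI custom actions). The check and apply callables are hypothetical stand-ins for whatever actually inspects and changes the server, and the what_if flag plays the role of -WhatIf:
        def ensure(description, is_satisfied, apply_change, what_if=False):
            """Check the current state first; change it only if needed; report either way."""
            if is_satisfied():
                print("ok:      " + description)
                return False
            if what_if:
                print("would:   " + description)
                return False
            apply_change()
            print("changed: " + description)
            return True

        # Usage (site_exists/create_site are assumed helpers, not real APIs):
        # ensure("website 'XYZ' exists", lambda: site_exists("XYZ"), lambda: create_site("XYZ"), what_if=True)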

    Read the article

  • Faster Trip to Innovation with Simplified Data Integration: Sabre Holdings Case Study

    - by Tanu Sood
    Author: Irem Radzik, Director of Product Marketing, Data Integration, Oracle In today's fast-paced, competitive environment, IT teams are under pressure to deliver technology solutions for many critical business initiatives as fast as possible. When the focus is on speed, it can be easy to continue to use old-style, point-to-point custom scripts that grow organically to the point where they are unmanageable and too costly to maintain. As data volumes, data sources, and end users grow, uncoordinated data integration efforts create significant inefficiencies for both IT and business users. In addition to losing IT productivity due to maintaining spaghetti architecture, data integrity becomes a concern as well. Errors caused by inconsistent data and manual data entry can prove very costly for companies and disrupt business activities. Many industry leaders now recognize that data should be moved in an automated and reliable manner across all platforms to have one version of the truth. By simplifying their data integration architecture and standardizing on a centralized approach, IT teams can accelerate time to market. In particular, using a centralized, shared-service approach brings agility, increases IT productivity, and frees up resources for innovation. One such industry leader that simplified its data integration architecture is Sabre Holdings. Sabre Holdings provides distribution and technology solutions for the travel industry, and is a winner of the Oracle Excellence Awards for Fusion Middleware in 2011 in the data integration category. I had the pleasure of hosting Sabre Holdings on a public webcast to discuss their data integration best practices for data warehousing. In this webcast, Sabre's Amjad Saeed presented how the company reduced complexity by consolidating systems and standardizing development on Oracle Data Integrator and Oracle GoldenGate for its global data warehouse development team. With Oracle's complete real-time data integration solution, Sabre also streamlined support and maintenance operations, achieved a real-time view of the execution of the integration processes, and can manage data warehouse and business intelligence solution performance on demand. By reducing complexity and leveraging timely market insights, the company was able to decrease time to market by 40%. You can now listen to the webcast on demand: Sabre Holdings Case Study: Accelerating Innovation using Oracle Data Integration. I invite you to hear directly from Sabre how to use advanced data integration capabilities to enable accelerated innovation. To learn more about Oracle's data integration offering, you can download our free resources.

    Read the article

  • Dependency injection: How to sell it

    - by Mel
    Let it be known that I am a big fan of dependency injection (DI) and automated testing. I could talk all day about it. Background Recently, our team got a big project that is to be built from scratch. It is a strategic application with complex business requirements. Of course, I wanted it to be nice and clean, which for me meant: maintainable and testable. So I wanted to use DI. Resistance The problem was that in our team, DI is taboo. It has been brought up a few times, but the gods do not approve. But that did not discourage me. My Move This may sound weird, but third-party libraries are usually not approved by our architect team (think: "thou shalt not speak of Unity, Ninject, NHibernate, Moq or NUnit, lest I cut your finger"). So instead of using an established DI container, I wrote an extremely simple container. It basically wires up all your dependencies on startup, injects any dependencies (constructor/property) and disposes of any disposable objects at the end of the web request. It was extremely lightweight and just did what we needed. And then I asked them to review it. The Response Well, to make it short, I was met with heavy resistance. The main argument was, "We don't need to add this layer of complexity to an already complex project." Also, "It's not like we will be plugging in different implementations of components." And "We want to keep it simple; if possible just stuff everything into one assembly. DI is unneeded complexity with no benefit." Finally, My Question How would you handle my situation? I am not good at presenting my ideas, and I would like to know how people would present their argument. Of course, I am assuming that, like me, you prefer to use DI. If you don't agree, please do say why so I can see the other side of the coin. It would be really interesting to see the point of view of someone who disagrees. Update Thank you for everyone's answers. It really puts things into perspective. It's nice to have another set of eyes give you feedback, and fifteen is really awesome! These are really great answers and helped me see the issue from different sides, but I can only choose one answer, so I will just pick the top-voted one. Thanks everyone for taking the time to answer. I have decided that it is probably not the best time to implement DI, and we are not ready for it. Instead, I will concentrate my efforts on making the design testable and attempt to present automated unit testing. I am aware that writing tests is additional overhead, and if it is ever decided that the additional overhead is not worth it, personally I would still see it as a win since the design is still testable. And if testing or DI is ever a choice in the future, the design can easily handle it.
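    For the curious, "an extremely simple container" really can be small. This is not the container from the post (that code isn't shown), just a sketch in Python of the same idea: register factories at startup, resolve constructor dependencies by inspection, and dispose of disposables at the end of a request:
        import inspect

        class Container:
            def __init__(self):
                self._factories = {}
                self._disposables = []

            def register(self, abstraction, factory):
                self._factories[abstraction] = factory

            def resolve(self, abstraction):
                factory = self._factories[abstraction]
                # constructor injection: resolve every annotated parameter recursively
                deps = {name: self.resolve(p.annotation)
                        for name, p in inspect.signature(factory).parameters.items()
                        if p.annotation is not inspect.Parameter.empty}
                instance = factory(**deps)
                if hasattr(instance, "dispose"):    # track disposables for end-of-request cleanup
                    self._disposables.append(instance)
                return instance

            def end_request(self):
                for d in self._disposables:
                    d.dispose()
                self._disposables.clear()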

    Read the article

  • Is this over-abstraction? (And is there a name for it?)

    - by mwhite
    I work on a large Django application that uses CouchDB as a database and couchdbkit for mapping CouchDB documents to objects in Python, similar to Django's default ORM. It has dozens of model classes and a hundred or two CouchDB views. The application allows users to register a "domain", which gives them a unique URL containing the domain name that gives them access to a project whose data has no overlap with the data of other domains. Each document that is part of a domain has its domain property set to that domain's name. As far as relationships between the documents go, all domains are effectively mutually exclusive subsets of the data, except for a few edge cases (some users can be members of more than one domain, and there are some administrative reports that include all domains, etc.). The code is full of explicit references to the domain name, and I'm wondering if it would be worth the added complexity to abstract this out. I'd also like to know if there's a name for the sort of bound property approach I'm taking here. Basically, I have something like this in mind: Before in models.py class User(Document): domain = StringProperty() class Group(Document): domain = StringProperty() name = StringProperty() user_ids = StringListProperty() # method that returns related document set def users(self): return [User.get(id) for id in self.user_ids] # method that queries a couch view optimized for a specific lookup @classmethod def by_name(cls, domain, name): # the view method is provided by couchdbkit and handles # wrapping json CouchDB results as Python objects, and # can take various parameters modifying behavior return cls.view('groups/by_name', key=[domain, name]) # method that creates a related document def get_new_user(self): user = User(domain=self.domain) user.save() self.user_ids.append(user._id) return user in views.py: from models import User, Group # there are tons of views like this, (request, domain, ...) def create_new_user_in_group(request, domain, group_name): group = Group.by_name(domain, group_name)[0] user = User(domain=domain) user.save() group.user_ids.append(user._id) group.save() in group/by_name/map.js: function (doc) { if (doc.doc_type == "Group") { emit([doc.domain, doc.name], null); } } After models.py class DomainDocument(Document): domain = StringProperty() @classmethod def domain_view(cls, *args, **kwargs): kwargs['key'] = [cls.domain.default] + kwargs['key'] return super(DomainDocument, cls).view(*args, **kwargs) @classmethod def get(cls, *args, **kwargs, validate_domain=True): ret = super(DomainDocument, cls).get(*args, **kwargs) if validate_domain and ret.domain != cls.domain.default: raise Exception() return ret def models(self): # a mapping of all models in the application. 
accessing one returns the equivalent of class BoundUser(User): domain = StringProperty(default=self.domain) class User(DomainDocument): pass class Group(DomainDocument): name = StringProperty() user_ids = StringListProperty() def users(self): return [self.models.User.get(id) for id in self.user_ids] @classmethod def by_name(cls, name): return cls.domain_view('groups/by_name', key=[name]) def get_new_user(self): user = self.models.User() user.save() views.py @domain_view # decorator that sets request.models to the same sort of object that is returned by DomainDocument.models and removes the domain argument from the URL router def create_new_user_in_group(request, group_name): group = request.models.Group.by_name(group_name) user = request.models.User() user.save() group.user_ids.append(user._id) group.save() (Might be better to leave the abstraction leaky here in order to avoid having to deal with a couchapp-style //! include of a wrapper for emit that prepends doc.domain to the key or some other similar solution.) function (doc) { if (doc.doc_type == "Group") { emit([doc.name], null); } } Pros and Cons So what are the pros and cons of this? Pros: DRYer prevents you from creating related documents but forgetting to set the domain. prevents you from accidentally writing a django view - couch view execution path that leads to a security breach doesn't prevent you from accessing underlying self.domain and normal Document.view() method potentially gets rid of the need for a lot of sanity checks verifying whether two documents whose domains we expect to be equal are. Cons: adds some complexity hides what's really happening requires no model modules to have classes with the same name, or you would need to add sub-attributes to self.models for modules. However, requiring project-wide unique class names for models should actually be fine because they correspond to the doc_type property couchdbkit uses to decide which class to instantiate them as, which should be unique. removes explicit dependency documentation (from group.models import Group)

    Read the article

  • C++ behavior of for loops vs. while loops

    - by kjh
    As far as I understand, when you write a for-loop similar to this one for (int i = 0; i < SOME_NUM; i++) { if (true) do_something(); else do_something_else(); } The time complexity of this operation is mostly affected by the if (true) statement because the for-loop iterations don't actually involve any comparisons of i to SOME_NUM, the compiler will just essentially run the code inside the for-loop SOME_NUM times. Please correct me if I am wrong. However if this is correct, then how do the following nested for-loops behave? for (int i = 0; i < SOME_NUM; i++) { for (int j = 0; j < i; j++) { do_something(); } } The j in the inner for-loop is now upper bound by i, a value that changes every time the loop restarts. How will the compiler compile this? Do these nested for-loops essentially behave like a for-loop with while-loop inside of it? If you're writing an algorithm that uses nested for-loops where the inner counting variable depends on the outer counting variable should you be concerned about what this will do to the complexity of your algorithm?
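    One way to see the nested case is simply to count: the inner body runs 0 + 1 + ... + (SOME_NUM - 1) = SOME_NUM * (SOME_NUM - 1) / 2 times, which is O(n^2) however the compiler arranges the loop tests. A tiny sketch (in Python, purely illustrative) that confirms the count:
        def iterations(n):
            count = 0
            for i in range(n):
                for _ in range(i):
                    count += 1
            return count

        for n in (10, 100, 1000):
            print(n, iterations(n), n * (n - 1) // 2)   # the two counts always match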

    Read the article

  • Product Catalog Schema design

    - by FlySwat
    I'm building a proof-of-concept schema for a product catalog to possibly replace a very aged and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges). The current catalog schema has each distinct category broken out into individual tables; while this is nicely normalized and performs well, it is fairly difficult to extend. Adding a new attribute to a particular product involves changing the table schema and back-populating old data. An idea I've been toying with is something along the lines of a base set of entity tables in third normal form; these would contain the facts that are common among ALL products. Then, I'd like to build an Entity-Attribute-Value (EAV) schema that allows each entity type to be extended in a flexible way using just data and no schema changes. Finally, I'd like to denormalize this data model into materialized views for each individual entity type. These views are what the application would access. We also have many tables that contain business rules and compatibility rules. These would join against the base entity tables instead of the views. My big concerns here are: Performance - EAV schemas are flexible, but typically perform poorly; should I be concerned? More Performance - Denormalizing using materialized views may have some risks; I'm not positive on this yet. Complexity - While this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult. For those who have designed product catalogs for large-scale enterprises, am I going down the totally wrong path? Is there any good best-practice schema design reading available for product catalogs?
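    To make the EAV plus materialized-view idea concrete, here is a toy sketch in Python (the entity and attribute names are made up) of how attribute rows get pivoted back into the flat, per-entity-type shape a materialized view would expose:
        base_rows = {                         # facts common to ALL products (the 3NF base tables)
            101: {"sku": "WID-1", "name": "Widget"},
            102: {"sku": "SVC-9", "name": "Support Plan"},
        }
        eav_rows = [                          # (entity_id, attribute, value) extension rows
            (101, "color", "red"),
            (101, "weight_kg", "1.2"),
            (102, "billing_cycle", "monthly"),
        ]

        def materialize(entity_id):
            """Denormalize base facts plus EAV attributes into one flat record."""
            record = dict(base_rows[entity_id])
            for eid, attr, value in eav_rows:
                if eid == entity_id:
                    record[attr] = value
            return record

        print(materialize(101))   # {'sku': 'WID-1', 'name': 'Widget', 'color': 'red', 'weight_kg': '1.2'}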

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

    Read the article

  • What type of webapp is the sweet spot for Scala's Lift framework?

    - by ajay
    What kinds of applications are the sweet spot for Scala's Lift web framework? My requirements: Ease of development and maintainability. Ready for production purposes, i.e. a good, active online community, regular patches and updates for security and performance fixes, etc. The framework should survive a few years; I don't want to write an app in a framework for which no updates/patches are available after 1 year. Has good UI templating engines. Interoperation with Java (Scala satisfies this already; just mentioning it here for completeness' sake). Good component-oriented development. Time required to develop should be proportional to the complexity of the web application. Should not be totally configuration-based; I hate it when code gets automatically generated for me and does all sorts of magic under the hood. That is a debugging nightmare. The amount of Lift knowledge required to develop a webapp should be proportional to the complexity of the web application, i.e. I shouldn't have to spend 10+ hours learning Lift just to develop a simple TODO application. (I have knowledge of databases and Scala.) Does Lift satisfy these requirements?

    Read the article

  • haskell: a data structure for storing ascending integers with a very fast lookup

    - by valya
    Hello! (This question is related to my previous question, or rather to my answer to it.) I want to store all cubes of natural numbers in a structure and look up specific integers to see if they are perfect cubes. For example: cubes = map (\x -> x*x*x) [1..] and is_cube n = n == (head $ dropWhile (<n) cubes). It is much faster than calculating the cube root, but it has complexity of O(n^(1/3)) (am I right?). I think using a more complex data structure would be better. For example, in C I could store the length of an already generated array (not a list, for faster indexing) and do a binary search. That would be O(log n), with a lower coefficient than in another answer to that question. The problem is, I can't express it in Haskell (and I don't think I should). Or I could use a hash function (like mod), but I think it would be much more memory-consuming to have several lists (or a list of lists), and it won't lower the complexity of lookup (still O(n^(1/3))), only the coefficient. I thought about some kind of tree, but I have no clever ideas (sadly, I've never studied CS); I think the fact that all integers are ascending would make my tree ill-balanced for lookups. And I'm pretty sure this fact about ascending integers can be a great advantage for lookups, but I don't know how to use it properly (see my first solution, which I can't express in Haskell).
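
    One way to get the O(log n) lookup described above directly in Haskell, without an explicit array, is to binary-search for the integer cube root; a minimal sketch (not the asker's code):

        -- Minimal sketch: binary-search for the integer cube root; O(log n)
        -- per query and no precomputed structure to store.
        isCube :: Integer -> Bool
        isCube n = go 0 n
          where
            go lo hi
              | lo > hi   = False
              | otherwise =
                  let mid = (lo + hi) `div` 2
                      c   = mid * mid * mid
                  in case compare c n of
                       LT -> go (mid + 1) hi
                       EQ -> True
                       GT -> go lo (mid - 1)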

    Read the article

  • Data Access Layer, Best Practices

    - by labratmatt
    I'm looking for input on the best way to refactor the data access layer (DAL) in my PHP-based web app. I follow an MVC pattern: PHP/HTML/CSS/etc. views on the front end, PHP controllers/services in the middle, and a PHP DAL sitting on top of a relational database in the model. Pretty standard stuff. Things are working fine, but my DAL is getting large (code smell?) and becoming a bit unwieldy. My DAL contains almost all of the logic to interface with my database and is full of functions that look like this: function getUser($user_id) { $statement = "select id, name from users where user_id=:user_id"; /* PDO builds the statement and fetches the results as an array */ return $array_of_results_generated_by_PDO_fetch_method; } Notes: The logic in my controllers only interacts with the model through functions like the one above in the DAL. I am not using a framework (I'm of the opinion that PHP is a templating language and there's no need to inject complexity via a framework). I generally use PHP as a procedural language and tend to shy away from its OOP approach (I enjoy OOP development but prefer to keep that complexity out of PHP). What approaches have you taken when your DAL has reached this point? Do I have a fundamental design problem? Do I simply need to chop my DAL into a number of smaller files (logically divide it)? Thanks.
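
    A minimal sketch of one common split (hypothetical file and function names, procedural style kept): a shared PDO helper plus thin per-entity files such as user_dal.php and order_dal.php:

        <?php
        // db.php (hypothetical): one shared helper every DAL file reuses.
        function db_fetch_all($pdo, $sql, $params = array()) {
            $stmt = $pdo->prepare($sql);
            $stmt->execute($params);
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }

        // user_dal.php (hypothetical): entity-specific wrappers stay thin.
        function getUser($pdo, $user_id) {
            $rows = db_fetch_all($pdo,
                "select id, name from users where user_id = :user_id",
                array(':user_id' => (int) $user_id));
            return isset($rows[0]) ? $rows[0] : null;
        }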

    Read the article

  • The fastest way to iterate through a collection of objects

    - by Trev
    Hello all, first some background: I have some research code which performs a Monte Carlo simulation. Essentially, I iterate through a collection of objects, compute a number of vectors from their surfaces, and then for each vector I iterate through the collection of objects again to see if the vector hits another object (similar to ray tracing). The pseudocode looks something like this: for each object { for a number of vectors { do some computations; for each object { check if vector intersects } } } As the number of objects can be quite large and the number of rays is even larger, I thought it would be wise to optimise how I iterate through the collection of objects. I created some test code which tests arrays, lists and vectors, and for my first test cases found that vector iterators were around twice as fast as arrays; however, when I used a vector in my real code it was somewhat slower than the array I was using before. So I went back to the test code and increased the complexity of the function each loop was calling (a dummy function equivalent to 'check if vector intersects'), and I found that as the complexity of the function increases, the execution-time gap between arrays and vectors shrinks until eventually the array was quicker. Does anyone know why this occurs? It seems strange that execution time inside the loop should affect the outer loop's run time.
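
    A minimal, self-contained benchmark sketch (not the asker's code) with the same loop shape; swapping the std::vector for a plain array and varying the inner work is one way to observe the gap narrowing:

        // Minimal benchmark sketch: time a nested loop over a std::vector with a
        // tunable amount of inner work; swap the vector for a plain array to compare.
        #include <chrono>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        volatile double sink = 0.0;              // keeps the optimiser from removing the work

        // Stand-in for "check if vector intersects"; 'work' controls its cost.
        double fakeIntersect(double x, int work)
        {
            double r = x;
            for (int k = 0; k < work; ++k) r = r * 1.0000001 + 0.5;
            return r;
        }

        int main()
        {
            const std::size_t n = 5000;
            const int work = 10;                 // raise this to mimic a heavier intersection test
            std::vector<double> objects(n, 1.0);

            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t i = 0; i < n; ++i)
                for (std::size_t j = 0; j < n; ++j)
                    sink = fakeIntersect(objects[j], work);
            auto t1 = std::chrono::steady_clock::now();

            std::cout << std::chrono::duration<double>(t1 - t0).count() << " s\n";
        }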

    Read the article

  • How can I cluster short messages [Tweets] based on topic ? [Topic Based Clustering]

    - by Jagira
    Hello, I am planning an application which will make clusters of short messages/tweets based on topics. The number of topics will be limited, like Sports [ NBA, NFL, Cricket, Soccer ], Entertainment [ movies, music ] and so on... I can think of two approaches to this: (1) Ask users to tag messages, like Stack Overflow does. Users can select tags from a predefined list of tags; then on the server side I will cluster the messages based on tags. Pros:- Simple design. Less complexity in code. Cons:- Choices for users will be restricted. Clusters will not be dynamic. If a new event occurs, the predefined tags will miss it. (2) Take the message, delete the stopwords [ predefined in a dictionary ] and apply some clustering algorithm to make a cluster, and depending on its popularity, display the cluster. The cluster will be maintained according to its sustained popularity. New messages will be skimmed and assigned to corresponding clusters. Pros:- Dynamic clustering based on the popularity of the event/accident. Cons:- Increased complexity. More server resources required. I would like to know whether there are any other approaches to this problem, or whether there are ways of improving the above-mentioned methods. Also, please suggest some good clustering algorithms. I think the "K-Nearest Clustering" algorithm is apt for this situation.
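
    A minimal sketch of the second approach (hypothetical data; Python with scikit-learn assumed): stop-word removal and TF-IDF vectorisation followed by k-means clustering:

        # Hypothetical data; scikit-learn assumed available.
        # Approach 2: drop stopwords, vectorise with TF-IDF, then cluster with k-means.
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer

        tweets = [
            "Lakers win the NBA finals",
            "New trailer for the summer blockbuster released",
            "India beat Australia in the cricket test match",
        ]

        vectorizer = TfidfVectorizer(stop_words="english")  # predefined stopword list
        X = vectorizer.fit_transform(tweets)

        kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
        labels = kmeans.fit_predict(X)                      # cluster id per tweet
        print(labels)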

    Read the article

  • Using virtualization infrastructure for J2EE application distribution- viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example: the latest app server patch not applied; conflicting third-party libraries inside the app server root; server runtime and tuning parameters not configured (for example, the number of connections in the database pool). We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as with using virtualization infrastructure in general, namely better use of hardware resources. For us, it reduces the entropy of the hosting infrastructure: a distributed VM should be less affected by the hosting environment. So far, the main downside I see is application server licensing; clients will have to use dedicated servers for our solution, but this is generally already done that way. Also, there is complexity in maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Won't we just replace one type of complexity with another?

    Read the article

  • Pointing to array element

    - by regular
    What I'm trying to achieve: say I have an array, I want to be able to modify a specific array element throughout my code by pointing at it. For example, in C++ I can do this: int main() { int arr[5] = {1,2,3,4,5}; int *c = &arr[3]; cout << arr[3] << endl; *c = 0; cout << arr[3] << endl; } I did some googling and there seems to be a way to do it through 'unsafe', but I don't really want to go that route. I guess I could create a variable to store the index, but I'm actually dealing with slightly more complexity (a list within a list, so having two index variables seems to add complexity to the code). C# has a databinding class, so what I'm currently doing is binding the array element to a textbox (that I have hidden) and modifying that textbox whenever I want to modify the specific array element, but that's not a good solution either (since I have a textbox that's not being used for its intended purpose, which is a bit misleading).
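
    One safe alternative in C# is to capture the element's read and write in a pair of delegates; a minimal sketch (the Ref<T> helper is hypothetical, not a framework class):

        // A hypothetical Ref<T> helper: read/write one array slot through delegates,
        // no 'unsafe' pointers needed.
        using System;

        class Ref<T>
        {
            private readonly Func<T> _get;
            private readonly Action<T> _set;

            public Ref(Func<T> get, Action<T> set) { _get = get; _set = set; }

            public T Value
            {
                get { return _get(); }
                set { _set(value); }
            }
        }

        class Program
        {
            static void Main()
            {
                int[] arr = { 1, 2, 3, 4, 5 };
                var c = new Ref<int>(() => arr[3], v => arr[3] = v);

                Console.WriteLine(arr[3]); // 4
                c.Value = 0;
                Console.WriteLine(arr[3]); // 0
            }
        }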

    Read the article

  • When to use a foreign key in MySQL

    - by Mel
    Is there official guidance, or a threshold, to indicate when it is best practice to use a foreign key in a MySQL database? Suppose you created a table for movies. One way to do it is to put the producer and director data into the same table (movieID, movieName, directorName, producerName). However, suppose most directors and producers have worked on many movies. Would it be best to create two other tables for producers and directors, and use foreign keys in the movies table? When does it become best practice to do this? When many of the directors and producers appear several times in the column? Or is it best practice to employ a foreign-key approach from the start? While it seems more efficient to use a foreign key, it also raises the complexity of the database. So when does the trade-off between complexity and normalization become worth it? I'm not sure if there is a threshold, or a certain number of repeated values, that makes it more sensible to use a foreign key. I'm thinking about a database that will be used by hundreds of users, many concurrently. Many thanks!
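
    A minimal sketch of the normalized layout being discussed (hypothetical names; InnoDB assumed, since MyISAM does not enforce foreign key constraints):

        -- Hypothetical names; InnoDB so the foreign keys are actually enforced.
        CREATE TABLE directors (
          director_id INT AUTO_INCREMENT PRIMARY KEY,
          name        VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE producers (
          producer_id INT AUTO_INCREMENT PRIMARY KEY,
          name        VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE movies (
          movie_id    INT AUTO_INCREMENT PRIMARY KEY,
          title       VARCHAR(255) NOT NULL,
          director_id INT NOT NULL,
          producer_id INT NOT NULL,
          FOREIGN KEY (director_id) REFERENCES directors (director_id),
          FOREIGN KEY (producer_id) REFERENCES producers (producer_id)
        ) ENGINE=InnoDB;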

    Read the article
