Search Results

Search found 5521 results on 221 pages for 'deeper understanding'.

Page 27/221 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • When should I use Yield in c#?

    - by Steve
    I have a vague understanding of the yield keyword in C#, but I haven't yet seen the need to use it in my code. This probably comes from a lack of understanding of it. So: what are some typical good usages of yield?
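    For illustration, here is a minimal sketch (class and method names are made up) of the most common use of yield: producing a sequence lazily, so items are only computed as the caller iterates.

        using System;
        using System.Collections.Generic;

        class YieldDemo
        {
            // Returns an IEnumerable without ever building a full list;
            // execution pauses at each "yield return" until the next item is requested.
            static IEnumerable<int> Squares(int count)
            {
                for (int i = 1; i <= count; i++)
                {
                    yield return i * i;
                }
            }

            static void Main()
            {
                // Only the first few squares are ever computed, because iteration stops early.
                foreach (int sq in Squares(1000000))
                {
                    if (sq > 25) break;
                    Console.WriteLine(sq);
                }
            }
        }

    The same lazy behaviour is what lets you stream large or even unbounded sequences without holding them all in memory.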

    Read the article

  • fetchBatchSize to be same as fetchLimit

    - by user1730622
    What does it mean to set fetchBatchSize to the same value as fetchLimit, say both are set to 5? My understanding is that, with the fetchLimit, only 5 records will be in the fetch result set; and additionally, with the fetchBatchSize, only the ids/identities of the records will be read into memory, and the full records won't be retrieved until they are accessed. Is that a correct understanding?

    Read the article

  • Can you suggest some UI related flex 3 interview questions for a senior pos?

    - by mohan talluri
    Our company is looking for a senior Flex developer. The interview process also covers customized UI understanding and implementation. I am the Usability & Design Lead for the same product team and have some understanding of Flex 3, but I am not sure whether pure UI/usability questions can be answered by a Flex developer. So can you suggest some UI-related questions to see if he/she has the competency to take a prototype (HTML/mockups) and build the same UI in Flex?

    Read the article

  • A nuts and bolts reference to C# performance and memory use

    - by phil
    I wonder if anyone could point me in the direction where I can read about the nuts and bolts of C#. What I'm interested in learning are method call costs, what it costs to create objects, and such. My aim in learning this is to get a better understanding of how to increase the performance of an application and a better understanding of how the C# language works. The reference should preferably be a book, a book that I can read cover to cover.
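    While a book will give the depth asked for here, a rough-and-ready way to explore such costs is a hand-rolled micro-benchmark. The sketch below (names and loop counts are arbitrary) times heap allocation of a small class against a struct using Stopwatch; treat the numbers as indicative only, since JIT warm-up and garbage collection can skew them.

        using System;
        using System.Diagnostics;

        class AllocationCost
        {
            class PointClass { public int X; }    // reference type: allocated on the heap, collected by the GC
            struct PointStruct { public int X; }  // value type: no heap allocation here

            static void Main()
            {
                const int n = 10000000;
                long sum = 0;   // consume the results so the JIT cannot discard the loops

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < n; i++) { var p = new PointClass { X = i }; sum += p.X; }
                sw.Stop();
                Console.WriteLine("class instances:  {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < n; i++) { var p = new PointStruct { X = i }; sum += p.X; }
                sw.Stop();
                Console.WriteLine("struct instances: {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);
            }
        }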

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is written based on SafePeak, which is available for free download. Today, I'd like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. SafePeak's software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I'd like to look more closely at some of these options, as some of these capabilities could help you address lagging database and application performance on your systems. To better understand the available options, it is best to start by understanding the difference between the usual "Basic Caching" and SafePeak's "Dynamic Caching".

    Basic Caching: Basic Caching (or the stale and static cache) is the ability to put the results from a query into cache for a certain period of time. It is based on TTL, or Time-to-live, and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data. In other words, Basic Caching is really a static / stale cache. As you can tell, this approach has its limitations.

    Dynamic Caching: Dynamic Caching (or the non-stale cache) is the ability to put the results from a query into cache while keeping the cache transaction-aware, watching for possible data modifications. The modifications can come as a result of DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, or complex cases of stored procedures with multiple levels of internal stored-procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) becomes the actual data update commands.

    Now that we have a basic understanding of the differences between "basic" and "dynamic" caching, let's dive in deeper.

    SafePeak: A comprehensive and versatile caching platform. SafePeak comes with a wide range of caching options. Some of SafePeak's caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and Data managers to reach excellent performance acceleration and application scalability for a wide range of business cases and applications.

    Automated caching of SQL Queries: Fully or semi-automated caching of all "read" SQL queries, containing any types of data, including Blobs, XMLs and Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries and categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures.

    Automated caching of Stored Procedures: Fully or semi-automated caching of all "read" stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak's Metadata-Learning process and their SQL schemas are parsed, resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures). This enables automated or semi-automated (manually review and activate by a mouse-click) cache activation, with full understanding of the transaction logic for real-time cache invalidation.

    Transaction-aware cache: Automated cache awareness for SQL transactions (SQL and in-procs).

    Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response-time value even in the most complicated use-cases.

    Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as "read and deterministic" are automatically activated for caching.

    Semi-Automated Caching: SQL Patterns categorized as "read and non-deterministic" are patterns of SQL queries and stored procedures that contain references to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator, and usually most of them are activated manually for caching (point-and-click activation).

    Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of "write" commands (a DML or a stored procedure) to one of the relevant tables. This is the default setting.

    Semi-Dynamic Caching: A manual cache configuration option for reducing the sensitivity of specific SQL Patterns to "write" commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archived order details), or when the application's sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load).

    Scheduled Cache Eviction: A manual cache configuration option for scheduling SQL Pattern cache eviction at certain time(s) during the day. A very useful optimization technique when (for example) certain SQL Patterns can be cached but are time sensitive. Example: "select customers whose birthday is today", an SQL with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight).

    Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are marked as "Dynamic Objects" with the highest transaction safety settings (such as full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration in a deployment.

    Overriding Settings of Stored Procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, if a stored procedure SP1 has an "insert" into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a "logging or instrumentation" table left by developers. By overriding the settings a user can allow caching of the problematic stored procedure.

    Advanced Cache Warm-Up: Creating an XML-based list of queries and stored procedures (with lists of parameters) for periodic automated pre-fetching and caching. An advanced tool for handling rarer but very performance-sensitive queries, pre-fetching them into cache to give users high-performance data access.

    Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL Analytics and Performance Monitoring. Reduce troubleshooting from days to minutes with a heat-map of database objects and SQL Patterns. The performance-driven configuration helps you focus on the most important settings, the ones that bring you the highest performance gains. SafePeak SQL Analytics allows continuous performance monitoring and analysis, and easy identification of bottlenecks in both real-time and historical data.

    Cloud Ready: Available for instant deployment on Amazon Web Services (AWS).

    As you can see, there are many options for configuring SafePeak's SQL Server database and application acceleration caching technology to best fit a lot of situations. If you're not familiar with their technology, they offer free-trial software you can download that comes with a free "help session" to help get you started. You can access the free trial here. Also, SafePeak is available to use on Amazon Cloud.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
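    To make the "basic vs. dynamic" distinction above concrete, here is a toy sketch of the two ideas - emphatically not SafePeak's implementation, just an illustration with invented names: a TTL-only cache keeps serving a result until it expires, while a dependency-aware cache also evicts entries as soon as a write touches one of the tables they depend on.

        using System;
        using System.Collections.Generic;

        class QueryCache
        {
            class Entry
            {
                public object Result;
                public DateTime Expires;          // basic/TTL part
                public HashSet<string> Tables;    // dynamic part: tables this result depends on
            }

            readonly Dictionary<string, Entry> cache = new Dictionary<string, Entry>();

            public void Put(string sql, object result, TimeSpan ttl, params string[] tables)
            {
                cache[sql] = new Entry
                {
                    Result = result,
                    Expires = DateTime.UtcNow + ttl,
                    Tables = new HashSet<string>(tables)
                };
            }

            public bool TryGet(string sql, out object result)
            {
                Entry e;
                if (cache.TryGetValue(sql, out e) && e.Expires > DateTime.UtcNow)
                {
                    result = e.Result;   // served from cache; in a TTL-only world this may be stale
                    return true;
                }
                result = null;
                return false;
            }

            // "Dynamic caching": call this on every insert/update/delete so dependent results vanish immediately.
            public void InvalidateTable(string table)
            {
                var stale = new List<string>();
                foreach (var kv in cache)
                    if (kv.Value.Tables.Contains(table)) stale.Add(kv.Key);
                foreach (var key in stale) cache.Remove(key);
            }
        }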

    Read the article

  • Making user input/math on data fast, unlike Excel-type programs

    - by proGrammar
    I'm creating a research platform solely for myself to do some research on data. Programs like Excel are terribly slow for me, so I'm trying to come up with another solution. Originally I used Excel: A1 was the cell that contained the data, and all other cells in use calculated something on A1, or on other cells that could all ultimately be traced back to A1. A1 was like an element of an array, and I then incremented it to go through all my data. This was way too slow. So the only other option I found originally was to hand-code the calculations in C# inside a loop, then simply recompile each time I changed my math. This was terribly slow to do, and I had to order everything correctly so things would update correctly (dependencies). I could have also used events, but hand-coding events for each cell-like calculation would also be very slow. Next I created an application to read Excel and to imitate it perfectly, which is what I now use. Basically I write formulas onto a fraction of my data to get live results inside Excel. Then my program reads Excel, writes another C# program, compiles it, and runs that program, which runs my Excel-created formulas through a lot more data a whole lot faster. The advantage is that my application dependency-sorts everything (or I could use events) so I don't have to, like Excel does - and of course the speed. But now it's not a single application anymore. Instead it's two applications: one that only reads my formulas and writes another program, and the result, which only lives for a short while before I do other runs through my data with different formulas / settings. So I can't see multiple results at one time without introducing even more programs, like a database, or at least having the two applications talk to each other. My idea was to have a DLL that would be written, compiled, loaded, and unloaded again and again - a self-updating program, sort of. But apparently that's not possible without another AppDomain, which means data has to be marshaled to move between the AppDomains. That would slow things down - not for summaries, but for other stuff I need to do with all my data. I'm also forgetting to mention a huge problem with restarting an application again and again, which is having to reload ALL my data into memory again and again. But it's still a whole lot faster than Excel. I'm really super puzzled as to what people do when they want to research data fast. I'm completely unable to have a program accept user input and have it be fast. My understanding is that it would have to do things like Excel does, which is to evaluate strings again and again. So my only option is to repeatedly compile applications. Do I have a correct understanding of computer science? I've only just begun programming, and didn't think I would have to learn much to do some simple math on data. My understanding is it's either compiling my user-defined stuff into a program, or evaluating it from a string (or something stupid) again and again. And my only option is probably to switch operating systems or something to be able to have a program compile and run itself without stopping (writing/compiling a DLL, loading the DLL into the program, unloading, and repeating). Can someone give me some idea of how computers work? Is anything better possible? Like a running program that can accept user input, compile it, and then unload it later? I mean, heck, operating systems don't need to be RESTARTED with every change to user input. What is this, the cave man days?
    Sorry, it's just so super frustrating not knowing what one can and can't do. If only I could understand and learn this stuff fast enough.
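    One route that avoids both string re-evaluation and writing temporary programs to disk is to build the user's formula as an expression tree and compile it to a delegate inside the running process. The sketch below is only a hand-assembled example (a real solution would still need a small parser to turn the user's text into the tree, or could use CodeDom/Roslyn to compile user-written C# in-process):

        using System;
        using System.Linq.Expressions;

        class RuntimeFormula
        {
            static void Main()
            {
                // Build the formula "x * 2 + 1" at runtime, then compile it to a real delegate -
                // no restart, no temporary executable, and it runs at compiled speed.
                ParameterExpression x = Expression.Parameter(typeof(double), "x");
                Expression body = Expression.Add(
                    Expression.Multiply(x, Expression.Constant(2.0)),
                    Expression.Constant(1.0));
                Func<double, double> formula =
                    Expression.Lambda<Func<double, double>>(body, x).Compile();

                // Apply the compiled formula to the data set.
                double[] data = { 1.0, 2.5, 7.0 };
                foreach (double value in data)
                    Console.WriteLine(formula(value));
            }
        }

    Because the delegate lives in the current process, it can be replaced with a newly compiled one whenever the user edits a formula, and the data already loaded in memory never has to be reloaded or marshaled anywhere.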

    Read the article

  • How would you describe the Observer pattern in beginner language?

    - by Sheldon
    Currently, my level of understanding is below all the coding examples on the web about the Observer pattern. I understand it simply as being almost a subscription: delegates register, and everything that registered gets updated when a change is made. However, I'm very shaky in my true comprehension of its benefits and uses. I've done some googling, but most of what I found is above my level of understanding. I'm trying to implement this pattern in my current homework assignment, and to truly make sense of my project I need a better understanding of the pattern itself, and perhaps an example to see what its use is. I don't want to force this pattern into something just to submit it; I need to understand the purpose and develop my methods accordingly so that it actually serves a good purpose. My text doesn't really go into it, just mentions it in one sentence. MSDN was hard for me to understand, as I'm a beginner on this, and it seems more of an advanced topic. How would you describe the Observer pattern and its uses in C# to a beginner? For an example, please keep the code very simple so I can understand the purpose rather than a complex code snippet. I'm trying to use it effectively with some simple textbox string manipulations and delegates for my assignment, so a pointer would help!
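    As a starting point, here is about the smallest observer example that still matches the textbox/delegate scenario mentioned in the question (all names are invented): the subject exposes an event, observers subscribe to it, and the subject notifies every subscriber when its state changes without knowing who they are.

        using System;

        class TextBoxModel                              // the subject being observed
        {
            public event Action<string> TextChanged;    // the subscription list

            string text;
            public string Text
            {
                get { return text; }
                set
                {
                    text = value;
                    if (TextChanged != null) TextChanged(text);   // notify every observer
                }
            }
        }

        class Program
        {
            static void Main()
            {
                var box = new TextBoxModel();

                // Two independent observers subscribe; neither knows about the other.
                box.TextChanged += t => Console.WriteLine("Upper:  " + t.ToUpper());
                box.TextChanged += t => Console.WriteLine("Length: " + t.Length);

                box.Text = "hello observers";   // one change, both observers react
            }
        }

    The benefit is the decoupling: TextBoxModel never has to be edited when a new observer is added or removed.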

    Read the article

  • Properties vs. Fields: Need help grasping the uses of Properties over Fields.

    - by pghtech
    First off, I have read through a list of postings on this topic and I don't feel I have grasped properties, because of what I had come to understand about encapsulation and field modifiers (private, public, etc.). One of the main aspects of C# that I have come to learn is the importance of data protection within your code through the use of encapsulation. I 'thought' I understood that to be because of the access modifiers (private, public, internal, protected). However, after learning about properties I am somewhat torn in my understanding not only of properties' uses, but of the overall importance/ability of data protection (what I understood as encapsulation) within C#. To be more specific, everything I have read when I got to properties in C# is that you should try to use them in place of fields when you can, because: 1) they allow you to change the data type, which you can't do when accessing the field directly; 2) they add a level of protection to data access. However, from what I 'thought' I had come to know, the field modifiers already did #2, so it seemed to me that properties just generate additional code unless you have some reason to change the type (#1) - because you are (more or less) creating hidden methods to access fields as opposed to accessing them directly. Then there is the fact that modifiers can also be applied to properties, which further complicates my understanding of the need for properties to access data. I have read a number of chapters from different writers on "properties" and none have really given a good understanding of properties vs. fields vs. encapsulation (and good programming methods). Can someone explain: 1) why I would want to use properties instead of fields (especially when it appears I am just adding additional code); 2) any tips on recognizing the use of properties and not seeing them as simply methods (with the exception of the get/set being apparent) when tracing other people's code; 3) any general rules of thumb when it comes to good programming methods in relation to when to use what? Thanks, and sorry for the long post - I didn't want to just ask a question that has been asked 100x without explaining why I am asking it again.
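    A small, invented example of point 2 above - the kind of protection a property can add that a public field cannot:

        using System;

        class BankAccount
        {
            // A public field: any caller can write any value, including nonsense.
            public decimal BalanceField;

            // A property: same call syntax for the caller, but the setter can validate
            // (or log, raise events, change its internal representation later) without
            // breaking any calling code.
            decimal balance;
            public decimal Balance
            {
                get { return balance; }
                set
                {
                    if (value < 0)
                        throw new ArgumentOutOfRangeException("value", "Balance cannot be negative");
                    balance = value;
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                var acct = new BankAccount();
                acct.BalanceField = -100;   // compiles and runs: nothing guards the field
                acct.Balance = 100;         // fine
                // acct.Balance = -100;     // would throw: the property enforces the rule
                Console.WriteLine(acct.Balance);
            }
        }

    Another practical reason: you can start with a simple property today and add validation tomorrow without changing callers, whereas switching a public field to a property later is a breaking change for already-compiled consumers.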

    Read the article

  • Is there really such a thing as "being good at math"?

    - by thezhaba
    Aside from gifted individuals able to perform complex calculations in their head, I'm wondering if proficiency in mathematics, namely calculus and algebra, really has anything to do with one's natural inclination towards the sciences, if you can put it that way. A number of students in my calculus course pick up material in seemingly no time, whereas I, personally, have to spend time thinking about and understanding most concepts. Even then, if a question that requires a bit more 'imagination' comes up I don't always recognize the concepts behind it, as is the case with calculus proofs, for instance. Nevertheless, I refuse to believe that I'm simply not made for it. I do very well in programming and software engineering courses where a lot of students struggle. At first I could not grasp what they found to be so difficult, but eventually I realized that having previous programming experience is a great asset -- once I had seen and made practical use of the programming concepts, learning about them in depth in an academic setting became much easier, as I had already seen their use "in the wild". I suppose I'm hoping that something similar happens with mathematics -- perhaps once the practical idea behind a concept (which authors of textbooks sure do a great job of concealing...) is evident, understanding the seemingly dry and symbolic ideas and proofs will become more obvious? I'm really not sure. All I'm sure of is that I'd like to get better at calculus, but I don't yet understand why some of us pick it up easily while others have to spend considerable amounts of time on it and still not have a complete understanding when an unusual problem is given.

    Read the article

  • Book Review: Programming Windows Identity Foundation

    - by DigiMortal
    Programming Windows Identity Foundation by Vittorio Bertocci is right now the only serious book about Windows Identity Foundation available. I started using Windows Identity Foundation when I made my first experiments on Windows Azure AppFabric Access Control Service. I wanted to generalize the way people authenticate themselves to my systems, and AppFabric ACS seemed like a good place to start. My first steps trying to get things to work opened the door to a whole new authentication world for me. As I went through different blog postings and articles to get more information, I discovered that the thing I was trying to use was the one I had been looking for. Having found the best security API for .NET, I wanted to know more about it, and this is how I found Programming Windows Identity Foundation.

    What's inside? Programming WIF focuses on the architecture, design and implementation of WIF. I think Vittorio is very good at teaching people, because you find no overly complex topics in the book. You learn more and more as you read, and as a bonus you will find that you can also try out your new knowledge of WIF immediately. After giving a good overview of WIF, the author moves on and introduces how to use WIF in ASP.NET applications. You will get a complete picture of how WIF integrates into the ASP.NET request processing pipeline and how you can control the process yourself. There are two chapters about ASP.NET. The first one is more like an introduction, and the second one goes deeper and deeper until you have a very good idea of how to use ASP.NET and WIF together, what issues you may face, and how you can configure and extend WIF. The other two chapters cover using WIF with Windows Communication Foundation (WCF) and Windows Azure. The WCF chapter expects that you know WCF very well; this is not an introductory chapter for beginners, and it is heavy reading if you are not familiar with WCF. The chapter about Windows Azure describes how to use WIF in cloud applications. The last chapter talks about some future developments of WIF and describes some problems and their solutions. The most interesting part of this chapter is the section about Silverlight.

    Who should read this book? Programming WIF is targeted at developers. It does not matter whether you are a beginner or an old bullet-proof professional - every developer should be able to read this book with no difficulties. I don't recommend this book to administrators and project managers, because they will find almost nothing related to their work. I strongly recommend this book to all developers who are interested in modern authentication methods on the Microsoft platform. The book is written so well that I almost forgot everything around me while I was reading it. All additional tools you need are free. There is also an Azure AppFabric ACS test version available, and you can try it out for free.

    Table of contents:
    Foreword
    Acknowledgments
    Introduction
    Part I: Windows Identity Foundation for Everybody
    1 Claims-Based Identity
    2 Core ASP.NET Programming
    Part II: Windows Identity Foundation for Identity Developers
    3 WIF Processing Pipeline in ASP.NET
    4 Advanced ASP.NET Programming
    5 WIF and WCF
    6 WIF and Windows Azure
    7 The Road Ahead
    Index

    Read the article

  • Making the most of next week's SharePoint 2010 developer training

    - by Eric Nelson
    [you can still register if you are free on the afternoons of the 9th to 11th - UK time] We have 50+ registrations with more coming in - which is fantastic. Please read on to make the most of the training.

    Background: We have structured the training to make sure that you can still learn lots during the three days even if you do not have SharePoint 2010 installed. Additionally, the course is based around a subset of the Channel 9 training to allow you to easily dig deeper or look again at specific areas. Which means if you have zero time between now and next Wednesday, then you are still good to go. But if you can do some pre-work you will likely get even more out of the three days.

    Step 1: Check out the topics and resources available on-demand. The course is based around a subset of the Channel 9 training to allow you to easily dig deeper or look again at specific areas. Take a lap around the SharePoint 2010 Training Course on Channel 9. Download the SharePoint Developer Training Kit.

    Step 2: Use a pre-configured Virtual Machine which you can download (best start today - it is large!). Consider using the VM we created if you don't have access to SharePoint 2010. You will need a 64-bit host OS and a bare minimum of 4GB of RAM; 8GB is recommended. Virtual PC cannot be used with this VM - Virtual PC only supports 32-bit guests. The 2010-7a Information Worker VM gives you everything you need to develop for SharePoint 2010. Watch the video on how to use this VM, then download the VM. Remember you only need to download the "parts" for the 2010-7a VM. There are 3 subtly different ways of using this VM: Easiest is to follow the advice of the video, get yourself a host OS of Windows Server 2008 R2 with Hyper-V, and simply use the VM. Alternatively you can take the VHD and create a "Boot to VHD" if you have Windows 7 Ultimate or Enterprise Edition. This works really well - especially if you are already familiar with "Boot to VHD" (this post I did will help you get started). Or you can take the VHD and use an alternative VM tool such as VirtualBox if you have a different host OS. NB: This tends to involve some work to get everything running fine. Check out parts 1 to 3 from Rolly, and if you go with VirtualBox use an IDE controller, not SATA - SATA will blue screen. Note in the screenshot below I also converted the VHD to a VMDK. I used the free StarWind Converter to do this whilst I was fighting blue screens - not sure it's necessary, as VirtualBox does now work with VHDs.

    Or, Step 3: Install SharePoint 2010 on a 64-bit Windows 7 or Vista host. I haven't tried this but it is now supported. Check out MSDN.

    Final notes: I am in the process of securing a number of hosted VMs for ISVs directly managed by my team. Your Architect Evangelist will have details once I have them! Else we can sort it out on the Wednesday. Regrettably I am unable to give folks 1:1 support on any issues around Boot to VHD, 3rd party VM products etc. Related Links: Check you are fully plugged into the work of my team - have you done these simple steps, including joining our new LinkedIn group?

    Read the article

  • Proving What You are Worth

    - by Ted Henson
    Here is a challenge for everyone. Just about everyone has been asked to provide or calculate the Return on Investment (ROI), so I will assume everyone has a method they use. The problem with stopping once you have an ROI is that those in the C-Suite probably do not care about the ROI as much as Return on Equity (ROE). Shareholders are mostly concerned with their return on the money they invested. Warren Buffett looks at ROE when deciding whether to make a deal or not. This article will outline how you can add more meaning to your ROI and show how you can potentially enhance the ROE of the company.

    First I want to start with the base definitions I am using for ROI and ROE. Return on investment (ROI) and return on equity (ROE) are ways to measure management effectiveness, parts of a system of measures that also includes profit margins for profitability, price-to-earnings ratio for valuation, and various debt-to-equity ratios for financial strength. Without a set of evaluation metrics, a company's financial performance cannot be fully examined by investors. ROI and ROE calculate the rate of return on a specific investment and on the equity capital respectively, assessing how efficiently financial resources have been used. Typically, the best way to improve financial efficiency is to reduce production cost, so that will be the focus.

    Now that the challenge has been made and the terms have been defined, let's go deeper. Most research about implementation stops short at system start-up and seldom addresses post-implementation issues. However, we know implementation is a continuous improvement effort, and continued efforts after system start-up will influence the ultimate success of a system. Most UPK ROIs I have seen only include the cost savings in developing the training material. Some will also include savings based on reduced Help Desk calls. Using just those values you get a good ROI. To get an ROE you need to go a little deeper. Typically, the best way to improve financial efficiency is to reduce production cost, which is the purpose of implementing/upgrading an enterprise application. Let's assume the new system is up and running and all users have been properly trained and are comfortable using the system. You provide senior management with your ROI that justifies the original cost. What you want to do now is develop a good base value to measure the current efficiency. Using usage tracking you can look for various patterns. For example, you may find that users who are accessing UPK assistance are processing a procedure, such as entering an order, 5 minutes faster than those who don't. You do some research and discover each minute saved in processing a claim saves the company one dollar. That translates to the company saving five dollars on every transaction. Assuming 100,000 transactions are performed a year, and all users improve their performance, the company will be saving $500,000 a year. That $500,000 can be re-invested, used to reduce debt, or paid to the shareholders.

    With continued refinement during the life cycle, you should be able to find ways to reduce cost. These are the types of numbers and productivity gains that senior management and shareholders want to see. Being able to quantify savings and increase productivity may also help when seeking a raise or promotion.

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task, and how that task works. As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if the light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40-foot display of light bulbs, how will they decide which of the hundreds of types of bulbs, of different fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don't have to ask for help the next time. Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40-foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Unfortunately, many resort to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way. If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but it is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query. My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or the obsessive, or both. As an example, tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an Integer over Decimal(8,0) will likely offer performance benefits.
    It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are valid for the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by milomir.vojvodic
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK). For Oracle Partners across Europe, Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialized partners, executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special-interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops.

    Keynote: Andrew Sutherland, SVP Oracle Technology. Hot agenda items will include: The Fusion Middleware Stack: Engineered to work together; A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined; In-Memory Analytics for Extreme Insight; Latest Product Development Roadmap for Data Integration and Analytics.

    Venue: Oracle's London City Moorgate Offices. Places are limited; register from this link. Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member Partners, but you will be responsible for your own travel and hotel expenses.

    Event Schedule: During this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion. Nov. 12th: Day 1 Main Plenary Session (full day, starting 10.30 am), with an Oracle-hosted dinner in the evening. Nov. 13th onwards: Architecture Masterclass: IM Reference Architecture - Big Data and Little Data combined (1 day); BI-Apps Bootcamp (4 days); Oracle GoldenGate workshop (1 day); Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day). For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected] and Milomir Vojvodic at [email protected]

    Read the article

  • No Customer Left Behind

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development.

    What does customer experience mean to you? Is it a strategy for your executives? A new buzz word and marketing term? A bunch of CRM technology with social software added on? For me, customer experience is a customer-centric worldview that produces a deeper understanding of your business and what it takes to achieve sustainable, differentiated success. It requires you to prioritize and examine the journey your customers are on with your brand, so you can answer the question, "How can we drive greater value for our business by delivering a better customer experience?" Businesses that embrace a customer-centric worldview understand their business at a much deeper level than most. They know who their customers are, what their value is, what they do, what they say, what they want, and ultimately what that means to their business.

    "Why Isn't Everyone Doing It?" We're all consumers who have our own experiences with many brands. Good or bad, some of those experiences stay with us. So viscerally we understand the concept of customer experience from the stories we share. One that stands out in my mind happened as I was preparing to leave for a 12-month job assignment in Europe. I wanted to put my cable television subscription on hold. I wasn't leaving for another vendor. I wasn't upset. I just had a situation where it made sense to put my $180 per month account on pause until I returned. Unfortunately, there was no way for this cable company to acknowledge that I was a loyal customer with a logical request - and to respond accordingly. So, ultimately, they lost my business. Research shows us that it costs six to seven times more to acquire a new customer than to retain an existing one. Heavily funding the effort of getting new customers and underfunding the effort of serving the needs of your existing ones (who are your greatest advocates) is a vicious and costly cycle.

    "Hey, These Guys Suck!" I love my Apple iPad because it's so easy to use. The explosion of these types of technologies, combined with new media channels, has raised our expectations and made us hyperaware of what's going on and what's available. In addition, social media has given us a megaphone to share experiences, both positive and negative, with greater impact. We are now an always-on culture that thrives on our ability to access, connect, and share anywhere, anytime. If we don't get the service, product, or value we expect, it is easy to tell many people about it. We also can quickly learn where else to get what we want. Consumers have the power of influence and choice at a global scale. The businesses that understand this principle are able to leverage that power to their advantage. The ones that don't suffer from it. Which camp are you in? Note: This is Part 1 in a three-part series. Stop back for Part 2 on November 19.

    Read the article

  • Check packet vlan tag using Tap virtual interface

    - by ankit
    Hi all, I am trying to learn how to implement virtual interfaces using the TAP driver. So far my understanding is that using the TAP driver I can create a virtual interface and then have a userspace program attach to this interface to analyse the data coming into this device. Now what if I attach a Cisco switch to my LAN interface using a TRUNK link, forward all the packets coming into the LAN interface to the virtual TAP interface, and then, in my program attached to this interface, do some coding to analyze the VLAN tag in the packet and only allow certain VLANs to be forwarded to the WAN interface? Does this sound plausible, or is there a flaw in my basic understanding? Thanks for the help! ankit

    Read the article

  • Do I need a dedicated server for load balancing?

    - by Ben
    I'm completely new to the concept of load balancing, so I hope this question isn't a "stupid question", because I've been searching around and I'm having a hard time understanding this. To my understanding, in order to load balance, I need a separate machine with an IP address I can direct all traffic to. I initially thought I needed to rent 3 dedicated servers: one for load balancing and the other two as backend servers. Would a dedicated server be too much for a load balancer, or do hosting companies have special types of machines for that purpose? Then I read somewhere else that I can install load balancing software on both of the two servers and configure it in a way that doesn't require me to rent another machine/dedicated server for load balancing. So I'm a bit confused about how to actually implement a load balancer and whether or not I need a dedicated server for the sole purpose of acting as a load balancing machine. Also, I was recommended to use HAProxy, so I'll be heading in that direction for load balancing.

    Read the article

  • How can I pass environment variables to a WSGI script, using uWSGI?

    - by orokusaki
    I've added the following line to /etc/environment: FOO_DEPLOYMENT_ENV="vbox" Upon logging in via SSH, I can echo $FOO_DEPLOYMENT_ENV and, of course, see vbox output to the shell. If I open a Python shell and run os.getenv('FOO_DEPLOYMENT_ENV'), it will return 'vbox', but the same code in my Python application, when run by uWSGI (as the www-data user), does not see the environment variable. Clearly, this isn't a problem with uWSGI; rather it is a problem with my understanding of environment variables, how they're properly set, and the contexts in which they can be retrieved. What am I doing or understanding incorrectly?

    Read the article

  • What is the DNS root zone and domain?

    - by Nimmy Lebby
    This might seem like a silly question but I want to get my terminology correct. Please do not delete. I will be more than happy to delete the question myself once I (with the help of a few people I hope) get to a consensus: This was my understanding: DNS root zone = . DNS root domain = (nameless) However, after reading the Wikipedia article, I'm not so sure: A domain name consists of one or more parts, technically called labels, that are conventionally concatenated, and delimited by dots, such as example.com. So this would lead me to believe: DNS root zone = . DNS root domain = . DNS root label = (nameless) Does this make sense? What is your understanding?

    Read the article

  • Is it possible to do a full Android backup without first rooting the phone?

    - by Howiecamp
    I'm running stock 2.1 on my Moto Droid and am interested in rooting. My (admittedly weak at this point) understanding is that, in order to perform a backup[*], you need to root first. But in order to root, you've got to replace the 2.1 image with a rooted 2.0.1 or a stock 2.0.1 and then a rooted 2.1. So there's no CYA protection given that you've got to take the risk of replacing the image in order to get root and then do a backup. [*] Ideally, I'd like to backup the stock 2.1 image AND my apps. Am I understanding this correctly, or is there a way to do a backup without first replacing the image?

    Read the article

  • Recommendations for managing dedicated server DNS

    - by KP Overflow
    I've rented a dedicated server for several years with a number of domains. I've got a coding background so I am comfortable with that side of the tech, but I hate that I still don't truly understand DNS settings. Example: my provider (hostgator) just told me that my parent nameservers are not correctly configured, as there is no A record for my primary nameserver. What book/link/tutorial should I read to go from kind of understanding that comment to really understanding it and knowing exactly what I need to do to fix it, rather than the trial & error I usually resort to? Thanks. BTW, I'm using a WHM/cPanel Linux setup at hostgator but am eager to learn the fundamentals.

    Read the article

  • Possible to write an implementation of RAIDZ or RAIDZ2 for the MD driver in the Linux kernel?

    - by Pharaun
    I am curious whether it is possible to implement RAIDZ and/or RAIDZ2 in the MD driver in the Linux kernel. My understanding is that RAIDZ is equivalent to RAID 5, and that RAIDZ2 is equivalent to RAID 6. The main difference, as I understand it, is that the stripe size can be variable for RAIDZ, as opposed to RAID 5/6, which helps performance. So what I am wondering is: would it be possible to add this performance-enhancing technique to RAID 5 & 6 in the MD driver in the kernel? Or is it tied too closely to how ZFS works?

    Read the article

  • Dynamic Hierarchical Javascript Object Loop

    - by user1684586
    var treeData = {"name" : "A", "children" : [ {"name" : "B", "children": [ {"name" : "C", "children" :[]} ]} ]}; The children array should start out empty and should be populated depending on the number of nodes needed, which will be defined by a dynamic value that is passed in. I would like to build the hierarchy dynamically, with each node created as a layer/level in the hierarchy having its own array of child nodes. This should form a tree structure, as described in the code above. This code has three levels, simply to demonstrate the layout of the hierarchy of values. There should be a root node, and an undefined number of nodes and levels making up the hierarchy size. Nothing should be fixed besides the root node. I do not need to read the hierarchy, I need to construct it. The array should start as {"name" : "A", "children" : []}, and every new node, as levels are created, would go HERE: {"name" : "A", "children" : [HERE - {"name" : "A", "children" : []}]} - into the child array, going deeper and deeper. Basically the array should have no values before the call, except maybe the root node. After the function call, the array should comprise the required nodes, whose number may vary with every call. Every child array will contain one or more node values. There should be a minimum of 2 node levels, including the root. It should initially be a blank canvas, that is, no predefined array values.
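    One possible approach (a minimal sketch in the question's own JavaScript; the depth and children-per-node parameters are illustrative, not from the question) is a recursive builder that creates a node and then pushes recursively built children into its children array:

        // Builds {"name": ..., "children": [...]} nodes down to the requested depth.
        function buildNode(name, depth, childrenPerNode) {
            var node = { "name": name, "children": [] };
            if (depth > 0) {
                for (var i = 0; i < childrenPerNode; i++) {
                    node.children.push(buildNode(name + "." + i, depth - 1, childrenPerNode));
                }
            }
            return node;
        }

        // Root plus two more levels, two children per node:
        var treeData = buildNode("A", 2, 2);

    Because each call returns a complete subtree, the same function works for any depth or branching factor passed in at runtime.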

    Read the article

  • How to route all traffic over site to site VPN tunnel?

    - by Hutch
    I have a site to site VPN configured between our main site (Site A) and a remote site (Site B). Site A is 10.60.0.0/16 Site B is 192.168.99.0/24 The firewall in Site B is a Juniper SSG running ScreenOS 6.3 and I'm using a route based VPN. The tunnel works perfectly in that from Site A you can reach 192.168.99.0 via the tunnel, and from Site B you can reach 10.60.0.0 via the tunnel. However, we want it so that if you're in Site B and want the Internet it goes via the firewall at Site A, and right now on the Juniper 0.0.0.0 has the ISP router as next hop. My understanding is that on the Juniper, I can set a route for the /32 public IP at our main site that the VPN tunnel connects to to the ISP router via ethernet0/0 (the SSG's external interface), and then modify the 0.0.0.0 route to use our main site firewall via tunnel.1 (the VPN tunnel). Not sure I've explained that so well but is my understanding correct? Thanks

    Read the article

  • Saving tree-structures in Databases

    - by Nina Null
    Hello everyone. I use Hibernate/Spring and a MySQL database for my data management. Currently I display a tree structure in a JTable. A tree can have several branches; a branch can in turn have several branches (up to nine levels) or leaves. Lately I have had performance problems as soon as I want to create new branches on deeper levels. At this time a branch has a foreign key to its parent. The domain object has access to its parent by calling getParent(), which returns the parent branch. The deeper the level, the longer it takes to create a new branch. Microbenchmark results for creating a new branch are like: Level 1: 32 ms. Level 3: 80 ms. Level 9: 232 ms. Obviously the level (which means the number of parents) is responsible for this. So I wanted to ask if there are any approaches to work around this kind of problem. I don't understand why Hibernate needs to know about the whole object tree (all parents up to the root) while creating a new branch. But as far as I know this can be the only reason for the delay while creating a new branch, because a branch doesn't have any relations to any other objects. I would be very thankful for any workarounds or suggestions. greets, jambusa

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >