Search Results

Search found 65446 results on 2618 pages for 'projects data warehouse'.

Page 28 of 2618

  • How to Create a Realistic Timeline for your Projects

    - by Aditi
    Developing a realistic project timeline is one of the biggest and most challenging tasks for any team. We at JustSkins have learned over time that developing and adhering to a timeline isn't easy, but it is not impossible. From technical glitches to human resource issues, unexpected complications can come up at any time during the project life cycle. However, there are many things you can do to keep the project from going off-track. A specific timeline is an important input for time management planning and for keeping your client informed of progress. Rigid time tracking assures the client that you are committed to achieving specific project milestones on time. The more you work on varied IT projects, the more you learn about the aspects of a project, and the better you become at developing future estimates and timelines.

    Make a Structure. When estimating the time required to accomplish each task, consider which team members will be involved, and assign the amount of time each individual must put into the project. Define scope and dependencies and set deadlines for accomplishing them. Sometimes working in phases or modules helps you do more in less time. Use a project management tool to systematize collaboration between team members.

    Realistic Goal Setting. One approach is to keep a buffer of a few days to deal with the delays, errors, and incorrect-coding issues you are likely to encounter along the way. It is realistic to keep the delivery date given to the client different from the internal delivery timeline. If a resource is having a hard time finishing a task in the time specified, leave room to give him a day or two extra to accomplish it. This does not upset client delivery and is the safe way of doing projects.

    Keep an Insightful Approach. Identify potential problems before they delay your project. To be a great IT manager you have to be honest and diplomatic at the same time; it is essential to give your clients early notice of potential delays or scope changes. In situations where delay is inevitable, you should be in a position to provide immediate, on-demand status reports. Learning from past experience is very important: keep track of the actual time spent on all aspects of your projects, as this will help you create better future estimates and timelines.

    Read the article

  • Analyzing data from the same tables in different db instances

    - by Oscar Reyes
    Short version: how can I map two columns from tables A and B if they both have a common identifier which in turn may have two values in column C? Let's say:

        A: 1,  2
        B: ?,  3
        C: 45, 2
           45, 3

    Using table C I know that ids 2 and 3 belong to the same item (45), and thus "?" in table B should be 1. What query could do something like that?

    EDIT: Long version omitted. It was really boring/confusing.

    EDIT: I'm posting some output here. From this query:

        select distinct(rolein), activityin
        from taskperformance@dm_prod
        where activityin in (
            select activityin from activities@dm_prod
            where activityid in (
                select activityid from activities@dm_prod
                where activityin in (
                    select distinct(activityin) from taskperformance where rolein = 0
                )
            )
        )

    I have the following parts:

        select distinct(activityin) from taskperformance where rolein = 0

    Output: http://question1337216.pastebin.com/f5039557

        select activityin from activities@dm_prod
        where activityid in (
            select activityid from activities@dm_prod
            where activityin in (
                select distinct(activityin) from taskperformance where rolein = 0
            )
        )

    Output: http://question1337216.pastebin.com/f6cef9393

    And finally, the full query shown at the top. Output: http://question1337216.pastebin.com/f346057bd

    Take for instance activityin 335 from the first query (from taskperformance B). It is present in activities from A, but it is not in taskperformance in A (though the related activities 92, 208, 335, 595 are present in the result). The corresponding rolein is: 1
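    A sketch of one possible query (not from the original thread; the table and column names a(val, id), b(val, id) and c(item, id) are hypothetical): self-join C on the shared item to translate B's id into A's id, then read the value off A.

        -- For B's id 3: find the other id sharing item 45 in C,
        -- then look up the value that A stores against that id.
        select a.val
        from   c c1
        join   c c2 on c2.item = c1.item  -- ids 2 and 3 both map to item 45
        join   a     on a.id   = c1.id    -- A pairs value 1 with id 2
        where  c2.id = 3                  -- the id we have in table B
        and    c1.id <> c2.id;            -- returns 1, the missing "?"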

    Read the article

  • Extracting wrong data from a frame in C?

    - by ipkiss
    I am writing a program that reads data from the serial port on Linux. The data are sent by another device with the following frame format:

        | Start | Command | Data           | CRC | End  |
        | 0x02  | 0x41    | (0-127 octets) |     | 0x03 |

    The Data field contains 127 octets as shown; octets 1 and 2 contain one type of data, and octets 3 and 4 contain another. I need to get these data. Because in C one byte can only hold one character, and the start field of the frame is 0x02, which means STX, which is 3 characters. So, in order to test my program, on the sender side I construct an array as the frame formatted above, like:

        char frame[254];
        frame[0] = 0x02; // starting field
        frame[1] = 0x41; // command field, which is character 'A'
        // ...and so on...

    And then on the receiver side I take out the fields like:

        char result[254];
        read(fd, result, sizeof result); // read data from the serial port
        printf("command = %c", result[1]); // get the command field of the frame
        // get the other fields' values

    The command field value (result[1]) is not character 'A'. I think this is because the first field value of the frame is 0x02 (STX), occupying the first 3 places in the array frame and leading to the wrong results on the receiver side. How can I correct the issue, or am I doing something wrong on the sender side? Thanks all.

    Related questions:
    http://stackoverflow.com/questions/2500567/parse-and-read-data-frame-in-c
    http://stackoverflow.com/questions/2531779/clear-data-at-serial-port-in-linux-in-c
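    For what it's worth, 0x02 is a single byte whose ASCII name happens to be "STX"; the three-letter name does not occupy three array slots. A minimal sketch (assuming the frame layout above) showing the command byte coming straight back out at index 1:

        /* Demonstrates that the STX start byte occupies exactly one slot. */
        #include <stdio.h>

        int main(void)
        {
            unsigned char frame[254];
            frame[0] = 0x02; /* start: one STX byte, not three characters */
            frame[1] = 0x41; /* command: the byte value of 'A'            */
            /* ... data octets, CRC, then the 0x03 (ETX) end byte ...     */

            if (frame[0] == 0x02)
                printf("command = %c\n", frame[1]); /* prints: command = A */
            return 0;
        }

    If the receiver still sees a different byte, the problem is more likely the serial port configuration (baud rate, parity, raw vs. canonical mode) than the frame layout.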

    Read the article

  • Learning C++ by contributing to open source projects

    - by user1189880
    I have some general programming experience with a few different languages, the one I know best being PHP. I want to spend a lot of time over the next year learning C++ in much more depth, and eventually get to a good enough level to find a job as a junior developer working in C++. I really struggle to find things to develop as toy programs, so I want to contribute to an open source project in C++ to get really stuck into. But the projects I see on GitHub in C++ are very large and require a lot of knowledge even to get started. Are there any smaller projects that I can contribute to, or any other good ideas for learning C++ at a practical level?

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution of this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, in the end, make up the whole whitepaper.

    Abstract. With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for the customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction. A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS.

    Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a probability between 0 and 1 that serves as a score for evaluating whether the transaction is fraudulent. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling.

    There are two principal data mining techniques available, both in general data mining and in specific fraud detection techniques: supervised (or directed) and unsupervised (or undirected). Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers do at some point report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers.

    In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small: typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms (a small sketch follows at the end of this excerpt).

    The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications. SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigates the occasional data cleansing process by maintaining a knowledge base. Master Data Services (MDS) is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization.

    For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley.

    SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics there. PowerPivot for Excel, which is freely downloadable for Excel 2010 and already included in Excel 2013, brings OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
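    The oversampling and undersampling mentioned above can be done directly in T-SQL. A minimal sketch of undersampling (the table and column names are hypothetical, and 20,000 is an arbitrary sample size): keep every fraudulent transaction plus a random subset of the legitimate ones, so the fraud class is no longer a vanishing fraction of the training set.

        -- All frauds, plus a random sample of non-frauds:
        SELECT * FROM dbo.Transactions WHERE IsFraud = 1
        UNION ALL
        SELECT t.*
        FROM (
            SELECT TOP (20000) *
            FROM dbo.Transactions
            WHERE IsFraud = 0
            ORDER BY NEWID()  -- NEWID() yields a random ordering in SQL Server
        ) AS t;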

    Read the article

  • Visual Studio 2012 - Setting the target framework in C++ Projects

    - by Igor Milovanovic
    Visual Studio 2012 doesn't have a UI for setting the target framework in C++ projects.

    [Screenshot: Target Framework: 4.0]

    The online documentation says to edit the .vcxproj project file and change the TargetFrameworkVersion tag. However, C++ projects don't have that tag by default; they simply assume that the target framework is v4.0. Instead, you have to add the TargetFrameworkVersion tag to the "Globals" PropertyGroup:

        <PropertyGroup Label="Globals">
          ...
          <RootNamespace>...</RootNamespace>
          <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
        </PropertyGroup>

    When you reload the project, the target framework version of your project will be changed.

    [Screenshot: Target Framework: 4.5]

    [1] How to: Modify the Target Framework and Platform Toolset http://msdn.microsoft.com/en-us/library/ff770576.aspx

    Read the article

  • Oracle Data Integrator at Oracle OpenWorld 2012: Demonstrations

    - by Irem Radzik
    By Mike Eisterer. Oracle OpenWorld is just a few days away, and we look forward to showing Oracle Data Integrator's comprehensive data integration platform, which delivers critical data integration requirements: from high-volume, high-performance batch loads, to event-driven, trickle-feed integration processes, to SOA-enabled data services. Several Oracle Data Integrator demonstrations will be available October 1st through the 3rd:

    Oracle Data Integrator and Oracle GoldenGate for Oracle Applications, in Moscone South, Right - S-240
    Oracle Data Integrator and Service Integration, in Moscone South, Right - S-235
    Oracle Data Integrator for Big Data, in Moscone South, Right - S-236
    Oracle Data Integrator for Enterprise Data Warehousing, in Moscone South, Right - S-238

    Additional information about these demonstrations and OOW 2012 is available online. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

    Read the article

  • Which algorithms/data structures should I "recognize" and know by name?

    - by Earlz
    I'd like to consider myself a fairly experienced programmer. I've been programming for over 5 years now. My weak point, though, is terminology. I'm self-taught, so while I know how to program, I don't know some of the more formal aspects of computer science. So, what are practical algorithms and data structures that I should recognize and know by name? Note, I'm not asking for a book recommendation about implementing algorithms. I don't care about implementing them; I just want to be able to recognize when an algorithm or data structure would be a good solution to a problem. I'm asking more for a list of algorithms and data structures that I should "recognize". For instance, I know the solution to a problem like this: You manage a set of lockers labeled 0-999. People come to you to rent a locker and then come back to return the locker key. How would you build a piece of software to manage knowing which lockers are free and which are in use? The solution would be a queue or stack (see the sketch after this excerpt). What I'm looking for are things like "in what situation should a B-tree be used", "what search algorithm should be used here", etc. And maybe a quick introduction to how the more complex (but commonly used) data structures and algorithms work. I tried looking at Wikipedia's list of data structures and algorithms, but I think that's a bit overkill. So I'm looking more for the essential things I should recognize.
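    A sketch of the locker example as a stack (not from the original post): free locker numbers live on a stack, renting pops one off, and returning pushes it back.

        # Lockers 0-999, all free initially.
        free = list(range(1000))

        def rent():
            """Hand out a free locker number, or None if all are taken."""
            return free.pop() if free else None

        def return_key(n):
            """Locker n is free again."""
            free.append(n)

        locker = rent()      # e.g. 999
        return_key(locker)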

    Read the article

  • Understanding Data Science: Recent Studies

    - by Joe Lamantia
    If you want a deeper understanding of data science than Drew Conway's popular Venn diagram model, or Josh Wills' tongue-in-cheek characterization, "Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician," can offer, two relatively recent studies are worth reading.

    'Analyzing the Analyzers,' an O'Reilly e-book by Harlan Harris, Sean Patrick Murphy, and Marck Vaisman, suggests four distinct types of data scientists -- effectively personas, in a design sense -- based on analysis of self-identified skills among practitioners. The scenario format dramatizes the different personas, making what could be a dry statistical readout of survey data more engaging. The survey-only nature of the data, the restriction of scope to just skills, and the suggested models of skill-profiles make this feel like the sort of exercise that data scientists undertake as an everyday task: collecting data, analyzing it using a mix of statistical techniques, and sharing the model that emerges from the data mining exercise. That's not an indictment, simply an observation about the consistent feel of the effort as a product of data scientists, about data science.

    The paper 'Enterprise Data Analysis and Visualization: An Interview Study' by researchers Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffery Heer considers data science within the larger context of industrial data analysis, examining analytical workflows, skills, and the challenges common to enterprise analysis efforts, and identifying three archetypes of data scientist. As an interview-based study, the data the researchers collected is richer, and there's correspondingly greater depth in the synthesis. The scope of the study included a broader set of roles than data scientist (enterprise analysts) and involved questions of workflow and organizational context for analytical efforts in general. I'd suggest this is useful as a primer on analytical work and workers in enterprise settings for those who need a baseline understanding; it also offers some genuinely interesting nuggets for those already familiar with discovery work.

    We've undertaken a considerable amount of research into discovery, analytical work/ers, and data science over the past three years -- part of our programmatic approach to laying a foundation for product strategy and highlighting innovation opportunities -- and both studies complement and confirm much of the direct research into data science that we conducted. There were a few important differences in our findings, which I'll share and discuss in upcoming posts.

    Read the article

  • IOMEGA 500GB hard disk data recovery

    - by Vineeth
    In November last year I bought an IOMEGA 500GB Prestige hard disk. Yesterday, unfortunately, the disk fell off my table. Since that incident, when I connect the disk, Windows asks me to format it before use, but I haven't formatted it yet, because the disk holds about 320GB of my data. I have tried every way I know to access the disk, including from DOS, which reports "data error (cyclic redundancy check)". I have a 3-year warranty. Will I be covered under warranty if I report this issue to IOMEGA? And can I get my data back?

    Read the article

  • Managing information/functionality in shared common project classes

    - by ilansch
    In my company we have a common solution that contains shared projects (2 projects so far, one for .NET 3.5 and one for .NET 4.5). My main problem is that over time a lot of code is added. For example, there is a class called ServiceManagement for hosting a process as a Windows service, but nobody except its developer knows it exists, so anyone who could use this shared class doesn't know about it. I am therefore looking for a way to document and manage all the classes with tags: a third-party tool or web tool that I can search by tag and maybe find common classes I can reuse (assuming we keep all our code well documented). Is anyone familiar with this sort of tool?

    Read the article

  • How to find a good data center?

    - by drewda
    At my start-up, we're getting to the point where we should be hosting our servers at a data center. I'd appreciate any tips and tricks y'all can offer on finding a reputable place to colocate our racks. Are there any web sites with customer reviews of data centers, or should I just ask around at techie events? Are unlimited bandwidth plans a gimmick, or are they becoming the norm? And is it worth establishing a redundant set of machines at a second data center from day one, or should we just do offsite backups? Thanks for your suggestions.

    Read the article

  • Appending column to a data frame - R

    - by darkie15
    Is it possible to append a column to a data frame in the following scenario?

        dfWithData <- data.frame(start=c(1,2,3), end=c(11,22,33))
        dfBlank <- data.frame()

    How do I append the column start from dfWithData to dfBlank? It looks like the data should be added when the data frame is being initialized. I can do this:

        dfBlank <- data.frame(dfWithData[1])

    but I am more interested in whether it is possible to append columns to an empty (but initialized) data frame.
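    A sketch of the usual workarounds (not from the original thread): cbind() requires matching row counts, so a length-3 column cannot be bound onto a zero-row frame; initialize the frame from its first column instead, then append freely.

        # Initializing from the column keeps the result a data.frame:
        dfBlank <- dfWithData[, "start", drop = FALSE]

        # Once the row counts match, further columns append directly:
        dfBlank$end <- dfWithData$end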

    Read the article

  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance that is ideal for running applications and databases that traditionally run on separate dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical: you are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by dedicated hardware, so now you need to provide assurance that one group, department, or application's information is not visible to other personnel or applications resident in the Exadata system. Administration of the environments requires formal separation of duties, so that an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments, just like other critical production servers: in a data center protected by physical controls, network firewalls, intrusion detection and prevention, and so on.

    Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:

    1. InfiniBand, which functions as a secure private backplane
    2. IPTables, to perform stateful packet inspection for all nodes (Cellwall implements firewall services on each cell using IPTables)
    3. Hardware-accelerated encryption for data at rest on storage cells

    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment.

    Database Vault: Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege and other preventive controls to ensure data integrity and data privacy. Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks, including when a command can be executed, or so that the DBA can manage the health of the database and its objects but may not see the data.

    Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection (see the sketch after this excerpt).

    Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification. Designed to meet public-sector requirements for multi-level security and mandatory access control, Oracle Label Security provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance.

    Data Masking reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc., by irreversibly replacing the original sensitive data with fictitious data, so that production data can be shared safely with IT developers or offshore business partners.

    Audit Vault and Database Firewall: Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks.

    Consolidation, cost savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle's industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and local governments and agencies.
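    As a concrete illustration of the column-level encryption that Advanced Security provides, a minimal sketch (the table and column names are hypothetical, and a configured encryption wallet is assumed):

        -- Transparent Data Encryption on a single sensitive column:
        CREATE TABLE customers (
          name VARCHAR2(80),
          ssn  VARCHAR2(11) ENCRYPT USING 'AES256'  -- encrypted at rest
        );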

    Read the article

  • Using my own code in freelance projects.

    - by Witchunter
    I have been in the freelance business for more than 2 years. While doing projects for other people, I've built up a collection of common tasks that I implement in projects, put into code. It's a kind of library with functions that I can reuse without having to rewrite the same thing a dozen times: accessing Access databases, downloading information from FTP, and similar stuff. Is this acceptable from a legal point of view? What's the difference between reusing the old code and rewriting it from scratch (using your own brain again, and therefore the exact same logic)? I do not hold any copyright to it, of course, and I provide the source code for these classes to my clients.

    Read the article

  • Using Core Data with web services

    - by Jayshree
    Hi. I am a noob in Xcode. I am developing an iPhone app where I need to send and receive data from a web service, and I have to store that data temporarily in my app. I don't want to use SQLite, so I was wondering if I should use Core Data for this purpose. I read some articles but I still don't have a clear picture of how to do it, because I have only used Core Data with SQLite. I want to do the following things:

    1. Receive table data from a web service.
    2. Perform certain calculations on those fields.
    3. Send the data back in XML format to the server.

    How do I convert the XML data into int, date or any other data type? And how do I store it in managed data objects? Can anyone please help me with this? Thanks for your time.
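    A minimal sketch of the type conversions, in Swift terms (the original question predates Swift, but the Foundation calls mirror their Objective-C counterparts; all values and attribute names are hypothetical):

        import Foundation

        let amount = Int("42")                         // String -> Int? (nil on bad input)

        let formatter = DateFormatter()
        formatter.dateFormat = "yyyy-MM-dd"            // must match the XML's date format
        let date = formatter.date(from: "2010-05-01")  // String -> Date?

        // Storing on a Core Data object then looks like:
        // managedObject.setValue(amount, forKey: "amount")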

    Read the article

  • Git in non-distributed, independent, lone programming: best practice(s)?

    - by explorest
    I am currently studying the git documentation to get the hang of the distributed version control workflow and of the git command line. I want to start using git with small, personal, pet projects first, so as to gain experience before doing it at a larger scale (i.e., bigger projects, team development). Which areas of the git system should I, as a lone player, devote most of my study time to, and which parts should I leave for the larger-scale work later on? In other words, which features of the git system will only fully be grasped in team work, and are therefore not worth getting too involved with at an individual level?
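    For reference, a sketch of the handful of commands that cover most solo work (a suggestion, not from the original post):

        git init                     # start a repository in the project folder
        git add -A                   # stage all changes
        git commit -m "message"      # record a snapshot
        git log --oneline            # review history
        git checkout -b experiment   # try an idea on a separate branch
        git merge experiment         # fold it back into the main branch
        git stash                    # shelve uncommitted work temporarily

    Remotes, pushing, pulling, and pull-request workflows are the parts that mostly pay off once other people are involved.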

    Read the article

  • Why CFOs Should Care About Big Data

    - by jmorourke
    The topic of "big data" clearly has reached a tipping point in 2012. With plenty of coverage over the past few years in the IT press, we are now starting to see the topic of "big data" covered in the mainstream business press, including a cover story in the October 2012 issue of the Harvard Business Review. To help customers understand the challenges of managing "big data", as well as the opportunities that can be created by leveraging it, Oracle has recently run and published the results of a customer survey, as well as white papers and articles on this topic. Most recently, we commissioned a white paper titled "Mastering Big Data: CFO Strategies to Transform Insight into Opportunity".

    The premise here is that "big data" is not just a topic that CIOs should pay attention to, but one that CFOs should understand and take advantage of as well. Clearly, whoever masters the art and science of big data will be positioned for competitive advantage in their industries or markets. That's why smart CFOs are taking control of big data and business analytics projects, not just to uncover new ways to drive growth in a slowing global economy, but also to be a catalyst for change in the enterprise. With an increasing number of CFOs now responsible for overseeing IT investments and providing strategic insight to the board, CFOs will be increasingly called upon to take a leadership role in assessing the value of "big data" initiatives, building on their traditional skills in reporting and helping managers analyze data to support decision making.

    Here is the white paper referenced above, which is posted on the Oracle C-Central/CFO web site, along with some other resources that can help CFOs master the topic of "big data":

    White Paper: "Mastering Big Data: CFO Strategies to Transform Insight into Opportunity"
    CFO Market Watch article: "Does Big Data Affect the CFO?"
    Oracle Survey Report: "From Overload to Impact – An Industry Scorecard on Big Data Industry Challenges"
    Upcoming Big Data Webcast with Andrew McAfee

    Here's a general link to Oracle C-Central/CFO in case you want to start there: www.oracle.com/c-central/cfo

    Feel free to contact me if you have any questions or need additional information: [email protected]

    Read the article

  • Data recovery from an unallocated hard disk partition

    - by user42151
    Hi. I accidentally deleted a partition which mainly served as the space where I kept my data, labeled as drive D:. The partition wasn't subsequently formatted, though, after the delete incident. Obviously, drive D: no longer shows up as it usually does when I run Windows 7; in "Computer Management", under "Disk Management", I clearly see the space is now labeled as unallocated. Question: how do I go about recovering my data? What effective data recovery software can I use to resolve this issue? Thanks

    Read the article

  • Data recovery from an unallocated hard disk partition

    - by user36007
    Hi, I accidentally deleted a partition which mainly served as the space where I kept my data, labeled as drive D:. The partition wasn't subsequently formatted, though, after the delete incident. Obviously, drive D: no longer shows up as it usually does when I run Windows 7; in "Computer Management", under "Disk Management", I clearly see the space is now labeled as unallocated. Question: how do I go about recovering my data? What effective data recovery software can I use to resolve this issue? Thanks

    Read the article

  • Having a generic data type for a database table column, is it "good" practice?

    - by Yanick Rochon
    I'm working on a PHP project where some object (class member) may contain different data types. For example:

        class Property {
            private $_id;     // (PK)
            private $_ref_id; // the object reference id (FK)
            private $_name;   // the name of the property
            private $_type;   // 'string', 'int', 'float(n,m)', 'datetime', etc.
            private $_data;   // ...
            // ..snip..
            // public getters/setters
        }

    Now, I need to perform some persistence on these objects. Some properties may be a text data type, but nothing bigger than what a varchar can hold. Also, later on, I need to be able to perform searches and sorting. Is it good practice to use a single database table for this (i.e., is there a non-negligible performance impact)? If it is "acceptable", then what could be the data type for the data column?
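    For illustration, one common approach (a sketch, not from the original post) stores $_data in a VARCHAR column and casts on read according to the stored $_type tag:

        // Inside class Property -- a hypothetical accessor that
        // interprets the generic column according to the declared type.
        public function getTypedData() {
            switch ($this->_type) {
                case 'int':      return (int) $this->_data;
                case 'datetime': return new DateTime($this->_data);
                default:
                    if (strpos($this->_type, 'float') === 0) {
                        return (float) $this->_data;  // 'float(n,m)' declarations
                    }
                    return (string) $this->_data;     // 'string' and anything else
            }
        }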

    Read the article

  • Is there a way to set two C#.Net projects to trust one another?

    - by Eric
    I have two C#.NET projects in a single solution, ModelProject and PlugInProject. PlugInProject is a plug-in for another application, and consequently references its API; it is used to extract data from the application it plugs into. ModelProject contains the data model of classes that are extracted by PlugInProject. The extracted data can be used independently of the application or the plug-in, which is why I am keeping PlugInProject separate from ModelProject. I want ModelProject to remain independent of PlugInProject and the application's API. In other words, I want someone to be able to access the extracted data without needing access to PlugInProject, the application, or the application's API. The problem I'm running into, though, is that PlugInProject needs to be able to create and modify classes in ModelProject. However, I'd prefer not to make these actions public to anyone using ModelProject. The extracted data should effectively be read-only, unless later modified by PlugInProject. How can I keep these projects separate but give PlugInProject exclusive access to ModelProject? Is this possible?
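    For what it's worth, the standard C# mechanism for exactly this is the InternalsVisibleTo attribute: mark the mutating members internal in ModelProject, then name PlugInProject as a friend assembly. A minimal sketch (the model class is hypothetical, and strong-naming details are omitted):

        // In ModelProject (e.g. in AssemblyInfo.cs): grant PlugInProject
        // access to ModelProject's internal members.
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("PlugInProject")]

        namespace ModelProject
        {
            public class ExtractedRecord   // hypothetical model class
            {
                // Readable by everyone; settable only inside ModelProject
                // and the PlugInProject friend assembly.
                public string Name { get; internal set; }
            }
        }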

    Read the article

  • What issues carry the highest risk in a software project?

    - by Mehrdad
    Clearly, software projects are different from other industries in many respects: quality assurance, project progress measurement, and many other things. The unique characteristics of software projects also make the risk management process unique. Lots of issues in a project might lead to unacceptable delay or failure to deliver business value; they might even turn the project into a complete disaster. What are the deadliest risk factors in a software project? How do you analyze, prevent and handle them? Particularly, I'm interested in the issues that you can detect from the beginning and should keep an eye on (for example, a third-party API that the current application depends on but which lacks documentation). Please share your experiences if they are relevant.

    Read the article
