Search Results

Search found 896 results on 36 pages for 'analyze'.


  • Oracle Unveils Oracle Social Relationship Management Suite at Oracle OpenWorld

    - by Richard Lefebvre
    New Service Enables Companies to Listen, Engage, Create, Market and Analyze Interactions across Multiple Social Platforms in Real-Time
    During his keynote presentation, Oracle CEO Larry Ellison announced the Oracle Social Relationship Management (SRM) Suite. Oracle Social Relationship Management Suite is an integrated enterprise service that enables companies to listen, engage, create, market, and analyze interactions across multiple social platforms in real time, providing a holistic view of the consumer. The suite is integrated with Oracle’s enterprise applications, including Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce, and Oracle Enterprise Resource Planning (ERP), allowing organizations to use social to transform their corporate business processes and systems. Additionally, it is integrated with Oracle Platform Services, including Oracle Java Cloud Service and Oracle Database Cloud Service, enabling marketing teams to integrate social with their custom Web pages, landing pages and marketing tools.
    Unleashing the Power of Social
    Providing a holistic view of consumer interactions, Oracle Social Relationship Management Suite includes:
    • Oracle Social Network (OSN): Provides a secure collaboration platform that supports real-time collaboration and networking for users inside and outside the organization.
    • Oracle Social Marketing: Enables marketers to centrally create, publish, moderate, manage, measure and report across multiple social campaigns and platforms. It also helps marketers publish social content, engage fans and customize their brand's look and feel.
    • Oracle Social Engagement & Monitoring Cloud Service: Enables organizations to analyze social media interactions while also empowering customer service and sales teams to effectively engage with customers and prospects. It gives organizations the tools they need to understand customers and take the appropriate actions by monitoring, listening, learning, and responding to signals and trends across the social web.
    • Oracle Social Sites: Provides brands and agencies a powerful and rich editing experience that end users can leverage to dynamically develop and launch social sites.
    • Oracle Data and Insights: A service that caters to a growing enterprise need for external information by providing information, directory and insights about common business entities.
    Supporting Quote
    “By fundamentally changing the way organizations connect with their different stakeholders, social is changing the rules of business,” said Thomas Kurian, executive vice president, Oracle Product Development. “With the Oracle Social Relationship Management Suite we are empowering our customers to embrace this change by integrating the tools required to listen, engage, create, market and analyze social interactions into existing applications and services.”

    Read the article

  • Oracle OpenWorld 2012 - What's New

    - by Cinzia Mascanzoni
    Oracle OpenWorld 2012 is now over. Here is a summary of the major hardware and technology announcements:
    • Oracle Unveils Expanded Oracle Cloud Services Portfolio
    • Oracle Unveils New Partner Cloud Programs
    • Oracle Announces Latest Release of Oracle Exalogic Elastic Cloud
    • Oracle Announces Oracle Exadata X3 Database In-Memory Machine
    • Oracle Outlines Opportunity to Transform Industries from Device to Data Center with Embedded Java
    • Oracle Announces Oracle Solaris 11.1
    • Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business
    • New Release of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere

    Read the article

  • How to document requirements for an API systematically?

    - by Heinrich
    I am currently working on a project where I have to analyze the requirements that two given IT systems, which use cloud computing, place on a Cloud API. In other words, I have to analyze what these systems require of a Cloud API such that they could switch to a different API while still being able to accomplish their current goals. Let me give you an example of some informal requirements from Project A:
    • When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system and an SSH key for the root user.
    • It must be possible to monitor the inbound and outbound network traffic per hour per virtual machine.
    • The API must support the assignment of public IPs to a virtual machine and the retrieval of those public IPs.
    • ...
    In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs, to find out where the possible shortcomings in the current standards are. A likely finding is that a certain standard does not support monitoring resource usage and thus is not currently usable. I am currently trying to find a way to systematically write down and classify my requirements. I feel that the way I currently have them written down (like the three points above) is too informal. I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don't think UML diagrams etc. are the right choice for me. I think the requirements I have collected so far can be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go "one level deeper"... Any advice/learning resources for me?

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides detail about how you influence other people and who influenced you. We'll be fetching data from a few social networking sites (i.e. LinkedIn, Facebook, Twitter, etc.) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relation between two users can be determined. We'll be accessing the data offline as well to provide accurate results. If we consider Facebook activities, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for great performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is the parsing of the data and its storage and retrieval with the least overhead. To that end I've summarized a few points below:
    • Should we switch over to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though none of the team members have experience with them?
    • Should we use SQL Server Analysis Services?
    • Is there any library other than Json.NET for parsing the data?
    • Is it advisable to use any C# library over FQL + GET requests?
    I've tried to provide as much info as possible. Please share your views.
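    Whatever the storage choice, the fetch side of the pipeline is just HTTP + JSON. As a minimal, language-agnostic sketch (shown here in Java purely for illustration, since the question's stack is C#; the Graph API fields, endpoint version and an already-obtained OAuth access token are all assumptions), pulling a user's feed for later offline scoring could look like this:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FeedFetcher {
        public static void main(String[] args) throws Exception {
            String accessToken = args[0]; // user-granted OAuth token (assumed already obtained)
            // Request only the fields needed for influence scoring: likes, comments, shares.
            String url = "https://graph.facebook.com/me/feed"
                    + "?fields=id,from,likes.summary(true),comments.summary(true),shares"
                    + "&access_token=" + accessToken;
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            // Persist the raw JSON now; parse and score offline later, so rate limits
            // and API changes never block the analysis pipeline.
            System.out.println(response.body());
        }
    }

    Storing the raw responses and doing the parsing/scoring offline is what makes the "accurate results without live API access" requirement workable.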

    Read the article

  • Amazon-like e-commerce site and recommendation system

    - by Hellnar
    Hello, I am planning to implement a basic recommendation system that uses Facebook Connect or similar social networking site APIs to connect to a user's profile, analyze it based on tags, and, using the results, generate item recommendations on my e-commerce site (which works similarly to Amazon). I believe I need to divide the work into these parts:
    • Fetching social networking data via APIs (where the user has allowed this).
    • Analyzing the data and generating tokens.
    • Using these information tokens to make item recommendations on my e-commerce site.
    E.g.: I am a fan of the band "The Strokes" on my Facebook account; the system analyzes this and recommends "The Strokes Live" CD to me. For each part (fetching data, making recommendations based on tags, ...), what algorithms and methods would you recommend, or what is typically used? Thanks

    Read the article

  • Using Dependency Walker

    - by Valter Minute
    Dependency Walker is a very useful tool that can be used to find the dependencies of a Portable Executable module. The PE format is also used on Windows CE, which means Dependency Walker can be used to analyze Windows CE/Windows Embedded Compact modules as well. On Win32 it can also be used to monitor the modules loaded by an application at runtime; this feature is not supported on CE. You can download Dependency Walker for free here: http://dependencywalker.com/. To analyze the dependencies of a Windows CE/Windows Embedded Compact 7 module you can just open it using Dependency Walker. If you want to check whether a specific module can run on a Windows CE/Windows Compact 7 OS image, copy the executable into the same directory that contains your OS binaries (FLATRELEASEDIR). In this way Dependency Walker will highlight missing DLLs or missing entry points inside existing DLLs. Let’s do a quick sample. You need to check whether myapp.exe (an application from a third party) can run on an image generated with your Test01 OS design. Copy myapp.exe to the flat release directory of your OS design. Launch depends.exe and use the File\Open option of its main menu to open the application executable file you just copied. You may receive an error if some of the modules required by your application are missing. Before you analyze the module dependencies, it is important to configure Dependency Walker to check DLLs in the same folder where your application file is stored. This is needed because some Windows CE DLLs have the same names as Win32 system DLLs but different entry points. To configure the DLL search path, select “Options\Configure Module Search Order…” from Dependency Walker's main menu. Select “The application directory” from the “Current Search Order” list and move it to the top of the list using the “Move Up” button. The system will ask to refresh the window contents to reflect your configuration change; click “Yes” to proceed. Now you can inspect myapp.exe's dependencies. Some DLLs are missing (XAMLRUNTIME.DLL and TILEENGINE.DLL), and OLE32.DLL exists but does not export the “CoInitialize” entry point that myapp.exe requires. The bad news is that myapp.exe will not run on your OS image; the good news is that now you know what’s missing, and you can add the required modules to your OS design and fix the problem!

    Read the article

  • Make The Web Fast - The HAR Show: Capturing and Analyzing performance data with HTTP Archive format

    Need a flexible format to record, export, and analyze network performance data? Well, that's exactly what the HTTP Archive format (HAR) is designed to do! Even better, did you know that Chrome DevTools supports it? In this episode we'll take a deep dive into the format (as you'll see, it's very simple), and explore the many different ways it can help you capture and analyze your site's performance. Join +Ilya Grigorik and +Peter Lubbers to find out how to capture HAR network traces in Chrome, visualize the data via an online tool, share the reports with your clients and coworkers, automate the logging and capture of HAR data for your build scripts, and even adapt it to server-side analysis use cases! Yes, a rapid-fire session of awesome demos - see you there.
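    Because a HAR trace is plain JSON, it is easy to post-process outside the browser. As a minimal sketch (assuming the Jackson library on the classpath, which is just one choice of JSON parser, and a trace exported from Chrome DevTools), this Java program lists each request's total time and URL:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.io.File;

    public class HarSummary {
        public static void main(String[] args) throws Exception {
            // A HAR file is JSON shaped as { "log": { "entries": [ { "request": {...}, "time": ... } ] } }
            JsonNode log = new ObjectMapper().readTree(new File(args[0])).path("log");
            double total = 0;
            int count = 0;
            for (JsonNode entry : log.path("entries")) {      // JsonNode iterates over array elements
                String url = entry.path("request").path("url").asText();
                double time = entry.path("time").asDouble();  // total elapsed time of the request, in ms
                total += time;
                count++;
                System.out.printf("%8.1f ms  %s%n", time, url);
            }
            System.out.printf("Total: %.1f ms over %d requests%n", total, count);
        }
    }

    The log.entries array and the request.url and time fields used here come from the HAR 1.2 specification, so the same traversal works on traces captured by any HAR-producing tool.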

    Read the article

  • Advisor Webcast: Remote Diagnostic Agent (RDA) Use with EPM/BI Applications

    - by THE
    Maurice Bauhan and Ian Bristow will run an Advisor Webcast on the use of RDA with the EPM/BI applications. Learn how to install, run, and analyze the outputs of Remote Diagnostic Agent. RDA is a free tool for Oracle customers that can save you time as you work with most Oracle software. This one-hour session presented by senior proactive support engineers is recommended for technical users and support contacts. The session will include information on:
    • Downloading and installing Remote Diagnostic Agent
    • Running RDA, narrowing data retrieval to the context of the Oracle products you need to investigate
    • Analyzing the RDA program outputs
    • Via My Oracle Support, helping the engineers at Oracle and assisting communities with what you learn
    There will be 2 sessions:
    12/15/2011 - 09:00 GMT (10:00 CET) - register here ( note 1376286.1 )
    12/15/2011 - 16:00 GMT (17:00 CET) - register here ( note 1376323.1 )
    An overview of all upcoming Advisor Webcasts can be found in note 740966.1. Find more information about Advisor Webcasts: All future Advisor Webcasts | All recorded Advisor Webcasts | Support specific recorded Webcasts

    Read the article

  • Does anyone know how to "tcpdump" traffic decrypted by Mallory MITM? [migrated]

    - by chriv
    I'm looking for some help capturing network traffic that I can analyze in Wireshark (or other tools). The tool I'm using is mallory. If anyone is familiar with mallory, I could use some help. I've got it configured and running correctly, but I don't know how to get the output that I want. The setup is on my private network. I have a VM (running Ubuntu 12.04 - precise) with two NICs:
    • eth0 is on my "real" network
    • eth1 is only on my "fake" network, and is using dnsmasq (for DNS and DHCP for other devices on the "fake" network)
    Effectively eth0 is the "WAN" on my VM, and eth1 is the "LAN" on my VM. I've set up mallory and iptables to intercept, decrypt, encrypt and rewrite all traffic coming in on destination port 443 on eth1. On the device I want intercepted, I have imported the ca.cer that mallory generated as a trusted root certificate. I need to analyze some strange behavior in the HTTPS stream between the client and server, which is why mallory is set up in between for this MITM. I would like to take the decrypted HTTPS traffic and dump it to either a logfile or a socket in a format compatible with tcpdump/wireshark (so I can collect it later and analyze it). Running tcpdump on eth1 is too soon (the traffic is still encrypted), and running tcpdump on eth0 is too late (it's been re-encrypted). Is there a way to make mallory "tcpdump" the decrypted traffic (in both directions)?

    Read the article

  • Per Application Packet Analyzer

    - by Anindya Chatterjee
    Is there any tool that can analyze network traffic per application? Wireshark does not have per-application filtering, and Fiddler does not provide proper logging for arbitrary applications. Can anyone help me find an app that can analyze the network traffic originating from a given application and log the traffic for that particular application only?

    Read the article

  • .NET VS2008 compilation analyzer tool?

    - by Matthieu
    Hi, I'm looking for a tool that allows me to analyze the compilation of a VS solution (about 30 VS projects inside). I would like to know, after the whole solution is compiled, which projects failed, and forward the errors to the developers. Of course, I could analyze the compilation report... but I'm pretty sure great tools are available! Which one do you consider the best? Thanks a lot!

    Read the article

  • What is the best and simplest way to find out whether a volume needs defragmenting?

    - by r9r9r9
    I am writing an application that monitors the system's health; users should know when they need to defragment their volumes. What I am thinking of is calling "defrag.exe /A" and then analyzing the output to see whether it contains "You do not need to defragment this volume." But that is slow and very fragile. I found that the "Analyze" step is really quick in MyDefrag.exe. Could anyone tell me the best and simplest way?
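    For reference, here is a minimal Java sketch of the output-scraping approach the question describes (the question does not state a language, so Java is an assumption here). The caveats the asker already raises still apply: it is slow, it needs an elevated process, and the marker string is locale- and version-dependent:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class DefragCheck {
        // Scrapes the output of "defrag.exe <volume> /A" (analysis-only, Vista+ syntax).
        // The marker string below is an assumption; it varies by Windows version and locale.
        public static boolean volumeNeedsDefrag(String volume) throws Exception {
            Process p = new ProcessBuilder("defrag.exe", volume, "/A")
                    .redirectErrorStream(true)   // merge stderr into stdout for one stream
                    .start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains("You do not need to defragment this volume")) {
                        return false;
                    }
                }
            }
            p.waitFor();
            return true;
        }

        public static void main(String[] args) throws Exception {
            System.out.println("C: needs defrag? " + volumeNeedsDefrag("C:"));
        }
    }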

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let’s have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned.
    SELECT * FROM sys.dm_db_index_usage_stats WHERE database_id = db_id('AdventureWorks2012')
    The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected a particular index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user seek, scan, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except instead of the various operations being generated from user requests, they are generated from system background requests. This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes have a downside as well: they slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the indexes you have are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don't have a lot of meaningful information here, but on a production system, if you see an index that is never getting any seeks, scans, or lookups, but is constantly getting a ton of updates, it more than likely would be a good candidate to consider removing. You would not be getting much benefit from the index, yet it is incurring a cost on your system due to constantly having to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible. One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be wise to make decisions about removing indexes right after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are being used for weekly/monthly reports that run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.
    If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis, depending on how frequently you perform an operation that would reset these statistics; then you can analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms188755.aspx Follow me on Twitter @PrimeTimeDBA
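    As a minimal sketch of such a collector, assuming the Microsoft JDBC driver (mssql-jdbc) on the classpath and a hypothetical connection string, the following Java snippet performs the join back to sys.indexes described above and prints the per-index usage counters; a scheduled job could just as easily INSERT the same result set into a snapshot table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class IndexUsageSnapshot {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; point this at your own instance.
            String url = "jdbc:sqlserver://localhost;databaseName=AdventureWorks2012;integratedSecurity=true";
            // Join the DMV back to sys.indexes and OBJECT_NAME to get readable names.
            String sql =
                "SELECT OBJECT_NAME(s.object_id, s.database_id) AS table_name, " +
                "       i.name AS index_name, " +
                "       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates " +
                "FROM sys.dm_db_index_usage_stats s " +
                "JOIN sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id " +
                "WHERE s.database_id = DB_ID('AdventureWorks2012')";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%s.%s  seeks=%d scans=%d lookups=%d updates=%d%n",
                            rs.getString("table_name"), rs.getString("index_name"),
                            rs.getLong("user_seeks"), rs.getLong("user_scans"),
                            rs.getLong("user_lookups"), rs.getLong("user_updates"));
                }
            }
        }
    }

    Remember the caveat above: schedule the collection more frequently than anything that resets the statistics, and analyze the accumulated snapshots rather than a single read.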

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.
    Oracle Big Data Connectors Downloads here, includes:
    • Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0
    • Oracle Loader for Hadoop Release 2.1.0
    • Oracle Data Integrator Companion 11g
    • Oracle R Connector for Hadoop v 2.1
    Oracle Big Data Documentation
    The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships.
    • Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB)
    • Integrated Software and Big Data Connectors User's Guide HTML PDF
    Oracle Data Integrator (ODI) Application Adapter for Hadoop
    Apache Hadoop is designed to handle and process data that is typically from data sources that are non-relational and data volumes that are beyond what is handled by relational databases. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities:
    • Loading data into Hadoop from the local file system and HDFS
    • Performing validation and transformation of data within Hadoop
    • Loading processed data from Hadoop to an Oracle database for further processing and generating reports
    Oracle Database Loader for Hadoop
    Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance.
    Oracle R Connector for Hadoop
    Oracle R Connector for Hadoop is a collection of R packages that provide:
    • Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables
    • Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files
    You install and load this package as you would any other R package. Using simple R functions, you can perform tasks such as:
    • Access and transform HDFS data using a Hive-enabled transparency layer
    • Use the R language for writing mappers and reducers
    • Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases
    • Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations
    Oracle SQL Connector for Hadoop Distributed File System
    Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats:
    • Data Pump files in HDFS
    • Delimited text files in HDFS
    • Hive tables
    For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS.
    Related Documentation: Cloudera's Distribution Including Apache Hadoop Library HTML | Oracle R Enterprise HTML | Oracle NoSQL Database HTML
    Recent Blog Posts: Big Data Appliance vs. DIY Price Comparison | Big Data: Architecture Overview | Big Data: Achieve the Impossible in Real-Time | Big Data: Vertical Behavioral Analytics | Big Data: In-Memory MapReduce | Flume and Hive for Log Analytics | Building Workflows in Oozie

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across an article on DB2 about using UNION instead of OR. So I thought of carrying out some research on SQL Server, to see in which scenarios UNION is optimal and in which scenarios OR would be best. I will analyze this with a few scenarios, using samples taken from the AdventureWorks database Sales.SalesOrderDetail table.
    Scenario 1: Selecting all columns
    We are going to select all columns, and there is a non-clustered index on the ProductID column.
    --Query 1 : OR
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998 OR ProductID = 875 OR ProductID = 976 OR ProductID = 874
    --Query 2 : UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874
    So Query 1 uses OR and the latter uses UNION. Let us analyze the execution plans for these queries.
    Query 1
    Query 2
    As expected, Query 1 will use a Clustered Index Scan, but Query 2 uses all sorts of things. In this case, since it is using multiple CPUs, you might have CX_PACKET waits as well. Let's look at the profiler results for these two queries:
              CPU   Reads   Duration   Row Count
    OR         78    1252        389        3854
    UNION     250    7495        660        3854
    You can see from the above table that the UNION query is not performing as well as the OR query, though both return the same number of rows (3854). These results indicate that, for the above scenario, OR should be used.
    Scenario 2: Non-clustered and clustered index columns only
    --Query 1 : OR
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998 OR ProductID = 875 OR ProductID = 976 OR ProductID = 874
    GO
    --Query 2 : UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874
    GO
    This time we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case, we will analyze the execution plans:
    Query 1
    Query 2
    Again, Query 2 is more complex than Query 1. Let us look at the profiler analysis:
              CPU   Reads   Duration   Row Count
    OR          0      24        208        3854
    UNION       0      38        193        3854
    In this analysis, there is only a slight difference between OR and UNION.
    Scenario 3: Selecting all columns for different fields
    Up to now, we were using only one column (ProductID) in the WHERE clause. What if we have two columns in the WHERE clause, and let us assume both are covered by non-clustered indexes?
    --Query 1 : OR
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'
    --Query 2 : UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'
    Query 1
    Query 2
    As we can see, the query plan for the second query has improved. Let us see the profiler results.
              CPU   Reads   Duration   Row Count
    OR         47    1278        443        1228
    UNION      31    1334        400        1228
    So in this case too, there is little difference between OR and UNION.
    Scenario 4: Selecting clustered index columns for different fields
    Now let us go only with clustered indexes:
    --Query 1 : OR
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'
    --Query 2 : UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'
    Query 1
    Query 2
    Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. Let us see the profiler results for these queries again.
              CPU   Reads   Duration   Row Count
    OR          0     319        366        1228
    UNION       0      50        193        1228
    Now see the difference: in this scenario, UNION has somewhat of an advantage over OR.
    Conclusion
    Whether to use UNION or OR depends on the scenario you are faced with, so you need to do your own analysis before selecting the appropriate method. Also, the four scenarios above are not an exhaustive list; I selected them for broad illustration purposes only.

    Read the article

  • SQL SERVER – Introduction to Big Data – Guest Post

    - by pinaldave
    BIG Data – such a big word – everybody talks about it nowadays. It is THE word in the database world. In one conversation I asked my friend Jasjeet Singh this very question – what is Big Data? He instantly came up with a very effective write-up. Jasjeet is working as a Technical Manager with Koenig Solutions. He leads the SQL domain and holds rich IT industry experience. Talking about Koenig, it is a 19-year-old IT training company that offers several certification choices. Some of its courses include SharePoint Training, Project Management certifications, Microsoft Trainings, Business Intelligence programs, Web Design and Development courses, etc.
    Big Data, as the name suggests, is about data that is BIG in nature. The data is BIG in terms of size, and it is difficult to manage such enormous data with the relational database management systems that are quite popular these days. Big Data is not just about being large in size; it is also about the variety of the data, which differs in form or type. Some examples of Big Data are given below:
    • Scientific data related to weather and atmosphere, genetics, etc.
    • Data collected by various medical procedures, such as radiology, CT scans, MRI, etc.
    • Data related to the Global Positioning System
    • Pictures and videos
    • Radio frequency data
    • Data that may vary very rapidly, like stock exchange information
    Apart from the difficulties in managing and storing such data, it is difficult to query, analyze and visualize it. The characteristics of Big Data can be defined by four Vs:
    • Volume: It simply means a large volume of data that may span petabytes, exabytes and so on. However, it also varies from organization to organization what volume of data they consider to be Big Data.
    • Variety: As discussed above, Big Data is not limited to relational information or structured data. It can also include unstructured data like pictures, videos, text, audio, etc.
    • Velocity: Velocity means the speed at which data changes. The higher the velocity, the more efficient the system must be to capture and analyze the data. Missing any important point may lead to wrong analysis or may even result in loss.
    • Veracity: It has been recently added as the fourth V, and generally means truthfulness or adherence to the truth. In terms of Big Data, it is more of a challenge than a characteristic. It is difficult to ascertain the truth out of an enormous amount of data, or data that has high velocity. There are always chances of having imprecise and uncertain data. It is a challenging task to clean such data before it is analyzed.
    Big Data can be considered the next big thing in the IT sector in terms of innovation and development. If appropriate technologies are developed to analyze and use the information, it can be the driving force for almost all industrial segments, including retail, manufacturing, service, finance and healthcare. This will help them to automate business decisions, increase productivity, and innovate and develop new products.
    Thanks Jasjeet Singh for an excellent write-up. Jasjeet Singh is working as a Technical Manager with Koenig Solutions.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Big Data

    Read the article

  • The Connected Company: WebCenter Portal - Feedback - Analytics and Polls

    - by Michael Snow
    Guest Post by: Mitchell Palski, Staff Sales Consultant
    The importance of connecting peers has been widely recognized and socialized as a critical component of employee intranets. Organizations are striving to provide mediums for sharing knowledge and improving awareness across their enterprise. Indirectly, the socialization of your enterprise should lead to cost savings and improved product/service quality. However, the direct effects of connecting an organization's leadership with its employees are often overlooked. Oracle WebCenter Portal can help you bridge that gap by gathering implicit and explicit feedback.
    Implicit Feedback Through Usage Analytics
    Analytics allows administrators to track and analyze WebCenter Portal traffic and usage. Analytics provides the following basic functionality:
    • Usage Tracking Metrics: Analytics collects and reports metrics of common WebCenter Portal functions, including community and portlet traffic.
    • Behavior Tracking: Analytics can be used to analyze WebCenter Portal metrics to determine usage patterns, such as page visit duration and usage over time.
    • User Profile Correlation: Analytics can be used to correlate metric information with user profile information. Usage tracking reports can be viewed and filtered by user profile data such as country, company or title.
    Usage analytics help measure how users interact with website content, allowing your IT staff and business analysts to make informed decisions when planning development for your next intranet enhancement. For example:
    • If users are not accessing your Announcements page and missing critical information that they need to be aware of, you may elect to use graphical links on the home page to direct more users to that page. As a result, the number of employee help requests to HR decreases.
    • If users are not accessing your News page to read recent articles, you may elect to stop spending as much time updating the page with new stories and cut costs in your communications department.
    • If you notice a high volume of users accessing the Employee Dashboard page, your organization may decide to continue making personalization enhancements to that page and investing in the Portal tool that most users are accessing.
    Usage analytics aren't necessarily a new concept in the IT industry. What sets WebCenter Portal Analytics apart is:
    • Reports are tailored for WebCenter-specific tools
    • Reports can be added to a page as easily as a drag-and-drop
    Explicit Feedback Through Polls
    WebCenter Portal users can create, edit, take, and analyze online polls. With polls, you can survey your audience (such as their opinions and their experience level), check whether they can recall important information, and gather feedback and metrics. How many times have you been involved in a requirements discussion and someone has asked a question similar to "Well how do you know that no one likes our home page?" and the response is "Everyone says they hate it! That's all anyone complains about." No one has any measurable, quantifiable metric to gauge user satisfaction. Analytics measure usage, but your organization also needs to measure the quality of your portal as defined by the actual people that use it. With that information, your leadership can make informed decisions that will not only match usage patterns but also relate to employees on a personal level.
    The end result is a connection between employees and leadership that gives everyone in the organization a sense of ownership of their Portal, rather than the feeling that development decisions are restricted to leadership only. Polls can be created and edited through the Poll Manager, and Polls and View Poll Results can easily be added to a page through drag-and-drop.
    What did we learn? Being a "connected" company doesn't just mean helping employees connect with each other horizontally across your enterprise. It also means connecting those employees to the decisions that affect their everyday activities. Through WebCenter Portal Usage Analytics and Polls, any decision that is made to remove a Portal page, update a Portal page, or develop new Portal functionality can be justified by quantifiable metrics. Instead of fielding complaints and hearing that your employees don't have a voice, give those employees a voice and listen!

    Read the article

  • Need help implementing this algorithm with map Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I am searching for someone to implement it for me, but it is true, I really need a lot of help!!! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it like this. EDIT (I did not clarify this in the previous version): The data set I have is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought that I would first do the standard implementation, and then it would be more or less easy to do it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, shortly, is the pseudocode of my algorithm:
    LIST termList (there is a method that creates this list from a Lucene index)
    FOLDER topFolder
    INPUT topFolder
    IF it is a folder and not empty
        list files (there are 30 sub folders inside)
        FOR EACH sub folder
            GET file "CheckedFile.txt"
            analyze(CheckedFile)
        ENDFOR
    END IF
    Method ANALYZE(CheckedFile)
        read CheckedFile
        WHILE CheckedFile has next line
            GET line
            FOR (loops through termList)
                GET third word from line
                IF third word = term from list
                    append whole line to string buffer
                ENDIF
            ENDFOR
        END WHILE
        OUTPUT string buffer to file
    Also, as you can see, each time "analyze" is called, a new file has to be created. I understood that writing to many outputs is difficult in MapReduce??? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please please help.
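    One possible shape for this, offered as a minimal sketch rather than a full solution: let the mapper emit (term, line) pairs, and let the reducer write one output file per term via Hadoop's MultipleOutputs class, which addresses the "many outputs" concern. The job driver is omitted, and the term.list configuration key is an assumption (the term list is assumed small enough to pass through the job configuration):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    // Mapper: emits (term, line) whenever the third word of a line matches a term.
    public class TermMapper extends Mapper<LongWritable, Text, Text, Text> {
        private String[] terms;

        @Override
        protected void setup(Context context) {
            // Hypothetical: the term list is passed in via the job configuration.
            terms = context.getConfiguration().get("term.list", "").split(",");
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] words = value.toString().split("\\s+");
            if (words.length < 3) return;
            for (String term : terms) {
                if (words[2].equals(term)) {
                    context.write(new Text(term), value); // group matching lines by term
                }
            }
        }
    }

    // Reducer: writes each term's matching lines to its own output file.
    class TermReducer extends Reducer<Text, Text, NullWritable, Text> {
        private MultipleOutputs<NullWritable, Text> mos;

        @Override
        protected void setup(Context context) {
            mos = new MultipleOutputs<>(context);
        }

        @Override
        protected void reduce(Text term, Iterable<Text> lines, Context context)
                throws IOException, InterruptedException {
            for (Text line : lines) {
                mos.write(NullWritable.get(), line, term.toString()); // one file per term
            }
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            mos.close();
        }
    }

    The shuffle phase's grouping by term replaces the per-folder loop entirely, and a line matching several terms is emitted once per term, which matches the original algorithm's per-term output files.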

    Read the article
