Search Results

Search found 62870 results on 2515 pages for 'usage data'.


  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents spend manual efforts cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches and can also be entered as "'' ". These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses we don't reside, and using channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: Customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts are stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. 
Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

Characteristics of Stellar MDM

When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:

- enterprise-grade MDM performance
- complete technology that can be rapidly deployed and addresses multiple business issues
- end-to-end MDM process management with data quality monitoring and assurance
- pre-built, business-relevant MDM applications with data stores and workflows

These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.

Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.
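The cleansing and matching work described in points 4 and 5 lends itself to a small illustration. The following is a minimal, hypothetical Python sketch, not part of the article and no substitute for an MDM hub: it expands a few address abbreviations and flags likely duplicate customer records with a fuzzy match from the standard library.

```python
import difflib

# Hypothetical abbreviation map used to standardize address strings.
ABBREVIATIONS = {"av": "avenue", "ave": "avenue", "st": "street", "rd": "road"}

def normalize(address: str) -> str:
    """Lower-case an address and expand common abbreviations."""
    words = address.lower().replace(".", "").split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

def likely_duplicates(records, threshold=0.9):
    """Yield pairs of records whose normalized addresses are nearly identical."""
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            ratio = difflib.SequenceMatcher(
                None, normalize(a["address"]), normalize(b["address"])
            ).ratio()
            if ratio >= threshold:
                yield a, b, ratio

customers = [
    {"id": 1, "name": "Joe Smith", "address": "12 Rocky Av."},
    {"id": 2, "name": "Joseph Smith", "address": "12 Rocky Avenue"},
]
for a, b, score in likely_duplicates(customers):
    print(f"possible duplicate: {a['id']} and {b['id']} (score {score:.2f})")
```

A real MDM hub adds survivorship rules, cross-references back to the source systems, and ongoing governance on top of this kind of matching, which is exactly the gap the survey highlights.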

    Read the article

  • httpd high cpu usage slowing down server response

    - by max
    My client has an image-sharing website with about 100,000 visitors per day. It has slowed down considerably since this morning, and when I checked the processes I noticed high CPU usage from httpd. Someone has suggested a DDoS attack, but I'm not a webmaster and I have no idea what's going on.

top:

top - 20:13:30 up 5:04, 4 users, load average: 4.56, 4.69, 4.59
Tasks: 284 total, 3 running, 281 sleeping, 0 stopped, 0 zombie
Cpu(s): 12.1%us, 0.9%sy, 1.7%ni, 69.0%id, 16.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16037152k total, 15875096k used, 162056k free, 360468k buffers
Swap: 4194288k total, 888k used, 4193400k free, 14050008k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4151 apache 20 0 277m 84m 3784 R 50.2 0.5 0:01.98 httpd
4115 apache 20 0 210m 16m 4480 S 18.3 0.1 0:00.60 httpd
12885 root 39 19 4296 692 308 S 13.0 0.0 11:09.53 gzip
4177 apache 20 0 214m 20m 3700 R 12.3 0.1 0:00.37 httpd
2219 mysql 20 0 4257m 198m 5668 S 11.0 1.3 42:49.70 mysqld
3691 apache 20 0 206m 14m 6416 S 1.7 0.1 0:03.38 httpd
3934 apache 20 0 211m 17m 4836 S 1.0 0.1 0:03.61 httpd
4098 apache 20 0 209m 17m 3912 S 1.0 0.1 0:04.17 httpd
4116 apache 20 0 211m 17m 4476 S 1.0 0.1 0:00.43 httpd
3867 apache 20 0 217m 23m 4672 S 0.7 0.1 1:03.87 httpd
4146 apache 20 0 209m 15m 3628 S 0.7 0.1 0:00.02 httpd
4149 apache 20 0 209m 15m 3616 S 0.7 0.1 0:00.02 httpd
12884 root 39 19 22336 2356 944 D 0.7 0.0 0:19.21 tar
4054 apache 20 0 206m 12m 4576 S 0.3 0.1 0:00.32 httpd

Another top:

top - 15:46:45 up 5:08, 4 users, load average: 5.02, 4.81, 4.64
Tasks: 288 total, 6 running, 281 sleeping, 0 stopped, 1 zombie
Cpu(s): 18.4%us, 0.9%sy, 2.3%ni, 56.5%id, 21.8%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16037152k total, 15792196k used, 244956k free, 360924k buffers
Swap: 4194288k total, 888k used, 4193400k free, 13983368k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4622 apache 20 0 209m 16m 3868 S 54.2 0.1 0:03.99 httpd
4514 apache 20 0 213m 20m 3924 R 50.8 0.1 0:04.93 httpd
4627 apache 20 0 221m 27m 4560 R 18.9 0.2 0:01.20 httpd
12885 root 39 19 4296 692 308 S 18.9 0.0 11:51.79 gzip
2219 mysql 20 0 4257m 199m 5668 S 18.3 1.3 43:19.04 mysqld
4512 apache 20 0 227m 33m 4736 R 5.6 0.2 0:01.93 httpd
4520 apache 20 0 213m 19m 4640 S 1.3 0.1 0:01.48 httpd
4590 apache 20 0 212m 19m 3932 S 1.3 0.1 0:00.06 httpd
4573 apache 20 0 210m 16m 3556 R 1.0 0.1 0:00.03 httpd
4562 root 20 0 15164 1388 952 R 0.7 0.0 0:00.08 top
98 root 20 0 0 0 0 S 0.3 0.0 0:04.89 kswapd0
100 root 39 19 0 0 0 S 0.3 0.0 0:02.85 khugepaged
4579 apache 20 0 209m 16m 3900 S 0.3 0.1 0:00.83 httpd
4637 apache 20 0 209m 15m 3668 S 0.3 0.1 0:00.03 httpd

ps aux:
[root@server ~]# ps aux | grep httpd root 2236 0.0 0.0 207524 10124 ? Ss 15:09 0:03 /usr/sbin/http d -k start -DSSL apache 3087 2.7 0.1 226968 28232 ? S 20:04 0:06 /usr/sbin/http d -k start -DSSL apache 3170 2.6 0.1 221296 22292 ? R 20:05 0:05 /usr/sbin/http d -k start -DSSL apache 3171 9.0 0.1 225044 26768 ? R 20:05 0:17 /usr/sbin/http d -k start -DSSL apache 3188 1.5 0.1 223644 24724 ? S 20:05 0:03 /usr/sbin/http d -k start -DSSL apache 3197 2.3 0.1 215908 17520 ? S 20:05 0:04 /usr/sbin/http d -k start -DSSL apache 3198 1.1 0.0 211700 13000 ? S 20:05 0:02 /usr/sbin/http d -k start -DSSL apache 3272 2.4 0.1 219960 21540 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3273 2.0 0.0 211600 12804 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3279 3.7 0.1 229024 29900 ? S 20:06 0:05 /usr/sbin/http d -k start -DSSL apache 3280 1.2 0.0 0 0 ? Z 20:06 0:01 [httpd] <defun ct> apache 3285 2.9 0.1 218532 21604 ?
S 20:06 0:04 /usr/sbin/http d -k start -DSSL apache 3287 30.5 0.4 265084 65948 ? R 20:06 0:43 /usr/sbin/http d -k start -DSSL apache 3297 1.9 0.1 216068 17332 ? S 20:06 0:02 /usr/sbin/http d -k start -DSSL apache 3342 2.7 0.1 216716 17828 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3356 1.6 0.1 217244 18296 ? S 20:07 0:01 /usr/sbin/http d -k start -DSSL apache 3365 6.4 0.1 226044 27428 ? S 20:07 0:06 /usr/sbin/http d -k start -DSSL apache 3396 0.0 0.1 213844 16120 ? S 20:07 0:00 /usr/sbin/http d -k start -DSSL apache 3399 5.8 0.1 215664 16772 ? S 20:07 0:05 /usr/sbin/http d -k start -DSSL apache 3422 0.7 0.1 214860 17380 ? S 20:07 0:00 /usr/sbin/http d -k start -DSSL apache 3435 3.3 0.1 216220 17460 ? S 20:07 0:02 /usr/sbin/http d -k start -DSSL apache 3463 0.1 0.0 212732 15076 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3492 0.0 0.0 207660 7552 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3493 1.4 0.1 218092 19188 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3500 1.9 0.1 224204 26100 ? R 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3501 1.7 0.1 216916 17916 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3502 0.0 0.0 207796 7732 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3505 0.0 0.0 207660 7548 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3529 0.0 0.0 207660 7524 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3531 4.0 0.1 216180 17280 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3532 0.0 0.0 207656 7464 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3543 1.4 0.1 217088 18648 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3544 0.0 0.0 207656 7548 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3545 0.0 0.0 207656 7560 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3546 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3547 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3548 2.3 0.1 216904 17888 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3550 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3551 0.0 0.0 207660 7536 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3552 0.2 0.0 214104 15972 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3553 6.5 0.1 216740 17712 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3554 6.3 0.1 216156 17260 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3555 0.0 0.0 207796 7716 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3556 1.8 0.0 211588 12580 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3557 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3565 0.0 0.0 207660 7520 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3570 0.0 0.0 207660 7516 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3571 0.0 0.0 207660 7504 ? 
S 20:08 0:00 /usr/sbin/http d -k start -DSSL root 3577 0.0 0.0 103316 860 pts/2 S+ 20:08 0:00 grep httpd httpd error log [Mon Jul 01 18:53:38 2013] [error] [client 2.178.12.67] request failed: error reading the headers, referer: http://akstube.com/image/show/27023/%D9%86%DB%8C%D9%88%D8%B4%D8%A7-%D8%B6%DB%8C%D8%BA%D9%85%DB%8C-%D9%88-%D8%AE%D9%88%D8%A7%D9%87%D8%B1-%D9%88-%D9%87%D9%85%D8%B3%D8%B1%D8%B4 [Mon Jul 01 18:55:33 2013] [error] [client 91.229.215.240] request failed: error reading the headers, referer: http://akstube.com/image/show/44924 [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] Invalid method in request [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] File does not exist: /var/www/html/501.shtml [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] client denied by server configuration: /var/www/html/server-status [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] File does not exist: /var/www/html/403.shtml [Mon Jul 01 19:23:57 2013] [error] [client 151.242.14.31] request failed: error reading the headers [Mon Jul 01 19:37:16 2013] [error] [client 2.190.16.65] request failed: error reading the headers [Mon Jul 01 19:56:00 2013] [error] [client 151.242.14.31] request failed: error reading the headers Not a JPEG file: starts with 0x89 0x50 also there is lots of these in the messages log Jul 1 20:15:47 server named[2426]: client 203.88.6.9#11926: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:47 server named[2426]: client 203.88.6.9#26255: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:48 server named[2426]: client 203.88.6.9#20093: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:48 server named[2426]: client 203.88.6.9#8672: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:07 server named[2426]: client 203.88.6.9#39352: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:08 server named[2426]: client 203.88.6.9#25382: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:08 server named[2426]: client 203.88.6.9#9064: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.23.9#35375: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.6.9#61932: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.23.9#4423: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.6.9#40229: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#46128: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#62128: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#35240: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#36774: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#28361: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#14970: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20216: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.10#31794: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#23042: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#11333: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 
203.88.23.10#41807: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20092: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#43526: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.9#17173: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.9#62412: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#63961: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#64345: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#31030: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17098: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17197: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#18114: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#59138: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:17 server named[2426]: client 203.88.6.9#28715: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:48:33 server named[2426]: client 203.88.23.9#26355: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#34473: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#62658: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#51631: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:35 server named[2426]: client 203.88.23.9#54701: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:36 server named[2426]: client 203.88.6.10#63694: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:36 server named[2426]: client 203.88.6.10#18203: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:37 server named[2426]: client 203.88.6.10#9029: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:38 server named[2426]: client 203.88.6.10#58981: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:38 server named[2426]: client 203.88.6.10#29321: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:47 server named[2426]: client 119.160.127.42#42355: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:49 server named[2426]: client 119.160.120.42#46285: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:53 server named[2426]: client 119.160.120.42#30696: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:54 server named[2426]: client 119.160.127.42#14038: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:55 server named[2426]: client 119.160.120.42#33586: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:56 server named[2426]: client 119.160.127.42#55114: query (cache) 'xxxmaza.com/A/IN' denied
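One quick way to test the DDoS theory is to count requests per client IP in the Apache access log. The sketch below assumes a combined-format log at /var/log/httpd/access_log (adjust the path for your setup) and simply reports the noisiest clients.

```python
from collections import Counter

LOG_PATH = "/var/log/httpd/access_log"  # assumed location; adjust as needed

def top_clients(path, n=20):
    """Count requests per client IP (first field of a combined-format log line)."""
    counts = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            parts = line.split(None, 1)
            if parts:
                counts[parts[0]] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for ip, hits in top_clients(LOG_PATH):
        print(f"{hits:8d}  {ip}")
```

If one or two addresses dominate the count, rate limiting or firewalling those clients is usually a quicker fix than tuning httpd itself.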

    Read the article

  • Address Regulatory Mandates for Data Encryption Without Changing Your Applications

    - by Troy Kitch
    The Payment Card Industry Data Security Standard, US state-level data breach laws, and numerous data privacy regulations worldwide all call for data encryption to protect personally identifiable information (PII). However, encrypting PII in applications requires costly and complex application changes. Fortunately, since this data typically resides in the application database, Oracle Advanced Security lets the Oracle database encrypt PII transparently, without any application changes. In this ISACA webinar, learn how Oracle Advanced Security offers complete encryption for data at rest, in transit, and on backups, along with built-in key management, to help organizations meet regulatory requirements and save money. You will also hear from TransUnion Interactive, the consumer subsidiary of TransUnion, a global leader in credit and information management that maintains credit histories on an estimated 500 million consumers across the globe, about how they addressed PCI DSS encryption requirements using Oracle Database 11g with Oracle Advanced Security. Register to watch the webinar now.

    Read the article

  • Using Hadoop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I’ll cover the options at a general level below)  First Option: Windows Azure HDInsight Service  Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post)   This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection.   You can read more about this option here:  http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx Second Option: Microsoft HDInsight Server Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure.   This option is useful if you want to  have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx Third Option (unsupported): Installation on Windows Azure Virtual Machines  Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that.   Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. 
But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW Choosing the right option Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense.  My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx  
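If you do experiment with any of these options, Hadoop Streaming keeps the first steps simple: a job is just a mapper and a reducer that read stdin and write stdout. The classic word-count sketch below is illustrative only; the exact submission command depends on which of the distributions above you install.

```python
#!/usr/bin/env python
# Minimal Hadoop Streaming word count: run with "mapper" or "reducer" as argv[1].
# Streaming pipes each input split through the mapper, sorts by key,
# then pipes the sorted stream through the reducer.
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer():
    current, total = None, 0
    for line in sys.stdin:
        if not line.strip():
            continue
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "mapper" else reducer()
```

Submitting it amounts to pointing the Hadoop streaming jar at this script for both the -mapper and -reducer arguments; the jar location and storage paths differ between the HDInsight Service and a local HDInsight Server install.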

    Read the article

  • Oracle - A Leader in Gartner's MQ for Master Data Management for Customer Data

    - by Mala Narasimharajan
    The Gartner MQ report for Master Data Management of Customer Data Solutions has been released, and we're proud to say that Oracle is in the leaders' quadrant. Here's a snippet from the report itself: "Oracle has a strong, though complex, portfolio of domain-specific MDM products that include prepackaged data models. Gartner estimates that Oracle now has over 1,500 licensed MDM customers, including 650 customers managing customer data. The MDM portfolio includes three products that address MDM of customer data solution needs: Oracle Fusion Customer Hub (FCH), Oracle CDH and Oracle Siebel UCM. These three MDM products are positioned for different segments of the market and Oracle is progressively moving all three products onto a common MDM technology platform..." (Gartner, Oct 18, 2012) For more information on Oracle's solutions for customer data in Master Data Management, click here.

    Read the article

  • Almost Realtime Data and Web application

    - by Chris G.
    I have a computer that is recording 100 different data points into an OPC server. I've written a simple OPC client that can read all of this data. I have a front-end website on a different network that I would like to have consume this data. I could easily set the OPC client to send the data to a SQL server and have the website read from it, but that would be a lot of writes. If I wanted the data to be updated every 10 seconds I'd be writing to the database every 10 seconds. (I could probably just serialize the 100 points to get 1 write per 10 seconds, but that would also limit my ability to search the data later.) This solution wouldn't scale very well. If I had 100 of these computers the situation would quickly grow out of hand. Obviously I am well out of my league here and I have no experience working with a large amount of data like this. What are my options and what should I research?
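One common pattern, shown as a rough sketch below with SQLite standing in for whatever database you end up using, is to buffer the readings in memory and flush them as one batched insert per interval, so 100 points cost a single write per cycle while each point stays individually queryable. The read_opc_points function is a placeholder for your OPC client call.

```python
import sqlite3
import time

# SQLite stands in for SQL Server here; the batching idea is the same.
conn = sqlite3.connect("readings.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS readings (ts REAL, point TEXT, value REAL)"
)

def read_opc_points():
    """Placeholder for the OPC client call that returns {point_name: value}."""
    return {f"point_{i}": float(i) for i in range(100)}

INTERVAL = 10  # seconds between flushes

while True:
    ts = time.time()
    batch = [(ts, name, value) for name, value in read_opc_points().items()]
    # One executemany call writes all 100 points in a single transaction.
    with conn:
        conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", batch)
    time.sleep(INTERVAL)
```

With 100 of these computers, the same pattern usually moves behind a message queue or a collector service, but the per-interval batching idea is unchanged.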

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days, data integration is often a key aspect of business success. For business analysts it's very important to get integrated data from various sources, such as relational databases, cloud CRMs, and so on, in order to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia. Skyvia is a cloud data integration service that integrates data across cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs. You can use the Skyvia data import tools to load data from various sources into SQL Server (and SQL Azure). Skyvia supports cloud CRMs such as Salesforce and Microsoft Dynamics CRM and databases such as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services such as Dropbox, Box, and Google Drive, or on FTP servers.

When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data stores. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers, which allows it to synchronize only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without adding any additional columns or changing the data structure. The only requirement for synchronization is that primary keys must be autogenerated. With Skyvia it's not necessary for data to have the same structure in the integrated data stores. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures, with support for complex mathematical and string expressions, lookups, and so on when mapping data. You may use data splitting – loading data from a single CSV file or source table into multiple related target tables. Or you may load data from several source CSV files or tables into several related target tables. In each case Skyvia preserves data relations and builds the corresponding relations between the target data automatically.

When you work a lot with cloud CRM data, the native CRM reporting and analysis tools may not be enough for you, while there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data. In this case you can use the Skyvia data replication tools, which let you quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; the target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.

Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can either be downloaded manually or loaded to cloud file storage services or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently, registration and use of Skyvia are completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing
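Skyvia's change tracking is internal to the service, but the general idea it describes, copying only the rows that changed since the last run and matching them on primary key, can be sketched generically. The snippet below is a hypothetical illustration using SQLite and a trigger-maintained modified_at column; it is not Skyvia code.

```python
import sqlite3
import time

# Hypothetical illustration of "copy only changed rows, matched on primary key".
# SQLite stands in for the real source and target databases.
src = sqlite3.connect("source.db")   # assumed to contain customers(id, name, modified_at)
dst = sqlite3.connect("target.db")

dst.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
dst.execute("CREATE TABLE IF NOT EXISTS sync_state (last_run REAL)")

last_run = dst.execute("SELECT MAX(last_run) FROM sync_state").fetchone()[0] or 0.0

# The source is assumed to maintain modified_at via a trigger, as the article describes.
changed = src.execute(
    "SELECT id, name FROM customers WHERE modified_at > ?", (last_run,)
).fetchall()

with dst:  # one transaction for the whole batch
    dst.executemany("INSERT OR REPLACE INTO customers (id, name) VALUES (?, ?)", changed)
    dst.execute("INSERT INTO sync_state VALUES (?)", (time.time(),))

print(f"synchronized {len(changed)} changed rows")
```

A managed service layers mapping expressions, conflict handling, and scheduling on top of this basic loop, which is what the article is describing.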

    Read the article

  • SQL SERVER – Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

    - by pinaldave
    Let me paint a picture of everyday life for you.  Let’s say you and your wife both have address books for your groups of friends.  There is definitely overlap between them, so that you both have the addresses for your mutual friends, and there are addresses that only you know, and some only she knows.  They also might be organized differently.  You might list your friend under “J” for “Joe” or even under “W” for “Work,” while she might list him under “S” for “Joe Smith” or under your name because he is your friend.  If you happened to trade, neither of you would be able to find anything! This is where data management would be very important.  If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them.  You would also make sure that poor Joe doesn’t get entered twice under “J” and under “S.” This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists.  Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules. I’m sure that my readers can figure out where I am going with this.  What is SQL Server but a computerized way to organize data?  And Microsoft is making it easier and easier to get all your “addresses” into one place.  In the  2008 version of SQL they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version. MDM was hailed as a major improvement for business intelligence.  You might not think that an organizational system is terribly exciting, but think about the kind of “address books” a company might have.  Many companies have lots of important information, like addresses, credit card numbers, purchase history, and so much more.  To organize all this efficiently so that customers are well cared for and properly billed (only once, not never or multiple times!) is a major part of business intelligence. MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and all placed in one database so that employees don’t have to search high and low and waste their time. MDM also has operational MDM functions.  This is not a redundancy.  Operational MDM means that when one employee updates one bit of information in the database, for example – updating a new address for a customer, operational MDM ensures that this address is updated throughout the system so that all departments will have the correct information. Another cool thing about MDM is that it features Master Data Services Configuration Manager, which is exactly what it sounds like.  It has a built-in “helper” that lets you set up your database quickly, easily, and with the correct configurations.  While talking about cool features, I can’t skip over the add-in for Excel.  This allows you to link certain data to Excel files for easier sharing and uploading. In summary, I want to emphasize that the scariest part of the database is slowly disappearing.  Everyone knows that a database – one consolidated area for all your data – is a good idea, but the idea of setting one up is daunting.  But SQL Server is making data management easier and easier with features like Master Data Services (MDS). 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Master Data Services, MDM

    Read the article

  • Data breakpoints to find points where data gets broken

    - by raccoon_tim
    When working with a large code base, finding the reasons for bizarre bugs can often be like finding a needle in a haystack. Finding out why an object gets corrupted for no apparent reason can be quite daunting, especially when it seems to happen randomly and totally out of context.

Scenario

Take the following scenario as an example. You have defined a class that contains an array of characters that is 256 characters long. You now implement a method for filling this buffer with a string passed as an argument. At this point you expect the buffer to always be 256 characters long. At some point you notice that you require another character buffer and you add that after the previous one in the class definition. You now figure that you don't need the 256 characters that the first member can hold and you shorten it to 128 to conserve space. At this point you should start thinking that you also have to modify the method defined above to safeguard against buffer overflow. It so happens, however, that in this not so perfect world this does not cross your mind. Buffer overflow is one of the most frequent sources of errors in a piece of software and often one of the most difficult ones to detect, especially when data is read from an outside source. Many mass copy functions provided by the C run-time have versions with boundary checking (defined with the _s suffix), but they cannot guard against hard-coded buffer lengths that at some point get changed.

Finding the bug

Getting back to the scenario, you're now wondering why the second string gets modified with data that makes no sense at all. Luckily, Visual Studio provides you with a tool to help you find just these kinds of errors. It's called data breakpoints. To add a data breakpoint, you first run your application in debug mode or attach to it in the usual way, and then go to Debug, select New Breakpoint and New Data Breakpoint. In the popup that opens, you can type in the memory address and the number of bytes you wish to monitor. You can also use an expression here, but it's often difficult to come up with an expression for data in an object allocated on the heap when not in the context of a certain stack frame. There are a couple of things to note about data breakpoints, however. First of all, Visual Studio supports a maximum of four data breakpoints at any given time. Another important thing to notice is that some C run-time functions modify memory in kernel space, which does not trigger the data breakpoint. For instance, calling ReadFile on a buffer that is monitored by a data breakpoint will not trigger the breakpoint. The application will now break when the memory you told it to monitor gets modified. Often you might immediately spot the issue, but at the very least this feature can point you in the right direction in the search for the real reason why the memory gets inadvertently modified.

Conclusions

Data breakpoints are a great feature, especially when doing a lot of low-level operations where multiple locations modify the same data. With the exception of some special cases, like kernel memory modification, you can use them whenever you need to check when memory at a certain location gets changed, on purpose or inadvertently.
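Visual Studio's data breakpoints are specific to native code, but the underlying trick, trapping every write to a piece of data and stopping when it happens, can be mimicked elsewhere. Purely as an illustration of that idea (not of the Visual Studio feature itself), here is a Python sketch that prints a stack trace whenever a watched attribute is modified.

```python
import traceback

class WatchedAttr:
    """Print who wrote to selected attributes -- a poor man's data breakpoint."""

    _watched = {"buffer"}  # attribute names to monitor

    def __setattr__(self, name, value):
        if name in self._watched:
            print(f"write to {name!r} = {value!r} from:")
            traceback.print_stack(limit=5)
        super().__setattr__(name, value)

class Record(WatchedAttr):
    def __init__(self):
        self.buffer = ""      # triggers the watch on construction
        self.comment = ""     # not watched

def corrupting_helper(rec):
    rec.buffer = "x" * 300    # the unexpected write we want to catch

rec = Record()
corrupting_helper(rec)
```

The printed stack traces play the same role as the debugger break: they tell you exactly which code path performed the write.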

    Read the article

  • Metro: Declarative Data Binding

    - by Stephen.Walther
    The goal of this blog post is to describe how declarative data binding works in the WinJS library. In particular, you learn how to use both the data-win-bind and data-win-bindsource attributes. You also learn how to use calculated properties and converters to format the value of a property automatically when performing data binding. By taking advantage of WinJS data binding, you can use the Model-View-ViewModel (MVVM) pattern when building Metro style applications with JavaScript. By using the MVVM pattern, you can prevent your JavaScript code from spinning into chaos. The MVVM pattern provides you with a standard pattern for organizing your JavaScript code which results in a more maintainable application. Using Declarative Bindings You can use the data-win-bind attribute with any HTML element in a page. The data-win-bind attribute enables you to bind (associate) an attribute of an HTML element to the value of a property. Imagine, for example, that you want to create a product details page. You want to show a product object in a page. In that case, you can create the following HTML page to display the product details: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Product Details</h1> <div class="field"> Product Name: <span data-win-bind="innerText:name"></span> </div> <div class="field"> Product Price: <span data-win-bind="innerText:price"></span> </div> <div class="field"> Product Picture: <br /> <img data-win-bind="src:photo;alt:name" /> </div> </body> </html> The HTML page above contains three data-win-bind attributes – one attribute for each product property displayed. You use the data-win-bind attribute to set properties of the HTML element associated with the data-win-attribute. The data-win-bind attribute takes a semicolon delimited list of element property names and data source property names: data-win-bind=”elementPropertyName:datasourcePropertyName; elementPropertyName:datasourcePropertyName;…” In the HTML page above, the first two data-win-bind attributes are used to set the values of the innerText property of the SPAN elements. The last data-win-bind attribute is used to set the values of the IMG element’s src and alt attributes. By the way, using data-win-bind attributes is perfectly valid HTML5. The HTML5 standard enables you to add custom attributes to an HTML document just as long as the custom attributes start with the prefix data-. So you can add custom attributes to an HTML5 document with names like data-stephen, data-funky, or data-rover-dog-is-hungry and your document will validate. The product object displayed in the page above with the data-win-bind attributes is created in the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var product = { name: "Tesla", price: 80000, photo: "/images/TeslaPhoto.png" }; WinJS.Binding.processAll(null, product); } }; app.start(); })(); In the code above, a product object is created with a name, price, and photo property. 
The WinJS.Binding.processAll() method is called to perform the actual binding (Don’t confuse WinJS.Binding.processAll() and WinJS.UI.processAll() – these are different methods). The first parameter passed to the processAll() method represents the root element for the binding. In other words, binding happens on this element and its child elements. If you provide the value null, then binding happens on the entire body of the document (document.body). The second parameter represents the data context. This is the object that has the properties which are displayed with the data-win-bind attributes. In the code above, the product object is passed as the data context parameter. Another word for data context is view model.  Creating Complex View Models In the previous section, we used the data-win-bind attribute to display the properties of a simple object: a single product. However, you can use binding with more complex view models including view models which represent multiple objects. For example, the view model in the following default.js file represents both a customer and a product object. Furthermore, the customer object has a nested address object: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var viewModel = { customer: { firstName: "Fred", lastName: "Flintstone", address: { street: "1 Rocky Way", city: "Bedrock", country: "USA" } }, product: { name: "Bowling Ball", price: 34.55 } }; WinJS.Binding.processAll(null, viewModel); } }; app.start(); })(); The following page displays the customer (including the customer address) and the product. Notice that you can use dot notation to refer to child objects in a view model such as customer.address.street. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Customer Details</h1> <div class="field"> First Name: <span data-win-bind="innerText:customer.firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:customer.lastName"></span> </div> <div class="field"> Address: <address> <span data-win-bind="innerText:customer.address.street"></span> <br /> <span data-win-bind="innerText:customer.address.city"></span> <br /> <span data-win-bind="innerText:customer.address.country"></span> </address> </div> <h1>Product</h1> <div class="field"> Name: <span data-win-bind="innerText:product.name"></span> </div> <div class="field"> Price: <span data-win-bind="innerText:product.price"></span> </div> </body> </html> A view model can be as complicated as you need and you can bind the view model to a view (an HTML document) by using declarative bindings. Creating Calculated Properties You might want to modify a property before displaying the property. For example, you might want to format the product price property before displaying the property. You don’t want to display the raw product price “80000”. Instead, you want to display the formatted price “$80,000”. You also might need to combine multiple properties. 
For example, you might need to display the customer full name by combining the values of the customer first and last name properties. In these situations, it is tempting to call a function when performing binding. For example, you could create a function named fullName() which concatenates the customer first and last name. Unfortunately, the WinJS library does not support the following syntax: <span data-win-bind=”innerText:fullName()”></span> Instead, in these situations, you should create a new property in your view model that has a getter. For example, the customer object in the following default.js file includes a property named fullName which combines the values of the firstName and lastName properties: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var customer = { firstName: "Fred", lastName: "Flintstone", get fullName() { return this.firstName + " " + this.lastName; } }; WinJS.Binding.processAll(null, customer); } }; app.start(); })(); The customer object has a firstName, lastName, and fullName property. Notice that the fullName property is defined with a getter function. When you read the fullName property, the values of the firstName and lastName properties are concatenated and returned. The following HTML page displays the fullName property in an H1 element. You can use the fullName property in a data-win-bind attribute in exactly the same way as any other property. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1 data-win-bind="innerText:fullName"></h1> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> </body> </html> Creating a Converter In the previous section, you learned how to format the value of a property by creating a property with a getter. This approach makes sense when the formatting logic is specific to a particular view model. If, on the other hand, you need to perform the same type of formatting for multiple view models then it makes more sense to create a converter function. A converter function is a function which you can apply whenever you are using the data-win-bind attribute. Imagine, for example, that you want to create a general function for displaying dates. You always want to display dates using a short format such as 12/25/1988. The following JavaScript file – named converters.js – contains a shortDate() converter: (function (WinJS) { var shortDate = WinJS.Binding.converter(function (date) { return date.getMonth() + 1 + "/" + date.getDate() + "/" + date.getFullYear(); }); // Export shortDate WinJS.Namespace.define("MyApp.Converters", { shortDate: shortDate }); })(WinJS); The file above uses the Module Pattern, a pattern which is used through the WinJS library. 
To learn more about the Module Pattern, see my blog entry on namespaces and modules: http://stephenwalther.com/blog/archive/2012/02/22/windows-web-applications-namespaces-and-modules.aspx The file contains the definition for a converter function named shortDate(). This function converts a JavaScript date object into a short date string such as 12/1/1988. The converter function is created with the help of the WinJS.Binding.converter() method. This method takes a normal function and converts it into a converter function. Finally, the shortDate() converter is added to the MyApp.Converters namespace. You can call the shortDate() function by calling MyApp.Converters.shortDate(). The default.js file contains the customer object that we want to bind. Notice that the customer object has a firstName, lastName, and birthday property. We will use our new shortDate() converter when displaying the customer birthday property: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var customer = { firstName: "Fred", lastName: "Flintstone", birthday: new Date("12/1/1988") }; WinJS.Binding.processAll(null, customer); } }; app.start(); })(); We actually use our shortDate converter in the HTML document. The following HTML document displays all of the customer properties: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script type="text/javascript" src="js/converters.js"></script> </head> <body> <h1>Customer Details</h1> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> <div class="field"> Birthday: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> </div> </body> </html> Notice the data-win-bind attribute used to display the birthday property. It looks like this: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> The shortDate converter is applied to the birthday property when the birthday property is bound to the SPAN element’s innerText property. Using data-win-bindsource Normally, you pass the view model (the data context) which you want to use with the data-win-bind attributes in a page by passing the view model to the WinJS.Binding.processAll() method like this: WinJS.Binding.processAll(null, viewModel); As an alternative, you can specify the view model declaratively in your markup by using the data-win-datasource attribute. 
For example, the following default.js script exposes a view model with the fully-qualified name of MyWinWebApp.viewModel: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { // Create view model var viewModel = { customer: { firstName: "Fred", lastName: "Flintstone" }, product: { name: "Bowling Ball", price: 12.99 } }; // Export view model to be seen by universe WinJS.Namespace.define("MyWinWebApp", { viewModel: viewModel }); // Process data-win-bind attributes WinJS.Binding.processAll(); } }; app.start(); })(); In the code above, a view model which represents a customer and a product is exposed as MyWinWebApp.viewModel. The following HTML page illustrates how you can use the data-win-bindsource attribute to bind to this view model: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Customer Details</h1> <div data-win-bindsource="MyWinWebApp.viewModel.customer"> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> </div> <h1>Product</h1> <div data-win-bindsource="MyWinWebApp.viewModel.product"> <div class="field"> Name: <span data-win-bind="innerText:name"></span> </div> <div class="field"> Price: <span data-win-bind="innerText:price"></span> </div> </div> </body> </html> The data-win-bindsource attribute is used twice in the page above: it is used with the DIV element which contains the customer details and it is used with the DIV element which contains the product details. If an element has a data-win-bindsource attribute then all of the child elements of that element are affected. The data-win-bind attributes of all of the child elements are bound to the data source represented by the data-win-bindsource attribute. Summary The focus of this blog entry was data binding using the WinJS library. You learned how to use the data-win-bind attribute to bind the properties of an HTML element to a view model. We also discussed several advanced features of data binding. We examined how to create calculated properties by including a property with a getter in your view model. We also discussed how you can create a converter function to format the value of a view model property when binding the property. Finally, you learned how to use the data-win-bindsource attribute to specify a view model declaratively.

    Read the article

  • SQL Server Management Data Warehouse - quick tour on setting health monitoring policies

    - by ssqa.net
    Profiler, Perfmon, DMVs, and scripts are the classic tools a DBA uses to monitor the SQL Server environment. Alongside these tools, SQL Server 2008 adds the policy-based management (PBM) framework and the management data warehouse (MDW), a relational database that holds the data collected from each server configured as a data collection target. This data is used to generate the reports for the System Data collection sets, and can also be used to create custom reports. .....(read more)

    Read the article

  • Data Flow Diagrams - Difference between Lines and Arrows

    - by Howdy_McGee
    I'm currently working with Visio to create Data Flow Diagrams for a System Analysis and Design class, but I'm unsure what the difference between ------ and ------> is. I can connect two shapes (process, entity, data store) with a plain line, but does that single line mean data flow? Do I need to explicitly use the data flow arrow to show which way data is flowing? (There don't seem to be tags for this topic; maybe I'm in the wrong place?)

    Read the article

  • Big Data Videos

    - by Jean-Pierre Dijcks
    You can view them all on YouTube using the following links: Overview for the Boss: http://youtu.be/ikJyrmKdJWc Hadoop: http://youtu.be/acWtid-OOWM Acquiring Big Data: http://youtu.be/TfuhuA_uaho Organizing Big Data: http://youtu.be/IC6jVRO2Hq4 Analyzing Big Data: http://youtu.be/2yf_jrBhz5w These videos are a great place to start learning about big data, the value it can bring to your organisation and how Oracle can help you start working with big data today.

    Read the article

  • SQL Server and the XML Data Type : Data Manipulation

    The introduction of the xml data type, with its own set of methods for processing xml data, made it possible for SQL Server developers to create columns and variables of the type xml. Deanna Dicken examines the modify() method, which provides for data manipulation of the XML data stored in the xml data type via XML DML statements.

    Read the article

  • How to restore/change Alt+Tab behaviour/ram usage and a few other things after Ubuntu upgrade from 11.04 to 11.10?

    - by fiktor
    I use Ubuntu for programming. I recently updated it from 11.04 to 11.10, and there are some things I don't like in the new version of the Unity desktop interface. I don't actually know whether it is hard to restore the previous behavior or not, and if it is not, where I should look to do that. I know a bit of programming, but I really don't know much about Linux settings. I used to have 3-6 terminal windows and switch between them with Alt+Tab and Shift+Alt+Tab. I liked half-transparent terminal windows, since with them I could open a web page with some instructions in Firefox, press Alt+Tab, and type commands in a console window while still being able to read the text on the web page under it. Now I have problems with my usual work style because of the following "negative" changes:
    1. Alt+Tab shows just one icon for all console windows. If I wait some time it does show all the windows, but I don't like to wait; I prefer to remember the order of the windows and press Alt+Tab as many times as needed to reach the right one.
    2. Alt+Shift+Tab to switch in reverse order doesn't work now.
    3. Console windows are not transparent any more.
    4. When I don't wait and switch to this icon, it shows all console windows together, so even if they were transparent I wouldn't be able to see anything below them (I can only read the window directly under the current one, not a few levels under).
    5. When I run a few console windows, Unity used 740Mb on Ubuntu 11.04 but uses 1050Mb now. The question is how to get it back down to about 750Mb. I really need my memory, since I use my computer to work with 1512Mb of data and I try to save every 10Mb possible (as long as it doesn't take too much of the machine's time and, more importantly, my own).
    6. When I press the Super key I get a field to type the name of the program I want to run. Now it sometimes shows this field, but nothing happens when I try to type; probably the focus is not on the right field.
    I don't really mean to restore exactly the same behavior, but I want to make my work in Ubuntu 11.10 efficient (at least as efficient as in Ubuntu 11.04). I would be happy if there are some ways to accomplish that.
    What I have tried: I have installed CompizConfig Settings Manager and read this question. However, enabling "Static Application Switcher" makes Alt+Tab behave badly: after enabling it, it warns about key-binding conflicts with the "Ubuntu Unity Plugin"; Alt+Tab switching doesn't change, but Shift+Alt+Tab now works and shows all windows; and memory usage increases. I have tried turning off the Ubuntu Unity Plugin, but this doesn't seem like the right thing to do, since it appears to turn off all menus, a lot of keystrokes, and the app launcher that usually activates with the Super key. I have found that window transparency can be enabled by the "Opacity, Brightness and Saturation" plugin under Accessibility, but I don't know if enabling it is the right thing to do (at least it increases memory usage).
    Update: everything is solved but #3; see my own answer below. I have made a separate question about issue #3 (transparency).

    Read the article

  • Behavior of <- NULL on lists versus data.frames for removing data

    - by Ananda Mahto
    Many R users eventually figure out lots of ways to remove elements from their data. One way is to use NULL, particularly when you want to do something like drop a column from a data.frame or drop an element from a list. Eventually, a user comes across a situation where they want to drop several columns from a data.frame at once, and they hit upon <- list(NULL) as the solution (since using <- NULL will result in an error). A data.frame is a special type of list, so it wouldn't be too tough to imagine that the approaches for removing items from a list should be the same as removing columns from a data.frame. However, they produce different results, as can be seen in the example below.
    ## Make some small data--two data.frames and two lists
    cars1 <- cars2 <- head(mtcars)[1:4]
    cars3 <- cars4 <- as.list(cars2)
    ## Demonstration that the `list(NULL)` approach works
    cars1[c("mpg", "cyl")] <- list(NULL)
    cars1
    #                   disp  hp
    # Mazda RX4          160 110
    # Mazda RX4 Wag      160 110
    # Datsun 710         108  93
    # Hornet 4 Drive     258 110
    # Hornet Sportabout  360 175
    # Valiant            225 105
    ## Demonstration that simply using `NULL` does not work
    cars2[c("mpg", "cyl")] <- NULL
    # Error in `[<-.data.frame`(`*tmp*`, c("mpg", "cyl"), value = NULL) :
    #   replacement has 0 items, need 12
    Switch to applying the same concept to a list, and compare the difference in behavior.
    ## Does not fully drop the items, but sets them to `NULL`
    cars3[c("mpg", "cyl")] <- list(NULL)
    # $mpg
    # NULL
    #
    # $cyl
    # NULL
    #
    # $disp
    # [1] 160 160 108 258 360 225
    #
    # $hp
    # [1] 110 110 93 110 175 105
    ## *Does* drop the `list` items while this would
    ## have produced an error with a `data.frame`
    cars4[c("mpg", "cyl")] <- NULL
    # $disp
    # [1] 160 160 108 258 360 225
    #
    # $hp
    # [1] 110 110 93 110 175 105
    The main questions I have are, if a data.frame is a list, why does it behave so differently in this scenario? Is there a foolproof way of knowing when an element will be dropped, when it will produce an error, and when it will simply be given a NULL value? Or do we depend on trial-and-error for this?

    Read the article

  • How does jQuery stores data with .data()?

    - by TK
    I am a little confused about how jQuery stores data with the .data() function. Is this something called an expando? Or is it using HTML5 Web Storage, although I think that is very unlikely? The documentation says: The .data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks. From what I read about expandos, they seem to carry a risk of memory leaks. Unfortunately my skills are not enough to read and understand the jQuery code itself, but I want to know how jQuery stores such data when using .data(). http://api.jquery.com/data/
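    A useful way to think about it: the data does not live on the DOM element or in Web Storage, but in a plain JavaScript cache object held by the library, with the element carrying only a small expando key that points into that cache. The sketch below is a conceptual illustration of that idea, not jQuery's actual source; names such as dataCache and expando are made up:

        // Conceptual sketch of an expando-keyed data cache (not jQuery's real code).
        var dataCache = {};                      // all stored values live here
        var uid = 0;
        var expando = "cacheKey" + Date.now();   // hypothetical per-page expando name

        function setData(element, name, value) {
            // Attach only a numeric id to the element itself.
            var id = element[expando] || (element[expando] = ++uid);
            (dataCache[id] || (dataCache[id] = {}))[name] = value;
        }

        function getData(element, name) {
            var id = element[expando];
            return id && dataCache[id] ? dataCache[id][name] : undefined;
        }

        // Because the element only holds a number (not a reference back to a JS
        // object), DOM nodes and JavaScript objects never point at each other,
        // which is how the circular-reference leaks are avoided.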

    Read the article

  • ASP.Net Layered app - Share Entity Data Model amongst layers

    - by Chris Klepeis
    How can I share the auto-generated entity data model (generated object classes) amongst all layers of my C# web app whilst only granting query access in the data layer? This uses the typical 3 layer approach: data, business, presentation. My data layer returns an IEnumerable<T> to my business layer, but I cannot return type T to the presentation layer because I do not want the presentation layer to know of the existence of the data layer - which is where the Entity Framework auto-generated my classes. It was recommended to have a separate layer with just the data model, but I'm unsure how to separate the data model from the query functionality the Entity Framework provides.

    Read the article

  • How does jQuery store data with .data()?

    - by TK
    I am a little confused about how jQuery stores data with the .data() function. Is this something called an expando? Or is it using HTML5 Web Storage, although I think that is very unlikely? The documentation says: The .data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks. From what I read about expandos, they seem to carry a risk of memory leaks. Unfortunately my skills are not enough to read and understand the jQuery code itself, but I want to know how jQuery stores such data when using .data(). http://api.jquery.com/data/

    Read the article

  • taskmgr.exe 100% CPU Usage

    - by Burnsys
    Hi. I have two IBM servers with dual-core Intel Xeon CPUs and 2 GB of RAM. The problem is that Task Manager uses one full core when I open it. The same happens when I copy files in Explorer. OS: Windows Server 2003. Things I tried: installed all updates; they have Kaspersky antivirus and previously had NOD32; all drivers installed OK; all unused devices are disabled in the BIOS; reinstalled Windows 2003 SP2; no driver conflicts; tried opening via Remote Desktop and the problem continues. The CPU utilization is in kernel time (red in Task Manager). If I open Process Explorer and navigate to the threads consuming CPU, the stack traces always end in "NtkrnlPA!UnexpectedInterrupt" - all thread stacks end in "UnexpectedInterrupt":
    ntoskrnl.exe!KiUnexpectedInterrupt+0x48
    ntoskrnl.exe!KeWaitForMutexObject+0x20e
    ntoskrnl.exe!CcSetReadAheadGranularity+0x1ff9
    ntoskrnl.exe!IoAllocateIrp+0x3fd
    ntoskrnl.exe!KeWaitForMutexObject+0x20e
    ntoskrnl.exe!NtWaitForSingleObject+0x94
    ntoskrnl.exe!DbgBreakPointWithStatus+0xe05
    ntdll.dll!KiFastSystemCallRet
    kernel32.dll!WaitForSingleObject+0x12
    taskmgr.exe+0xeef6
    kernel32.dll!GetModuleHandleA+0xdf
    Any help would be appreciated!

    Read the article

  • 32bit vs 64bit guest VM and RAM usage

    - by sims
    Why does a 32bit domU (Xen guest VM) use less RAM than a 64bit one? Notes: it is the same software compiled for a different arch (AMD64 vs. 686) - obviously Linux or BSD or something else easily ported. Maybe this is also a good one for SO. I've read that this is so; I can guess why, but I'd like to hear everyone's comments.

    Read the article

  • SQL 2008 Memory Usage

    - by Danilo Brambilla
    I have a SQL Server 2008 instance (ver 10.0.1600) running on a Windows Server 2008 R2 Enterprise server with 8 GB of physical RAM. If I open Task Manager I can see in the 'Physical Memory' section of the 'Performance' tab that only 340 MB are Available out of 8191 Total, but I can't see any process using that amount of memory. Please note SQL Server is memory-limited to 6 GB (Maximum Server Memory = 6000). If I open Sysinternals Process Explorer, I can see the sqlsrvr.exe process has: Private Bytes: 227.000 K, Working Set: 140.000 K, Virtual Size: 8.762.000 K. What does this mean? Is there any way to free up this memory for other processes? Why does Virtual Size count as allocated memory? I thought that Virtual Size was 'reserved memory' only.

    Read the article

  • Bugzilla random lag and CPU usage

    - by Serty Oan
    Here is the configuration: a Bugzilla (3.4.2) application running on an Ubuntu 8.04 server with Perl 5.8.8. Here is the problem: sometimes (randomly), pages take a very long time to load. It can be any of them - the login page, query.cgi, buglist.cgi, etc. Using top on the server, I tried to see what was wrong and saw that sometimes a Bugzilla script would use 30% memory and run for quite a while (1 to 5 seconds), and sometimes not even show up in the list because it responded too quickly or only used 2%. The mysqld process does not seem to be the problem.

    Read the article

  • How to debug high memory usage by registry?

    - by bkr
    I have a Windows 2008 R2 server running ADFS2 and some web apps that is having issues with low memory. Digging around, I found that 5.5 GB was being used under Kernel Memory (paged). Further digging with Poolmon showed that the majority of that (5 GB+) was being used by the CM tag - the configuration manager, also known as the registry. I'm really not sure how to tell why the registry is using so much memory, however, or how to release it. Looking at the physical registry files, they don't appear that large.
    EDIT #1: Using the PowerShell script at http://jdhitsolutions.com/blog/2011/05/get-registry-size-and-age/ confirms what I saw looking at the physical registry files - that they're relatively small:
    Computername : (removed)
    Status       : OK
    CurrentSize  : 67
    MaximumSize  : 2048
    FreeSize     : 1981
    PercentFree  : 96.728515625
    Created      : 4/1/2011 11:38:02 AM
    Age          : 454.23:41:28.2540682

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >