Search Results

Search found 2911 results on 117 pages for 'numerical analysis'.


  • Algorithm to generate numerical concept hierarchy

    - by Christophe Herreman
    I have a couple of numerical datasets that I need to create a concept hierarchy for. Until now I have been doing this manually by observing the data (and a corresponding line chart). Based on my intuition, I created some acceptable hierarchies. This seems like a task that can be automated. Does anyone know of an algorithm to generate a concept hierarchy for numerical data? To give an example, I have the following dataset: Bangladesh 521, Brazil 8295, Burma 446, China 3259, Congo 2952, Egypt 2162, Ethiopia 333, France 46037, Germany 44729, India 1017, Indonesia 2239, Iran 4600, Italy 38996, Japan 38457, Mexico 10200, Nigeria 1401, Pakistan 1022, Philippines 1845, Russia 11807, South Africa 5685, Thailand 4116, Turkey 10479, UK 43734, US 47440, Vietnam 1042, for which I created the following hierarchy: LOWEST (< 1000), LOW (1000 - 2500), MEDIUM (2501 - 7500), HIGH (7501 - 30000), HIGHEST (> 30000).
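
    One way this can be automated is equal-frequency (quantile) binning: place the bin edges so that each concept level holds roughly the same number of values. The sketch below is a hedged illustration of that idea, not a canonical answer; the bin count and the reuse of the post's labels are assumptions.

      import numpy as np

      def quantile_hierarchy(values, labels):
          """Derive bin edges so each concept level holds roughly equal counts."""
          values = np.sort(np.asarray(values, dtype=float))
          # Interior edges at evenly spaced quantiles; the two ends stay open.
          qs = np.linspace(0, 1, len(labels) + 1)[1:-1]
          edges = np.quantile(values, qs)
          return list(zip(labels, [-np.inf] + list(edges), list(edges) + [np.inf]))

      data = [521, 8295, 446, 3259, 2952, 2162, 333, 46037, 44729, 1017,
              2239, 4600, 38996, 38457, 10200, 1401, 1022, 1845, 11807,
              5685, 4116, 10479, 43734, 47440, 1042]

      for label, lo, hi in quantile_hierarchy(data, ["LOWEST", "LOW", "MEDIUM", "HIGH", "HIGHEST"]):
          print(f"{label}: ({lo:.0f}, {hi:.0f}]")

    For hierarchies that should follow natural gaps in the data rather than equal counts, Jenks natural-breaks classification is the usual alternative.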

    Read the article

  • silverlight for .NET / CLR based numerical computing on osx

    - by Jonathan Shore
    I'm interested in using F# for numerical work, but my platforms are not Windows-based. Mono still carries a significant performance penalty for programs that generate a large number of short-lived objects (as would be typical for functional languages). Silverlight is available on OSX. I had seen some references indicating that assemblies compiled in the usual way could not be referenced, but I am not clear on the details. I'm not interested in UIs, but am wondering whether I could effectively use the VM bundled with Silverlight for execution. I would want to be able to reference a large library of numerical models I already have in Java (cross-compiled via IKVM to .NET assemblies) and a new codebase written in F#. My hope is that the Silverlight VM on OSX has good performance and can reference external assemblies and native libraries. Is this doable?

    Read the article

  • Function lfit in numerical recipes, providing a test function

    - by Simon Walker
    Hi, I am trying to fit collected data to a polynomial equation and I found the lfit function in Numerical Recipes. I only have access to the second edition, so am using that. I have read about the lfit function and its parameters, one of which is a function pointer, given in the documentation as

        void (*funcs)(float, float [], int)

    with the help text: "The user supplies a routine funcs(x,afunc,ma) that returns the ma basis functions evaluated at x = x in the array afunc[1..ma]." I am struggling to understand how this lfit function works. An example function I found is given below:

        void fpoly(float x, float p[], int np)
        /* Fitting routine for a polynomial of degree np-1, with coefficients
           in the array p[1..np]. */
        {
            int j;
            p[1] = 1.0;
            for (j = 2; j <= np; j++)
                p[j] = p[j-1] * x;
        }

    When I step through the source code of the lfit function in gdb, I can see no reference to the funcs pointer. When I try to fit a simple data set with the function, I get the following error message:

        Numerical Recipes run-time error...
        gaussj: Singular Matrix
        ...now exiting to system...

    Clearly a matrix is somehow getting defined with all zeroes. I am going to wrap this fitting in a large loop, so using another language is not really an option; hence I am planning on using C/C++. For reference, the test program is given here:

        int main()
        {
            float x[5] = {0., 0., 1., 2., 3.};
            float y[5] = {0., 0., 1.2, 3.9, 7.5};
            float sig[5] = {1., 1., 1., 1., 1.};
            int ndat = 4;
            int ma = 4; /* parameters in equation */
            float a[5] = {1, 1, 1, 0.1, 1.5};
            int ia[5] = {1, 1, 1, 1, 1};
            float **covar = matrix(1, ma, 1, ma);
            float chisq = 0;

            lfit(x, y, sig, ndat, a, ia, ma, covar, &chisq, fpoly);
            printf("%f\n", chisq);
            free_matrix(covar, 1, ma, 1, ma);
            return 0;
        }

    Also confusing the issue: all the Numerical Recipes routines use 1-indexed arrays, so if anyone has corrections to my array declarations, let me know as well! Cheers
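
    Not an answer to the gaussj error itself, but a quick way to sanity-check the coefficients and chi-square the NR code should produce: the same cubic fit in NumPy, as a hedged sketch using the four data points from the test program above (it deliberately does not use lfit).

      import numpy as np

      # The four (x, y) points from the test program (NR's dummy 0th elements dropped).
      x = np.array([0.0, 1.0, 2.0, 3.0])
      y = np.array([0.0, 1.2, 3.9, 7.5])

      # Fit a cubic (4 parameters, matching ma = 4); polyfit returns highest power first.
      coeffs = np.polyfit(x, y, 3)
      residuals = y - np.polyval(coeffs, x)
      print("coefficients:", coeffs[::-1])        # constant term first, like p[1..np]
      print("chi-square:", np.sum(residuals**2))  # sigmas are all 1, so a plain sum of squares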

    Read the article

  • Return numerical array in python

    - by khan
    Okay, this is kind of an interesting question. I have a PHP form through which a user enters values for x and y like this: X: [1,3,4] Y: [2,4,5] These values are stored in the database as varchars. From there, they are read by a Python program which is supposed to use them as numerical (numpy) arrays. However, they arrive as plain strings, which means that calculations cannot be performed on them. Is there a way to convert them into numerical arrays before processing, or is there something else that is wrong? Help!
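
    A minimal sketch of one way to do the conversion, assuming the stored varchar really is a bracketed, comma-separated list like '[1,3,4]': parse it safely with ast.literal_eval, then hand the result to numpy.

      import ast
      import numpy as np

      def to_numpy_array(raw):
          """Turn a string like '[1,3,4]' from the database into a numpy array."""
          values = ast.literal_eval(raw)   # safe parse; raises ValueError on bad input
          return np.array(values, dtype=float)

      x = to_numpy_array("[1,3,4]")
      y = to_numpy_array("[2,4,5]")
      print(x + y)   # numeric ops now work: [3. 7. 9.]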

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Because of the time restrictions placed on the work, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler tools were used; a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Because slow query performance was occurring in certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystems when concurrent connections ran the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog. This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections

    To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to run multiple concurrent copies of the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip.

    Performance Monitor

    Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while the counter log runs, and it is not until you need to view the MSAS counters that you discover they were never collected. If your connection simulation takes hours, this can prove quite frustrating if not done beforehand.

    The counters used, listed as Object \ Counter (Instance): Justification.

    System \ Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, it is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.

    System \ Context Switches/sec (N/A): Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may indicate a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter on your machine shows a similar unexplained increase.

    Process \ % Processor Time (sqlservr): Should definitely be used if Processor \ % Processor Time (_Total) is maxing out at 100%, to assess the effect of the SQL Server process on the processor.

    Process \ % Processor Time (msmdsrv): Should definitely be used if Processor \ % Processor Time (_Total) is maxing out at 100%, to assess the effect of the SSAS process on the processor.

    Process \ Working Set (sqlservr): If the Memory \ Available Mbytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.

    Process \ Working Set (msmdsrv): As above, but for the Analysis Services process.

    Processor \ % Processor Time (_Total and individual cores): Measures the total utilization of your processor by all running processes. If the machine has multiple processors, be mindful that _Total is only an average.

    Processor \ % Privileged Time (_Total): Shows how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it is too busy handling basic OS housekeeping functions to be able to effectively run other applications.

    Processor \ % User Time (_Total): Shows how applications are behaving from a processor perspective; a high percentage utilization indicates that the server is dealing with too many applications and may require increased hardware or scaling out.

    Processor \ Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.

    Memory \ Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and it is the primary counter to watch for indications of possibly insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the configuration of the page file on the server.

    Memory \ Available Mbytes (N/A): The amount of physical memory available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.

    Physical Disk \ Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second then you have poor response time for your disk.

    Physical Disk \ % Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval. If you see this counter fall below 20% then you likely have read/write requests queuing up for a disk that is unable to service them in a timely fashion.

    Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.

    Network Interface \ Bytes Total/sec (the NIC): Should be monitored over a period of time to see if there is an increase or decrease in network utilisation.

    Network Interface \ Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).

    MSAS 2005: Memory \ Memory Limit High KB (N/A): Shows the high memory limit configured for SSAS (set as a percentage in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini).

    MSAS 2005: Memory \ Memory Limit Low KB (N/A): Shows the low memory limit configured for SSAS (set as a percentage in the same msmdsrv.ini).

    MSAS 2005: Memory \ Memory Usage KB (N/A): Displays the memory usage of the server process.

    MSAS 2005: Memory \ File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.

    MSAS 2005: Storage Engine Query \ Queries from Cache Direct / sec (N/A): Displays the rate of queries answered directly from the cache.

    MSAS 2005: Storage Engine Query \ Queries from Cache Filtered / sec (N/A): Displays the rate of queries answered by filtering an existing cache entry.

    MSAS 2005: Storage Engine Query \ Queries from File / sec (N/A): Displays the rate of queries answered from files.

    MSAS 2005: Storage Engine Query \ Average time / query (N/A): Displays the average time of a query.

    MSAS 2005: Connection \ Current connections (N/A): Displays the number of connections against the SSAS instance.

    MSAS 2005: Connection \ Requests / sec (N/A): Displays the rate of query requests per second.

    MSAS 2005: Locks \ Current Lock Waits (N/A): Displays the number of connections waiting on a lock.

    MSAS 2005: Threads \ Query Pool Job Queue Length (N/A): The number of queries in the job queue.

    MSAS 2005: Proc Aggregations \ Temp file bytes written/sec (N/A): Shows the number of bytes of data processed in a temporary file.

    MSAS 2005: Proc Aggregations \ Temp file rows written/sec (N/A): Shows the number of rows of data processed in a temporary file.
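
    If you would rather script the collection than click through the Perfmon UI, Windows also ships the typeperf command line, which can log the same counters to CSV. Below is a hedged sketch driving it from Python; the counter set is abbreviated, and the file path and sampling interval are illustrative assumptions, not values from the engagement above.

      import subprocess

      # A few of the counters from the list above; extend as needed. The MSAS
      # counter objects are named "MSAS 2005:..." on SSAS 2005; later versions
      # use names like "MSOLAP$INSTANCE:...", so check your server first.
      counters = [
          r"\System\Processor Queue Length",
          r"\Processor(_Total)\% Processor Time",
          r"\Memory\Pages/sec",
          r"\Process(msmdsrv)\Working Set",
          r"\MSAS 2005:Memory\Memory Usage KB",
      ]

      # -si = sample interval (seconds), -sc = sample count, -f/-o = CSV output.
      subprocess.run(
          ["typeperf", *counters, "-si", "15", "-sc", "240",
           "-f", "CSV", "-o", r"C:\perflogs\ssas_counters.csv"],
          check=True,
      )

    Remember the same caveat as for the counter log: run this under an account with rights to the SSAS instance, or the MSAS counters will be missing.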

    Read the article

  • Agile social media analysis and implementation

    - by blunders
    Are there any books/platforms for social media campaign planning and implementation that define a completely agile approach to engaging audiences on platforms such as Facebook, LinkedIn, Twitter, etc.? UPDATE: Posted a bounty on the question, since the current answer is really not about agile approaches to social media campaign planning and implementation. UPDATE 2: The question is asking for an agile social media approach, or a social media platform that has an agile social media approach baked in. If the question were about an agile approach to software development, SCRUM would be the most likely answer (70 percent of agile software developers say they practice some form of SCRUM), and Pivotal Tracker might be one of many agile platforms suggested; as a generalization, Pivotal Tracker might be called a project management platform. On the flip side, suggesting just a social media platform would be the equivalent of suggesting a project management platform and telling me to see if SCRUM works on it. The problem is that if you haven't suggested an agile social media approach to try on that social media platform, then you haven't provided an answer to the question.

    Read the article

  • Adding Actions to a Cube in SQL Server Analysis Services 2008

    Actions are a powerful way of extending the value of SSAS cubes for the end user, who can click on a cube or portion of a cube to start an application with the selected item as a parameter, or to retrieve information about the selected item. Actions haven't been well-documented until now; Robert Sheldon once more makes everything clear.

    Read the article

  • Requirement Analysis Communication

    - by Rahul Mehta
    Hi. Some days ago we were discussing the current project when my boss and a senior colleague suddenly started talking about a new feature to add to the project, and I became lost. I was not able to work out how I should provide my input on the new feature. So I want to know: what things should be discussed when developing a new feature for a project, and how can I contribute to the requirements discussion for new features? Please advise.

    Read the article

  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for some simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I gathered so far:

    - One can already do syntax highlighting with only the list of tokens: numbers, operators and keywords get coloured accordingly.
    - Autoformatting (indenting) should also be possible. How? Specify for each token type how many spaces or newline characters should follow it. Also, while printing tokens, maintain an alignment variable: when the code printer reads "{" it increments the alignment variable by 1, and decrements it by 1 for "}"; whenever it starts printing on a new line, it indents according to this alignment variable. (A sketch of this idea follows after this list.)
    - In languages without nested subroutines, one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).
    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (OK, you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").
    - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls which. If one can identify the body of a function, then one can also search it for mentions of other functions' names.
    - Gathering statistics about the code, like number of lines, instructions, or subroutines.

    EDIT: I clarified why I think some processes are possible. As I read comments and responses, I realise that the answer depends very much on the language that I'm parsing.
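
    A minimal sketch of the token-driven autoformatting idea above, assuming a trivial token stream; the token shapes and spacing rules are illustrative assumptions, not a general pretty-printer.

      # Tokens are (type, text) pairs; a real lexer would produce these.
      tokens = [("kw", "if"), ("op", "("), ("id", "x"), ("op", ")"), ("op", "{"),
                ("id", "y"), ("op", "="), ("num", "1"), ("op", ";"), ("op", "}")]

      def pretty_print(tokens, indent_width=4):
          depth = 0          # the "alignment variable" from the post
          line = []
          out = []

          def flush():
              if line:
                  out.append(" " * (indent_width * depth) + " ".join(line))
                  line.clear()

          for kind, text in tokens:
              if text == "}":
                  flush()
                  depth -= 1     # a closing brace dedents before it is printed
              line.append(text)
              if text in ("{", ";", "}"):
                  flush()        # these tokens force a newline
                  if text == "{":
                      depth += 1 # an opening brace indents what follows
          flush()
          return "\n".join(out)

      print(pretty_print(tokens))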

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who has since left the company, without leaving a trace of documentation. Here were my initial steps to approach this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, I asked whether, if it came to the point where the requirement could not be finished prior to release, it would be a viable option to scrap the current state and revert to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written (by QA) prior to the implementation and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request, given my responsibility for this requirement, I insisted, and have fallen out of favor with some of those folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state due to the complexity of the problem and lack of time. This only happened after a 2-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Google page events monitoring and analysis

    - by Homunculus Reticulli
    I have read the Google page event documentation, but I am not sure I understand it correctly. I am new to Google Analytics, and I have two questions. First: once I have Google Analytics enabled for my site (i.e. I have inserted the tracking code in my pages, etc.), do I need to set anything else up at the Google end, i.e. in my Google Analytics account? Second: it is not clear to me how the event data works, particularly how the data can be aggregated and analyzed. For instance, if I want to track an event under category category for click action action, I will use the following code snippet: <a href="some-uri.htm" onclick="_gaq.push(['_trackEvent', 'category', 'action', 'label']);">Do Something</a> For the sake of simplicity, let's say I am interested in monitoring click events in my header and footer, and I want to find which pages the header and/or footer is clicked on most often. How would I set things up so that I can analyze the header/footer clicks aggregated at the page level?

    Read the article

  • Working with Analytic Workflow Manager (AWM) - Part 8 Cube Metadata Analysis

    - by Mohan Ramanuja
    CUBE SIZE

    select dbal.owner||'.'||substr(dbal.table_name,4) awname,
           sum(dbas.bytes)/1024/1024 as mb,
           dbas.tablespace_name
    from dba_lobs dbal, dba_segments dbas
    where dbal.column_name = 'AWLOB'
      and dbal.segment_name = dbas.segment_name
    group by dbal.owner, dbal.table_name, dbas.tablespace_name
    order by dbal.owner, dbal.table_name

    SESSION RESOURCES

    select vses.username||':'||vsst.sid username, vstt.name, max(vsst.value) value
    from v$sesstat vsst, v$statname vstt, v$session vses
    where vstt.statistic# = vsst.statistic#
      and vsst.sid = vses.sid
      and vses.username like ('ATTRIBDW_OWN')
      and vstt.name in ('session pga memory', 'session pga memory max',
                        'session uga memory', 'session uga memory max',
                        'session cursor cache count', 'session cursor cache hits',
                        'session stored procedure space', 'opened cursors current',
                        'opened cursors cumulative')
      and vses.username is not null
    group by vsst.sid, vses.username, vstt.name
    order by vsst.sid, vses.username, vstt.name

    OLAP PGA USE

    select 'OLAP Pages Occupying: '||
           round((select sum(nvl(pool_size,1)) from v$aw_calc) /
                 (select value from v$pgastat where name = 'total PGA inuse'),2)*100||'%' info
    from dual
    union
    select 'Total PGA Inuse Size: '||value/1024||' KB' info
    from v$pgastat
    where name = 'total PGA inuse'
    union
    select 'Total OLAP Page Size: '||round(sum(nvl(pool_size,1))/1024,0)||' KB' info
    from v$aw_calc
    order by info desc

    OLAP PGA USAGE PER USER

    select vs.username, vs.sid,
           round(pga_used_mem/1024/1024,2)||' MB' pga_used,
           round(pga_max_mem/1024/1024,2)||' MB' pga_max,
           round(pool_size/1024/1024,2)||' MB' olap_pp,
           round(100*(pool_hits-pool_misses)/pool_hits,2)||'%' olap_ratio
    from v$process vp, v$session vs, v$aw_calc va
    where session_id = vs.sid
      and addr = paddr

    CUBE LOADING SCRIPT

    REM The 'set define off' statement is needed only if running this script through SQL*Plus.
    REM If you are using another tool to run this script, the line below may be commented out.
    set define off
    BEGIN
      DBMS_CUBE.BUILD(
        'VALIDATE ATTRIBDW_OWN.CURRENCY USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNT USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.DATEDIM USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.CUSIP USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNTRETURN',
        'CCCCC', -- refresh method
        false,   -- refresh after errors
        0,       -- parallelism
        true,    -- atomic refresh
        true,    -- automatic order
        false);  -- add dimensions
    END;
    /
    BEGIN
      DBMS_CUBE.BUILD(
        'ATTRIBDW_OWN.CURRENCY USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNT USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.DATEDIM USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.CUSIP USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNTRETURN',
        'CCCCC', -- refresh method
        false,   -- refresh after errors
        0,       -- parallelism
        true,    -- atomic refresh
        true,    -- automatic order
        false);  -- add dimensions
    END;
    /

    VISUALIZATION OBJECT - AW$ATTRIBDW_OWN

    CREATE TABLE "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"
    (
        "PS#"      NUMBER(10,0),
        "GEN#"     NUMBER(10,0),
        "EXTNUM"   NUMBER(8,0),
        "AWLOB"    BLOB,
        "OBJNAME"  VARCHAR2(256 BYTE),
        "PARTNAME" VARCHAR2(256 BYTE)
    )
    PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255
    STORAGE (BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "ATTRIBDW_DATA"
    LOB ("AWLOB") STORE AS SECUREFILE
    (
        TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1
        CACHE NOCOMPRESS KEEP_DUPLICATES
        STORAGE (BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    )
    PARTITION BY RANGE ("GEN#")
    SUBPARTITION BY HASH ("PS#", "EXTNUM") SUBPARTITIONS 8
    (
        PARTITION "PTN1" VALUES LESS THAN (1)
            PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255
            STORAGE (BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
            TABLESPACE "ATTRIBDW_DATA"
            LOB ("AWLOB") STORE AS SECUREFILE
            (
                TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1
                CACHE READS LOGGING NOCOMPRESS KEEP_DUPLICATES
                STORAGE (BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
            )
            (
                SUBPARTITION "SYS_SUBP661" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP662" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP663" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP664" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP665" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP666" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP667" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP668" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA"
            ),
        PARTITION "PTNN" VALUES LESS THAN (MAXVALUE)
            PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255
            STORAGE (BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
            TABLESPACE "ATTRIBDW_DATA"
            LOB ("AWLOB") STORE AS SECUREFILE
            (
                TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1
                CACHE NOCOMPRESS KEEP_DUPLICATES
                STORAGE (BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
            )
            (
                SUBPARTITION "SYS_SUBP669" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP670" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP671" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP672" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP673" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP674" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP675" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA",
                SUBPARTITION "SYS_SUBP676" LOB ("AWLOB") STORE AS (TABLESPACE "ATTRIBDW_DATA") TABLESPACE "ATTRIBDW_DATA"
            )
    );

    CREATE UNIQUE INDEX "ATTRIBDW_OWN"."ATTRIBDW_OWN_I$"
        ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN" ("PS#", "GEN#", "EXTNUM")
        PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS
        STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
                 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
                 CELL_FLASH_CACHE DEFAULT)
        TABLESPACE "ATTRIBDW_DATA";

    CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000406980C00004$$"
        ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"
        (
            PCTFREE 10 INITRANS 1 MAXTRANS 255
            STORAGE (BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
            TABLESPACE "ATTRIBDW_DATA"
            LOCAL
            (
                PARTITION "SYS_IL_P711" PCTFREE 10 INITRANS 1 MAXTRANS 255
                    STORAGE (BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
                    (
                        SUBPARTITION "SYS_IL_SUBP695" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP696" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP697" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP698" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP699" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP700" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP701" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP702" TABLESPACE "ATTRIBDW_DATA"
                    ),
                PARTITION "SYS_IL_P712" PCTFREE 10 INITRANS 1 MAXTRANS 255
                    STORAGE (BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
                    (
                        SUBPARTITION "SYS_IL_SUBP703" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP704" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP705" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP706" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP707" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP708" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP709" TABLESPACE "ATTRIBDW_DATA",
                        SUBPARTITION "SYS_IL_SUBP710" TABLESPACE "ATTRIBDW_DATA"
                    )
            )
        ) PARALLEL (DEGREE 0 INSTANCES 0);

    CUBE BUILD LOG

    CREATE TABLE "ATTRIBDW_OWN"."CUBE_BUILD_LOG"
    (
        "BUILD_ID"              NUMBER,
        "SLAVE_NUMBER"          NUMBER,
        "STATUS"                VARCHAR2(10 BYTE),
        "COMMAND"               VARCHAR2(25 BYTE),
        "BUILD_OBJECT"          VARCHAR2(30 BYTE),
        "BUILD_OBJECT_TYPE"     VARCHAR2(10 BYTE),
        "OUTPUT"                CLOB,
        "AW"                    VARCHAR2(30 BYTE),
        "OWNER"                 VARCHAR2(30 BYTE),
        "PARTITION"             VARCHAR2(50 BYTE),
        "SCHEDULER_JOB"         VARCHAR2(100 BYTE),
        "TIME"                  TIMESTAMP (6) WITH TIME ZONE,
        "BUILD_SCRIPT"          CLOB,
        "BUILD_TYPE"            VARCHAR2(22 BYTE),
        "COMMAND_DEPTH"         NUMBER(2,0),
        "BUILD_SUB_OBJECT"      VARCHAR2(30 BYTE),
        "REFRESH_METHOD"        VARCHAR2(1 BYTE),
        "SEQ_NUMBER"            NUMBER,
        "COMMAND_NUMBER"        NUMBER,
        "IN_BRANCH"             NUMBER(1,0),
        "COMMAND_STATUS_NUMBER" NUMBER,
        "BUILD_NAME"            VARCHAR2(100 BYTE)
    )
    SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
             FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
             CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "ATTRIBDW_DATA"
    LOB ("OUTPUT") STORE AS BASICFILE
    (
        TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING
        STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
                 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
                 CELL_FLASH_CACHE DEFAULT)
    )
    LOB ("BUILD_SCRIPT") STORE AS BASICFILE
    (
        TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING
        STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
                 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
                 CELL_FLASH_CACHE DEFAULT)
    );

    CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407294C00013$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG"
        (PCTFREE 10 INITRANS 2 MAXTRANS 255
         STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
                  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
                  CELL_FLASH_CACHE DEFAULT)
         TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0);

    CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407294C00007$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG"
        (PCTFREE 10 INITRANS 2 MAXTRANS 255
         STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
                  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
                  CELL_FLASH_CACHE DEFAULT)
         TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0);

    CUBE DIMENSION COMPILE

    CREATE TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"
    (
        "ID"               NUMBER,
        "SEQ_NUMBER"       NUMBER,
        "ERROR#"           NUMBER(8,0) NOT NULL ENABLE,
        "ERROR_MESSAGE"    VARCHAR2(2000 BYTE),
        "DIMENSION"        VARCHAR2(100 BYTE),
        "DIMENSION_MEMBER" VARCHAR2(100 BYTE),
        "MEMBER_ANCESTOR"  VARCHAR2(100 BYTE),
        "HIERARCHY1"       VARCHAR2(100 BYTE),
        "HIERARCHY2"       VARCHAR2(100 BYTE),
        "ERROR_CONTEXT"    CLOB
    )
    SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    TABLESPACE "ATTRIBDW_DATA"
    LOB ("ERROR_CONTEXT") STORE AS BASICFILE
    (
        TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING
    );

    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ID" IS 'Current operation ID';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."SEQ_NUMBER" IS 'Cube build log sequence number';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR#" IS 'Error number being reported';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_MESSAGE" IS 'Error text being reported';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION" IS 'Name of dimension being compiled';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION_MEMBER" IS 'Problem dimension member';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."MEMBER_ANCESTOR" IS 'Problem dimension member''s parent';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY1" IS 'First hierarchy involved in error';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY2" IS 'Second hierarchy involved in error';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_CONTEXT" IS 'Extra information for error';
    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE" IS 'Cube dimension compile log';

    CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407307C00010$$" ON "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"
        (PCTFREE 10 INITRANS 2 MAXTRANS 255
         STORAGE (INITIAL 1048576 NEXT 1048576 MAXEXTENTS 2147483645)
         TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0);

    CUBE OPERATIONS LOG

    CREATE TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"
    (
        "INST_ID"      NUMBER NOT NULL ENABLE,
        "SID"          NUMBER NOT NULL ENABLE,
        "SERIAL#"      NUMBER NOT NULL ENABLE,
        "USER#"        NUMBER NOT NULL ENABLE,
        "SQL_ID"       VARCHAR2(13 BYTE),
        "JOB"          NUMBER,
        "ID"           NUMBER,
        "PARENT_ID"    NUMBER,
        "SEQ_NUMBER"   NUMBER,
        "TIME"         TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE,
        "LOG_LEVEL"    NUMBER(4,0) NOT NULL ENABLE,
        "DEPTH"        NUMBER(4,0),
        "OPERATION"    VARCHAR2(15 BYTE) NOT NULL ENABLE,
        "SUBOPERATION" VARCHAR2(20 BYTE),
        "STATUS"       VARCHAR2(10 BYTE) NOT NULL ENABLE,
        "NAME"         VARCHAR2(20 BYTE) NOT NULL ENABLE,
        "VALUE"        VARCHAR2(4000 BYTE),
        "DETAILS"      CLOB
    )
    SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
             FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
             CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "ATTRIBDW_DATA"
    LOB ("DETAILS") STORE AS BASICFILE
    (
        TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING
        STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
                 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
                 CELL_FLASH_CACHE DEFAULT)
    );

    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."INST_ID" IS 'Instance ID';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SID" IS 'Session ID';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SERIAL#" IS 'Session serial#';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."USER#" IS 'User ID';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SQL_ID" IS 'Executing SQL statement ID';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."JOB" IS 'Identifier of job';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."ID" IS 'Current operation ID';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."PARENT_ID" IS 'Parent operation ID';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SEQ_NUMBER" IS 'Cube build log sequence number';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."TIME" IS 'Time of record';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."LOG_LEVEL" IS 'Verbosity level of record';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DEPTH" IS 'Nesting depth of record';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."OPERATION" IS 'Current operation';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SUBOPERATION" IS 'Current suboperation';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."STATUS" IS 'Status of current operation';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."NAME" IS 'Name of record';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."VALUE" IS 'Value of record';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DETAILS" IS 'Extra information for record';
    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG" IS 'Cube operations log';

    CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407301C00018$$" ON "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"
        (PCTFREE 10 INITRANS 2 MAXTRANS 255
         STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
                  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
                  CELL_FLASH_CACHE DEFAULT)
         TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0);

    CUBE REJECTED RECORDS

    CREATE TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"
    (
        "ID"              NUMBER,
        "SEQ_NUMBER"      NUMBER,
        "ERROR#"          NUMBER(8,0) NOT NULL ENABLE,
        "ERROR_MESSAGE"   VARCHAR2(2000 BYTE),
        "RECORD#"         NUMBER(38,0),
        "SOURCE_ROW"      ROWID,
        "REJECTED_RECORD" CLOB
    )
    SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
             FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
             CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "ATTRIBDW_DATA"
    LOB ("REJECTED_RECORD") STORE AS BASICFILE
    (
        TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING
        STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
                 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
                 CELL_FLASH_CACHE DEFAULT)
    );

    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ID" IS 'Current operation ID';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SEQ_NUMBER" IS 'Cube build log sequence number';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR#" IS 'Error number being reported';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR_MESSAGE" IS 'Error text being reported';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."RECORD#" IS 'Rejected record number';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SOURCE_ROW" IS 'Rejected record''s ROWID';
    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."REJECTED_RECORD" IS 'Rejected record copy';
    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS" IS 'Cube rejected records log';

    CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407304C00007$$" ON "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"
        (PCTFREE 10 INITRANS 2 MAXTRANS 255
         STORAGE (INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
                  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
                  CELL_FLASH_CACHE DEFAULT)
         TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0);

    Read the article

  • SSISDB Analysis Script on Gist

    - by Davide Mauri
    I've created two simple, yet very useful, scripts to extract some useful data to quickly monitor SSIS package execution in SQL Server 2012 and later: get-ssis-execution-status and get-ssis-data-pumped-rows. I've started to use Gist, since it comes in very handy for these quick'n'dirty scripts and snippets, and you can find the above scripts and others there (hopefully the number will increase over time; I plan to use Gist to store all the code snippets I used to keep in a dedicated folder on my machine).

    Now, back to the aforementioned scripts. The first one ("get-ssis-execution-status") returns a list of all executed and executing packages along with the latest successful and running executions (so that one can have an idea of the expected run time), error messages, and warning messages related to duplicate rows found in lookups. The second one ("get-ssis-data-pumped-rows") returns information on DataFlow status. Here there's something interesting, IMHO. Nothing exceptional, let it be clear, but nonetheless useful: the script extracts information on destinations and rows sent to destinations right from the messages produced by the DataFlow component. This helps to quickly understand how many rows have been sent and where, without having to increase the logging level.

    Enjoy!

    PS: I haven't tested them with SQL Server 2014, but AFAIK they should work without problems. Of course any feedback on this is welcome.
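
    The gists themselves are T-SQL, but the same SSISDB catalog views are easy to poll from a script. Below is a hedged sketch (not Davide's code) reading recent executions from SSISDB.catalog.executions via pyodbc; the server name and driver are illustrative assumptions.

      import pyodbc

      # Hypothetical server and authentication; SSISDB is the SSIS catalog database.
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
          "DATABASE=SSISDB;Trusted_Connection=yes;"
      )

      # status values per the catalog.executions docs: 1=created, 2=running,
      # 3=canceled, 4=failed, 7=succeeded.
      sql = """
      SELECT TOP 20 execution_id, folder_name, project_name, package_name,
             status, start_time, end_time
      FROM catalog.executions
      ORDER BY execution_id DESC
      """

      for row in conn.cursor().execute(sql):
          print(row.execution_id, row.package_name, row.status,
                row.start_time, row.end_time)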

    Read the article

  • Software Usability analysis

    - by Afnan
    I am unable to find the answers to the following questions. Please help me resolve them. (a) Name quantitative and qualitative techniques for analysing the usability of a software product. (b) Compare the costs and benefits of the quantitative techniques. (c) Compare the costs and benefits of the qualitative techniques. (d) If restricted to a single one of these techniques when designing a new online banking system, which would you choose and why?

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched and unified view of master data, by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise like social profiles, will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications, and Financial Services, which would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of a heterogeneous topology that leads to disparate, fragmented and incomplete master data. For analytical success from Big Data, in other words ROI from Big Data investments, businesses need to acquire, organize and analyze the deluge of data to make better decisions. Structured and unstructured data will need to coexist, with a tight link between the two to extract maximum insight. MDM is the catalyst that maintains that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to provide comments and thoughts on the above, as well as on integration or architectural patterns.

    Read the article

  • PASS Summit Location follow up - result analysis

    - by simonsabin
    I've had a chance to look at the results directly, and it is clear that there is a tough choice. On the one hand, people are saying that they prefer to have PASS put their money into chapters and things like 24 Hours of PASS rather than an event on the East Coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event. Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people think it somewhat important or very important that the conference is within walking distance of the hotels and restaurants. Essentially the Q6 questions show an even balance for a normal conference, highlighting that people are prepared to travel, not with the family, and that they want a well laid out conference. What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance, there was no option about whether people feel it is important to have industry-leading speakers, MVPs etc. at the conference; only questions about Microsoft speakers. I know survey writing is very difficult without biasing the answers one way or another. There was also no choice to show people's real preference: would people prefer Microsoft speakers, or the summit to be held on the East Coast/Central US? I also find it amazing that people prefer hundreds of developers rather than the SQLCAT and CSS teams; surely that indicates another issue, a lack of understanding of what these teams do. All in all, it is clear that people showed they want an event outside of Seattle and don't want PASS to be putting money into that instead of into other community activities. I find it surprising that there appears to have been a huge weighting towards certain questions, which has prioritised them over the huge desire for a PASS summit outside of Seattle. Let's see where we will be in 2013, or maybe they will rethink 2012, who knows.

    Read the article

  • Free tools for SQL Server - Automating Execution Plan Analysis

    - by jchang
    Since this topic is being discussed, I will plug my own tools: SQL Exec Stats and its (a little dated) documentation. The main capability is cross-referencing index usage with specific execution plans. Another feature is generating execution plans for all stored procedures in a database, along with the index usage cross-reference. There are several sources of execution plans or plan handles: a live trace, a previously saved trace, previously saved sqlplan files, dm_exec_cached_plans, ... (read more)

    Read the article

  • Create SQL Server Analysis Services Partitions using AMO

    When you have SSAS cubes with millions of rows of data, it is very helpful to create partitions. If you have a few cubes you could probably do this manually, but if there are many, or if you want to automate the process, you should look for smarter solutions, such as programming the creation of partitions dynamically.
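
    AMO (the Microsoft.AnalysisServices .NET library) is the usual route for this. As a rough, hedged illustration of what "programming the creation of partitions dynamically" can look like, here is a sketch driving AMO from Python via pythonnet; it is not the article's code, every name (server, database, cube, measure group, fact table) is an illustrative assumption, and the AMO calls should be verified against the Microsoft.AnalysisServices documentation before use.

      import clr

      clr.AddReference("Microsoft.AnalysisServices")  # the AMO assembly must be installed
      from Microsoft.AnalysisServices import ProcessType, QueryBinding, Server

      srv = Server()
      srv.Connect("Data Source=localhost")  # hypothetical SSAS instance

      db = srv.Databases.GetByName("Adventure Works DW")
      mg = db.Cubes.GetByName("Adventure Works").MeasureGroups.GetByName("Internet Sales")

      # One partition per year, each bound to a slice of the fact table.
      for year in (2012, 2013, 2014):
          name = f"Internet_Sales_{year}"
          if mg.Partitions.FindByName(name) is None:
              part = mg.Partitions.Add(name)
              part.Source = QueryBinding(
                  db.DataSources[0].ID,
                  f"SELECT * FROM FactInternetSales WHERE OrderDateKey "
                  f"BETWEEN {year}0101 AND {year}1231",
              )
              part.Update()                          # create the partition on the server
              part.Process(ProcessType.ProcessFull)  # and load it

      srv.Disconnect()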

    Read the article

  • Evidence for automatic browsing - Log file analysis

    - by Nilani Algiriyage
    I'm analyzing web server logs in both Apache and IIS formats. I want to find evidence of automated browsing: web robots, spiders, bots, etc. I used the Python robot-detection 0.2.8 package for detecting robots in my log files, but I know there may be other robots (automated programs) that have traversed the web site which robot-detection cannot identify. So I want to ask: Are there any specific clues that can be found in log files that human users do not leave but automated software would? Do they follow a specific navigation pattern? I saw some requests for favicon.ico; does this indicate automated browsing? I found this article and this question with some valuable points.
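
    Two of the classic clues are easy to scan for: user-agent strings that name a bot, and hits on /robots.txt, which browsers do not request but well-behaved crawlers do. A minimal sketch for Apache combined-format logs follows; the regex and keyword list are illustrative assumptions, not an exhaustive detector.

      import re

      # Apache combined format: IP - user [time] "METHOD path HTTP/x.x" status size "referer" "user-agent"
      LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
                        r'\d+ \S+ "[^"]*" "(?P<agent>[^"]*)"')
      BOT_WORDS = ("bot", "crawler", "spider", "slurp", "curl", "wget", "python-requests")

      def suspicious_ips(logfile):
          """Flag IPs whose user-agent names a bot, or which fetch /robots.txt."""
          flagged = set()
          with open(logfile) as f:
              for line in f:
                  m = LINE.match(line)
                  if not m:
                      continue
                  agent = m.group("agent").lower()
                  if m.group("path") == "/robots.txt" or any(w in agent for w in BOT_WORDS):
                      flagged.add(m.group("ip"))
          return flagged

      print(suspicious_ips("access.log"))

    Other behavioural signals worth checking in the same pass: no favicon.ico or CSS/JS requests from a session, perfectly regular inter-request timing, and breadth-first traversal of links.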

    Read the article
