Search Results

Search found 2559 results on 103 pages for 'analysis'.

Page 19 of 103

  • assistance with fxcop

    - by amateur
    I am at present developing an MVC4 project that communicates with a set of WCF services, and I am setting this up in a TFS build for a team of developers. I am very much a newbie to FxCop and code analysis in general, and while researching it I have come up with some questions: Is it recommended to use the rules that come with FxCop? Should it be included as a build task during builds? What is the value gained from it? Are there guidelines on which rules to abide by, or is it best to go with the defaults? Is it correct to run the analysis as a post-build event? I would appreciate some feedback; as it is, I am already integrating StyleCop into my build.

    Read the article

  • Query Execution Failed in Reporting Services reports

    - by Chris Herring
    I have some Reporting Services reports that talk to Analysis Services, and at times they fail with the following error:

        An error occurred during client rendering.
        An error has occurred during report processing.
        Query execution failed for dataset 'AccountManagerAccountManager'.
        The connection cannot be used while an XmlReader object is open.

    This occurs sometimes when I change selections in the filter. It also occurs when the machine has been under heavy load, and it will then consistently error until SSAS is restarted. The log file contains the following error:

        processing!ReportServer_0-18!738!04/06/2010-11:01:14:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'., ;
        Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'.
        ---> System.InvalidOperationException: The connection cannot be used while an XmlReader object is open.
          at Microsoft.AnalysisServices.AdomdClient.XmlaClient.CheckConnection()
          at Microsoft.AnalysisServices.AdomdClient.XmlaClient.ExecuteStatement(String statement, IDictionary connectionProperties, IDictionary commandProperties, IDataParameterCollection parameters, Boolean isMdx)
          at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IExecuteProvider.ExecuteTabular(CommandBehavior behavior, ICommandContentProvider contentProvider, AdomdPropertyCollection commandProperties, IDataParameterCollection parameters)
          at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.ExecuteReader(CommandBehavior behavior)
          at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
          at Microsoft.ReportingServices.DataExtensions.AdoMdCommand.ExecuteReader(CommandBehavior behavior)
          at Microsoft.ReportingServices.OnDemandProcessing.RuntimeDataSet.RunDataSetQuery()

    Can anyone shed light on this issue?

    Read the article

  • schedule backup and restore of SSAS 2008 database

    - by Manjot
    Hi, I can back up and restore databases on Microsoft SQL Server Analysis Services 2008 using the GUI (as described in Backup SSAS). I want to schedule a backup and restore it to another server every night, so what I did is: I scripted out the backup and restore process from the GUI, created a new SQL Server Agent job in the database engine, and added a "Run SSAS query" step with the scripts copied into it. But it fails. The scripts that the GUI produced look like:

        <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Object>
            <DatabaseID>DB</DatabaseID>
          </Object>
          <File>C:\Backup\DB.abf</File>
          <AllowOverwrite>true</AllowOverwrite>
        </Backup>

        <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <File>\\server\C$\Backup\DB.abf</File>
          <DatabaseName>DB</DatabaseName>
          <AllowOverwrite>true</AllowOverwrite>
        </Restore>

    Any help please?

    Read the article

  • What is a name for a job where you do system analysis, project management and data diagramming?

    - by David Archer
    In the last 4 months I've been able to manage a team and step away from the coding for a bit. I've been planning the system in full (both systems analysis and project management, alongside action and data diagramming), writing the technical documentation, designing the code's architecture, keeping track of the other guys doing the actual coding, handling QA and bug reports, and dealing with clients. I had to take two days' training on node.js just to see if it would be suitable for a project we were considering. Is there a name for this job? Project Manager and Systems Architect don't quite seem to cover the same stuff, and IT Manager seems way off. I only want to know so that I can get some qualification towards it and try to move into this kind of work full-time.

    Read the article

  • Where do I find the scripts generated by SharePoint MCMS Migration Profiles?

    - by HipCzeck
    I am attempting to migrate data from a Microsoft Content Management Server (MCMS) 2002 instance into a new Microsoft Office SharePoint Server (MOSS) 2007 installation using the Manage Microsoft Content Management Server Migration Profiles tool in the Operations space of MOSS Central Administration. When analyzing the profile, I receive 4 warnings, all of which may be safely ignored, but when I actually execute the migration profile, I get the same warnings and an additional error with a description of:

        Line 6: Incorrect syntax near ';'.

    I have seen this error numerous times when mucking about in SQL Server and recognize it as a Transact-SQL error message, but I can't find the actual SQL statement that is being executed so that I may determine the source of the error.

    EDIT: After enabling verbose logging on the MCMS 2002 Migration category and poring through the Unified Logging Service (ULS) logs, I received a more complete stack trace at the point of the error, and a couple more anomalies, listed below.

    Anomalies: The following is an abbreviated listing from the ULS logs around the time of the pre-migration analysis:

        01 MCMS 2002 Migration Verbose Start ConnectionCheck
        02 MCMS 2002 Migration Verbose End ConnectionCheck
        03 MCMS 2002 Migration Verbose Start DatabaseCheck
        04 MCMS 2002 Migration High Extra table SiteDeployLock will not be migrated
        05 MCMS 2002 Migration High Analysis: Extra index PK__SiteDeployLock__05D8E0BE
        06 MCMS 2002 Migration Verbose End DatabaseCheck
        07 MCMS 2002 Migration Medium Pre-migration analysis: RootCheckTask is skipped because database check is blocked.
        08 MCMS 2002 Migration Medium Pre-migration analysis: RightsGroupNameCheckTask is skipped because database check is blocked.
        09 MCMS 2002 Migration Medium Pre-migration analysis: InvalidNameCheckTask is skipped because database check is blocked.
        10 MCMS 2002 Migration Medium Pre-migration analysis: LeafNameCheckTask is skipped because database check is blocked.
        11 MCMS 2002 Migration Medium Pre-migration analysis: LeafLengthCheckTask is skipped because database check is blocked.
        12 MCMS 2002 Migration Medium Pre-migration analysis: TemplateNameCheckTask is skipped because database check is blocked.
        13 MCMS 2002 Migration Medium Pre-migration analysis: TemplateCollisionCheckTask is skipped because database check is blocked.
        14 MCMS 2002 Migration Medium Pre-migration analysis: PlaceholderCheckTask is skipped because database check is blocked.
        15 MCMS 2002 Migration Medium Pre-migration analysis: CheckedOutItemsCheckTask is skipped because database check is blocked.
        16 MCMS 2002 Migration Medium Pre-migration analysis: SubmittedItemsCheckTask is skipped because database check is blocked.
        17 MCMS 2002 Migration Medium Pre-migration analysis: DeletedItemsCheckTask is skipped because database check is blocked.
        18 MCMS 2002 Migration Medium Pre-migration analysis: UserCheckTask is skipped because database check is blocked.
        19 MCMS 2002 Migration Medium Pre-migration analysis: FileSizeCheckTask is skipped because database check is blocked.
        20 MCMS 2002 Migration Medium Pre-migration analysis: HostHeaderMapCheckTask is skipped because database check is blocked.
        21 MCMS 2002 Migration Verbose Start Server check
        22 MCMS 2002 Migration Verbose End Server check
        23 MCMS 2002 Migration Verbose Start Server emptyness check
        24 MCMS 2002 Migration Verbose End Server emptyness check
        25 MCMS 2002 Migration Medium PreMigrationAnalyzer: Dry run starts
        26 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        27 MCMS 2002 Migration High CleanLockProcedure: connection system lock is null
        28 MCMS 2002 Migration Verbose Finished all tasks
        29 MCMS 2002 Migration High PreMigrationAnalyzer ends with True
        30 MCMS 2002 Migration Verbose Migration profile status is changed to AnalysisPassed

    Specifically, the two High level alerts on lines 4 and 5 are reflected in the migration report as warnings when running pre-migration analysis or running the migration profile. In addition, two other warnings appear in the migration report indicating two tables containing data (LayoutProperty and NodeLayout) that should be empty. According to the documentation, warnings are not sufficient cause to stop migration from occurring. Other anomalies are on lines 7-20, indicating a series of tests that are skipped because the database check is blocked. The ULS log doesn't give any additional warnings to indicate that the database check was blocked or exited in exceptional circumstances. After switching the profile from pre-migration analysis to exporting, there is one medium level warning that LastChangeTime is not set or incorrect (null). As with all the skipped test names and SQL table names from the warnings, the major search engines are unable (with the exception of LayoutProperty) to find any reference to these objects or tests. Finally, the section of the log covering the actual live migration attempt is appended below:

        01 MCMS 2002 Migration Medium LastChangeTime is not set or incorrect. (null)
        02 MCMS 2002 Migration Verbose Set export lock
        03 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        04 MCMS 2002 Migration Verbose CleanLockProcedure: end.
        05 MCMS 2002 Migration Verbose Prepare for export
        06 MCMS 2002 Migration Verbose Open connection...
        07 MCMS 2002 Migration Verbose Create temporary stored procedures
        08 MCMS 2002 Migration Verbose Create temporary tables...
        09 MCMS 2002 Migration Verbose Initialize temporary tables...
        10 MCMS 2002 Migration Verbose InitializeTemporaryTables: start
        11 MCMS 2002 Migration Verbose Initialize export table...
        12 MCMS 2002 Migration Verbose InitializeExportTable: start
        13 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        14 MCMS 2002 Migration Verbose CleanLockProcedure: end.
        15 MCMS 2002 Migration High Migration throws exception: Line 6: Incorrect syntax near ';'.. Stacktrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SharePoint.Publishing.Internal.Administration...
        16 MCMS 2002 Migration High ....MigrationBatchCommand.ExecuteImmediate(String command) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteWaitingCommands() at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer.SerializeSelectedExportObject(StringCollection objectAttribs) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeExportTable(ScopeType scopeType) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeTemporaryTables(DateTime lastChangeTime) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection) at Microsoft.SharePoint.Publishing.Internal.Admin...
        17 MCMS 2002 Migration High ...stration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis) at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export(MigrationDataAccess dataAccess) at Microsoft.SharePoint.Publishing.Administration.ContentMigration.MigrateInternal().
        18 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. Start.
        19 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. End.
        20 MCMS 2002 Migration Verbose Migration profile status is changed to Failed

    The stack trace of the failed parsing of the SQL command appears on lines 15-17. A cleaner version of the stack trace is appended below.

    Full stack trace:

        Migration throws exception: Line 6: Incorrect syntax near ';'..
        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
        at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
        at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
        at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteImmediate(String command)
        at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteWaitingCommands()
        at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer.SerializeSelectedExportObject(StringCollection objectAttribs)
        at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeExportTable(ScopeType scopeType)
        at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeTemporaryTables(DateTime lastChangeTime)
        at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection)
        at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis)
        at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export(MigrationDataAccess dataAccess)
        at Microsoft.SharePoint.Publishing.Administration.ContentMigration.MigrateInternal().

    None of this log information indicates the SQL command that is failing the parser check. I've checked the SQL servers hosting the source and destination databases for a trace of the query, but neither seems to have triggered the parse failure condition. That appears to have happened on the SharePoint server. Are there any other locations I should investigate that might tell me where to find the source of the error?

    Read the article

  • Findbugs and comparing

    - by Rob Goodwin
    I recently started using the FindBugs static analysis tool in a Java build I was doing. The first report came back with loads of High Priority warnings. Being the obsessive type of person, I was ready to go knock them all out. However, I must be missing something. I get most of the warnings when comparing things, such as in the following code:

        public void setSpacesPerLevel(int value)
        {
            if( value >= 0)
            {
            ...

    This produces a high priority warning at the if statement that reads:

        File: Indenter.java, Line: 60, Type: BIT_AND_ZZ, Priority: High, Category: CORRECTNESS
        Check to see if ((...) & 0) == 0 in sample.Indenter.setSpacesPerLevel(int)

    I am comparing an int to an int, which seems like a common thing. I get quite a few of that type of error with similar simple comparisons, and I have a lot of other high priority warnings on what appear to be simple code blocks. Am I missing something here? I realize that static analysis can produce false positives, but the errors I am seeing seem too trivial a case to be a false positive. This one has me scratching my head as well:

        for(int spaces = 0; spaces < spacesPerLevel; spaces++)
        {
            ...

    which gives the following FindBugs warning:

        File: Indenter.java, Line: 160, Type: IL_INFINITE_LOOP, Priority: High, Category: CORRECTNESS
        There is an apparent infinite loop in sample.Indenter.indent()
        This loop doesn't seem to have a way to terminate (other than by perhaps throwing an exception).

    Any ideas? So basically I have a handful of files and 50-60 high priority warnings similar to the ones above. I am using FindBugs 1.3.9 and calling it from the FindBugs Ant task.

    Read the article

  • Should I amortize scripting cost via bytecode analysis or multithreading?

    - by user18983
    I'm working on a game sort of thing where users can write arbitrary code for individual agents, and I'm trying to decide the best way to divide up computation time. The simplest option would be to give each agent a set amount of time and skip its turn if that elapses without an action being decided upon, but I would like people to be able to write their agents' decision functions without having to think too much about how long they're taking unless they really want to. The two approaches I'm considering are giving each agent a set number of bytecode instructions (taking cost into account) each timestep and making players deal with the consequences of the game state changing between blocks of computation (as with Battlecode), or giving each agent its own thread and giving each thread equal time on the processor. I'm about equally knowledgeable on both concurrency and bytecode stuff, which is to say not very, so I'm wondering which approach would be best. I have a clearer idea of how I'd structure things if I used bytecode, but less certainty about how to actually implement the analysis. I'm pretty sure I can work up a concurrency-based system without much trouble, but I worry it will be messier, with more overhead, and will add unnecessary complexity to the project.
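
    For what it's worth, the budgeted-instructions idea can be sketched without committing to a particular VM. Below is a minimal Python sketch, with all names hypothetical, where generators stand in for real bytecode metering: each agent yields once per unit of work, and the scheduler suspends it when its per-timestep budget is spent.

        # Sketch of per-agent instruction budgeting. A generator stands in for
        # a metered bytecode interpreter: each yield is one "instruction", and
        # an agent that runs long simply makes less progress per timestep.
        BUDGET = 1000  # instructions per agent per timestep (hypothetical)

        def example_agent(state):
            # A user-written decision function, yielding once per unit of work.
            while True:
                for target in state["targets"]:
                    yield  # one budgeted "instruction"
                    if target == state["goal"]:
                        state["action"] = ("move", target)
                yield

        def run_timestep(agents):
            for state, gen in agents:
                for _ in range(BUDGET):
                    try:
                        next(gen)          # execute one budgeted instruction
                    except StopIteration:  # agent finished early this turn
                        break
                # budget exhausted: the agent stays suspended, and the game
                # state may change before it resumes (the Battlecode model)

        state = {"targets": [1, 2, 3], "goal": 3, "action": None}
        run_timestep([(state, example_agent(state))])
        print(state["action"])  # ('move', 3)

    The thread-per-agent alternative avoids this bookkeeping but trades it for scheduler fairness and synchronization concerns, which matches the messiness worry above.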

    Read the article

  • Performance comparison of Dictionaries

    - by Hun1Ahpu
    I'm interested in the performance characteristics (big-O analysis) of the Lookup and Insert operations for the .NET dictionary types: Hashtable, SortedList, StringDictionary, ListDictionary, HybridDictionary, NameValueCollection. A link to a web page with the answer works for me too.

    Read the article

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
    Background

    I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website, and the simplest page allows users to choose a category and city. They then get back a very simple report (without the parameters and calculations section). The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that provides too many inputs.) The secondary purpose of the application is to provide climatologists and other scientists with deeper ways to view the data. (Using too many inputs, of course.)

    Tool Set

    The database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports.

    Poor Model Choice

    Currently, a linear regression model is applied against annual averages of daily data. The linear regression model is calculated within a PostgreSQL function as follows:

        SELECT
          regr_slope( amount, year_taken ),
          regr_intercept( amount, year_taken ),
          corr( amount, year_taken )
        FROM temp_regression
        INTO STRICT slope, intercept, correlation;

    The results are returned to JasperReports using:

        SELECT
          year_taken,
          amount,
          year_taken * slope + intercept,
          slope,
          intercept,
          correlation,
          total_measurements
        INTO result;

    JasperReports calls into PostgreSQL using the following parameterized analysis function:

        SELECT
          year_taken, amount, measurements, regression_line,
          slope, intercept, correlation, total_measurements, execute_time
        FROM climate.analysis(
          $P{CityId}, $P{Elevation1}, $P{Elevation2}, $P{Radius},
          $P{CategoryId}, $P{Year1}, $P{Year2} )
        ORDER BY year_taken

    This is not an optimal solution because it gives the false impression that the climate is changing at a slow but steady rate.

    Questions

    Using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope: What is a better regression model to apply? What CRAN R packages provide such models? (Installable, ideally, using apt-get.) How can the R functions be called within a PostgreSQL function?

    If no such functions exist: What parameters should I try to obtain for functions that will produce the desired fit? How would you recommend showing the best-fit curve?

    Keep in mind that this is a web app for use by the general public. If the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!
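
    For illustration only, here is a sketch of one step beyond the straight line: a low-order polynomial fit. It is written in Python purely for compactness, with synthetic data standing in for the (year_taken, amount) rows; the same idea in R is lm(amount ~ poly(year_taken, 2)) or, nonparametrically, loess(amount ~ year_taken).

        # Sketch: fit a quadratic instead of a straight line to annual averages.
        # The data here is synthetic; substitute the real (year_taken, amount) rows.
        import numpy as np

        years = np.arange(1900, 2010)
        x = years - years.mean()  # center years to keep the fit well-conditioned
        amount = 10 + 0.001 * x ** 2 + np.random.normal(0, 0.5, years.size)

        coeffs = np.polyfit(x, amount, deg=2)  # quadratic least squares
        fitted = np.polyval(coeffs, x)         # curve to plot in the report

        # R^2 as a rough goodness-of-fit figure for the report footer.
        ss_res = np.sum((amount - fitted) ** 2)
        ss_tot = np.sum((amount - amount.mean()) ** 2)
        print("R^2 =", 1 - ss_res / ss_tot)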

    Read the article

  • Critiquing PHP code / PerlCritic for PHP?

    - by jeekl
    I'm looking for an equivalent of PerlCritic for PHP. PerlCritic is a static source code analyzer that critiques code and warns about everything from unused variables to unsafe ways of handling data to almost anything. Is there such a thing for PHP that could (preferably) be run outside of an IDE, so that source code analysis could be automated?

    Read the article

  • Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

    - by James Rogers
    Many implementations require the use of ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops all of the affected indexes and aggregations in partitions affected by data that changes in the dimension on which the ProcessUpdate is being performed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred." Any subsequent ProcessUpdate processing will also fail with the same error. In talking with Microsoft, the issue is corrupt indexes for the dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of down time.

    Workaround: Run ProcessIndexes on the entire cube that is being processed and throwing the error. This corrected the problem on both 2008 and 2008 R2.

    Pros: Does not require a complete rebuild of the data (ProcessFull) for either the dimension or the cube. User access can continue while the ProcessIndexes is underway.

    Cons: Can take a long time, especially on large cubes with many partitions, dimensions and/or aggregations. Query performance is usually severely impacted due to the memory and CPU requirements for aggregation and index building.

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
                     xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
                     xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
                     xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200"
                     xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>

    The cube where the corruption exists can be found by having Profiler running while the ProcessUpdate is executing. The first partition that displays the "The Job has ended in failure." message in the TextData column will be part of the cube/measure group that has the corruption. You can try to run ProcessIndexes on just that measure group. This may correct the problem and save additional time if you have other large measure groups in the cube that are not affected by the corruption.

    Remember to execute your normal ProcessUpdate batch after the successful completion of the ProcessIndexes; the ProcessIndexes does not pick up data changes.

    Things that did not work:

    ProcessClearIndexes: why this doesn't work while ProcessIndexes does is unclear at this point.

    ProcessFull on the partition in question: in my latest case, this would clear up the problem for that partition, but the next partition the ProcessUpdate touched that had data in it would generate an error. This leads me to believe the corruption problem will exist in all partitions in the affected measure group that have data in them.

    NOTE: I experienced this problem in both a SQL 2008 and a SQL 2008 R2 Analysis Services environment, on separate servers built from the same relational database. This leads me to believe that some data condition in the tables used for the dimension processing caused the corruption, since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps to give us more insight into what actually caused the corruption and will update this post accordingly.

    Read the article

  • Can you authenticate into SSAS with AD LDS (ADAM) accounts?

    - by Jaxidian
    I'm very new to AD LDS and experienced but not qualified with SSAS, so my apologies for my ignorance of both. We have a couple of implementations where we expose SSAS via an HTTPS proxy (msmdpump.dll), and currently we have a temporary domain set up handling this (where our end-users have a second account and credentials to manage because of this = non-ideal). I want to move us towards a more permanent solution, which I'm thinking means moving all authentication to AD LDS for our web apps, SSAS, and others. However, SSAS is where I'm concerned about this. I know SSAS requires Windows Authentication to play nicely, and that this ultimately means Active Directory will be involved. Is there a way to get this done with AD LDS instead of having to use a full AD DS implementation? If so, how? (Note: My question over at StackOverflow had a suggestion that I post this question here on ServerFault instead. My apologies if I'm not asking in the right forum.)

    Read the article

  • What consequences should I draw from what I read in log files?

    - by Helene Bilbo
    For some weeks now I have been managing my first web server, a Seaside application behind an Apache proxy on Linode, and I installed logwatch to send me daily logs. Where can I get information on when I have to act on what I read in these logwatch reports? For example, I read that all kinds of people try to log in to odd nonexistent accounts, all kinds of web crawlers test for nonexistent CMS login pages, and some IP addresses get banned and unbanned by fail2ban... I assume that's normal? Is it? But how do I know when I actually have to do something? What do I look for in the logs?
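
    One way to make "do I have to act?" concrete is to reduce a log to a number you can eyeball or threshold. A minimal Python sketch (the log path and message format are assumptions, though Debian systems typically log sshd failures to /var/log/auth.log) that counts failed SSH logins per source IP:

        # Count failed SSH logins per source IP so a spike stands out from the
        # normal background noise of internet-wide scanning.
        import re
        from collections import Counter

        FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

        counts = Counter()
        with open("/var/log/auth.log", errors="replace") as log:
            for line in log:
                match = FAILED.search(line)
                if match:
                    counts[match.group(1)] += 1

        # A count far above the rest may deserve a firewall rule -- or simply
        # confirms that fail2ban is already doing its job.
        for ip, n in counts.most_common(10):
            print(f"{n:6d}  {ip}")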

    Read the article

  • What is the best/easiest way to use scripts to analyze network traffic?

    - by yungin
    I'm looking to analyze packets via scripts, and I'd like to use something high-level. I'm in a Mac/Linux environment. I'm currently looking at different Python+libpcap libraries, and perhaps Lua+Wireshark too. Maybe tcpdump+bash (but I'm not sure that exposes a lot of info I can use). I've also heard good things about Scapy. Not sure. I'm wondering if you have any recommendations? There are quite a few options out there. What have you found that works best? I'd definitely want something scriptable, not something that I need to compile (like C/C++, etc.).
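
    Since Scapy came up, here is a minimal sketch of the kind of scripted analysis it allows (the filter and packet count are arbitrary choices, and live capture generally requires root):

        # Summarize which hosts are talking, and on which ports, either from a
        # live capture or from a saved pcap file.
        from collections import Counter
        from scapy.all import IP, TCP, rdpcap, sniff

        packets = sniff(count=200, filter="tcp")  # live capture (needs root)
        # packets = rdpcap("capture.pcap")        # ...or analyze a saved trace

        talkers = Counter()
        for pkt in packets:
            if IP in pkt and TCP in pkt:
                talkers[(pkt[IP].src, pkt[IP].dst, pkt[TCP].dport)] += 1

        for (src, dst, dport), n in talkers.most_common(10):
            print(f"{n:5d}  {src} -> {dst}:{dport}")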

    Read the article

  • Log application changes made to the system

    - by Maxim Veksler
    Hello, Windows 7, 64-bit. I have an application which I don't trust but still need to run. I would like to run the installer of this application, and later on the installed executable, under some kind of "strace" for Windows which will record what this application did to the system. Mainly: What files have been created/edited? What registry changes have been made? To what network hosts did the application try to communicate? Ideally I would also be able to generate an "UNDO" action to undo all the changes. Please don't suggest full virtualization solutions such as VirtualBox, VMware and co., because the application should run in the host system (a "sandbox" approach will, OTOH, be accepted, IMHO). Do you know of any such utility I can use? Thank you, Maxim.

    Read the article

  • How expensive is it to run a PC 24/7, and how do I figure that out?

    - by jasondavis
    I realize this question is difficult to answer, as it would differ based on the user's location, what their PC is doing, and what hardware it consists of, along with other factors, but I am hoping someone could give me a very rough estimate. I have always run many PCs in my home 24/7, and I am just now looking at it from a money/cost-of-electricity point of view. 1) I live in Central Florida. Can anyone guesstimate the average monthly or daily cost of running your average PC? Intel quad-core processor, 1 SSD for OS and programs, and 4-5 1-2 TB hard drives in a RAID setup for data, with a 750 W PSU. What would your guess be? 2) Also, is there an accurate way to figure this out (non-super-technical and not confusing to a non-math person, please)? I have seen those Kill A Watt devices; do they figure this kind of stuff out for you? 3) Does a larger PSU make your PC consume more power? Thanks for any help; you can most likely tell I am somewhat lost about this!
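
    On question 2: the arithmetic is simple once you have a real wattage figure, which is exactly what a Kill A Watt measures. (And on question 3: no, the 750 W rating is the PSU's capacity, not its draw; actual consumption depends on load.) A rough worked example, with the average draw and electric rate as assumptions to replace with your own numbers:

        # Back-of-the-envelope 24/7 running cost. Both inputs are assumptions:
        # measure the real draw with a Kill A Watt and use your utility's rate.
        avg_watts = 250.0       # assumed average draw, NOT the PSU rating
        rate_per_kwh = 0.12     # assumed USD per kilowatt-hour

        kwh_per_day = avg_watts * 24 / 1000        # 6.0 kWh
        cost_per_day = kwh_per_day * rate_per_kwh  # ~$0.72
        cost_per_month = cost_per_day * 30         # ~$21.60

        print(f"{kwh_per_day:.1f} kWh/day, ${cost_per_day:.2f}/day, "
              f"${cost_per_month:.2f}/month")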

    Read the article

  • What are my options with a downloaded Rackspace cloud image?

    - by schnippy
    I've got an unresponsive Rackspace slice that has defied all attempts at access. I created an emergency image from it and deleted it, downloading the files that comprise the image to a local source. There are a number of files/assets I would still like to recover from this server if possible, but I'm not sure exactly what I can do with the image files, if anything. Here are the files I have, for what it's worth:

        emergency_########_######_cloudserver########.tar.gz.0 (5gb)
        emergency_########_######_cloudserver########.tar.gz.1 (5gb)
        emergency_########_######_cloudserver########.tar.gz.2 (5gb)
        emergency_########_######_cloudserver########.tar.gz.3 (50mb)
        emergency_########_######_cloudserver########.yml (25kb)

    Is it possible to mount this image as a drive? Are there other forensic recovery options?
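
    If the numbered files are sequential chunks of a single gzipped tar archive (an assumption, but a common way large cloud images are split for download), then reassembling and inspecting them is straightforward. A Python sketch:

        # Reassemble the numbered chunks into one tar.gz and list its contents,
        # assuming the .0-.3 suffixes are sequential pieces of one archive.
        import glob
        import shutil
        import tarfile

        chunks = sorted(glob.glob("emergency_*_cloudserver*.tar.gz.*"),
                        key=lambda p: int(p.rsplit(".", 1)[1]))

        with open("image.tar.gz", "wb") as out:
            for chunk in chunks:
                with open(chunk, "rb") as part:
                    shutil.copyfileobj(part, out)

        # List the filesystem contents; extract files selectively rather than
        # unpacking all ~15 GB at once.
        with tarfile.open("image.tar.gz", "r:gz") as tar:
            for member in tar.getmembers()[:50]:
                print(member.name)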

    Read the article

  • Cloud computing?

    - by Shawn H
    I'm an analyst and intermediate programmer working for a consulting company. Sometimes we do intensive computing in Excel, which can be frustrating because we have slow computers. My company does not have enough money to buy everyone new computers right now. Is there a cloud computing service that allows me to log in to a high-performance virtual computer via remote desktop? We are not that technical, so preferably the computer would run Windows and I could run Excel and other applications on it. Thanks

    Read the article

  • Connection error when trying to browse SSAS cube in BIDS

    - by lance
    A SQL Server 2008 DB instance is installed on a Windows 2003 Server VM, networked but not domain-joined. I can browse a cube from BIDS on the same VM, but when I try to browse the same cube from a Windows 7 (Home Premium) networked machine, I get a connection error. The error suggests checking the data source settings, which I have done, and when I click "test connection" on the data source it is successful. I'm looking for probable causes and solutions.

    Read the article

  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas of how to confirm any diagnosis.

    Some background: the servers are DL160-class machines with hardware RAID between two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 "main" most CPU-hungry processes are bound to one core each via CPU affinity. Other random background scripts are not forced anywhere. The filesystem is writing ~1.5k blocks/s the whole time (going above 2k/s in peak times). Normal CPU usage for those servers is ~60% on 7 cores and some minimal usage on the last (whatever's running on shells usually).

    What actually happens is that the "main" services start using 100% CPU at some point, mainly stuck in kernel time. After a couple of seconds, the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel reporting a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh D 0000000000000000 0 15911 1
        [118951.273093] ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183] ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274] 0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424] [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510] [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563] [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613] [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692] [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID/disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some general weird hardware behaviour.

    It's also running MySQL (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200k q/s (selects) and ~10 q/s (writes).

    The host is never running out of memory or swapping, and no OOM reports are spotted. We've got many boxes with similar/same hardware and they all seem to behave that way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. Applications themselves don't really report anything wrong when they're running. I can run anything safely on the same hardware in an isolated environment. What can I do to narrow down the problem? Where else should I look for explanation?
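
    One low-cost way to catch the lead-up to a hang is a "flight recorder" that appends load and CPU state to a local file every second, so there is something to read after the box comes back. A minimal Python sketch (the path and interval are arbitrary choices):

        # Append load average and aggregate CPU counters to a file once per
        # second, preserving the moments before a hang even if the box
        # becomes unreachable.
        import time

        def snapshot():
            with open("/proc/loadavg") as f:
                load = f.read().strip()
            with open("/proc/stat") as f:
                cpu = f.readline().strip()  # user/nice/system/idle/iowait jiffies
            return f"{time.strftime('%F %T')}  load: {load}  {cpu}"

        with open("/var/log/flight-recorder.log", "a", buffering=1) as out:
            while True:
                out.write(snapshot() + "\n")
                time.sleep(1)

    If system time spikes while iowait stays flat in the last samples before a hang, that leans toward the gettsc/clocksource suspicion rather than the RAID.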

    Read the article

  • Building a Data Warehouse

    - by Paul
    I've seen tutorials, articles and posts on how to build data warehouses with star and snowflake schemas, denormalization of OLTP databases, fact and dimension tables, and so on. I've also seen comments like:

        Star schemas are for datamarts, at best. There is absolutely no way a true enterprise data warehouse could be represented in a star schema, or snowflake either.

    I want to create a database that will serve Reporting Services and maybe (if that isn't enough) install Analysis Services and extract reports and data from cubes. My question is: is it really necessary to redesign my current database and follow the star/snowflake schemas with fact and dimension tables? Thank you

    Read the article
