Search Results

Search found 3137 results on 126 pages for 'reporting avatar'.

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine whether the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, depending on the logging level set in the Configuration tool, can give extensive information about the ETL process and can be used to validate process completion. To automate the monitoring of these log files, perform the following steps:

    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worthy of an alert.
    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (the number could differ based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end; either way, this is good cause for an alert.
    3. Check the last line in the log file. It should indicate that the ETL run completed successfully. For example, the last line of a log file will say (results could differ based on the Reporting Database version): [INFO] (Message) Finished Writing Report
    4. Write an Ant script to execute the ETL process with failonerror="true", and from there send the results to an external monitoring tool, to email, or to a database.

    With each ETL run, entries append to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process, so that only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.

    Another way to determine whether the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this is in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records an entry for each ETL run along with a start and finish time. If the ETL process has failed, the finish date should be null, so the table can be queried at the time the ETL process is expected to be finished and an alert sent if the finish date is null. These are just some options; there are additional ways this can be accomplished based around these two areas, log files or database.
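    A minimal sketch of that check (the column names in etl_processmaster vary by version, so treat STARTTIME and FINISHTIME here as hypothetical and verify them against your own table first):

        SELECT PROCESSID, STARTTIME, FINISHTIME
          FROM etl_processmaster
         WHERE PROCESSID = (SELECT MAX(PROCESSID) FROM etl_processmaster)
           AND FINISHTIME IS NULL;

    If this returns a row after the expected completion time, the run is either still going or has failed, and an alert is warranted.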
    Here is an additional query to gather more information about your ETL run (connect as Staruser):

        SELECT SYSDATE,
               test_script,
               decode(loc, 0, PROCESSNAME, trim(SUBSTR(PROCESSNAME, loc + 1))) PROCESSNAME,
               duration duration
        FROM (
            SELECT (e.endtime - b.starttime) * 1440 duration,
                   to_char(b.starttime, 'hh24:mi:ss') starttime,
                   to_char(e.endtime, 'hh24:mi:ss') endtime,
                   b.PROCESSNAME,
                   instr(b.PROCESSNAME, ']') loc,
                   b.infotype test_script
            FROM (
                SELECT processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
                FROM etl_processinfo
                WHERE processid = (SELECT MAX(PROCESSID) FROM etl_processinfo)
                  AND infotype = 'BEGIN'
            ) b
            INNER JOIN (
                SELECT processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
                FROM etl_processinfo
                WHERE processid = (SELECT MAX(PROCESSID) FROM etl_processinfo)
                  AND infotype = 'END'
            ) e ON b.processid = e.processid
               AND b.PROCESSNAME = e.PROCESSNAME
            ORDER BY b.starttime
        )

    Read the article

  • Using ReportViewer 9 control in VS 2010

    - by Fermin
    Hi, I am writing an ASP.NET app that uses a SQL Server 2005 setup with SSRS. I want to use the ReportViewer control, but I get an error when using ReportViewer 10 because it needs SSRS 2008. How can I use ReportViewer 9 within my application? I've added a reference to the Microsoft.ReportViewer.WebForms.dll version 9 and removed the reference to version 10. My markup is as follows: <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %> <!-- standard markup --> <rsweb:ReportViewer ID="ReportViewer1" runat="server"></rsweb:ReportViewer> But when I try to run this I get the following error: CS0433: The type 'Microsoft.Reporting.WebForms.ReportViewer' exists in both 'c:\WINDOWS\assembly\GAC_MSIL\Microsoft.ReportViewer.WebForms\10.0.0.0__b03f5f7f11d50a3a\Microsoft.ReportViewer.WebForms.dll' and 'c:\WINDOWS\assembly\GAC_MSIL\Microsoft.ReportViewer.WebForms\9.0.0.0__b03f5f7f11d50a3a\Microsoft.ReportViewer.WebForms.dll' What have I missed!? Update: When trying to use ReportViewer 10 I get the following error: "Remote report processing requires Microsoft SQL Server 2008 Reporting Services or later."
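    One thing worth checking (a sketch; your actual web.config contents will differ): CS0433 usually means both assembly versions are still registered in the <compilation><assemblies> section of web.config, independently of the project references, so only the 9.0.0.0 entries should remain there:

        <compilation debug="false">
          <assemblies>
            <!-- keep only the version 9 registrations; delete any 10.0.0.0 entries -->
            <add assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
            <add assembly="Microsoft.ReportViewer.Common, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
          </assemblies>
        </compilation>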

    Read the article

  • "Unable To Load Client Print Control" - SSRS Printing problems again

    - by mamorgan1
    Please forgive me as my head is spinning. I have tried so many solutions to this issue that I'm almost not sure where I am at this point. At this point in time I have these issues in my Production, Test, and Dev environments. For simplicity's sake, I will just try to get it working in Dev first. Here is my setup:

    Database/Reporting Server (same server): Windows Server 2003 SP2, SQL Server 2005 SP3
    Development Box: Windows 7, Visual Studio 2008 SP1, SQL Server 2008 SP1 (not being used in this case, but I wanted to include it in case it is relevant), Internet Explorer 8

    Details:
    * I have a custom ASP.NET application that is using ReportViewer to access reports on my Database/Reporting Server.
    * I am able to connect directly to Report Manager and print with no trouble.
    * When I view source on the page with ReportViewer, it says I am using version 9.0.30729.4402.
    * The classid of the rsclientprint.dll that keeps getting installed to my c:\windows\downloaded program files directory is {41861299-EAB2-4DCC-986C-802AE12AC499}.
    * I have tried taking the rsclientprint.cab file from my Database/Reporting Server and installing it directly on my Development Box, with no success. I made sure to unregister the previously installed dll first.

    I feel like I have read as many solutions as I can, and so I turn to you for some assistance. Please let me know if I can provide further details that would be helpful. Thanks

    Read the article

  • Overriding windows authentication for a .NET application

    - by JoshReedSchramm
    I have a .NET application where the homepage (default.aspx) should be accessible by anyone. There is also a reporting page (reporting.aspx) that I want to secure via Windows authentication and only allow access to a particular set of AD groups. Right now, the way my web.config is set up, it secures the reporting page, but on the home page it prompts the user for login credentials. If they hit Esc they can continue to the page, though, so it isn't actually securing it. I need to prevent it from prompting the user. How do I need to set up my config? Here is what I have now:

    <system.web>
      <authentication mode="Windows" />
      <identity impersonate="true" />
      <authorization>
        <allow roles="BUILTIN\Administrators, DomainName\Manager" />
        <deny users="?" />
      </authorization>
      ...MORE STUFF...
    </system.web>
    <location path="default.aspx">
      <system.web>
        <identity impersonate="false" />
        <authorization>
          <allow users="*"/>
        </authorization>
      </system.web>
    </location>
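    One possible restructuring (a sketch; the role names are taken from the question, and note that for the public page to skip the prompt, IIS must allow Anonymous authentication alongside Integrated Windows authentication) is to allow everyone at the root and lock down only the secured page:

    <system.web>
      <authentication mode="Windows" />
      <authorization>
        <allow users="*" />
      </authorization>
    </system.web>
    <location path="reporting.aspx">
      <system.web>
        <authorization>
          <allow roles="BUILTIN\Administrators, DomainName\Manager" />
          <deny users="*" />
        </authorization>
      </system.web>
    </location>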

    Read the article

  • Object model design choice

    - by spinon
    I am currently working on an ASP.NET MVC reporting application using C#. This is a redesign of a PHP application that was initially thrown together and is now starting to gain some more traction, so we are in the process of reworking the backend to have a more OO approach. One of the decisions I am currently wrestling with is how to structure the domain objects. Since 95% of the site is read-only, I am not sure the typical approaches are practical. Should I create domain objects for the primary pieces of the application (ticket, assignment, assignee) and then create static methods off of these to pull the reporting data? Or should I skip that part and create the chart data classes with get methods off of them? It's not a really big application, and currently I am the only one developing on it, but I feel torn as to which approach to take. I feel that the first one is the better choice, but maybe it is overkill given that the majority of use is aggregate reporting. Anybody have some good insight on why I should go one way or another?
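    For reference, a minimal sketch of the two shapes being weighed (all type and method names here are hypothetical, just to make the contrast concrete):

        using System.Collections.Generic;

        // Option 1: domain objects for the primary pieces, with static reporting methods.
        public class Ticket
        {
            public int Id { get; set; }
            public string Assignee { get; set; }

            // Static aggregate query hanging off the domain type.
            public static IList<Ticket> GetByAssignee(string assignee)
            {
                // ... database query would go here ...
                return new List<Ticket>();
            }
        }

        // Option 2: skip the domain layer and model the chart data directly.
        public class ChartPoint
        {
            public string Label { get; set; }
            public int Value { get; set; }

            public static IList<ChartPoint> TicketCountsByAssignee()
            {
                // ... aggregate query would go here ...
                return new List<ChartPoint>();
            }
        }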

    Read the article

  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while a very small subset of that data is used in daily operations.

    Background: We have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our visitor and visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. PS: We do not currently have the budget or expertise for a data warehouse.

    The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance, a Visitor_Archive and a Visitor_Small table), where the larger visitor and visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table exists for managing leads: sending a lead an email, getting a lead's phone number, listing my assigned leads, etc. The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and the Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?
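    On the filtered-replication question: transactional replication in SQL Server does support horizontal (row) filters on an article, so a sketch along these lines is possible (the publication name is hypothetical and the filter clause just reuses the FooID example from the question; this assumes a transactional publication has already been created):

        EXEC sp_addarticle
            @publication   = N'VisitorPub',   -- hypothetical publication name
            @article       = N'Visitor',
            @source_object = N'Visitor',
            @filter_clause = N'FooID = 1';    -- only rows matching this predicate replicate

    The caveat is that the filtered copy lands in a subscriber database, so the small "working" table would live there rather than alongside the big tables.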

    Read the article

  • Virtual Disk Degraded

    - by TheD
    There is a physical DC with a RAID 1 mirror: 2 physical disks, 500GB each. Dell Server Administrator is installed on the DC and is reporting that both physical disks are fine, online, in a good state, etc., on a PERC S300 RAID controller:

    Physical Disk 0:0
    Physical Disk 0:1

    However, at the same time it's reporting that a virtual disk is degraded. What exactly does this mean? The virtual disk indicates its state is a RAID 1 layout, Device Name: Windows Disk 0. If my understanding is correct, then the virtual disk, when you drill down into Dell OpenManage, should have both physical disks as members, as it is a mirror. Is this correct? However, when I drill down into the virtual disk, it only displays Physical Disk 0:0 included in Virtual Disk 1. I'm very new to server-side/RAID management etc., just covering while our server techy is away! Thanks!

    Read the article

  • VMware Workstation Error: Cannot find a valid peer process

    - by Robert Claypool
    I am running VMware Workstation 6.1.5 (build-126130) on CentOS 5.3 (Final). One of the guest machines is reporting an error when I try to power on the most recent snapshot. Snapshots further back in the timeline will power on without any problem. Error: Unable to change virtual machine power state: Cannot find a valid peer process to connect to. Apparently I'm not the only one with this problem. Others have been reporting it since at least early 2005. The forums say to delete unused lock files and restart any hung VMware processes (or restart the host machine), which I have done. Still no luck. Any other ideas?

    Read the article

  • SATA-II motherboard and drives but only connected at SATA-I

    - by Shevek
    I have an Abit AB9 QuadGT motherboard which has a SATA-II controller. Connected to it I have a Kingston SSDNow V Series 64GB as boot drive and a Seagate Barracuda 7200.10 as a data drive. I also have 2 x Optiarc AD-7170S DVD burners attached by SATA. Both SSD and HDD are SATA-II and the optical drives are SATA-I. I have just run CrystalDiskInfo and this is reporting that both SSD and HDD are connected at SATA-I (1.5 Gbps), not SATA-II (3.0 Gbps). I have the BIOS set up to use SATA drives in IDE mode. So a few questions: Is CrystalDiskInfo reporting correctly? Are the optical drives causing the SSD & HDD to connect at the slower rate? Is there any setting to force the SSD & HDD to use SATA-II? I'm running Windows 7 Ultimate.

    Read the article

  • MySql Replication with a star topology

    - by Riotopsys
    My company currently operates in 3 separate locations connected by slow VPN links. Each site hosts a dedicated MySQL server. I need to aggregate the data from all three of them onto a single server for corporate reporting. The powers that be have stated I cannot use circular replication or federated tables. Is there a third-party tool for MySQL that can replicate from multiple masters? Basically, the diagram would be a daisy: the reporting server slave at the center, with multiple replication connections coming in from the master sites on the petals.
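    One thing worth checking before reaching for third-party tools: later MySQL releases (5.7 and up) support multi-source replication natively through named channels, which matches this daisy shape exactly. A sketch run on the central reporting server (host names and credentials are placeholders, and this form assumes GTID-based replication with TABLE-backed replication repositories):

        CHANGE MASTER TO
            MASTER_HOST = 'site1.example.com',
            MASTER_USER = 'repl',
            MASTER_PASSWORD = 'secret',
            MASTER_AUTO_POSITION = 1
        FOR CHANNEL 'site1';

        START SLAVE FOR CHANNEL 'site1';

        -- repeat CHANGE MASTER ... FOR CHANNEL 'site2' / 'site3' for the other masters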

    Read the article

  • Stop SQL Server services conveniently

    - by MedicineMan
    I have a general-use laptop. I use it for games, development, and web surfing. I've just installed SQL Server 2008 with Analysis Services, Reporting Services, and error reporting, along with the other options on the installer. I also have a default instance of SQL Server as well as a named instance. When I'm not doing development, I'd like to shut down these services conveniently; I'm thinking that a batch file would be good. What are the commands to shut these services down and release the associated memory and resources? It appears that net stop MSSQLSERVER stops the MSSQLSERVER instance. What about the other services?
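    Something like the following batch file would cover the usual services (a sketch; the exact service names depend on the instance names chosen at install time, so confirm them in services.msc or with sc query first):

        @echo off
        rem Stop the agent before the engine, since the agent depends on it
        net stop SQLSERVERAGENT
        net stop MSSQLSERVER

        rem Named instance services follow the service$instance pattern
        rem (replace MYINSTANCE with the actual instance name)
        net stop SQLAgent$MYINSTANCE
        net stop MSSQL$MYINSTANCE

        rem Analysis Services, Reporting Services, and the browser (default instance names)
        net stop MSSQLServerOLAPService
        net stop ReportServer
        net stop SQLBrowser

    A matching script with net start in reverse order brings everything back when development resumes; stopping the services releases the memory they held.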

    Read the article

  • What response should be made to a continued web-app crack attempt?

    - by Tchalvak
    I'm having issues with a continuous, concerted cracking attempt on a website (coded in PHP, running on a Debian server). The main problem is SQL injection attempts. A secondary effect of the problem is being spidered or repeatedly spammed with URLs that, though the security hole has been closed, are still obviously related attempts to crack the site; they continue to add load, and thus should be blocked. So what measures can I take to: A: Block known intruders/known attack machines (notably those making themselves anonymous via botnets or relaying servers) to prevent their repeated, continuous, timed access from affecting the load of the site, and B: Report and respond to the attack (I'm aware that reporting to law enforcement is almost certainly futile, as may be reporting to the IP/machine where the attacks originate, but other responses would be welcome).
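    For part A at the network level, plain iptables rules on Debian can drop or throttle the offenders; fail2ban automates the same idea by scanning logs and inserting rules for you. A sketch (the address is a documentation placeholder):

        # drop a single known attack source outright
        iptables -A INPUT -s 203.0.113.45 -j DROP

        # or rate-limit new connections to the web port rather than hard-blocking
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m limit --limit 30/minute --limit-burst 60 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j DROP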

    Read the article

  • Authentication problem: can't bypass the login prompt when browsing to the SQL Reporting Services website

    - by laurens
    I'm having a hard time configuring Reporting Services on one of our servers. I'm not uninitiated in the domain of IIS7, but I cannot get rid of the login prompt when I'm surfing to the Reporting Services website. What I did:

    1. I made a Windows and a SQL user with the same name.
    2. I chose Anonymous authentication in IIS7 and filled in the credentials of the specific R.S. user: http://img32.imageshack.us/i/iis7auth.jpg/
    3. I chose 'Local Service' as the service account in the R.S. Configuration Manager: http://img88.imageshack.us/i/rsconfigmgr.jpg/

    The first problem is that there's always a pop-up when surfing to the website. The second is that when I'm able to log in, I get the message that the user doesn't have the appropriate permissions. The pop-up: http://img693.imageshack.us/i/loginpopup.jpg/

    The server is a 2008 Web Server with SQL 2008 R2 Express. What am I doing wrong? Thanks in advance!

    Read the article

  • Win7 x64 unresponsive for a minute or so. HD failing?

    - by Gaia
    On a fully updated Win7 x64, every so often the system stalls for a minute or so. This has been going on for a couple of months now. By stalling I mean the mouse responds and I can move windows around, but any window of any open program becomes whitish when I select it AND no new programs will open. It doesn't matter what kind of program it is. When the stall stops, all the clicks I made (to open new programs, for example) take effect. Nothing shows up consistently (as in every time this happens) in the event log. Today, though, I was able to find something, but it doesn't reveal much other than that "the system was unresponsive": a 7009, "A timeout was reached (30000 milliseconds) while waiting for the Windows Error Reporting Service service to connect." It doesn't matter whether I have any USB devices plugged in or not. I've run Microsoft Security Essentials and Malwarebytes. While the machine is unresponsive, I've noticed that Drive D (the other partition on the single internal HD in this laptop) is displayed oddly in Explorer (screenshot lost); this never occurs with Drive C or any other drive on the machine. SMART report for the physical drive (image lost). Read benchmark by HD Tune 5 Pro (image lost), probably the most telling piece of the puzzle. Isn't this alone enough to see there is a problem with the drive, regardless of whether the unresponsiveness is caused by such a purported problem? Here is a short hardware report:

    Computer: LENOVO ThinkPad T520
    CPU: Intel Core i5-2520M (Sandy Bridge-MB SV, J1) 2500 MHz (25.00x100.0) @ 797 MHz (8.00x99.7)
    Motherboard: LENOVO 423946U
    Chipset: Intel QM67 (Cougar Point) [B3]
    Memory: 8192 MBytes @ 664 MHz, 9.0-9-9-24
      - 4096 MB PC10600 DDR3 SDRAM - Samsung M471B5273CH0-CH9
      - 4096 MB PC10600 DDR3 SDRAM - Patriot Memory (PDP Systems) PSD34G13332S
    Graphics: Intel Sandy Bridge-MB GT2+ - Integrated Graphics Controller [D2/J1/Q0] [Lenovo] Intel HD Graphics 3000 (Sandy Bridge GT2+), 3937912 KB
    Drive: ST320LT007, 312.6 GB, Serial ATA 3Gb/s
    Sound: Intel Cougar Point PCH - High Definition Audio Controller [B2]
    Network: Intel 82579LM (Lewisville) Gigabit Ethernet Controller
    Network: Intel Centrino Advanced-N 6205 AGN 2x2 HMC
    OS: Microsoft Windows 7 Professional (x64) Build 7601

    The drive is less than 1 year old. Do I have a defective drive? Seagate Tools diag says there is nothing wrong with the drive... UPDATE: I noticed that the Windows Error Reporting Service entered the running state, then the stopped state, and the gap between the two events was exactly 2 minutes. Which error it was trying to report I don't know. I checked the "Reliability Monitor" and it shows no errors to be reported. I've disabled the Windows Error Reporting Service to see if the problem stops.

    Read the article

  • Can't kill process TGitCache.exe

    - by ProfKaos
    Sometimes, I suspect when I open a music folder during the right moon phase and during a leap microsecond, this process crashes and pops up an error-reporting dialogue. I decline to report the error, because by now that also fails, and choose Exit. Exit just delays the reappearance of the error-reporting dialogue for about 2 seconds. If I try to kill the process using Sysinternals' Process Explorer, the process is just restarted, only to crash again. So I'm pretty sure another process (probably a service, because TGitCache doesn't have a parent process and no other Git processes are visible) is keeping tabs on this process and restarting it if it dies. This is cruel and inhuman, but how can I find which nanny process is prolonging the agony?
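    One way to hunt for the nanny with standard Windows tooling (a sketch; nothing here is specific to TortoiseGit): WMIC records the parent PID for a process even when Process Explorer shows no live parent, though if the parent has already exited the PID may be stale or reused.

        rem Show TGitCache.exe and the PID of whatever process created it
        wmic process where name="TGitCache.exe" get ProcessId,ParentProcessId

        rem Then map that parent PID back to a name (1234 is a placeholder)
        wmic process where ProcessId=1234 get Name,ExecutablePath

    If the parent turns out to be gone, Process Monitor's Process Create events can capture who spawns the next restart.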

    Read the article

  • SSRS Errors "Use Local", even though I am

    - by Corey Coogan
    I am at a loss. I posted this on SO, but think this is probably a better place. I have searched high and low and don't know what to do. I am running SQL Server Web Edition on Server 2008, which only supports local databases. I am trying to connect to localhost, but when I test my connection, I get this error. The feature: "The edition of Reporting Services that you are using requires that you use local SQL Server relational databases for report data sources and the report server database." is not supported in this edition of Reporting Services. The DB was upgraded from SQL Express and when I select @@version, it says it's Web Edition. I've tried rebooting and that seemed to fix it, but only for a little while.
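    One quick diagnostic (standard system functions, shown only as a sanity check) is to ask the engine directly what edition it believes it is:

        SELECT SERVERPROPERTY('Edition')       AS Edition,
               SERVERPROPERTY('EngineEdition') AS EngineEdition,
               @@VERSION                       AS FullVersion;

    If this confirms Web Edition but Reporting Services still objects, the mismatch may be in how the report server's data source is defined, for instance pointing at a host name the report server does not treat as local, rather than in the edition itself.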

    Read the article

  • Puppet: get real-time status of catalog evaluation and post it to a remote server

    - by txworking
    According to this article http://docs.puppetlabs.com/guides/puppet_internals.html there are four phases when the puppet agent gets a catalog from the master:

    resource generation -> relationships -> evaluation -> reporting

    Reporting: As the transaction progresses, it collects logs and metrics on what it does. At the end of evaluation, it turns this information into a report, which it sends to the server (if requested). So at the end of evaluation the puppet agent generates a report and sends it to the master. Is there a way to get the real-time status of the evaluation phase and post it to a remote log collector? Glad for any suggestions.
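    One caveat from the quoted flow: the built-in report is only assembled at the end of evaluation, so a custom report processor gives per-run rather than truly real-time data. A sketch of such a processor (the collector URL is hypothetical; this uses Puppet's standard Puppet::Reports.register_report API, and the file goes in a module's lib/puppet/reports/ directory):

        require 'puppet'
        require 'net/http'
        require 'json'

        Puppet::Reports.register_report(:remote_log) do
          desc "Posts a run summary to a remote log collector."

          def process
            uri = URI('http://logcollector.example.com/puppet-runs')  # hypothetical endpoint
            summary = { 'host' => host, 'status' => status, 'time' => time.to_s }
            Net::HTTP.start(uri.host, uri.port) do |http|
              http.post(uri.path, summary.to_json, 'Content-Type' => 'application/json')
            end
          end
        end

    Enable it on the master with reports = remote_log (alongside any existing processors) in puppet.conf. True real-time status during evaluation would instead mean tailing the agent's own logs, since the report API only fires after the run.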

    Read the article

  • What is the most ethically or morally questionable sysadmin task you have been given?

    - by Alex Angas
    In the recent past I was asked to set up a reporting facility for upper management so they can spy on what web sites users are visiting. This was done without any notice given to users. Unfortunately, I have a good friend with some rather unusual tastes who I knew would be caught! He also knew I set up the reporting... To me, the lack of user notification was unethical. What similar experiences have you had that haven't "felt right" and left you questioning what to do? How did you deal with it?

    Read the article

  • Maven changelog plugin with Mercurial problem

    - by doom2.wad
    I have configured my Maven2 project to generate a changelog report from a Mercurial repository (accessible via the file:// protocol), but the goal execution fails with the following message:

    + Error stacktraces are turned on.
    [INFO] Scanning for projects...
    [INFO] Searching repository for plugin with prefix: 'changelog'.
    [INFO] ------------------------------------------------------------------------
    [INFO] Building Phobos3 Prototype
    [INFO]    task-segment: [changelog:changelog]
    [INFO] ------------------------------------------------------------------------
    [INFO] [changelog:changelog {execution: default-cli}]
    [INFO] Generating changed sets xml to: D:\Documents and Settings\501845922\Workspace\phobos3.prototype\target\changelog.xml
    [INFO] EXECUTING: hg log --verbose
    [WARNING] Could not figure out: abort: Invalid argument
    [ERROR] EXECUTION FAILED
      Execution of cmd : log failed with exit code: -1.
      Working directory was: D:\Documents and Settings\501845922\Workspace\phobos3.prototype
      Your Hg installation seems to be valid and complete.
      Hg version: 1.4.3+20100201 (OK)
    [ERROR] Provider message:
    [ERROR] EXECUTION FAILED
      Execution of cmd : log failed with exit code: -1.
      Working directory was: D:\Documents and Settings\501845922\Workspace\phobos3.prototype
      Your Hg installation seems to be valid and complete.
      Hg version: 1.4.3+20100201 (OK)
    [ERROR] Command output:
    [ERROR]
    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD ERROR
    [INFO] ------------------------------------------------------------------------
    [INFO] An error has occurred in Change Log report generation.
    Embedded error: An error has occurred during changelog command :
    Command failed.
    [INFO] ------------------------------------------------------------------------
    [INFO] Trace
    org.apache.maven.lifecycle.LifecycleExecutionException: An error has occurred in Change Log report generation.
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
        at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
        at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
        at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
        at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
    Caused by: org.apache.maven.plugin.MojoExecutionException: An error has occurred in Change Log report generation.
        at org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:79)
        at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
        ... 17 more
    Caused by: org.apache.maven.reporting.MavenReportException: An error has occurred during changelog command :
        at org.apache.maven.plugin.changelog.ChangeLogReport.generateChangeSetsFromSCM(ChangeLogReport.java:555)
        at org.apache.maven.plugin.changelog.ChangeLogReport.getChangedSets(ChangeLogReport.java:393)
        at org.apache.maven.plugin.changelog.ChangeLogReport.executeReport(ChangeLogReport.java:340)
        at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:98)
        at org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:73)
        ... 19 more
    Caused by: org.apache.maven.plugin.MojoExecutionException: Command failed.
        at org.apache.maven.plugin.changelog.ChangeLogReport.checkResult(ChangeLogReport.java:705)
        at org.apache.maven.plugin.changelog.ChangeLogReport.generateChangeSetsFromSCM(ChangeLogReport.java:467)
        ... 23 more
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 3 seconds
    [INFO] Finished at: Thu Apr 29 17:10:06 CEST 2010
    [INFO] Final Memory: 5M/10M
    [INFO] ------------------------------------------------------------------------

    What did I miss in the configuration? (I hope it is a configuration problem, not a Maven plugin bug! :) My repository URL seems to be OK (the plugin complained about it before; I fixed that up), and I also set a date format for parsing (it complained about that too; also fixed). The promised target/changelog.xml was not generated at all.

    Maven 2.2.1
    Mercurial 1.4.3
    Windows XP SP3

    The mvn scm:changelog command produces the expected output. Thanks for any suggestions; I haven't googled up anything (nor binged up ;).
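    For reference, a sketch of the pom.xml pieces the changelog report relies on (the plugin version and repository path here are hypothetical; the SCM URL must point at the repository the report should read):

        <scm>
          <connection>scm:hg:file:///D:/path/to/phobos3.prototype</connection>
        </scm>

        <reporting>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-changelog-plugin</artifactId>
              <version>2.1</version>
              <configuration>
                <dateFormat>EEE MMM dd HH:mm:ss yyyy Z</dateFormat>
              </configuration>
            </plugin>
          </plugins>
        </reporting>

    Since mvn scm:changelog works, the SCM URL itself is probably fine; per the log above, the changelog plugin itself shells out to hg log --verbose, so comparing the exact commands the two plugins run (with -X debug output) may show which argument hg is aborting on.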

    Read the article
