Search Results

Search found 4460 results on 179 pages for 'ssrs reports'.

Page 49/179 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Google Analytics + External Site Statistics Tracking in one application?

    - by Soleil
    My company is a broker in the real estate industry. As such, we send a lot of our listings to sites like Trulia.com and Zillow.com, among others. These sites direct leads to our realtors, and every month they provide us with reports detailing the activity our listings have had on their sites: links back to our website, emails generated, etc. Our Marketing and Advertising departments want to take that information and enter it into a system that keeps track of everything in one place, for the purpose of producing comparison reports. I cannot find any externally available product that provides this functionality, and I would sincerely like to avoid writing this tool myself. In short, an ideal system would:

    - import Google Analytics data via the API;
    - import real estate listing site data via CSV import / manual entry;
    - provide comparison reports based on that data.

    Does anyone know of anything pre-made that can do this?
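
    No pre-made tool comes to mind, but for scale: the data model such a tool needs is small. A minimal sketch of the tables it would populate and a comparison query, in SQL (all names hypothetical, untested):

    CREATE TABLE listing_site_stats (
        site_name    VARCHAR(50),   -- e.g. 'Trulia', 'Zillow' (manual/CSV import)
        listing_id   INT,
        month_start  DATE,
        clicks_to_us INT,           -- links back to our website
        emails       INT
    );
    CREATE TABLE ga_stats (          -- loaded via the Google Analytics API
        listing_id   INT,
        month_start  DATE,
        visits       INT
    );

    -- side-by-side comparison per listing and month
    SELECT s.listing_id, s.month_start, s.site_name,
           s.clicks_to_us, s.emails, g.visits
    FROM listing_site_stats s
    LEFT JOIN ga_stats g
      ON g.listing_id = s.listing_id AND g.month_start = s.month_start
    ORDER BY s.listing_id, s.month_start;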

    Read the article

  • Printing Reporting Services in a page through JavaScript

    - by Gabriel Guimarães
    I have a PerformancePoint Server 2007 dashboard in a SharePoint 2007 page. On the SharePoint page there are two filters that get passed to the report, and I need to print this report from the page (from another button, not the SSRS one). So what I need is a JavaScript method that calls the SSRS print button, which is inside a named DIV, inside a WebPartZone that has only one web part: a PerformancePoint Dashboard Item (I don't know the exact name of the web part).

    Read the article

  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might, they have had to make a choice about how to integrate.

    1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template processing libraries. Taking this integration route, you are going to need to manage templates, data extraction and processing yourself, and you'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. Trawling some of the products during a coffee break, I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. That integration falls into this first category: the content manager looks after the report artifacts itself and provides the UI to manage and run the reports.

    2. Web service level - further up the stack is the web service layer, which sits on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports, the (locally managed) users and the catalog itself via the services layer. Taking this route, you still need to provide the user interface to choose and run reports, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve the list of reports the user can access, then the parameters and LOVs for the selected report, and finally a service to submit the report to the server.

    3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized, but again, not easily, nor is it documented. I have messed with running the server pages inside an IFRAME: not bad, but not great. Taking this path should require the least effort on your part to get BIP integrated, but there are a few gotchas you need to get around.

    So, a reasonable set of choices with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.

    Read the article

  • How to eager load in WCF Ria Services/Linq2SQLDomainModel

    - by Aggelos Mpimpoudis
    I have a databound grid in my view (XAML) whose ItemsSource points to a Reports collection. The Reports entity has three primitive properties and some complex types. The three primitives are shown in the DataGrid as expected. Additionally, the Reports entity has a property of type Store. When loading Reports via the GetReports domain method, I quickly figured out that only the primitives are returned, not a whole graph of some depth. So, as I wanted to load the Store property too, I made this alteration to my domain service:

    public IQueryable<Report> GetReports()
    {
        return this.ObjectContext.Reports.Include("Store");
    }

    From what I see in the immediate window, Store is loaded as expected, but when returned to the client it is still pruned. How can this be fixed? Thank you!

    Read the article

  • Symbolic Link in Tomcat 5 is not working after setting allowLinking="true"

    - by Jais
    I created a symbolic link to a .htm file:

    ln -sf /u105/app/ptelrep/reports/kfx/generated/KFX026_Wholesale_1.htm /u105/app/ptelrep/reports/kfx/generated/KFX026_Wholesale.htm

    The link works. I have the configuration file reports.xml at /u355/app/ptelrep/tomcat50-jwsdp/conf/Catalina/localhost/reports.xml, and a JSP file with a link that points to the symbolic link:

    onMouseOver="toggleDiv('div1',1)" onMouseOut="toggleDiv('div1',0)" ... Detail Usage

    I restarted Tomcat, but the symlink is not working. When I change the symlink to point to the original file, it works. Does anyone know what the problem is?

    Read the article

  • IIS reset details

    - by Raj
    What exactly happens when we do an IISReset? What resources get released? We have an ASP.NET website (.NET 1.1) which uses Crystal Reports 11. Lately, running reports throws several Crystal-Reports-specific exceptions, and then the users can't run reports anymore. Resetting IIS lets the users log back in and run the reports, until it fails the next time. Knowing exactly what resources are released when IIS is reset will help us dig deeper to find the root cause. Any help?

    Read the article

  • Specific query in MySQL

    - by Radhe
    I have two tables, reports and holidays:

    reports:  (username varchar(30), activity varchar(30), hours int(3), report_date date)
    holidays: (holiday_name varchar(30), holiday_date date)

    select * from reports gives:

    +----------+-----------+-------+------------+
    | username | activity  | hours | date       |
    +----------+-----------+-------+------------+
    | prasoon  | testing   |     3 | 2009-01-01 |
    | prasoon  | coding    |     4 | 2009-01-03 |
    | prasoon  | designing |     2 | 2009-01-04 |
    | prasoon  | coding    |     4 | 2009-01-06 |
    +----------+-----------+-------+------------+

    select * from holidays gives:

    +--------------+--------------+
    | holiday_name | holiday_date |
    +--------------+--------------+
    | Diwali       | 2009-01-02   |
    | Holi         | 2009-01-05   |
    +--------------+--------------+

    Is there any way by which I can output the following?

    +------------+-----------+-------+--------------+
    | date       | activity  | hours | holiday_name |
    +------------+-----------+-------+--------------+
    | 2009-01-01 | testing   |     3 |              |
    | 2009-01-02 |           |       | Diwali       |
    | 2009-01-03 | coding    |     4 |              |
    | 2009-01-04 | designing |     2 |              |
    | 2009-01-05 |           |       | Holi         |
    | 2009-01-06 | coding    |     4 |              |
    +------------+-----------+-------+--------------+
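
    For reference, a hedged sketch of one way to get that output (untested; MySQL syntax, names from the question): union the two tables, using empty/NULL placeholders for the columns that don't apply, and sort by date:

    SELECT r.report_date AS `date`, r.activity, r.hours, '' AS holiday_name
    FROM reports r
    UNION ALL
    SELECT h.holiday_date, '', NULL, h.holiday_name
    FROM holidays h
    ORDER BY `date`;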

    Read the article

  • What is the best solution to do Reporting on Object data for .NET?

    - by Peter Fox
    Hi, our projects use objects as the data source for reports. Our business layer returns single objects or IEnumerable collections. Our reports (quite complex) need to display the value-type properties of the object and of its related objects. A typical case: from a List, display a master report with category data, then a subreport with data for each Product inside each Category, then a subreport for each Part of each Product, and so on. Reporting from the database is not an option for us. So far we have tried:

    - Reporting Services: works, but you have to mess around with the XML definition of the report to define the datasource classes; very hard to work with if you use an object datasource, and architecturally not too clean.
    - Telerik Reports: quite nice (especially the architecture), but seems to have problems with complex reports (master/sub), does not give great paging control, and is rumored to have performance/crash problems (an immature product).

    Does anyone know a good reporting solution that can be integrated into an ASP.NET application and works well with objects as datasources?

    Read the article

  • Need help with a SQL CTE Query

    - by Chuck
    I have a table from which I need specific data for a view. Here's the base table structure with some sample data:

    +--------+-----------------+-------+
    | UserID | ReportsToUserID | OrgID |
    +--------+-----------------+-------+
    |      1 | NULL            |     1 |
    |      2 | 1               |     1 |
    |      3 | 2               |     1 |
    |      4 | 3               |     1 |
    +--------+-----------------+-------+

    Users will be entering reports, and a user can see the reports of users who report to them and of any users who report to those users. Users who report to no one can see everything in their organization. Given the sample data above, user 1 can see the reports of 2, 3 & 4; user 2 can see the reports of 3 & 4; and user 3 can see the reports of 4. For the view, I'd like the data returned as follows:

    +--------+--------------+-------+
    | UserID | CanSeeUserID | OrgID |
    +--------+--------------+-------+
    |      1 |            2 |     1 |
    |      1 |            3 |     1 |
    |      1 |            4 |     1 |
    |      2 |            3 |     1 |
    +--------+--------------+-------+
    etc...

    Below is my current code; any help is greatly appreciated.

    WITH CTEUsers (UserID, CanSeeUserID, OrgID) AS
    (
        SELECT e.ID, e.ReportsToUserID, e.OrgID
        FROM Users e WITH(NOLOCK)
        WHERE COALESCE(ReportsToUserID, 0) = 0  --ReportsToUserID can be NULL or 0
        UNION ALL
        SELECT e.ReportsToUserID, e.ID, e.OrgID
        FROM Users e WITH(NOLOCK)
        JOIN CTEUsers c ON e.ID = c.UserID
    )
    SELECT * FROM CTEUsers
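
    For reference, a hedged sketch of a recursive CTE that yields the manager-to-visible-user pairs (untested; SQL Server syntax, names from the question). It anchors on every direct manager/subordinate pair and walks down the tree; the special case of top-level users seeing their whole organization could be handled with one more UNION if needed:

    WITH CanSee (UserID, CanSeeUserID, OrgID) AS
    (
        -- anchor: every direct manager/subordinate pair
        SELECT u.ReportsToUserID, u.ID, u.OrgID
        FROM Users u
        WHERE COALESCE(u.ReportsToUserID, 0) <> 0
        UNION ALL
        -- recurse: whoever reports to someone I can see, I can also see
        SELECT c.UserID, u.ID, u.OrgID
        FROM Users u
        JOIN CanSee c ON u.ReportsToUserID = c.CanSeeUserID
    )
    SELECT UserID, CanSeeUserID, OrgID
    FROM CanSee
    ORDER BY UserID, CanSeeUserID;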

    Read the article

  • Report generation in PHP (formats required pdf,xls,doc,csv)

    - by Ish Kumar
    I need to generate reports on my PHP website (in Zend Framework). Formats required:

    - PDF (with tables & images); presently using Zend_Pdf
    - XLS (with tables & images)
    - DOC (with tables & images)
    - CSV (tables only)

    Please recommend a robust and fast solution for generating reports in PHP. Platform: Zend Framework on LAMP. I know there are some tricky solutions for creating such reports; I wonder whether there is any open source report generation utility that can be used in a LAMP environment.

    Read the article

  • Query in SQL involving joins of two tables

    - by Satish
    I have two tables, reports and holidays:

    reports:  (username varchar(30), activity varchar(30), hours int(3), report_date date)
    holidays: (holiday_name varchar(30), holiday_date date)

    select * from reports gives:

    +----------+----------+-------+------------+
    | username | activity | hours | date       |
    +----------+----------+-------+------------+
    | prasoon  | testing  |     3 | 2009-01-01 |
    | prasoon  | coding   |     4 | 2009-01-03 |
    | prasoon  | coding   |     4 | 2009-01-06 |
    | prasoon  | coding   |     4 | 2009-01-10 |
    +----------+----------+-------+------------+

    select * from holidays gives:

    +--------------+--------------+
    | holiday_name | holiday_date |
    +--------------+--------------+
    | Diwali       | 2009-01-02   |
    | Holi         | 2009-01-05   |
    +--------------+--------------+

    Is there any way by which I can output the following?

    +------------+----------+--------+--------------+
    | date       | activity | hours  | holiday_name |
    +------------+----------+--------+--------------+
    | 2009-01-01 | testing  |      3 |              |
    | 2009-01-02 |          |        | Diwali       |
    | 2009-01-03 | coding   |      4 |              |
    | 2009-01-04 | Absent   | Absent |              |
    | 2009-01-05 |          |        | Holi         |
    | 2009-01-06 | coding   |      4 |              |
    | 2009-01-07 | Absent   | Absent |              |
    | 2009-01-08 | Absent   | Absent |              |
    | 2009-01-09 | Absent   | Absent |              |
    | 2009-01-10 | coding   |      4 |              |
    +------------+----------+--------+--------------+

    In other words, I want to fill the activity and hours columns with "Absent" on the dates which are in neither the reports table nor the holidays table. How can I write a query for this? The query should produce the output between two specific dates.
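
    For reference, a hedged sketch (untested; MySQL syntax, names from the question except the calendar table, which is invented). SQL can't generate the missing dates out of thin air, so a common approach is a small calendar table covering the date range, LEFT JOINed to both tables:

    CREATE TABLE calendar (d DATE PRIMARY KEY);
    -- populate with every date in the reporting period, e.g.
    INSERT INTO calendar VALUES ('2009-01-01'),('2009-01-02'),('2009-01-03'),
        ('2009-01-04'),('2009-01-05'),('2009-01-06'),('2009-01-07'),
        ('2009-01-08'),('2009-01-09'),('2009-01-10');

    SELECT c.d AS `date`,
           CASE WHEN h.holiday_date IS NOT NULL THEN ''
                WHEN r.report_date  IS NULL     THEN 'Absent'
                ELSE r.activity END AS activity,
           CASE WHEN h.holiday_date IS NOT NULL THEN ''
                WHEN r.report_date  IS NULL     THEN 'Absent'
                ELSE r.hours END AS hours,
           COALESCE(h.holiday_name, '') AS holiday_name
    FROM calendar c
    LEFT JOIN reports  r ON r.report_date  = c.d
    LEFT JOIN holidays h ON h.holiday_date = c.d
    WHERE c.d BETWEEN '2009-01-01' AND '2009-01-10'
    ORDER BY c.d;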

    Read the article

  • Render an ASP.NET repeater as-and-when chunks of HTML are ready (streaming)

    - by sash
    The problem is rendering huge reports. Basically we have lots of data that gets rendered in HTML reports (mostly using Repeater or GridView). As it happens, the data started out small and now we have tons of it. Lots of such reports are already built, so a total rewrite is not an option. Heck, the business is not even letting us page the data. Now server memory shoots up each time we try to render some of the reports. So the question is: is there some way we can bind data to a Repeater and have it stream HTML to the browser as and when chunks are ready? That way we hope not to bring all that data into the app server at once. I'm thinking we'll use a DataReader or something to get parts of the data and render them to the browser. Any pointers, links, ideas?

    Read the article

  • SQL query for an Access database needed

    - by masfenix
    Hey guys, first of all, sorry: I can't log in using my Yahoo provider. Anyway, I have this problem; let me explain it, and then I'll show you a picture. I have an Access DB table with the fields 'report id', 'recipient id', 'recipient name' and 'report req'. What the table "means" is: does the user using that report still require it, or can we decommission it? (Company user IDs and user names are blocked out in the picture. See the link below; I can't post pictures because the Yahoo OpenID provider isn't working.) Basically I need three select queries:

    1) Select all the reports where, for each report, ALL the users have said no to 'report req'. In plain English, I want a listing of all the reports that we have to decommission because no user wants them.

    2) Select all the reports where the report is required and the batchprintcopy is more than 0. This way we can see which reports need to be printed, and save paper instead of printing all the reports.

    3) A listing of all the reports where the 'report req' field is empty. I think I can figure this one out myself.

    This is using Access/VBA, and the data will be exported to an Excel spreadsheet. I just need a simple query if one exists, or an algorithm to do it quickly. I just tried building a "matrix" and it took about 2 hours to populate.

    https://docs.google.com/uc?id=0B2EMqbpeBpQkMTIyMzA5ZjMtMGQ3Zi00NzRmLWEyMDAtODcxYWM0ZTFmMDFk&hl=en_US
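
    Hedged sketches of the three queries in Access SQL, assuming a single table named ReportRecipients with fields [report id], [recipient id], [recipient name], [report req] (Yes/No text) and [batchprintcopy]; all names are inferred from the question and untested:

    -- 1) reports every user said No to (decommission candidates)
    SELECT [report id]
    FROM ReportRecipients
    GROUP BY [report id]
    HAVING SUM(IIf([report req] = 'Yes', 1, 0)) = 0;

    -- 2) required reports that are batch-printed
    SELECT DISTINCT [report id]
    FROM ReportRecipients
    WHERE [report req] = 'Yes' AND [batchprintcopy] > 0;

    -- 3) recipients who have not answered yet
    SELECT [report id], [recipient id], [recipient name]
    FROM ReportRecipients
    WHERE [report req] IS NULL OR [report req] = '';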

    Read the article

  • TFS 2010 RC Agile Process template update: new Task progress report

    Maybe my next post will just be about why I am so excited and impressed with the out-of-the-box templates. But for this first blog with my new focus, I thought I would walk through the process I went through to create a task progress report (to enhance the out-of-the-box Agile template).

    So, I started with the MSF for Agile Development 5.0 RC template. After reviewing the template, I came away pretty excited about many of the new reports, especially the Reporting Services reports. The big advantage I see here is that these query the warehouse directly instead of the Analysis Services cube, which means they are much closer to real time; I find that very important for reports like burndown and task status. One report I focused on right away was the User Story Progress Report. This report is very useful, but a lot of our internal managers really prefer to manage at the task level, and either don't have stories in TFS or would like to view this type of report for tasks in addition to the user stories. So, what did I do?

    Step 1: Download the Agile template. In VS 2010 RC, open the Process Template Manager from Team -> Team Project Collection Settings. Download the MSF for Agile Development template to your local file system. A process template is a folder of XML files: there is a ProcessTemplate.xml in the root and then a set of directories for things like work item definitions and queries, reports, shared documents, and source control settings.

    Step 2: Copy the folder. My plan here was to make a new template with all of my modifications. You can also just update the MSF template; however, once you start making modifications, I think it is cleaner to make your own template. So, copy the folder and name it with your new template name.

    Step 3: Change the template name. Open ProcessTemplate.xml and change the <name> of the template.

    Step 4: Copy the rdl of the report you want to use as a starting point. In my case, I copied Stories Progress.rdl and named the file Task Progress Breakdown.rdl. I reviewed the requirements for the new report with some of the users here and came up with this plan: the report should show tasks and be expandable to show subtasks, and it should add Assigned To and Estimated Finish Date as two extra columns.

    Step 5: Walk through the existing report to understand how it works. The main thing I do here is try to get the SQL to run in SQL Server Management Studio, so I can walk through the process of building up the data for the report. After analyzing this particular report I found a couple of very useful things. For one, this report is already built to display subtasks if I just flip the IncludeTasks flag to 1. So if you are using stories and have tasks assigned to each story, this might give you everything you want. For my purposes, I did make that change to the Stories Progress report, as I find it more useful when I can see the tasks that comprise each story. But I still wanted a task-only version with the additional fields.

    Step 6: Update the report definition. I tend to work on rdl directly as XML in Visual Studio; especially when I am just altering an existing report, I find it easier than dealing with the BI Studio designer. For my report I made the following changes:

    - Updated the fields: removed Stack Rank and replaced it with Priority (since we don't use Stack Rank), and added FinishDate and AssignedTo.
    - Changed the root deliverable SQL to pull @Task instead of @DeliverableCategory, and added a join to CurrentWorkItemView for FinishDate and AssignedTo:

      SELECT cwi.[System_Id] AS ID FROM [CurrentWorkItemView] cwi
      WHERE cwi.[System_WorkItemType] IN (@Task)
          AND cwi.[ProjectNodeGUID] = @ProjectGuid

      SELECT lh.SourceWorkItemID AS ID FROM FactWorkItemLinkHistory lh
      INNER JOIN [CurrentWorkItemView] cwi ON lh.TargetWorkItemID = cwi.[System_Id]
      WHERE lh.WorkItemLinkTypeSK = @ParentWorkItemLinkTypeSK
          AND lh.RemovedDate = CONVERT(DATETIME, '9999', 126)
          AND lh.TeamProjectCollectionSK = @TeamProjectCollectionSK
          AND cwi.[System_WorkItemType] NOT IN (@DeliverableCategory)

    - Added AssignedTo and FinishDate columns to the @Rollups table.
    - Added two columns to the table used for column headers:

      <Tablix Name="ProgressTable">
        <TablixBody>
          <TablixColumns>
            <TablixColumn><Width>2.7625in</Width></TablixColumn>
            <TablixColumn><Width>0.5125in</Width></TablixColumn>
            <TablixColumn><Width>3.4625in</Width></TablixColumn>
            <TablixColumn><Width>0.7625in</Width></TablixColumn>
            <TablixColumn><Width>1.25in</Width></TablixColumn>
            <TablixColumn><Width>1.25in</Width></TablixColumn>
          </TablixColumns>

    - Added cells for the two new headers.
    - Added cells to the data table to include the two new values (AssignedTo and FinishDate).
    - Changed a number of widths so the report displays landscape and has room for the two additional columns.
    - Set the value of the IncludeTasks parameter to 1:

      <ReportParameter Name="IncludeTasks">
        <DataType>Integer</DataType>
        <DefaultValue>
          <Values>
            <Value>=1</Value>
          </Values>
        </DefaultValue>
        <Prompt>IncludeTasks</Prompt>
        <Hidden>true</Hidden>
      </ReportParameter>

    - Changed a few descriptions of how the report should be used.

    This is the resulting report; I have attached the final rdl.

    Step 7: Update ReportTasks.xml. The last step before the template is ready for use is to update the ReportTasks.xml file in the Reports folder. This file defines the reports that are available in the template:

      <report name="Task Progress Breakdown" filename="Reports\Task Progress Breakdown.rdl" folder="Project Management" cacheExpiration="30">
        <parameters>
          <parameter name="ExplicitProject" value="" />
        </parameters>
        <datasources>
          <reference name="/Tfs2010ReportDS" dsname="TfsReportDS" />
        </datasources>
      </report>

    Step 8: Upload the template. Open the Process Template Manager just like in Step 1 and upload the new template.

    That's it. One other note: if you want to add this report to an existing team project, you will have to go into Report Manager (the Reporting Services portal) and upload the rdl to that project's directory.

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble

    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck, along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore?

    As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014 the potential blockers have been largely removed, and columnstore indexes are going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why they are a compelling reason to upgrade to SQL Server 2014.

    The Customer

    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries

    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX

    Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries

    Even though the fact table is relatively small (only 22 million rows & 33GB), the table is a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details

    The classic vs. columnstore before-&-after metrics are impressive:

    Scenario      | Conventional Structures           | Columnstore   | Δ
    --------------+-----------------------------------+---------------+-------
    SSRS via SSAS | 10 - 12 seconds                   | 1 second      | >10x
    Ad Hoc        | 5 - 7 minutes (300 - 420 seconds) | 1 - 2 seconds | >100x

    The original post includes two charts characterizing this data graphically. The first is a linear representation of report duration (in seconds) for conventional structures vs. columnstore indexes; as is so often the case when we chart such significant deltas, the linear scale doesn't expose the dramatically improved columnstore values. The same data represented logarithmically is fairer, yet even there the values corresponding to 1 - 2 seconds aren't visible.

    The Wins

    Performance: Even prior to the columnstore implementation, canned report performance of 10 - 12 seconds against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience for ad hoc interrogation. The difference between several minutes and one or two seconds is a game changer, literally changing the way users interact with their data: no mental context switching, no wondering when the results will appear, no preoccupation with the spinning, mind-numbing, hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary

    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, canned as well as ad hoc (the latter dropping from 5 - 7 minutes to 1 - 2 seconds). As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
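
    For reference, defining such an index is a single statement. A hedged sketch against a hypothetical fact table (names invented; SQL Server 2012 nonclustered columnstore syntax):

    -- cover the columns users aggregate and filter on ad hoc
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactActivity
    ON dbo.FactActivity (EventDate, RegionKey, ServiceKey, DurationSeconds, Revenue);

    Note that in SQL Server 2012 a nonclustered columnstore index makes the base table read-only, which happens to fit the nightly partition-switch refresh described above.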

    Read the article

  • How do you re-mount an ext3 fs readwrite after it gets mounted readonly from a disk error?

    - by cagenut
    It's a relatively common problem: when something goes wrong in a SAN, ext3 detects the disk write errors and remounts the filesystem read-only. That's all well and good, only when the SAN is fixed I can't figure out how to re-mount the filesystem read-write without rebooting. Behold:

    [root@localhost ~]# multipath -ll
    mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
    [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=2][active]
     \_ 1:0:0:1 sdb 8:16 [active][ready]
     \_ 2:0:0:1 sdc 8:32 [active][ready]
    [root@localhost ~]# mount /dev/mapper/mpath0 /mnt/foo
    [root@localhost ~]# touch /mnt/foo/blah

    All good. Now I yank the LUN out from under it:

    [root@localhost ~]# touch /mnt/foo/blah
    touch: cannot touch `/mnt/foo/blah': Read-only file system
    [root@localhost ~]# tail /var/log/messages
    Mar 18 13:17:33 localhost multipathd: sdb: tur checker reports path is down
    Mar 18 13:17:34 localhost multipathd: sdc: tur checker reports path is down
    Mar 18 13:17:35 localhost kernel: Aborting journal on device dm-2.
    Mar 18 13:17:35 localhost kernel: Buffer I/O error on device dm-2, logical block 1545
    Mar 18 13:17:35 localhost kernel: lost page write due to I/O error on dm-2
    Mar 18 13:17:36 localhost kernel: ext3_abort called.
    Mar 18 13:17:36 localhost kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
    Mar 18 13:17:36 localhost kernel: Remounting filesystem read-only

    It only thinks it's read-only; in reality it's not even there.

    [root@localhost ~]# multipath -ll
    sdb: checker msg is "tur checker reports path is down"
    sdc: checker msg is "tur checker reports path is down"
    mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
    [size=1.1T][features=0][hwhandler=0][rw]
    \_ round-robin 0 [prio=0][enabled]
     \_ 1:0:0:1 sdb 8:16 [failed][faulty]
     \_ 2:0:0:1 sdc 8:32 [failed][faulty]
    [root@localhost ~]# ll /mnt/foo/
    ls: reading directory /mnt/foo/: Input/output error
    total 20
    -rw-r--r-- 1 root root 0 Mar 18 13:11 bar

    How it still remembers that 'bar' file being there is a mystery, but not important right now. Now I re-present the LUN:

    [root@localhost ~]# tail /var/log/messages
    Mar 18 13:23:58 localhost multipathd: sdb: tur checker reports path is up
    Mar 18 13:23:58 localhost multipathd: 8:16: reinstated
    Mar 18 13:23:58 localhost multipathd: mpath0: queue_if_no_path enabled
    Mar 18 13:23:58 localhost multipathd: mpath0: Recovered to normal mode
    Mar 18 13:23:58 localhost multipathd: mpath0: remaining active paths: 1
    Mar 18 13:23:58 localhost multipathd: dm-2: add map (uevent)
    Mar 18 13:23:58 localhost multipathd: dm-2: devmap already registered
    Mar 18 13:23:59 localhost multipathd: sdc: tur checker reports path is up
    Mar 18 13:23:59 localhost multipathd: 8:32: reinstated
    Mar 18 13:23:59 localhost multipathd: mpath0: remaining active paths: 2
    Mar 18 13:23:59 localhost multipathd: dm-2: add map (uevent)
    Mar 18 13:23:59 localhost multipathd: dm-2: devmap already registered
    [root@localhost ~]# multipath -ll
    mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
    [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=2][enabled]
     \_ 1:0:0:1 sdb 8:16 [active][ready]
     \_ 2:0:0:1 sdc 8:32 [active][ready]

    Great, right? It says [rw] right there. Not so fast:

    [root@localhost ~]# touch /mnt/foo/blah
    touch: cannot touch `/mnt/foo/blah': Read-only file system

    OK, it doesn't do it automatically; I'll just give it a little push:

    [root@localhost ~]# mount -o remount /mnt/foo
    mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only

    The hell you are:

    [root@localhost ~]# mount -o remount,rw /mnt/foo
    mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only

    Noooooooooo. I have tried all sorts of different mount/tune2fs/dmsetup commands and I cannot figure out how to get it to un-flag the block device as write-protected. Rebooting will fix it, but I'd much rather do it online. An hour of googling has gotten me nowhere either. Save me, ServerFault.

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption, where the user's place on the path depends on the size and sophistication of their organisation.

    Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand.

    Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page.

    Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system, plus external information, may be incorporated into a data warehouse. This can become 'one source of truth' for the business's operational activities. The warehouse will probably have a simple 'star' schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product); a minimal example schema is sketched at the end of this post. Reports can be generated from the warehouse with Excel, SSRS or other tools.

    Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables. These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single 'source of truth' for key data.

    Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex, or the queries which aggregate across periods, regions, etc. are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model, or "cube", that more simply represents key measures and aggregates them across specified dimensions.

    Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialogue with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We're interested in learning where people are in this story, so we've created a six-question survey to find out. Whether you're at step one or step five, we'd love to know how you use BI, so we can decide how to build tools that solve your problems.

    So if you have sixty seconds to spare, tell us on the survey!
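
    For the star schema described in step 3, a minimal sketch (all names hypothetical; T-SQL) of the kind of fact and dimension tables involved, plus the sort of aggregate query they make easy:

    CREATE TABLE dim_Date    (DateKey INT PRIMARY KEY, CalendarDate DATE, MonthNum TINYINT, YearNum SMALLINT);
    CREATE TABLE dim_Product (ProductKey INT PRIMARY KEY, ProductName NVARCHAR(100), Territory NVARCHAR(50));
    CREATE TABLE fact_Sales
    (
        DateKey    INT NOT NULL REFERENCES dim_Date(DateKey),
        ProductKey INT NOT NULL REFERENCES dim_Product(ProductKey),
        UnitsSold  INT,
        Revenue    DECIMAL(18,2)
    );

    -- revenue and unit sales by product and year
    SELECT p.ProductName, d.YearNum,
           SUM(f.UnitsSold) AS UnitsSold, SUM(f.Revenue) AS Revenue
    FROM fact_Sales f
    JOIN dim_Date d    ON d.DateKey = f.DateKey
    JOIN dim_Product p ON p.ProductKey = f.ProductKey
    GROUP BY p.ProductName, d.YearNum;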

    Read the article

  • DNS Issue Windows 2003 AD-The server holding the PDC role is down

    - by Dave M
    Our network of Windows 2003 and Windows 2008 servers suddenly has DNS issues. There are 7 DCs: two at our main office and one at each branch site (one branch has two, a 2008 R2 and a WIN2K3). Only two are WIN2008R2. Running DCDIAG on the WIN2K3 at the main site (DC1) reports no issues. Running it at any branch site reports two issues; all other tests pass. The server DC1 can be PINGed by name from any site.

    Starting test: frsevent
    There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems.

    Starting test: FsmoCheck
    Warning: DcGetDcName(PDC_REQUIRED) call failed, error 1355
    A Primary Domain Controller could not be located. The server holding the PDC role is down.

    Netdom.exe /query DC reports the expected servers. netdom query fsmo reports that the server at the main office holds the following roles: Schema owner, Domain role owner, PDC role, RID pool manager, Infrastructure owner.

    In the DNS management snap-in, DC1 appears as a DNS server but does not appear under _msdcs-dc-_sites-Default-First-Site-Name-_TCP; there is no _ldap or _kerberos record pointing to DC1. The same issue appears under msdcs-dc-_sites- -_TCP: again there is no _ldap or _kerberos record pointing to DC1. Under Domain DNS Zones there is no entry for the server, and this is the case for any _tcp folder in the DNS. The server DC1 appears correctly as a name server in the Reverse Lookup Zone. There is a Host (A) record for DC1, but in the Forward Lookup Zone there is no (same as parent folder) Host (A) record for DC1, while such an entry exists for the other DCs at branch sites and the other DC at the main office.

    We have tried stopping and starting the Netlogon service, restarting DNS, and also dcdiag /fix. Netdiag reports the error:

    Trust relationship test. . . . . . : Failed
    [FATAL] Secure channel to domain 'XXX' is broken. [ERROR_NO_LOGON_SERVERS]
    [WARNING] Failed to query SPN registration on DC (one entry for each branch DC)

    All branches list the problem server, and it can be PINGed by name from any branch. Fixing this is the number one priority, but we would also like to determine the cause.

    Read the article

  • Adobe Flash not working in 12.04

    - by catnthehat
    I cannot get Adobe Flash working in either Firefox or Chrome. I have tried the Flash-Aid plugin for Firefox but it has not made any difference. As far as I can see, Flash installs without error and Firefox thinks it can run Flash, but (for example) YouTube just shows a blank square where the movie should be. Chrome reports "missing plugin". about:plugins in Firefox reports:

    Shockwave Flash
    File: libflashplayer.so
    Version: Shockwave Flash 11.2 r202

    MIME Type                     | Description        | Suffixes
    application/x-shockwave-flash | Shockwave Flash    | swf
    application/futuresplash      | FutureSplash Player| spl

    Read the article

  • SQL Azure Reporting Limited CTP Arrived

    - by Shaun
    It’s been about 3 months since I registered for the SQL Azure Reporting CTP on Microsoft Connect after TechEd 2010 China. Today when I checked my mailbox I found that the SQL Azure team had just accepted my request and sent the activation code over to me. So let's have a look at the new SQL Azure Reporting.

    Concept

    SQL Azure Reporting provides cloud-based reporting as a service, built on SQL Server Reporting Services and SQL Azure technologies. Cloud-based reporting solutions such as SQL Azure Reporting provide many benefits, including rapid provisioning, cost-effective scalability, high availability, reduced management overhead for report servers, and secure access, viewing, and management of reports. By using the SQL Azure Reporting service, we can:

    - Embed the Visual Studio Report Viewer ASP.NET AJAX control or Windows Forms control to view reports deployed on the SQL Azure Reporting Service in our web or desktop applications.
    - Leverage the SQL Azure Reporting SOAP API to manage and retrieve report content from any kind of application.
    - Use the SQL Azure Reporting Service portal to navigate and view the reports deployed on the cloud.

    Since SQL Azure Reporting is built on SQL Server 2008 R2 Reporting Services, we can use the tools we are familiar with, such as SQL Server Business Intelligence Development Studio and the Visual Studio Report Viewer. The SQL Azure Reporting Service runs like a remote SQL Server Reporting Services instance, just on the cloud rather than on a server beside us.

    Establish a New SQL Azure Reporting Server

    Go to the Windows Azure developer portal and click the Reporting item in the left-side navigation bar. If you don't have the activation code, you can click the Sign Up button to send a request to Microsoft Connect. Since I had already received the activation code by mail, I clicked the Provision button. After agreeing to the terms of the service, I selected the subscription in which my SQL Azure Reporting CTP should be provisioned; in this case I selected my free Windows Azure Pass subscription. Then the final step: paste the activation code and enter the password for the SQL Azure Reporting Service. The user name will be generated by SQL Azure automatically. After a while the new SQL Azure Reporting server is shown on the developer portal, along with the Reporting Service URL and the user name. We can reset the password from the toolbar button.

    Deploy a Report to SQL Azure Reporting

    If you are familiar with SQL Server Reporting Services, this part will be very similar to what you know and have done before. First, open SQL Server Business Intelligence Development Studio and create a new Report Server Project. Then create a shared data source from which the report data will be retrieved. This data source can be SQL Azure, but we could use a local SQL Server or another database if it opens its port up. In this case we use a SQL Azure database located in the same data center as our reporting service. In the Credentials tab page we enter the user name and password for this SQL Azure database. The SQL Azure Reporting CTP is only available in the North US data center right now, so the related SQL Server and hosted service are best placed in the same data center, to avoid external data transfer fees. Then we create a very simple report that just retrieves all records from a table named Members and lists them in a table.

    In the data source selection step we choose the shared data source we created before, enter the T-SQL to select all records from the Members table, and put all fields into the table columns. In order to deploy the report to the SQL Azure Reporting Service we need to update the project properties: right-click the project node in Solution Explorer and select Properties. In the TargetServerURL item, specify the reporting server URL of our SQL Azure Reporting: go back to the developer portal, select the Reporting node on the left side, copy the Web Service URL, and paste it here, noting that "/reportserver" must be appended. Then just click the Deploy item in the project's context menu; Visual Studio will compile the report and upload it to the reporting service. In this step we are prompted for the user name and password of our SQL Azure Reporting Service. The user name appears on the developer portal, just next to the Web Service URL on the SQL Azure Reporting page, and the password is the one we specified when we created the reporting service. After about one minute the report is deployed successfully.

    View the Report in a Browser

    SQL Azure Reporting allows us to view reports deployed on the cloud from a standard browser. Copy the Web Service URL from the reporting service main page, append "/reportserver", and open it over HTTPS to reach the SQL Azure Reporting Service login page. After entering the user name and password we can see the directories and reports listed. Clicking a report launches the Report Viewer to render it.

    View the Report in a Web Role with the Report Viewer

    The ASP.NET and Windows Forms Report Viewer controls work well with the SQL Azure Reporting Service too. We can create an ASP.NET Web Role and add the Report Viewer control to the default page. What we need to change on the report viewer:

    - Change the Processing Mode to Remote.
    - Specify the Report Server URL, under the Server Remote category, as the SQL Azure Reporting Web Service URL with "/reportserver" appended.
    - Specify the Report Path of the report we want to display. The report path should NOT include the extension name. For example, my report was in the SqlAzureReportingTest project and named MemberList.rdl, so the report path should be /SqlAzureReportingTest/MemberList.

    The next step is to specify the SQL Azure Reporting credentials. We can use the following class to wrap the report server credential:

    private class ReportServerCredentials : IReportServerCredentials
    {
        private string _userName;
        private string _password;
        private string _domain;

        public ReportServerCredentials(string userName, string password, string domain)
        {
            _userName = userName;
            _password = password;
            _domain = domain;
        }

        public WindowsIdentity ImpersonationUser
        {
            get { return null; }
        }

        public ICredentials NetworkCredentials
        {
            get { return null; }
        }

        public bool GetFormsCredentials(out Cookie authCookie, out string user, out string password, out string authority)
        {
            authCookie = null;
            user = _userName;
            password = _password;
            authority = _domain;
            return true;
        }
    }

    And then in the Page_Load method, pass it to the report viewer:

    protected void Page_Load(object sender, EventArgs e)
    {
        ReportViewer1.ServerReport.ReportServerCredentials = new ReportServerCredentials(
            "<user name>",
            "<password>",
            "<sql azure reporting web service url>");
    }

    Finally, deploy it to Windows Azure and enjoy the report.

    Summary

    In this post I introduced the SQL Azure Reporting CTP, which has just become available. Like other features in Windows Azure, SQL Azure Reporting is very similar to SQL Server Reporting Services. As you can see in this post, we can use the existing and familiar tools to build and deploy reports and display them on a website. But SQL Azure Reporting is just in the CTP stage, which means:

    - It is free.
    - There's no support for it.
    - It is only available in the North US data center.

    You can get more information about the SQL Azure Reporting CTP from the following links:

    - SQL Azure Reporting Limited CTP at MSDN
    - SQL Azure Reporting Samples at TechNet Wiki

    You can download the solutions and the projects used in this post here.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • SQL Server Management Data Warehouse - quick tour on setting health monitoring policies

    - by ssqa.net
    Profiler, Perfmon, DMVs & scripts are legendary tools for a DBA monitoring the SQL arena. In line with these tools, SQL Server 2008 adds a powerful pair: the policy-based management (PBM) framework and the management data warehouse (MDW), a relational database that contains the data collected from a server that is a data collection target. This data is used to generate the reports for the System Data collection sets, and can also be used to create custom reports. .....(read more)
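
    As a quick taste of working with the data collector from T-SQL, a hedged sketch using the system objects that ship with SQL Server 2008 (the collection_set_id values vary per install, so check the SELECT first):

    -- list the collection sets registered on this instance
    SELECT collection_set_id, name, is_running
    FROM msdb.dbo.syscollector_collection_sets;

    -- start one of the system collection sets (e.g. Server Activity)
    EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = 2;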

    Read the article

  • The SQL Server Reporting Services SDK for PHP Debuts

    - by The Official Microsoft IIS Site
    Microsoft has just released the SQL Server Reporting Services SDK for PHP, which enables PHP developers to easily create reports and integrate them in their web applications. The SDK offers a simple Application Programming Interface to interoperate with SQL Server Reporting Services, Microsoft's Reporting and Business Intelligence solution. Developers will be able to use the SDK to perform common operations like listing reports in PHP applications, providing custom report parameters from a PHP...(read more)

    Read the article

  • Apache log analyzer which manages time spent to serve the request

    - by antispam
    I need to monitor performance on my web server (there's an application server behind it) and create reports for senior management. I've enabled %T/%D in my Apache logs, and I would like to know if there's an Apache log analyzer or some other tool which parses these values and manages them, showing charts or reports. I am looking mostly for an integrated solution, not something along the lines of awk+gnuplot scripts.

    Read the article

  • Generating Reports for NUnit

    - by thangchung
    All source code for this post can be found at my github. Some time ago, I was asked how to generate reports of test results using NUnit. In fact, I had never done this; in my little programming world, I only cared about the test results, red-green-refactor, and that was it. The question was quite unexpected. I knew that I could use NCover to generate reports, but NCover's reports are too simple: they give no details on the number of test cases, test methods, and so on. So I began to look into generating a nice report for NUnit.

    I was lucky to find an open source project here. Its author calls it NUnit2Report, but one disadvantage is that it only runs on .NET 1.0, which is really old compared to the current version 4.0. I downloaded the preview, but I could not run it. So I opened its source code and found that it uses XSLT to convert the XML output of an NUnit run to HTML. Nothing really special: I also knew that after NUnit runs, an XML output file is created. The author simply uses this file and converts it to HTML with XSLT. And I decided to convert the project to .NET 4.0, since then I would not have to code it from scratch. The conversion took me some time, but I was lucky and finally got what I wanted. Thanks, Gilles, for this OSS; I will send him a mail to thank him for his efforts in putting this out as open source.

    Now I will show you how to do it. I used NAnt for the automated build and NUnit for running the test cases, and I used the Selenium testing framework. After writing three test cases using Selenium, I ran NUnit and got the following results: one failure and two successes. The bin directory of the project then contains the NUnit XML output file. Next I created a build file, and a bat file for easy running (PowerShell could be used here as well). Double-clicking the bat file generates the report. Finally, open the index.html file in the output folder to view the report. As everyone can see, the test cases are broken down very clearly, which meets my requirements. This is really good.

    Once again, I thank Gilles for NUnit2Report. You can contact him via the email address [email protected] or the website http://nunit2report.sourceforge.net. It really is useful for anyone committed to QA. Hopefully this post will help anyone interested in generating reports for NUnit.

    Read the article
