Search Results

Search found 11190 results on 448 pages for 'bug report'.


  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation, when:

    - The target table uses a unique filtered index; and
    - No key column of the filtered index is updated; and
    - A column from the filtering condition is updated; and
    - Transient key violations are possible.

    Example Tables

    Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns: an integer primary key, a single-character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’:

    CREATE TABLE #Target
    (
        pk integer NOT NULL,
        ak character(1) NOT NULL,
        status_code character(1) NOT NULL,
        PRIMARY KEY (pk)
    );

    CREATE UNIQUE INDEX uq1
    ON #Target (ak)
    INCLUDE (status_code)
    WHERE status_code = 'a';

    The changes table contains just an integer primary key (to identify the target row to change) and the new status code:

    CREATE TABLE #Changes
    (
        pk integer NOT NULL,
        status_code character(1) NOT NULL,
        PRIMARY KEY (pk)
    );

    Sample Data

    The sample data for the example is:

    INSERT #Target (pk, ak, status_code)
    VALUES
        (1, 'A', 'a'),
        (2, 'B', 'a'),
        (3, 'C', 'a'),
        (4, 'A', 'd');

    INSERT #Changes (pk, status_code)
    VALUES
        (1, 'd'),
        (4, 'a');

             Target                     Changes
    +-----------------------+    +------------------+
    ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦
    ¦----+----+-------------¦    ¦----+-------------¦
    ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦
    ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦
    ¦  3 ¦ C  ¦ a           ¦    +------------------+
    ¦  4 ¦ A  ¦ d           ¦
    +-----------------------+

    The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added.

    Merge Test One

    Let’s now execute a MERGE statement to apply the changes:

    MERGE #Target AS t
    USING #Changes AS c ON c.pk = t.pk
    WHEN MATCHED AND c.status_code <> t.status_code
    THEN UPDATE SET status_code = c.status_code;

    The MERGE changes the two target rows as expected.  The updated target table now contains:

    +-----------------------+
    ¦ pk ¦ ak ¦ status_code ¦
    ¦----+----+-------------¦
    ¦  1 ¦ A  ¦ d           ¦ <— changed from ‘a’
    ¦  2 ¦ B  ¦ a           ¦
    ¦  3 ¦ C  ¦ a           ¦
    ¦  4 ¦ A  ¦ a           ¦ <— changed from ‘d’
    +-----------------------+

    Merge Test Two

    Now let’s repopulate the changes table to reverse the updates we just performed:

    TRUNCATE TABLE #Changes;

    INSERT #Changes (pk, status_code)
    VALUES
        (1, 'a'),
        (4, 'd');

    This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.
    As a reminder, the current state of the tables is:

             Target                        Changes
    +-----------------------+    +------------------+
    ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦
    ¦----+----+-------------¦    ¦----+-------------¦
    ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦
    ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦
    ¦  3 ¦ C  ¦ a           ¦    +------------------+
    ¦  4 ¦ A  ¦ a           ¦
    +-----------------------+

    We execute the same MERGE statement:

    MERGE #Target AS t
    USING #Changes AS c ON c.pk = t.pk
    WHEN MATCHED AND c.status_code <> t.status_code
    THEN UPDATE SET status_code = c.status_code;

    However, this time we receive the following message:

    Msg 2601, Level 14, State 1, Line 1
    Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A).
    The statement has been terminated.

    Applying the changes using UPDATE

    Let’s now rewrite the MERGE to use UPDATE instead:

    UPDATE t
    SET status_code = c.status_code
    FROM #Target AS t
    JOIN #Changes AS c ON t.pk = c.pk
    WHERE c.status_code <> t.status_code;

    This query succeeds where the MERGE failed.  The two rows are updated as expected:

    +-----------------------+
    ¦ pk ¦ ak ¦ status_code ¦
    ¦----+----+-------------¦
    ¦  1 ¦ A  ¦ a           ¦ <— changed back to ‘a’
    ¦  2 ¦ B  ¦ a           ¦
    ¦  3 ¦ C  ¦ a           ¦
    ¦  4 ¦ A  ¦ d           ¦ <— changed back to ‘d’
    +-----------------------+

    What went wrong with the MERGE?

    In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column.

    In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’.

    In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criterion ‘status_code = a’).

    The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that, if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation).

    Why it went wrong

    The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to:

    - Split each row update into a delete followed by an insert
    - Filter out rows that would not change the index (due to the filter on the index, or a non-updating update)
    - Sort the resulting stream by index key, with deletes before inserts
    - Collapse delete/insert pairs on the same index key back into an update

    The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).
    In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:

             Target                     Changes                   Result
    +-----------------------+    +------------------+    +-----------------------+
    ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦
    ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦
    ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦    ¦  1 ¦ A  ¦ a           ¦
    ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦    ¦  2 ¦ B  ¦ a           ¦
    ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦
    ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦
    +-----------------------+                            +-----------------------+

    From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:

      Before           After
    +---------+    +---------+
    ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦
    ¦----+----¦    ¦----+----¦
    ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦
    ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦
    ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦
    +---------+    +---------+

    The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations.

    The Execution Plans

    The estimated MERGE execution plan that produces the incorrect key-violation error looks like this:

    [estimated MERGE execution plan]

    The successful UPDATE execution plan is:

    [UPDATE execution plan]

    The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained.

    There is always a wide update plan for any query that modifies the database.  The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur.

    Workarounds

    The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by:

    - Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or
    - Executing the query with trace flag 8790 set, e.g. OPTION (QUERYTRACEON 8790).

    Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously.

    Conclusion

    The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.
    It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance.  The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database.  Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point.

    Connect bug filed here

    Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition

    © 2012 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi
    Email: [email protected]
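
    To make the workarounds concrete, here is a rough sketch of the two fixes applied to the failing example above. The statements are my own rendering of the Workarounds list (rebuilding uq1 with the filter column in the key, or adding the trace-flag hint to the original MERGE), not code taken from the original post:

    -- Workaround 1: add the filtered column to the index key (INCLUDE is not sufficient)
    DROP INDEX uq1 ON #Target;

    CREATE UNIQUE INDEX uq1
    ON #Target (ak, status_code)
    WHERE status_code = 'a';

    -- Workaround 2: keep the original index, but force a wide (per-index) update plan
    -- using undocumented trace flag 8790
    MERGE #Target AS t
    USING #Changes AS c ON c.pk = t.pk
    WHEN MATCHED AND c.status_code <> t.status_code
    THEN UPDATE SET status_code = c.status_code
    OPTION (QUERYTRACEON 8790);

    Either variant produces the wide update plan described above, so the failing second test completes without the spurious key-violation error.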

    Read the article

  • BizTalk – Routing failure on Delivery Notifications (BizTalk 2006 R2 to 2013)

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/11/11/biztalk-routing-failure-on-delivery-notifications.aspx

    This is a detailed explanation of something I posted a few months ago on Stack Overflow, concerning a weird behavior (a bug, really…) of the delivery notifications in BizTalk.

    Reminder: what are delivery notifications?

    Mechanism

    BizTalk has the ability to automatically publish positive acknowledgments (ACK) when it has succeeded transmitting a message, or negative acknowledgments (NACK) in case of a transmission failure. Orchestrations can use delivery notifications to subscribe to those ACKs and NACKs in order to know if a message sent on a one-way send port has been successfully transmitted.

    Delivery Notifications can be “activated” in two ways:

    - The most common and easy way is to set the Delivery Notification property of a logical send port (in the orchestration designer) to Transmitted.
    - Another way is to set the BTS.AckRequired context property of the message to be sent to true.

    NOTE: fundamentally, those methods are strictly equivalent, since setting Delivery Notification to Transmitted on the send port only tells BizTalk to set the BTS.AckRequired context property to true on the outgoing message.

    Related context properties

    ACKs and NACKs have a common set of promoted context properties, which are:

    Property                        Description
    AckType                         Equals ACK when successful, or NACK otherwise
    AckID                           MessageID of the message concerned by the acknowledgment
    AckOwnerID                      InstanceID of the instance associated with the acknowledgment
    AckSendPortID                   ID of the send port
    AckSendPortName                 Name of the send port
    AckOutboundTransportLocation    URI of the send port
    AckReceivePortID                ID of the port the message came from
    AckReceivePortName              Name of the port the message came from
    AckInboundTransportLocation     URI of the port the message came from

    Detailed behavior

    The way Delivery Notifications are handled by BizTalk is peculiar compared to the standard behavior of the Message Box: if no active subscription exists for the acknowledgment, it is simply discarded. The direct consequence of this is that there can be no routing failure for an acknowledgment, and an acknowledgment cannot be suspended.

    Moreover, when a message is sent to a send port where Delivery Notification = Transmitted, a correlation set is initialized and a correlation token is attached to the message (context property: CorrelationToken). This correlation token will also be attached to the acknowledgment. So when the acknowledgment is issued, it is automatically routed to the source orchestration.

    Finally, when a NACK is received by the source orchestration, a DeliveryFailureException is thrown, which can be caught in a Catch section.

    Context of the problem

    Consider this scenario:

    - In an orchestration, Delivery Notifications are activated on a one-way send port.
    - In case of a transmission failure, the messaging instance is suspended and the orchestration catches an exception (DeliveryFailureException).
    - When the exception is caught, the orchestration does some logging and then terminates (thanks to a Terminate shape).

    So that leaves only the suspended messaging instance, waiting to be resumed.

    Symptoms

    Once the problem that caused the transmission failure is solved, the messaging instance is resumed. Considering what was said in the reminder, we would expect the instance to complete, leaving no active or suspended instance.
    Nevertheless, the result is that the messaging instance is once more suspended, this time because of a routing failure. The routing failure report shows that the suspended message has the following attached properties:

    [routing failure report properties]

    Explanation

    Those properties clearly indicate that the message being suspended is an acknowledgment (an ACK in this case), which was published in the message box and was suspended because no subscribers were found. This makes sense, since the source orchestration was terminated before we resumed the messaging instance. So its subscription to the acknowledgments was no longer active when the ACK was published, which explains the routing failure.

    But this behavior is in direct contradiction with what was said earlier: an acknowledgment must be discarded when no subscriber is found, and therefore should not be suspended.

    Cause

    It is indeed an outright bug, which appeared with SP1 of BizTalk 2006 R2 and has never been corrected since: not in the next 4 CUs, not in BizTalk 2009, not in 2010, and not even in 2013 – though I haven’t tested CU1 and CU2 for this last edition, but I bet there is nothing to be expected from those CUs (on this particular point).

    Side effects

    This bug can have pretty nasty side effects: this behavior can be propagated to other ports, due to routing mechanisms. For instance, say you have configured the ESB Toolkit and have activated “Enable routing failure for failed messages”. The result will be that the ESB Exception SQL send port will also try to publish ACKs or NACKs concerning its own messaging instances. In itself, this is already messy, but remember that those acknowledgments will also have the source correlation token attached to them… See how far it goes?

    Well, actually there is more: in SQL send ports, transactions will be rolled back because of the routing failure (I guess it also happens with other adapters – like Oracle – but I haven’t tested them). Again, think of what happens when the send port is the ESB Exception send port: your BizTalk box is going mad, but you have no idea, since no exception can be written in the exception database! All of this can be tricky to diagnose, I can tell you that…

    Solution

    There is no real solution, only a work-around, and it won’t solve all of the problems and side effects. The idea is to create an orchestration which subscribes to all acknowledgments. That is to say:

    - The message type of the incoming message will be XmlDocument.
    - The BTS.AckType property exists (this is the subscription filter; a sketch follows below).
    - The logical receive port will use direct binding.

    By doing so, all acknowledgments will be consumed by an instance of this orchestration, thus avoiding the routing failure. Here is an example of what this orchestration could look like:

    [example orchestration]

    In order not to pollute the HAT and the DTA Db (after all, this orchestration is only meant to be a palliative for some faulty internal BizTalk mechanism, so there should be no trace of its execution), all tracking must be deactivated.
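
    For what it is worth, the catch-all subscription described above boils down to a filter expression on the orchestration's activating Receive shape (with the XmlDocument message and the direct-bound port). This is only a sketch of that filter, written from memory of the BizTalk filter syntax rather than taken from the original post:

    BTS.AckType exists

    Any message carrying an AckType context property (i.e. any ACK or NACK) then activates an instance of the sink orchestration, so the acknowledgment never ends up without a subscriber.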

    Read the article

  • Java webapp: how to implement a web bug (1x1 pixel)?

    - by NoozNooz42
    In the accepted answer to the following question, an SO regular with 13K+ rep suggests using a "web bug" (a non-cacheable 1x1 img) to be able to track requests in the logs: http://stackoverflow.com/questions/1784893

    How can I do this in Java? Basically, I've got two issues:

    - how to make sure the 1x1 image is not cacheable (how to set the headers)?
    - how to make sure the requests for this 1x1 image will appear in the logs?

    I'm looking for an exact piece of code, because I know how to write a .jsp/servlet and I know how to serve a 1x1 image :) My question is really about the exact .jsp/servlet that I should write and how/what needs to be done so that Tomcat logs the request. For example, I plan to use the following mapping:

    <servlet-mapping>
        <servlet-name>WebBugServlet</servlet-name>
        <url-pattern>/webbug*</url-pattern>
    </servlet-mapping>

    and then use an img tag referencing a "webbug.png" (or .gif), so how do I write the .jsp/servlet? What/where should I look for in the logs?
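
    A minimal servlet along these lines might look like the sketch below. It is an illustration rather than the accepted answer: the class name matches the WebBugServlet mapping from the question, the byte array is the well-known 43-byte transparent GIF, and the request will show up in Tomcat's access log as long as an AccessLogValve is configured in conf/server.xml. (Note also that "/webbug*" is probably not the url-pattern you want; an exact "/webbug" or a "/webbug/*" path mapping is the usual form.)

    import java.io.IOException;
    import java.io.OutputStream;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class WebBugServlet extends HttpServlet {

        // A 1x1 transparent GIF: the classic "web bug" payload.
        private static final byte[] PIXEL = {
            0x47, 0x49, 0x46, 0x38, 0x39, 0x61, 0x01, 0x00, 0x01, 0x00,
            (byte) 0x80, 0x00, 0x00, 0x00, 0x00, 0x00,
            (byte) 0xFF, (byte) 0xFF, (byte) 0xFF,
            0x21, (byte) 0xF9, 0x04, 0x01, 0x00, 0x00, 0x00, 0x00,
            0x2C, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x00, 0x00,
            0x02, 0x02, 0x44, 0x01, 0x00, 0x3B
        };

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Forbid caching so every page view produces a fresh request (and a log line).
            response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
            response.setHeader("Pragma", "no-cache");
            response.setDateHeader("Expires", 0);

            response.setContentType("image/gif");
            response.setContentLength(PIXEL.length);

            OutputStream out = response.getOutputStream();
            out.write(PIXEL);
            out.flush();
        }
    }

    Each hit then appears as a GET line in the access log (localhost_access_log.*.txt by default), where the query string can carry whatever identifier you want to track.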

    Read the article

  • How to merge two PDF documents as a single report in JasperReports?

    - by Kumar
    Hi Friends, I am new to JasperReports. I am able to create a simple PDF document with a JavaBean data source. In my project I have created two separate PDF documents with separate JavaBean data sources. Now I want to merge both documents into a single document. Can anyone tell me how to merge both documents into a single document using Jasper?
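
    One way to do this (a sketch, assuming the two reports have already been filled into JasperPrint objects and that an older JasperReports API with JRExporterParameter is in use; the class, method, and file names are placeholders) is to hand both JasperPrint objects to a single PDF exporter:

    import java.util.ArrayList;
    import java.util.List;
    import net.sf.jasperreports.engine.JRExporterParameter;
    import net.sf.jasperreports.engine.JasperPrint;
    import net.sf.jasperreports.engine.export.JRPdfExporter;

    public class MergeReports {

        // Writes both filled reports, one after the other, into a single PDF file.
        public static void mergeToPdf(JasperPrint first, JasperPrint second, String outputFile) throws Exception {
            List<JasperPrint> prints = new ArrayList<JasperPrint>();
            prints.add(first);
            prints.add(second);

            JRPdfExporter exporter = new JRPdfExporter();
            exporter.setParameter(JRExporterParameter.JASPER_PRINT_LIST, prints);
            exporter.setParameter(JRExporterParameter.OUTPUT_FILE_NAME, outputFile);
            exporter.exportReport();
        }
    }

    The two documents keep their own layouts; they are simply concatenated into one PDF, which is usually what "merge into a single report" means in this situation.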

    Read the article

  • How to make a report with a count of each type of case per month in Access 2010

    - by amir shadaab
    I have a database in Access where each record has a date and several yes/no columns that show which category the record comes under. I want to create a report which shows the types of cases in each month, taking a date range as a parameter through prompts. I have done the prompt part, but I'm not sure how the query should look to show values for each month in that date range. Can someone please help me with this?
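
    One possible shape for that query is sketched below. The table name (Cases), the date column (CaseDate), and the yes/no columns (TypeA, TypeB) are hypothetical stand-ins for your own names; [Start Date] and [End Date] are the prompt parameters already mentioned above:

    PARAMETERS [Start Date] DateTime, [End Date] DateTime;
    SELECT Format([CaseDate], "yyyy-mm") AS CaseMonth,
           Sum(IIf([TypeA], 1, 0)) AS TypeACases,
           Sum(IIf([TypeB], 1, 0)) AS TypeBCases
    FROM Cases
    WHERE [CaseDate] BETWEEN [Start Date] AND [End Date]
    GROUP BY Format([CaseDate], "yyyy-mm")
    ORDER BY Format([CaseDate], "yyyy-mm");

    Each row of the result is one month in the range, with one count column per yes/no category, which the report can then group or chart directly.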

    Read the article

  • If an assert fails, is there a bug?

    - by RichAmberale
    I've always followed the logic: if an assert fails, then there is a bug. The root cause could either be:

    - The assert itself is invalid (bug), or
    - There is a programming error (bug)

    (no other options). That is: are there any other conclusions one could come to? Are there cases where an assert would fail and there is no bug?

    Read the article

  • Is reference to bug/issue in commit message considered good practice?

    - by Christian P
    I'm working on a project where we have the source control set up to automatically write notes in the bug tracker. We simply write the bug issue ID in the commit message, and the commit message is added as a note to the bug tracker. I can see only a few downsides to this practice: if sometime in the future the source code gets separated from the bug tracking software (or the reported bugs/issues are somehow lost), or when someone is looking through the commit history but doesn't have access to our bug tracker. My question is whether having a bug/issue reference in the commit message is considered good practice. Are there some other downsides?

    Read the article

  • Pass dataset to subreport with SQL Server Reporting Services

    - by Juliet
    I'm using SQL Server Reporting Services and the report designer that comes with Visual Studio. I've got a really big report. It's actually so large that Visual Studio hangs (sometimes for hours at a time) or just crashes when I make changes. There is precious little I can do to solve the problem, so I've decided to just move the bottom half of the report into a sub-report. So, I started with one enormous, unresponsive report and ended up with two small, manageable reports -- surprisingly, this actually works. One problem: my subreport uses the same data as my main report. Right now, it populates its dataset by re-querying the database. The extra round trip to the database causes the report to take twice as long to generate: up from 45 minutes to 1 1/2 hours. I'd like to avoid hitting the database again, and instead use the same dataset in both reports. How can I share or pass a dataset between a report and a subreport?

    Read the article

  • Pentaho Reporting Tool - Does the .prpt file (report template file) contain the datasource information?

    - by Yatendra Goel
    I am new to the Pentaho Reporting Tool and I have the following question: when I created a report using Pentaho Report Designer, it output a report file having the .prpt extension. After that, I found an example on the internet where the following code was used to display the report in HTML format:

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        ResourceManager manager = new ResourceManager();
        manager.registerDefaults();
        String reportPath = "file:" + this.getServletContext().getRealPath("sampleReport.prpt");
        try {
            Resource res = manager.createDirectly(new URL(reportPath), MasterReport.class);
            MasterReport report = (MasterReport) res.getResource();
            HtmlReportUtil.createStreamHTML(report, response.getOutputStream());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    The report got printed successfully. Since we haven't specified any datasource information here, I think the .prpt file contains that information in it. If that's true, isn't Jasper a better reporting tool than Pentaho? When we display Jasper reports we have to provide the datasource details as well, so in that way our report is flexible and is not bound to any particular database.

    Read the article

  • Secondary monitor bug: a problem in WPF or in the graphics driver?

    - by emddudley
    I have discovered a strange bug with my WPF application and I am trying to determine whether it is a problem with WPF or my graphics driver, so that I can report it to the appropriate company. I have a Quadro FX 1700 with the latest drivers (197.54) on a Windows XP system, running a .NET 3.5 SP1 application. I have dual monitors, and when I maximize then minimize a child window of the main window on my primary monitor, the child window gets drawn on the secondary monitor as well. It appears in both places. I made a sample application (code is below) which induces this behavior.

    1. Start the application and ensure the main window is on your primary monitor.
    2. Double-click the main window. A green child window should appear.
    3. Click the green child window to maximize.
    4. Click the green child window to minimize.

    Can anyone else reproduce this problem? On my system the green child restores, but then it's drawn on both my primary and secondary monitors, rather than just the primary monitor.

    App.xaml

    <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 x:Class="DualMonitorBug.App"
                 StartupUri="Shell.xaml" />

    App.xaml.cs

    using System.Windows;

    namespace DualMonitorBug
    {
        public partial class App : Application { }
    }

    Shell.xaml

    <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            x:Class="DualMonitorBug.Shell"
            Title="Shell" Height="480" Width="640"
            MouseDoubleClick="ShowDialog" />

    Shell.xaml.cs

    using System.Windows;
    using System.Windows.Input;

    namespace DualMonitorBug
    {
        public partial class Shell : Window
        {
            public Shell()
            {
                InitializeComponent();
            }

            private void ShowDialog(object sender, MouseButtonEventArgs e)
            {
                DialogWindow dialog = new DialogWindow();
                dialog.Owner = this;
                dialog.Show();
            }
        }
    }

    DialogWindow.xaml

    <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            x:Class="DualMonitorBug.DialogWindow"
            Title="Dialog Window" Height="240" Width="320"
            AllowsTransparency="True" Background="Green"
            MouseLeftButtonDown="ShowHideDialog"
            WindowStyle="None" />

    DialogWindow.xaml.cs

    using System.Windows;
    using System.Windows.Input;

    namespace DualMonitorBug
    {
        public partial class DialogWindow : Window
        {
            public DialogWindow()
            {
                InitializeComponent();
            }

            private void ShowHideDialog(object sender, MouseButtonEventArgs e)
            {
                this.WindowState = (this.WindowState == WindowState.Normal) ? WindowState.Maximized : WindowState.Normal;
            }
        }
    }

    Read the article

  • Xubuntu 13.10 64bit - Slow and buggy "log out" process?

    - by MrKatSwordfish
    I'm a Windows convert who has done only a little bit of dabbling in Ubuntu in the past (back in Dapper Drake a few years back). A lot has changed since then, and I've been yearning to jump back into linux again! So, having just bought a new SSD, I felt that this would be as good of a time as any to set up a dual-boot system again. I've messed around with Ubuntu 13.10 a bit, and while Unity has its issues, I think that it still needs some time to develop. I looked into XFCE and liked it a lot, so I went with Xubuntu.

    I've installed Xubuntu, and for the most part it's running smoothly and is a pleasure to work with. The customization is great and the minimalistic look and feel is really nice! But here's my problem: whenever I select the "Log Out" option from either the application menu or the user profiles menu, my PC comes to a crawl, and the dialog box with all the options (shut down, restart, log out, etc.) takes maybe a minute or more to appear. I click the log out button, my PC is brought to a snail's pace, and I have to wait for what seems like an eternity for the logout options to appear! If I try to open something else (even a terminal window) while it's loading the logout options, that other program won't finish loading until the logout screen finally appears.

    Keep in mind, this is a pretty much vanilla install of Xubuntu 13.10 64bit, on a PC with an Intel i7, an SSD, 6GB DDR3 RAM, and a new AMD 7770 GPU (drivers haven't been installed yet, though). Everything else runs fast; most applications open near-instantly! It must be an issue with how the logout options screen initializes or something, but I'm not sure exactly how I can fix it.

    Edit - Extra Info: This problem is very consistent when using the "Log Out" buttons in Xubuntu. However, I've found that I'm able to reboot and shut down much more quickly by going through the "Switch User" screen, and using the reboot or shutdown buttons on that screen. I'm nearly certain that it has something to do with the little Log Out options screen that appears when I select Log Out from the menu, and not the actual process of shutting down.

    So what should I do? I really like XFCE so far, and I've never tried a non-Ubuntu based distro before, but should I just switch to something else? Is there any known fix for this issue? Are there any work-arounds for logging out/shutting down/rebooting via the terminal so that I can avoid this irritating bug? Is there any way I can monitor the progress of the log out via the terminal, allowing me to see which parts are causing the slow-down? What is the best way to report this bug to someone?
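
    For the terminal-related questions at the end, a rough sketch of the usual commands on a stock Xfce session follows; these are standard tools rather than a verified fix for this particular slowdown:

    # Log out / shut down / reboot without going through the slow logout dialog
    xfce4-session-logout --logout
    xfce4-session-logout --halt
    xfce4-session-logout --reboot

    # Watch the session's error log while reproducing the slowdown
    tail -f ~/.xsession-errors

    # File the bug against the session manager, attaching the usual system info
    ubuntu-bug xfce4-session

    If the dialog itself (rather than the shutdown) is the slow part, the first three commands also double as a work-around, since they skip the logout options screen entirely.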

    Read the article

  • An open source WPF report engine that benefits from the current WPF controls

    WPF changes things greatly by providing the capability of printing Visual objects. This persuades many people to set up an open source report engine, as in this article. By Siyamand Ayubi

    Read the article

  • Web Security Threats on the Rise, Report Finds

    It may not be Tony Soprano on the Web, but a new security report finds that wise-guy hackers have become increasingly organized.

    Read the article

  • AJI Report #12 | Tim Hibbard Talks .NET to iOS Development

    - by Jeff Julian
    In this AJI Report, Jeff and John talk with Tim Hibbard of Engraph Software about making the transition from a .NET developer to mobile applications using the iOS platform. Tim dives into what each experience was like, from getting into Xcode for the first time, to using third-party tools, Apple's design guidelines, and provisioning an app to the App Store. Tim has been a .NET developer since the framework was released in 2001 and now has two mobile applications in production.

    Listen to the Show
    Site: http://engraph.com/
    Blog: http://timhibbard.com/blog/
    Twitter: @timhibbard

    Read the article

  • USTR's Bully Report Unfairly Blames Canada Again

    Michael Geist: "The U.S. government has released its annual Special 301 report in which it purports to identify those countries with inadequate intellectual property laws. Given the recent history and the way in which the list is developed, it will come as no surprise that the U.S. is again implausibly claiming that Canada is among the worst of the worst"

    Read the article

  • Failing report subscriptions

    - by DavidWimbush
    We had an interesting problem while I was on holiday. (Why doesn't this stuff ever happen when I'm there?) The sysadmin upgraded our Exchange server to Exchange 2010 and everyone's subscriptions stopped. My Subscriptions showed an error message saying that the email address of one of the recipients is invalid. When you create a subscription, Reporting puts your Windows user name into the To field, and most users have no permission to edit it. By default, Reporting leaves it up to Exchange to resolve that into an email address. This only works if Exchange is set up to translate aliases or 'short names' into email addresses. It turns out this leaves Exchange open to being used as a relay, so it is disabled out of the box. You now have three options:

    1. Open up Exchange. That would be bad.
    2. Give all Reporting users the ability to edit the To field in a subscription. a) They shouldn't have to; it should just work. b) They don't really have any business subscribing anyone but themselves.
    3. Fix the report server to add the domain. This looks like the right choice and it works for us. See below for details.

    Pre-requisites: a single email domain name, and a clear relationship between the Windows user name and the email address. E.g. if the user name is joebloggs, then joebloggs@domainname needs to be the email address or an alias of it.

    Warning: saving changes to the rsreportserver.config file will restart the Report Server service, which effectively takes Reporting down for around 30 seconds. Time your action accordingly.

    Edit the file rsreportserver.config (most probably in the folder ..\Program Files[ (x86)]\Microsoft SQL Server\MSRS10_50[.instancename]\Reporting Services\ReportServer). There's a setting called DefaultHostName which is empty by default. Enter your email domain name without the leading '@'. Save the file. This domain name will be appended to any destination addresses that don't have a domain name of their own.
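
    As a rough sketch of what the edited fragment ends up looking like (the surrounding element name is from memory of the email delivery section of rsreportserver.config, and the domain is a placeholder, not a value from this post):

    <RSEmailDPConfiguration>
        <!-- ...other delivery settings unchanged... -->
        <!-- Appended to any recipient that has no domain of its own,
             e.g. joebloggs becomes joebloggs@mycompany.com -->
        <DefaultHostName>mycompany.com</DefaultHostName>
    </RSEmailDPConfiguration>

    After saving the file (and riding out the 30-second service restart mentioned above), existing subscriptions start delivering again without anyone having to edit the To field.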

    Read the article

  • A Simple but Robust WPF Report Engine, Part 1

    A simple WPF report engine that is robust and can be used in practical applications. By Siyamand Ayubi

    Read the article

  • Why does using alt-tab cause Unity to restart?

    - by blade19899
    This has happened to me 4 times. When I use alt-tab, Unity resets. It doesn't close any apps, but when I click an app that was already open - but minimized - I can only see the minimize/maximize/close buttons and a thin border around the window, but nothing in it, as in the following picture:

    [screenshot of an empty window frame]

    This doesn't just happen to one app, but to all my opened minimized apps. I also want to ask whether I should file a bug report.
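
    Assuming the answer to that last question is yes, the usual way to file it from a terminal is sketched below; which package to report against is a judgment call, so both commands are only suggestions:

    # Collects logs and opens a Launchpad bug report against the Unity shell
    ubuntu-bug unity

    # Or, if window contents fail to redraw, the compositor may be the culprit
    ubuntu-bug compiz

    Attaching the picture of the empty window frame to the report makes it much easier for triagers to recognize the redraw problem.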

    Read the article
