Search Results

Search found 105845 results on 4234 pages for 'asp net dynamic data'.


  • Problem displaying data on one page

    - by user318068
    hi, I have a problem with the following code. It should display the invites for each member, so that if he has five invites they are all displayed on one page. Instead the code does not behave properly: it displays only one invite on the page, and only after that invitation is accepted or rejected does it display the next one. That is not what I want – I want to show all the invites on one page. I think the problem is in the ordering of the code. My code:

    <?php
    session_start();
    if (!isset($_SESSION['user_id'])) {
        header("Location: login.php");
    }
    $id = $_SESSION['user_id'];
    ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Untitled Document</title>
    </head>
    <body>
    <center>
    <?php
    include("connect.php");
    $sql = mysql_query("select * from ninvite where recieverMemberID ='$id' and viwed= '0'");
    $num = mysql_num_rows($sql);
    echo $num;
    if ($num > 0)
    {
        while ($row = mysql_fetch_array($sql))
        {
            $sender = $row['SenderMemberID'];
            $room = $row['RoomID'];
            $sql = mysql_query("select MemberName from members where MemberID ='$sender' ");
            $sql1 = mysql_query("select RoomName from rooms where RoomID ='$room' ");
            while ($row = mysql_fetch_array($sql))
            {
                $mem = $row['MemberName'];
            }
            while ($rows = mysql_fetch_array($sql1))
            {
                $Ro = $rows['RoomName'];
    ?>
    <form action="join.php" method="post">
    <label> </label> <br/>
    <label> <?php echo " you have invite from $mem to join $Ro"; ?> </label> <br/><br/>
    <label>accept</label> <input name="radio1" type="radio" value="accpet" />
    <label>reject</label> <input name="radio1" type="radio" value="Reject" /><br/>
    <input type="submit" name="submit" value="done" />
    </form>
    <?php
            }
        }
    }
    ?>
    </center>
    </body>
    </html>

    thanks a lot. My SQL:

    -- phpMyAdmin SQL Dump
    -- version 3.2.4
    -- http://www.phpmyadmin.net
    -- Host: localhost
    -- Generation Time: May 07, 2010 at 12:50
    -- Server version: 5.1.41
    -- PHP Version: 5.3.1

    SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
    /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
    /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
    /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
    /*!40101 SET NAMES utf8 */;

    -- Database: tr

    -- Table structure for table joinroom
    CREATE TABLE IF NOT EXISTS joinroom (
      MemberID int(10) NOT NULL,
      RoomID int(10) NOT NULL,
      PRIMARY KEY (MemberID, RoomID)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    -- Dumping data for table joinroom
    INSERT INTO joinroom (MemberID, RoomID) VALUES (28, 1);

    -- Table structure for table members
    CREATE TABLE IF NOT EXISTS members (
      MemberID int(10) unsigned NOT NULL AUTO_INCREMENT,
      MemberName varchar(20) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
      MemberPass varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
      MemberEmail varchar(30) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
      MemberLocation text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
      MemberImg text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
      PRIMARY KEY (MemberID)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=34;

    -- Dumping data for table members
    INSERT INTO members (MemberID, MemberName, MemberPass, MemberEmail, MemberLocation, MemberImg) VALUES
    (28, 'marwa', '1234', '[email protected]', 'mmmmmm', 'dddddddddd'),
    (29, 'nora', '1234', '[email protected]', 'fffffffffffgg', 'gggggggggggggg'),
    (30, 'soso', '1234', '[email protected]', 'ffffffff', 'kkkkkkkkkkkkkkkkkk'),
    (31, 'gege', '1234', '[email protected]', 'kkkkkkkkkkkkkkkk', 'uuuuuuuuuuuuuuuuu'),
    (32, 'nono', '1234', '[email protected]', 'ggggggggggggaaaaa', 'aaaaaaaaaaaaaaa'),
    (33, 'nda', '1234', '[email protected]', 'kkkkkkkkkkkkkkkk', 'ooooooooooooooo');

    -- Table structure for table ninvite
    CREATE TABLE IF NOT EXISTS ninvite (
      SenderMemberID int(11) NOT NULL AUTO_INCREMENT,
      recieverMemberID varchar(30) NOT NULL,
      RoomID int(11) NOT NULL,
      viwed int(11) NOT NULL,
      PRIMARY KEY (SenderMemberID, recieverMemberID, RoomID)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=33;

    -- Dumping data for table ninvite
    INSERT INTO ninvite (SenderMemberID, recieverMemberID, RoomID, viwed) VALUES
    (28, '33', 1, 0),
    (28, '32', 1, 0),
    (28, '31', 1, 0);

    /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
    /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
    /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;

    Read the article

  • How to retrieve ID of button clicked within usercontrol on Asp.net page?

    - by Shawn Gilligan
    I have a page that I am working on that I'm linking multiple user controls to. The user control contains 3 buttons, an attach, clear and view button. When a user clicks on any control on the page, the resulting information is "dumped" into the last visible control on the page. <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="Default" MasterPageFile="DefaultPage.master" %> <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxToolkit" %> <%@ Register tagName="FileHandler" src="FileHandling.ascx" tagPrefix="ucFile" %> <asp:Content ID="Content1" ContentPlaceHolderID="Main" Runat="Server"> <asp:UpdatePanel ID="upPanel" UpdateMode="Conditional" runat="server"> <ContentTemplate> <table> <tr> <td> <ucFile:FileHandler ID="fFile1" runat="server" /> </td> <td> <ucFile:FileHandler ID="fFile2" runat="server" /> </td> </tr> </table> </ContentTemplate> </asp:UpdatePanel> </asp:Content> All file handling and processing is handled within the control, with an event when the upload to the file server is complete via a file name that was generated. When either button is clicked, the file name is always stored internal to the control in the last control's text box. Control code: <table style="width: 50%;"> <tr style="white-space: nowrap;"> <td style="width: 1%;"> <asp:Label runat="server" ID="lblFile" /> </td> <td style="width: 20%;"> <asp:TextBox ID="txtFile" CssClass="backColor" runat="server" OnTextChanged="FileInformationChanged" /> </td> <td style="width: 1%"> <%--<asp:Button runat="server" ID="btnUpload" CssClass="btn" Text="Attach" OnClick="UploadFile"/>--%> <input type="button" id="btnUpload" class="btn" tabindex="30" value="Attach" onclick="SetupUpload();" /> </td> <td style="width: 1%"> <%--<asp:Button runat="server" ID="btnClear" Text="Clear" CssClass="btn" OnClick="ClearTextValue"/>--%> <input type="button" id="btnClearFile" class="btn" value="Clear" onclick="document.getElementById('<%=txtFile.ClientID%>').value = '';document.getElementById('<%=hfFile.ClientID%>').value = '';" /> </td> <td style="width: 1%"> <a href="#here" onclick="ViewLink(document.getElementById('<%=hfFile.ClientID%>').value, '')">View</a> </td> <td style="width: 1%"> <asp:HiddenField ID="hfFile" runat="server" /> </td> </tr> </table> <script type="text/javascript"> var ItemPath = ""; function SetupUpload(File) { ItemPath = File; VersionAttach('<%=UploadPath%>', 'true'); } function UploadComplete(File) { document.getElementById('<%=txtFile.ClientID%>').value = File.substring(File.lastIndexOf("/") + 1); document.getElementById('<%=hfFile.ClientID%>').value = File; alert('<%=txtFile.Text %>'); alert('<%=ClientID %>') } function ViewLink(File, Alert) { if (File != "") { if (File.indexOf("../data/") != -1) { window.open(File, '_blank'); } else { window.open('../data/<%=UploadPath%>/' + File, '_blank'); } } else if (Alert == "") { alert('No file has been uploaded for this field.'); } } </script>
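    A likely culprit, judging from the markup above (a guess, not a confirmed diagnosis): the plain <input> buttons and the global JavaScript functions (SetupUpload, UploadComplete) use the same ids and function names in every rendered instance of the user control, so the last instance on the page wins. Below is a minimal sketch of one way around this; the OnPreRender override and the per-instance function naming are assumptions for illustration, not part of the original control.

        // Sketch only: goes in the FileHandling.ascx.cs code-behind.
        // Emits an instance-specific UploadComplete_<ClientID> handler so two
        // FileHandler controls on one page no longer share a single global function.
        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            string script = string.Format(
                "function UploadComplete_{0}(file) {{" +
                "  document.getElementById('{1}').value = file.substring(file.lastIndexOf('/') + 1);" +
                "  document.getElementById('{2}').value = file;" +
                "}}",
                ClientID, txtFile.ClientID, hfFile.ClientID);

            ScriptManager.RegisterClientScriptBlock(
                this, GetType(), ClientID + "_uploadComplete", script, true);
        }

    The upload callback would then need to invoke UploadComplete_<ClientID> for the instance that started the upload, instead of the shared UploadComplete, so each control writes into its own textbox and hidden field.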

    Read the article

  • The Red Gate and .NET Reflector Debacle

    - by Rick Strahl
    About a month ago Red Gate – the company that owns the .NET Reflector tool most .NET devs use at one point or another – decided to change their business model for Reflector and take the product from free to a fully paid license model. As a bit of history: .NET Reflector was originally created by Lutz Roeder as a free community tool to inspect .NET assemblies. Using Reflector you can examine the types in an assembly, drill into type signatures and quickly disassemble code to see how a particular method works. In case you’ve been living under a rock and you’ve never looked at Reflector, here’s what it looks like drilled into an assembly from disk with some disassembled source code showing: Note that you get tons of information about each element in the tree, and almost all related types and members are clickable both in the list and source view, so it’s extremely easy to navigate and follow the code flow even in this static assembly-only view. For many years Lutz kept the tool up to date and gradually added more features, improving an already amazing tool and making it better. Then about two and a half years ago Red Gate bought the tool from Lutz. A lot of ruckus and noise ensued in the community back then about what would happen with the tool and… for the most part very little did. Other than the incessant update notices with prominent Red Gate promos on them, life with Reflector went on. The product didn’t die and it didn’t go commercial or to a charge model. When .NET 4.0 came out it still continued to work, mostly because the .NET feature set doesn’t drastically change how types behave. Then a month back Red Gate started making noise about a new version – Version 7 – which would be commercial. No more free version – and a shit storm broke out in the community. Now normally I’m not one to be critical of companies trying to make money from a product, much less for a product that’s as incredibly useful as Reflector. There isn’t a day in .NET development that goes by for me where I don’t fire up Reflector. Whether it’s for examining the innards of the .NET Framework, checking out third party code, or verifying some of my own code and resources. Even more so recently, I’ve been doing a lot of Interop work with a non-.NET application that needs to access .NET components, and Reflector has been immensely valuable to me (and my clients) in figuring out the exact type signatures required to call .NET components in assemblies. In short, Reflector is an invaluable tool to me. Ok, so what’s the problem? Why all the fuss? Certainly the $39 Red Gate is trying to charge isn’t going to kill any developer. If there’s any tool in .NET that’s worth $39 it’s Reflector, right? Right, but that’s not the problem here. The problem is how Red Gate went about moving the product to a commercial model, which borders on the downright bizarre. It’s almost as if somebody in management wrote a slogan: “How can we piss off the .NET community in the most painful way we can?” And at that, it seems, Red Gate has utterly succeeded. People are rabid, and for once I think that this outrage isn’t exactly misplaced. Take a look at the message thread that Red Gate dedicated to this, linked off the download page. Not only is Version 7 going to be a paid commercial tool, but the older versions of Reflector won’t be available any longer. Not only that, but older versions that are already in use will also continually try to update themselves to the new paid version – which, when installed, will then expire unless registered properly.
There have also been reports of Version 6 installs shutting themselves down and failing to work if the update is refused (I haven’t seen that myself, so I'm not sure if that’s true). In other words, Red Gate is trying to make damn sure they’re getting your money if you attempt to use Reflector. There’s a lot of temptation there. Think about the millions of .NET developers out there and all of them possibly upgrading – that’s a nice chunk of change that Red Gate’s sitting on. Even with all the community backlash these guys are probably making some bank right now just because people need to get on with life. Red Gate also put up a Feedback link on the download page – which, not surprisingly, is chock full of hate mail condemning the move. Oddly there’s not a single response to any of those messages by the Red Gate folks except when it concerns license questions for the full version. It puzzles me what purpose that link serves other than as yet another complete example of failure to understand how to handle customer relations. There’s no doubt that all of this has caused some serious outrage in the community. The sad part though is that this could have been handled so much less arrogantly and without pissing off the entire community and causing so much ill-will. People are pissed off and I have no doubt that this negative publicity will show up in the sales numbers for their other products. I certainly hope so. Stupidity ought to be painful! Why do Companies do boneheaded stuff like this? Red Gate’s original decision to buy Reflector was hotly debated, but at the time most of what would happen was speculation. But I thought it was a smart move for any company that is in need of spreading its marketing message and corporate image as a vendor in the .NET space. Where else do you get to flash your corporate logo to hordes of .NET developers on a regular basis? Pairing that marketing with the goodwill of providing a free tool breeds positive feedback that hopefully has a good effect on the company’s visibility and the products it sells. Instead Red Gate seems to have taken exactly the opposite tack of corporate bullying to try to make a quick buck – and in the process ruined any community goodwill that might have come from providing a service to the community for free while still getting valuable marketing. What’s so puzzling about this boneheaded escapade is that the company doesn’t need to resort to underhanded tactics like those they are trying with Reflector 7. The tools the company makes are very good. I personally use SQL Compare, SQL Data Compare and ANTS Profiler on a regular basis and all of these tools are essential in my toolbox. They certainly work much better than the tools that are in the box with Visual Studio. Chances are that if Reflector 7 added useful features I would have been more than happy to shell out my $39 to upgrade when the time was right. It’s Expensive to give away stuff for Free At the same time, this episode shows some of the big problems that come with ‘free’ tools. A lot of organizations are realizing that giving stuff away for free is actually quite expensive and the payback is often very intangible, if there is any at all. Those that rely on donations or other voluntary compensation find that the amount contributed is so minuscule as to not matter at all.
Yet at the same time I bet most of those clamoring loudest on that Red Gate Reflector feedback page about Reflector no longer being free have probably NEVER made a donation to any open source project or free tool, ever. The expectation of Free these days is just too great – which is a shame, I think. There’s a lot to be said for paid software and having somebody to hold responsible because you gave them some money. There’s an incentive –> payback –> responsibility model that seems to be missing from free software (not all of it, but a lot of it). While there certainly are plenty of bad apples in paid software as well, money tends to be a good motivator for people to continue working on and improving products. Reasons for giving away stuff are many, but often it’s a naïve desire to share things when things are simple. At first it might be no problem to volunteer time and effort, but as products mature the fun goes out of it, and as the reality of product maintenance kicks in, developers want to get something back for the time and effort they’re putting into doing non-glamorous work. It’s then that products die or languish, and this is painful for all to watch. For Red Gate however, I think there was always a pretty good payback from the Reflector acquisition in terms of marketing: visibility and possible positioning of their products, although they seem to have mostly ignored that option. On the other hand, they started this off pretty badly even two and a half years back when they acquired Reflector from Lutz with the same arrogant attitude that is evident in the latest episode. You really gotta wonder what folks in management are thinking – the sad part is that, judging from advance emails that were circulating, they were fully aware of the shit storm they were inciting with this, and I suspect they are banking on the sheer numbers of .NET developers to still make them a tidy chunk of change from upgrades… Alternatives are coming For me personally the single license isn’t a problem, but I actually have a tool that I sell to customers (an Interop Web Service proxy generation tool), and one of the things I recommend using alongside it has been Reflector, to view assembly information and to find which Interop classes to instantiate from the non-.NET environment. It’s been nice to use Reflector for this with its small footprint and zero-configuration installation. But now, with V7 becoming a paid tool, that option is not going to be available anymore. Luckily it looks like the .NET community is jumping to it and trying to fill the void. Amidst the Red Gate outrage a new tool called ILSpy has sprung up, providing at least some of the core functionality of Reflector as an open source project. It looks promising going forward, and I suspect there will be a lot more support and interest in this project now that Reflector has gone over to the ‘dark side’… © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • Change Data Capture

    - by Ricardo Peres
    There's a hidden gem in SQL Server 2008: Change Data Capture (CDC). Using CDC we get full audit capabilities with absolutely no implementation code: we can see all changes made to a specific table, including the old and new values! You can only use CDC in the higher-end editions of SQL Server 2008; the Express edition is not supported. Here are the steps you need to take; just remember that SQL Server Agent must be running:

    use SomeDatabase;

    -- first create a table
    CREATE TABLE Author
    (
        ID INT NOT NULL PRIMARY KEY IDENTITY(1, 1),
        Name NVARCHAR(20) NOT NULL,
        EMail NVARCHAR(50) NOT NULL,
        Birthday DATE NOT NULL
    )

    -- enable CDC at the DB level
    EXEC sys.sp_cdc_enable_db

    -- check CDC is enabled for the current DB
    SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'SomeDatabase'

    -- enable CDC for table Author, all columns
    exec sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'Author', @role_name = null

    -- insert values into table Author
    insert into Author (Name, EMail, Birthday) values ('Bla', 'bla@bla', '1990-10-10')

    -- check CDC data for table Author
    -- __$operation: 1 = DELETE, 2 = INSERT, 3 = BEFORE UPDATE, 4 = AFTER UPDATE
    -- __$start_lsn: operation timestamp
    select * from cdc.dbo_author_CT

    -- update table Author
    update Author set EMail = '[email protected]' where Name = 'Bla'

    -- check CDC data for table Author
    select * from cdc.dbo_author_CT

    -- delete from table Author
    delete from Author

    -- check CDC data for table Author
    select * from cdc.dbo_author_CT

    -- disable CDC for table Author
    -- this removes all CDC data, so be careful
    exec sys.sp_cdc_disable_table @source_schema = 'dbo', @source_name = 'Author', @capture_instance = 'dbo_Author'

    -- disable CDC for the entire DB
    -- this removes all CDC data, so be careful
    exec sys.sp_cdc_disable_db
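    The change table can also be read from application code. Here is a minimal C# sketch of that, assuming a local SomeDatabase instance with integrated security (the connection string is an assumption, and CDC must still be enabled on the table):

        // Reads the CDC change rows captured for the Author table above.
        using System;
        using System.Data.SqlClient;

        class CdcReader
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=SomeDatabase;Integrated Security=true";
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    // __$operation: 1 = DELETE, 2 = INSERT, 3 = before UPDATE, 4 = after UPDATE
                    var cmd = new SqlCommand(
                        "SELECT __$operation, Name, EMail FROM cdc.dbo_author_CT", conn);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}: {1} <{2}>",
                                reader.GetInt32(0), reader["Name"], reader["EMail"]);
                        }
                    }
                }
            }
        }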

    Read the article

  • From DBA to Data Analyst

    - by Denise McInerney
    Cross posted from the PASS Blog There is a lot changing in the data professional’s world these days. More data is being produced and stored. More enterprises are trying to use that data to improve their products and services and understand their customers better. More data platforms and tools seem to be crowding the market. For a traditional DBA this can be a confusing and perhaps unsettling time. It’s also a time that offers great opportunity for career growth. I speak from personal experience. We sometimes refer to the “accidental DBA”, the person who finds herself suddenly responsible for managing the database because she has some other technical skills. While it was not accidental, six months ago I was unexpectedly offered a chance to transition out of my DBA role and become a data analyst. I have since come to view this offer as a gift, though at the time I wasn’t quite sure what to do with it. Throughout my DBA career I’ve gotten support from my PASS friends and colleagues and they were the first ones I turned to for counsel about this new situation. Everyone was encouraging and I received two pieces of valuable advice: first, leverage what I already know about data and second, work to understand the business’ needs. Bringing the power of data to bear to solve business problems is really the heart of the job. The challenge is figuring out how to do that. PASS had been the source of much of my technical training as a DBA, so I naturally started there to begin my Business Intelligence education. Once again the Virtual Chapter webinars, local chapter meetings and SQL Saturdays have been invaluable. I work in a large company where we are fortunate to have some very talented data scientists and analysts. These colleagues have been generous with their time and advice. I also took a statistics class through Coursera where I got a refresher in statistics and an introduction to the R programming language. And that’s not the end of the free resources available to someone wanting to acquire new skills. There are many knowledgeable Business Intelligence and Analytics professionals who teach through their blogs. Every day I can learn something new from one of these experts. Sometimes we plan our next career move and sometimes it just happens. Either way a database professional who follows industry developments and acquires new skills will be better prepared when change comes. Take the opportunity to learn something about the changing data landscape and attend a Business Intelligence, Business Analytics or Big Data Virtual Chapter meeting. And if you are moving into this new world of data consider attending the PASS Business Analytics Conference in April where you can meet and learn from those who are already on that road. It’s been said that “the only thing constant is change.” That’s never been more true for the data professional than it is today. But if you are someone who loves data and grasps its potential you are in the right place at the right time.

    Read the article

  • Blink-Data vs Instinct?

    - by Samantha.Y. Ma
    In his landmark bestseller Blink, well-known author and journalist Malcolm Gladwell explores how human beings make seemingly instantaneous everyday choices – in the blink of an eye – and how we “think without thinking.” These situations actually aren’t as simple as they seem, he postulates; and throughout the book, Gladwell seeks answers to questions such as:

    1. What makes some people good at thinking on their feet and making quick, spontaneous decisions?
    2. Why do some people follow their instincts and win, while others consistently seem to stumble into error?
    3. Why are some of the best decisions often those that are difficult to explain to others?

    In Blink, Gladwell introduces us to the psychologist who has learned to predict whether a marriage will last, based on a few minutes of observing a couple; the tennis coach who knows when a player will double-fault before the racket even makes contact with the ball; the antiquities experts who recognize a fake at a glance. Ultimately, Blink reveals that great decision makers aren't those who spend the most time deliberating or analyzing information, but those who focus on key factors among an overwhelming number of variables – i.e., those who have perfected the art of "thin-slicing.” In Data vs. Instinct: Perfecting Global Sales Performance, a new report sponsored by Oracle, the Economist Intelligence Unit (EIU) explores the roles data and instinct play in decision-making by sales managers and discusses how sales executives can increase sales performance through more effective territory planning and incentive/compensation strategies. If you are a sales executive, ask yourself this: “Do you rely on knowledge (data) when you plan out your sales strategy? If you rely on data, how do you ensure that your data sources are reliable, up-to-date, and complete? With the emergence of social media and the proliferation of both structured and unstructured data, how do you know that you are applying your information/data correctly and in context?” Three key findings in the report are:

    • Six out of ten executives say they rely more on data than instinct to drive decisions.
    • Nearly one half (48 percent) of incentive compensation plans do not achieve the desired results.
    • Senior sales executives rely more on current and historical data than on forecast data.

    Strikingly similar to what Gladwell concludes in Blink, the report’s authors succinctly sum up their findings: "The best outcome is a combination of timely information, insightful predictions, and support data." Applying this insight is crucial to creating a sound sales plan that drives alignment and results. In the area of sales performance management, “territory programs and incentive compensation continue to present particularly complex challenges in an increasingly globalized market," say the report’s authors. "It behooves companies to get a better handle on translating that data into actionable and effective plans." To help solve this challenge, Oracle Fusion CRM integrates forecasting, quotas, compensation, and territories into a single system. For example, Oracle Fusion CRM provides a natural integration between territories, which define the sales targets (e.g., collections of accounts) for the sales force, and quotas, which quantify the sales targets. In fact, territory hierarchy is a core analytic dimension for slicing and dicing sales results, using sales analytics and alerts to help you identify where problems are occurring.
Start tapping into both data and instinct effectively today with Oracle Fusion CRM. Here is a short video to provide you with a snapshot of how it can help you optimize your sales performance.

    Read the article

  • Reading OpenDocument spreadsheets using C#

    - by DigiMortal
    Excel with its file formats is not the only spreadsheet application that is widely used. There are also users on Linux and Macs, and often they are using OpenOffice and other open-source office packages that use ODF instead of OpenXML. In this post I will show you how to read an OpenDocument spreadsheet in C#. Importer as example My previous post about importers showed you how to add flexible importer support to your web application. This post introduces a practical example from one of my importers. Of course, sensitive code is omitted. We start with the ODS importer class and add new methods as we go.

    public class OdsImporter : ImporterBase
    {
        public OdsImporter()
        {
        }

        public override string[] SupportedFileExtensions
        {
            get { return new[] { "ods" }; }
        }

        public override ImportResult Import(Stream fileStream, long companyId, short year)
        {
            string contentXml = GetContentXml(fileStream);

            var result = new ImportResult();
            var doc = XDocument.Parse(contentXml);

            var rows = doc.Descendants("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row").Skip(1);

            foreach (var row in rows)
            {
                ImportRow(row, companyId, year, result);
            }

            return result;
        }
    }

    The class given here just extends the base class for importers (the previous post uses an interface, but as I noted there, you move to an abstract base class when writing code for real projects). The Import method reads data from the *.ods file, parses it (it is XML), finds all the data rows and imports the data. As you can see, the first row is skipped. This is because the first row on my sheet is always the header row. Reading the ODS file Our Import method starts by getting the XML from the *.ods file. ODS files, like OpenXML files, are zipped containers that contain different files. We need content.xml, as all the data is kept there. To get the contents of this file we use the SharpZipLib library to read the uploaded file as a *.zip file.

    private static string GetContentXml(Stream fileStream)
    {
        var contentXml = "";

        using (var zipInputStream = new ZipInputStream(fileStream))
        {
            ZipEntry contentEntry = null;
            while ((contentEntry = zipInputStream.GetNextEntry()) != null)
            {
                if (!contentEntry.IsFile)
                    continue;
                if (contentEntry.Name.ToLower() == "content.xml")
                    break;
            }

            if (contentEntry.Name.ToLower() != "content.xml")
            {
                throw new Exception("Cannot find content.xml");
            }

            var bytesResult = new byte[] { };
            var bytes = new byte[2000];
            var i = 0;

            while ((i = zipInputStream.Read(bytes, 0, bytes.Length)) != 0)
            {
                var arrayLength = bytesResult.Length;
                Array.Resize<byte>(ref bytesResult, arrayLength + i);
                Array.Copy(bytes, 0, bytesResult, arrayLength, i);
            }
            contentXml = Encoding.UTF8.GetString(bytesResult);
        }
        return contentXml;
    }

    If content.xml is found, we stop browsing the archive, read the file into memory and return it as a UTF-8 string. Importing rows Our last task is to import the rows. We use a special method for this, as we have to handle a few tricks here. To keep files smaller, the cell count per row is not always the same: if there is more than one empty cell in a row, ODS keeps only one cell element for the whole run of sequential empty cells.
This cell has an attribute called number-columns-repeated, and its value is set to the number of sequential empty cells. This is why we use two indexes for the cells collection.

    private void ImportRow(XElement row, long companyId, short year, ImportResult result)
    {
        var cells = (from c in row.Descendants()
                     where c.Name == "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell"
                     select c).ToList();

        var dto = new DataDto();

        var count = cells.Count;
        var j = -1;

        for (var i = 0; i < count; i++)
        {
            j++;
            var cell = cells[i];
            var attr = cell.Attribute("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}number-columns-repeated");
            if (attr != null)
            {
                var numToSkip = 0;
                if (int.TryParse(attr.Value, out numToSkip))
                {
                    j += numToSkip - 1;
                }
            }

            if (i > 30) break;

            if (j == 0)
            {
                dto.SomeProperty = cells[i].Value;
            }
            if (j == 1)
            {
                dto.SomeOtherProperty = cells[i].Value;
            }
            // some more data reading
        }

        // save data
    }

    You can define your own class for import results and record in it all the problems found during the import. Your application gets the results and shows them to the user. Conclusion Reading ODS files may seem a complex task, but it is actually very easy if we only need the data from those documents. We can use a zip library to get the content file and then parse it as XML. It is not hard to go through the XML, but there are some optimization tricks we have to know. The code here is safe to use in web applications, as it does not use any APIs that place special demands on the server or infrastructure.
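    For completeness, here is a usage sketch for the importer above; the file path and the companyId/year arguments are made-up values, and the members of ImportResult are not shown in this post:

        // Feed an .ods file to the importer (requires: using System.IO;).
        using (var fileStream = File.OpenRead(@"C:\temp\import.ods"))
        {
            var importer = new OdsImporter();
            ImportResult result = importer.Import(fileStream, 100, 2010);
            // Inspect 'result' for the imported rows and any problems found,
            // then report them to the user.
        }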

    Read the article

  • Reaching to the Holy Grail of Data Management

    - by Irem Radzik
    Pervasive, continuous access to trusted data. That’s the ultimate goal of data management. It enables organizations to leverage data as an asset to create value for customers and the organization. It creates the strong foundation needed to move the business forward. How you get there is also critical. As with all IT initiatives, using high-performance solutions with a low cost of ownership is another key requirement in today’s IT world. Oracle's data integration product strategy focuses on helping customers achieve this ultimate goal with high performance and low TCO. At OpenWorld, we will be showing how Oracle Data Integration products help you reach your data management goals, considering new trends in information management, such as big data and cloud computing. We will also provide an update on the latest product releases, such as Oracle GoldenGate 11gR2. If you will be at OpenWorld, please join us on Monday Oct 1st at 10:45am at Moscone West – 3005 to hear our VP of Product Development, Brad Adelberg, present "Future Strategy, Direction, and Roadmap of Oracle’s Data Integration Platform". The Data Integration track at OpenWorld covers a variety of topics and speakers. In addition to product management of Oracle GoldenGate, Oracle Data Integrator, and Enterprise Data Quality presenting product updates and roadmaps, we have several customer panels and stand-alone sessions featuring select customers such as St. Jude Medical, Raymond James, Aderas, Turkcell, Paychex, Comcast, Ticketmaster, Bank of America and more. You can see an overview of Data Integration sessions here. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration and Oracle GoldenGate. In the coming weeks you will see more blogs about our products’ new capabilities and what to expect at OpenWorld. I hope to see you at OpenWorld and stay in touch via our future blogs.

    Read the article

  • HTML to RTF Converter for .NET

    - by nickyt
    I've already seen lots of posts on the site for RTF to HTML and some other posts talking about HTML to RTF converters, but I'm really trying to get a full breakdown of what is considered the most widely used commercial product or open source product, or whether people recommend going home-grown. Apologies if you consider this a duplicate question, but I'm trying to create a product matrix to see what is the most viable for our application. I also think this would be helpful for others. The converter would be used in an ASP.NET 2.0 application (we're upgrading to 3.5 shortly but still sticking with WebForms) using SQL Server 2005 (soon 2008) as the DB. From reading a few posts, SautinSoft appears to be popular as a commercial component. Are there other commercial components that you'd recommend for converting HTML to RTF? Price does matter, but even if it's a little on the expensive side, please list it. For open source, I read that OpenOffice.org can be run as a service so that it can convert files. However, this appears to be Java based only. I imagine I'd need some kind of interop to use this? What .NET open source components, if any, are out there for converting HTML to RTF? For home-grown, is an XSLT the way to go with XHTML? If so, what component do you recommend for generating XHTML? Otherwise, what other home-grown avenues do you recommend? Also, please note that I currently don't care so much about RTF to HTML. If a commercial component offers this and the price is still the same, fine; otherwise please don't mention it.
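    For the home-grown XSLT route mentioned in the question, here is a minimal C# sketch; the html-to-rtf.xslt stylesheet is hypothetical (writing a stylesheet that emits RTF control words is the real work), and the input is assumed to already be well-formed XHTML:

        // XHTML in, RTF text out, via an XSLT stylesheet using <xsl:output method="text"/>.
        using System.IO;
        using System.Xml;
        using System.Xml.Xsl;

        class HtmlToRtfSketch
        {
            static void Main()
            {
                var xslt = new XslCompiledTransform();
                xslt.Load("html-to-rtf.xslt"); // hypothetical stylesheet

                using (var reader = XmlReader.Create("input.xhtml"))
                using (var writer = new StreamWriter("output.rtf"))
                {
                    xslt.Transform(reader, null, writer);
                }
            }
        }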

    Read the article

  • Spring.Data.NHibernate12: Application not closing database connection (Getting max connection pool

    - by anupam3m
    Even after a successful transaction, the application's connection to the database persists. The NHibernate log shows:

    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - executing flush
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] < (null) - registering flush begin
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] < (null) - registering flush end
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - post flush
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - before transaction completion
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] < (null) - aggressively releasing database connection
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Connection.ConnectionProvider [(null)] <(null) - Closing connection
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - transaction completion
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Transaction.AdoTransaction [(null)] < (null) - running AdoTransaction.Dispose()
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - closing session
    2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.BatcherImpl [(null)] <(null) - running BatcherImpl.Dispose(true)

    Given below is my data configuration file (as pasted; some opening tags were lost in the paste):

    <?xml version="1.0" encoding="utf-8" ?>
    <objects xmlns="http://www.springframework.net" xmlns:db="http://www.springframework.net/database" xmlns:tx="http://www.springframework.net/tx">
      <property name="CacheSettings" ref="CacheSettings"/>
      type="Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities.UpdateEntityCacheHelper, Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities" singleton="false"/>
      <object type="Spring.Objects.Factory.Config.PropertyPlaceholderConfigurer, Spring.Core">
        <property name="ConfigSections" value="databaseSettings"/>
      <db:provider id="AMACDbProvider" provider="OracleClient-2.0" connectionString="Data Source=RISCODEVDB;User ID=amacdevuser; Password=amacuser1234;"/>
      <object id="NHibernateSessionFactory" type="Spring.Data.NHibernate.LocalSessionFactoryObject,Spring.Data.NHibernate12">
        <property name="DbProvider" ref="AMACDbProvider"/>
        <value>Risco.Rsp.Ac.AMAC.CacheMappings</value>
        </property>
        <dictionary>
          <entry key="hibernate.connection.provider" value="NHibernate.Connection.DriverConnectionProvider" />
          <entry key="hibernate.dialect" value="NHibernate.Dialect.Oracle9Dialect"/>
          value="NHibernate.Driver.OracleClientDriver"/>
        singleton="false"
        <property name="SessionFactory" ref="NHibernateSessionFactory" />
        <property name="TemplateFlushMode" value="Auto" />
        <property name="CacheQueries" value="true" />
        <property name="EntityInterceptor" ref="AuditLogger"/>
      type="Spring.Data.NHibernate.HibernateTransactionManager, Spring.Data.NHibernate12">
        <property name="DbProvider" ref="AMACDbProvider"/>
        <property name="SessionFactory" ref="NHibernateSessionFactory"/>
        <property name="EntityInterceptor" ref="AuditLogger"/>
      type="Spring.Transaction.Interceptor.TransactionProxyFactoryObject,Spring.Data"
        <property name="PlatformTransactionManager" ref="transactionManager"/>
        <property name="Target" ref="EventPubSubDAO"/>
        <property name="TransactionAttributes">
          <name-values>
            <add key="Save*" value="PROPAGATION_REQUIRES_NEW"/>
            <add key="Delete*" value="PROPAGATION_REQUIRED"/>
          </name-values>
        </property>
      type="Risco.Rsp.Ac.AMAC.DAO.EventPubSubMgmt.EventPubSubDAO, Risco.Rsp.Ac.AMAC.DAO.EventPubSubMgmt"
      </object>
      <tx:attribute-driven/>
    </objects>

    Please help me out with this issue. Thanks.

    Read the article

  • User controls Stopped working after Migration from 3.7 to 5.2

    - by user1400290
    I recently Migrated my 3.7 sp4 project to 5.2, but I had issues while doing so. Currently, my user controls are not working after migration in 5.2 project. Below is the code: User Control Code: <%@ Control Language="C#" AutoEventWireup="true" CodeFile="SiteMenu.ascx.cs" Inherits="UserControls_Nav_SiteMenu" %> <%@ Register TagPrefix="telerik" Assembly="Telerik.Web.UI" Namespace="Telerik.Web.UI" %> <asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" ShowStartingNode="false" /> <telerik:RadMenu ID="RadMenu1" runat="server" DataSourceID="SitemapDataSource1" OnItemDataBound="RadMenu1_ItemDataBound"> </telerik:RadMenu> User Control's Class code: using System; using System.Data; using System.Configuration; using System.Collections; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.ComponentModel; using System.Drawing; using Telerik; using Telerik.Cms; using Telerik.Cms.Web; using Telerik.Web.UI; using Telerik.Caching; using Telerik.Cms.Web.UI; [DefaultProperty("StartingNodeOffset")] public partial class UserControls_Nav_SiteMenu : System.Web.UI.UserControl, ICacheableObject { protected void Page_Load(object sender, EventArgs e) { } protected override void Render(HtmlTextWriter writer) { // Checks if this is called by the Search Indexer and does not render anything if so. // Navigation controls are present in every page and should NOT be indexed multiple times. if (!CmsContext.IsRequestCrawler(this.Context)) base.Render(writer); } #region Data Fields private bool hideUrlForGroupPages = false; private string selectedItemCssClass = "selectedItem"; #endregion #region Properties [Browsable(true)] [Category("Behavior")] public int LastExpandLevel { get { if (this.RadMenu1.MaxDataBindDepth < 0) return 0; return this.RadMenu1.MaxDataBindDepth; } set { if (value == 0) this.RadMenu1.MaxDataBindDepth = -1; else this.RadMenu1.MaxDataBindDepth = value; } } [Browsable(true)] [Category("Behavior")] public int ExpandDelay { get { return this.RadMenu1.ExpandDelay; } set { this.RadMenu1.ExpandDelay = value; } } [Browsable(true)] [Category("Behavior")] public bool ClickToOpen { get { return this.RadMenu1.ClickToOpen; } set { this.RadMenu1.ClickToOpen = value; } } [Browsable(true)] [Category("Behavior")] [DefaultValue(false)] public bool HideUrlForGroupPages { get { return this.hideUrlForGroupPages; } set { this.hideUrlForGroupPages = value; } } [Browsable(true)] [Category("Appearance")] public string SelectedItemCssClass { get { return this.selectedItemCssClass; } set { this.selectedItemCssClass = value; } } [Browsable(true)] [Category("Appearance")] public string CssClass { get { return this.RadMenu1.CssClass; } set { this.RadMenu1.CssClass = value; } } [Browsable(true)] public RadMenu Menu { get { return this.RadMenu1; } set { this.RadMenu1 = value; } } [Browsable(true)] [Category("Navigation")] public int StartingNodeOffset { get { return this.SiteMapDataSource1.StartingNodeOffset; } set { this.SiteMapDataSource1.StartingNodeOffset = value; } } [WebEditor("Telerik.Cms.Web.UI.UrlEditorWrapper, Telerik.Cms")] [Browsable(true)] [Category("Navigation")] public string StartingNodeUrl { get { return this.SiteMapDataSource1.StartingNodeUrl; } set { this.SiteMapDataSource1.StartingNodeUrl = value; } } [Browsable(true)] [Category("Navigation")] public bool StartFromCurrentNode { get { return this.SiteMapDataSource1.StartFromCurrentNode; } set { 
this.SiteMapDataSource1.StartFromCurrentNode = value; } } [Browsable(true)] [Category("Navigation")] public bool ShowStartingNode { get { return this.SiteMapDataSource1.ShowStartingNode; } set { this.SiteMapDataSource1.ShowStartingNode = value; } } /// <summary>(Exposed from contained RadMenu.)</summary> [Browsable(true)] [Category("Appearance")] public string SkinID { get { return this.RadMenu1.SkinID; } set { this.RadMenu1.SkinID = value; } } [Browsable(true)] [Category("Appearance")] public string Skin { get { return this.RadMenu1.Skin; } set { this.RadMenu1.Skin = value; } } #endregion #region Methods public void RadMenu1_ItemDataBound(object sender, RadMenuEventArgs e) { CmsSiteMapNode node = e.Item.DataItem as CmsSiteMapNode; if (this.hideUrlForGroupPages) { if (node != null) { // save the PageID in the attributes of the menu item e.Item.Attributes.Add("PageID", node.Key); if (node.PageType == CmsPageType.Group) { e.Item.NavigateUrl = ""; } } } if (node.CmsPage != null) { if (node.CmsPage.PageType == CmsPageType.External) { e.Item.Target = "_blank"; } } } #endregion #region ICacheableObject Members public System.Web.Caching.CacheDependency[] GetDependencies() { CmsSiteMapProvider provider = null; if (!String.IsNullOrEmpty(this.SiteMapDataSource1.SiteMapProvider)) provider = SiteMap.Providers[this.SiteMapDataSource1.SiteMapProvider] as CmsSiteMapProvider; else provider = SiteMap.Provider as CmsSiteMapProvider; if (provider != null) { return new System.Web.Caching.CacheDependency[]{ provider.CloneCacheDependency()}; } return null; } #endregion } When I edit the Template(in Admin mode), the following error is displayed in control location: Both DataSource and DataSourceID are defined on 'RadMenu1'. Remove one definition. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: Both DataSource and DataSourceID are defined on 'RadMenu1'. Remove one definition. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [InvalidOperationException: Both DataSource and DataSourceID are defined on 'RadMenu1'. Remove one definition.] System.Web.UI.WebControls.DataBoundControl.ConnectToDataSourceView() +3234866 System.Web.UI.WebControls.DataBoundControl.OnLoad(EventArgs e) +28 System.Web.UI.Control.LoadRecursive() +71 System.Web.UI.Control.LoadRecursive() +190 System.Web.UI.Control.LoadRecursive() +190 System.Web.UI.Control.AddedControl(Control control, Int32 index) +11422584 System.Web.UI.Control.EnsureChildControls() +182 System.Web.UI.Control.PreRenderRecursiveInternal() +60 System.Web.UI.Control.PreRenderRecursiveInternal() +222 System.Web.UI.Control.PreRenderRecursiveInternal() +222 System.Web.UI.Control.PreRenderRecursiveInternal() +222 System.Web.UI.Control.PreRenderRecursiveInternal() +222 System.Web.UI.Control.PreRenderRecursiveInternal() +222 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4201 Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.272 but I searched in my code as you can see above there's only DataSourceID is defined. What should I do? Thanks
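    One possible workaround sketch, assuming something in the upgraded runtime assigns RadMenu1.DataSource at run time while the markup still carries DataSourceID (this is an assumption, not a confirmed diagnosis of the 5.2 behavior):

        // Sketch only: in the user control's code-behind. If a DataSource has
        // already been assigned, drop the declarative DataSourceID so that only
        // one binding source remains and the InvalidOperationException goes away.
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            if (RadMenu1.DataSource != null)
            {
                RadMenu1.DataSourceID = string.Empty;
            }
        }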

    Read the article

  • Anything like Heroku for PHP or .NET?

    - by Wayne M
    In my area PHP is very widespread, and so is .NET. Ruby not so much; most places have never heard of it. For some personal things I am "forced" to choose Rails because I want to take advantage of Heroku – the ability to deploy and scale on the cloud very easily is the main reason. Also, they offer a small FREE plan that I can use for demo sites or, in this case, for my business' static page; as a totally bootstrapped startup I have maybe $50 or so in initial capital and cannot afford to pay monthly fees while I'm getting started. Are there any similar offerings for other languages? Specifically, I really like the small, free 5MB site that Heroku offers – is there anything like that for PHP and/or .NET? I'm not even that concerned about the "cloud" part, but that would be a nice bonus. If there is, I might be able to kill two birds with one stone and pick up a useful skill as I'm doing my own thing instead of using something that nobody else knows or cares about. I should add I'm specifically interested in something that offers a free plan. As I said, Heroku has a 5MB plan, and you can have as many of those as you want for free; I have yet to find anything similar for any other platform, and to be honest I'm not too thrilled about using Ruby on Rails for everything simply to take advantage of this.

    Read the article

  • Code producing System.NullReferenceException error for Membership.GetUser(). This is VB.Net (ASP.Net 4)

    - by Derrek
    I have a Default.aspx page that is not static. I have added functionality with datalists and sqldatasources. When a user logs in, he/she will see items like saved workouts, saved equipment, total replies, etc... This is based on getting the UserID of the currently logged-in user. Quite simply, this works great when the user is logged in. However, I do not want to force a user to log in to view the Default page, because it does have functionality on it that does not require login. When a user is not logged in, of course, I receive the System.NullReferenceException error. I understand the error well, but I do not know how to code the fix. That is where I need help. I will admit I am more designer than developer. However, I do know the exception I am receiving is caused by me not setting a value in my code when a user is not logged in. I do not know how to do that and have for a week made unsuccessful attempts at writing the code. Both sets of code below compile for VB.Net/ASP.Net 4/Visual Studio 2010 without errors. However, I still get the System.NullReferenceException error if not logged in. I know it can be done but I do not know the right syntax. If you can help please insert your code into mine or write it out. JUST TELLING ME WHERE TO GO TO FIND AN ANSWER WON'T HELP. I HAVE DONE THAT FOR 7 STRAIGHT DAYS. I APPRECIATE YOUR HELP.

    Partial Class _Default
        Inherits System.Web.UI.Page

        Protected Sub SqlDataSource4_Selecting(ByVal sender As Object, ByVal e As SqlDataSourceCommandEventArgs) Handles SqlDataSource4.Selecting
            Dim MemUser As MembershipUser
            MemUser = Membership.GetUser()
            ' Membership.GetUser() returns Nothing (not DBNull) when nobody is logged in
            If MemUser IsNot Nothing Then
                UserID.Text = MemUser.ProviderUserKey.ToString()
                e.Command.Parameters("@UserId").Value = MemUser.ProviderUserKey.ToString()
            End If
        End Sub

    -------------------------------------ORIGINAL CODE-------------------------------

    Partial Class _Default
        Inherits System.Web.UI.Page

        Protected Sub SqlDataSource4_Selecting(ByVal sender As Object, ByVal e As SqlDataSourceCommandEventArgs) Handles SqlDataSource4.Selecting
            Dim MemUser As MembershipUser
            MemUser = Membership.GetUser()
            UserID.Text = MemUser.ProviderUserKey.ToString()
            e.Command.Parameters("@UserId").Value = MemUser.ProviderUserKey.ToString()
        End Sub

    Read the article

  • Error on 64 Bit Install of IIS – LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
    I’ve been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up, I was left with the following error message from IIS: Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed This is on Windows 7 64 bit, running an ASP.NET 4.0 application configured for 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box. The problem here is that IIS is trying to load this ISAPI filter from the 32 bit folder – it should be loading it from the Framework64 folder, not the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to support cookieless session state for ASP.NET on IIS 7 applications. It’s not terribly important because of this narrow focus, but it’s a component that is loaded by default. After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks): switch IIS to run in 32 bit mode, or fix the filter listing in ApplicationHost.config. Switching IIS to allow 32 Bit Code This is a quick fix for the problem above: enable 32 bit code in the Application Pool. Since the problem is that IIS is trying to load a 32 bit ISAPI filter, enabling 32 bit code gets you around the error. To configure your Application Pool, open the Application Pool in IIS Manager, bring up Advanced Settings and Enable 32 Bit Applications: And voilà, the error message above goes away. Fix Filters Enabling 32 bit code is a quick fix for this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need to do COM or pInvoke Interop with 32 bit apps, there’s usually no need to enable 32 bit code in an Application Pool, as you can run native 64 bit code. So trying to get 64 bit working natively is a pretty key feature in my opinion :-) So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list, here’s what I see: Notice that there are 3 entries for ASP.NET 4.0 in this list. Only two of them, however, are specifically scoped to 32 bit or 64 bit. As you can see, the 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder. Aha! Herein lies our problem. You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with a 32 bit editor (whoever thought that was a good idea???? WTF). You have to open ApplicationHost.config in a native 64 bit text editor – which Visual Studio is not, and neither is my favorite editor, EditPad Pro. Since I don’t have a native 64 bit editor handy, Notepad was my only choice. Or, as an alternative, you can use the IIS 7.5 Configuration Editor, which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub-values in a property-editor-style interface.
I had no idea this tool existed prior to today, and it’s pretty cool as it gives you some visual clues to the options available – especially in the absence of the IntelliSense you’d get for this schema in Visual Studio (which doesn’t work). To use the Configuration Editor, go to the Web Site root and use the Configuration Editor option in the Management group. Drill into system.webServer/isapiFilters and then click on the Collection’s … button on the right. You should now see a display like this: which shows all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries:

    <filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" />
    <filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" />
    <filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" />

    The key attribute we’re concerned with here is preCondition and its bitness sub-value. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to fix the startup problem is to remove the generic entry from this list here, or in the filters list shown earlier, and leave only the bitness-specific versions active. The preCondition attribute acts as a filter, and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye on in general – if a bitness value is missing, it’s easy to run into conflicts like this with any settings that are global, especially those that load modules and handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries, or remove the non-specific versions and add bit-specific entries. So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up, as I never saw a similar problem on my live server where everything just worked right out of the box. In searching about this problem, a lot of solutions pointed at using aspnet_regiis –r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errant entry with no bitness set. Hopefully this post will help out anybody who runs into a similar situation, without having to troubleshoot all the way down into the configuration settings to notice the bitness values. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS, and things like this are exactly what I was afraid of finding. Now that I’ve run into it, I have a good idea what to look for with 32/64 bit misconfigurations in IIS at least. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET
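    If you'd rather inspect the filter list from code than through the Configuration Editor, here is a quick C# sketch using the Microsoft.Web.Administration API (the site name is an assumption; the assembly lives in C:\Windows\System32\inetsrv and the program needs to run elevated). It prints each filter's preCondition so entries with no bitness stand out:

        using System;
        using Microsoft.Web.Administration;

        class ListIsapiFilters
        {
            static void Main()
            {
                using (var serverManager = new ServerManager())
                {
                    var config = serverManager.GetApplicationHostConfiguration();
                    var filters = config.GetSection("system.webServer/isapiFilters", "Default Web Site")
                                        .GetCollection();

                    foreach (ConfigurationElement filter in filters)
                    {
                        // An empty or bitness-free preCondition marks the suspect entry.
                        Console.WriteLine("{0,-20} {1,-70} preCondition: {2}",
                            filter["name"], filter["path"], filter["preCondition"]);
                    }
                }
            }
        }

    Removing the offending entry is still best done in IIS Manager or by editing ApplicationHost.config as described above.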

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone Database is queried for performance (this is in the trunk, not in Release V1.0). The main concept behind it is to create a View Model class which has only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to further improve performance when querying. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/.

Setting up a view

Let’s say you have an entity that stores lots of data about a game result, for example:

GameScore entity

public class GameScore : IRapidEntity
{
    public Guid Id { get; set; }
    public string GamerId { get; set; }
    public string Name { get; set; }
    public Double Score { get; set; }
    public Byte[] ThumbnailAvatar { get; set; }
    public DateTime DateAdded { get; set; }
}

On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties.

GameScoreView

public class GameScoreView : IRapidView
{
    public Guid Id { get; set; }
    public Double Score { get; set; }
    public DateTime DateAdded { get; set; }
}

Now that you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax.

View Setup

public MainPage()
{
    RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
}

As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property – this is done automatically, and a view model id will always be the same as that of its corresponding entity.

Adding Filters

One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let’s say that you will only ever show the scores for the last 10 days; you could add a filter like the following:

Add single filter

public MainPage()
{
    RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
        .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
}

If you wanted to further limit the data, you could also say only scores above 100:

Add multiple filters

public MainPage()
{
    RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
        .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
        .AddFilter(x => x.Score > 100);
}

Querying the view model

So the important part is how to query the data. This is done using the repository; there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the query method directly or perform further querying on the result if required.
Querying the View

public void DisplayScores()
{
    RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
    List<GameScoreView> scores = repository.Query<GameScoreView>();

    // display logic
}

Further Filtering

public void TodaysScores()
{
    RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
    List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();

    // display logic
}

Retrieving the actual entity

Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information; you can take the Id populated in the returned GameScoreView view model and use it directly on the repository to retrieve the full entity.

Get Full Entity

public void GetFullEntity(Guid gameScoreViewId)
{
    RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
    GameScore fullEntity = repository.GetById(gameScoreViewId);

    // display logic
}

Synchronising The View

If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, you could run this at each application start up.

Synchronise the view

public void MyUpgradeTasks()
{
    RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
}

It’s worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.

Summary

I really hope you like this feature. It will be great for performance, and I believe it supports good practice by promoting the use of View Models for specific pages. I’m hoping to produce a beta for this over the next few days; I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following: http://rapidrepository.codeplex.com/

Kind Regards, Sean McAlinden.
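To see how the pieces fit together end to end, here is a minimal sketch that combines only the calls shown above (view registration with filters, querying the view, and drilling into the full entity); the display logic is left out as in the original examples:

public MainPage()
{
    // Register the view once at application start up, pre-filtered
    RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
        .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
        .AddFilter(x => x.Score > 100);
}

public void ShowTopScoreDetails()
{
    RapidRepository<GameScore> repository = new RapidRepository<GameScore>();

    // Query the lightweight view models rather than the full entities
    List<GameScoreView> scores = repository.Query<GameScoreView>();

    if (scores.Count > 0)
    {
        // A view model's Id always matches its entity, so it can be used directly
        GameScore fullEntity = repository.GetById(scores[0].Id);
        // display logic
    }
}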

    Read the article

  • Core Data migration failing with error: Failed to save new store after first pass of migration

    - by unforgiven
In the past I had already successfully implemented automatic migration from version 1 of my data model to version 2. Now, using SDK 3.1.3, migrating from version 2 to version 3 fails with the following error:

Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 UserInfo=0x5363360 "Operation could not be completed. (Cocoa error 134110.)", {
    NSUnderlyingError = Error Domain=NSCocoaErrorDomain Code=256 UserInfo=0x53622b0 "Operation could not be completed. (Cocoa error 256.)";
    reason = "Failed to save new store after first pass of migration.";
}

I have tried automatic migration using NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption, and also migration using only NSMigratePersistentStoresAutomaticallyOption while providing a mapping model from v2 to v3. I see the above error logged, and no object is available in the application. However, if I quit the application and reopen it, everything is in place and working. The Core Data methods I am using are the following:

- (NSManagedObjectModel *)managedObjectModel {
    if (managedObjectModel != nil) {
        return managedObjectModel;
    }
    NSString *path = [[NSBundle mainBundle] pathForResource:@"MYAPP" ofType:@"momd"];
    NSURL *momURL = [NSURL fileURLWithPath:path];
    managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];
    return managedObjectModel;
}

- (NSManagedObjectContext *)managedObjectContext {
    if (managedObjectContext != nil) {
        return managedObjectContext;
    }
    NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
    if (coordinator != nil) {
        managedObjectContext = [[NSManagedObjectContext alloc] init];
        [managedObjectContext setPersistentStoreCoordinator:coordinator];
    }
    return managedObjectContext;
}

- (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
    if (persistentStoreCoordinator != nil) {
        return persistentStoreCoordinator;
    }
    NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"MYAPP.sqlite"]];
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
        [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
    NSError *error = nil;
    persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
    if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:options error:&error]) {
        // Handle error
        NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
    }
    return persistentStoreCoordinator;
}

In the simulator, I see that this generates a MYAPP~.sqlite file and a MYAPP.sqlite file. I tried to remove the MYAPP~.sqlite file, but

BOOL oldExists = [[NSFileManager defaultManager] fileExistsAtPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"MYAPP~.sqlite"]];

always returns NO. Any clue? Am I doing something wrong? Thank you in advance.

    Read the article

  • Abcpdf throwing System.ExecutionEngineException

    - by Tom Tresansky
I have the binary for several PDF files stored in a collection of Byte arrays. My goal is to concatenate them into a single .pdf file using abcpdf, then stream that newly created file to the Response object on a page of an ASP.Net website. I had been doing it like this:

BEGIN LOOP
    ...
    'Create a new Doc
    Dim doc As Doc = New Doc
    'Read the binary of the current PDF
    doc.Read(bytes)
    'Append to the master merged PDF doc
    _mergedPDFDoc.Append(doc)
END LOOP

This was working fine 95% of the time. Every now and then, however, creating a new Doc object would throw a System.ExecutionEngineException and crash the CLR. It didn’t seem to be related to a large number of PDFs (sometimes it would happen with only 2), or to large-sized PDFs. It seemed almost completely random. This is a known bug in abcpdf, described (not very well) here: Item 6.24. I came across a helpful SO post which suggested using a Using block for the abcpdf Doc object. So now I’m doing this:

Using doc As New Doc
    'Read the binary of the current PDF
    doc.Read(bytes)
    'Append to the master merged PDF doc
    _mergedPDFDoc.Append(doc)
End Using

I haven’t seen the problem occur again yet, and I have been pounding on a test version as best I can to get it to. Has anyone had any similar experience with this error? Did this fix it?

    Read the article

  • Dynamic connection for LINQ to SQL DataContext

    - by Steve Clements
If for some reason you need to specify a particular connection string for a DataContext, you can of course pass the connection string when you initialise your DataContext object. A common scenario could be dev/test/stage/live connection strings, but in my case it’s for either a live or an archive database.

I however want the connection string to be handled by the DataContext itself. There are probably lots of different reasons someone would want to do this…but here are mine:

- I want the same connection string for all instances of the DataContext, but I don’t know what it is yet!
- I prefer the clean code and ease of not using a constructor parameter.
- Refactoring everything to use a constructor parameter could be a nightmare.

So my approach is to create a new partial class for the DataContext and handle the empty constructor in there.

First, from within the LINQ to SQL designer I changed the Connection property to None. This will remove the empty constructor code from the auto-generated designer.cs file. Right click on the .dbml file, click View Code, and a file and class are created for you! You’ll see the new class created in Solution Explorer and the file will open.

We are going to be playing with constructors, so you need to add the inheritance from System.Data.Linq.DataContext:

public partial class DataClasses1DataContext : System.Data.Linq.DataContext
{
}

Add the empty constructor; I have also added a property that will get my connection string. You will have whatever logic you need to decide on and fetch the connection string you require. In my case I will be hitting a database, but I have omitted that code.

public partial class DataClasses1DataContext : System.Data.Linq.DataContext
{
    // Connection String Keys - stored in web.config
    static string LiveConnectionStringKey = "LiveConnectionString";
    static string ArchiveConnectionStringKey = "ArchiveConnectionString";

    protected static string ConnectionString
    {
        get
        {
            if (DoIWantToUseTheLiveConnection) {
                return global::System.Configuration.ConfigurationManager.ConnectionStrings[LiveConnectionStringKey].ConnectionString;
            }
            else {
                return global::System.Configuration.ConfigurationManager.ConnectionStrings[ArchiveConnectionStringKey].ConnectionString;
            }
        }
    }

    public DataClasses1DataContext() :
        base(ConnectionString, mappingSource)
    {
        OnCreated();
    }
}

Now when I new up my DataContext, I can just leave the constructor empty and my partial class will decide which connection string I need to use. Nice, clean code that can be easily refactored and tested.
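For completeness, the two keys referenced above would map to entries like these in web.config (the connection string values here are placeholders, not from the original post):

<connectionStrings>
    <add name="LiveConnectionString" connectionString="Data Source=liveServer;Initial Catalog=MyDb;Integrated Security=True" providerName="System.Data.SqlClient" />
    <add name="ArchiveConnectionString" connectionString="Data Source=archiveServer;Initial Catalog=MyDbArchive;Integrated Security=True" providerName="System.Data.SqlClient" />
</connectionStrings>

And the consuming code stays completely unaware of the switch; a minimal usage sketch, assuming a Customers table on the context:

using (var context = new DataClasses1DataContext())
{
    // The partial class picks the live or archive connection string internally
    var customers = context.Customers.ToList();
}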

    Read the article

  • How to use "SelectMany" with DataServiceQuery<>

    - by sako73
I have the following DataServiceQuery running against an ADO.NET Data Service (with the update installed to make it run like .NET 4):

DataServiceQuery<Account> q = (_gsc.Users
    .Where(c => c.UserId == myId)
    .SelectMany(c => c.ConsumerXref)
    .Select(x => x.Account)
    .Where(a => a.AccountName == "My Account" && a.IsActive)
    .Select(a => a)) as DataServiceQuery<Account>;

When I run it, I get an exception:

Cannot specify query options (orderby, where, take, skip) on single resource

As far as I can tell, I need to use a version of "SelectMany" that includes an additional lambda expression (http://msdn.microsoft.com/en-us/library/bb549040.aspx), but I am not able to get it to work correctly. Could someone show me how to properly structure the "SelectMany" call? Thank you for any help.

    Read the article

  • Create and Consume WCF service using Visual Studio 2010

    - by sreejukg
In this article I am going to demonstrate how to create a WCF service that can be hosted inside IIS, and a Windows application that consumes the WCF service. To support service-oriented architecture, Microsoft developed the programming model named Windows Communication Foundation (WCF). ASMX was the prior offering from Microsoft; it was completely based on XML, and the .NET Framework continues to support ASMX web services in future versions as well. While ASMX web services were the first step towards service-oriented architecture, Microsoft made a big step forward by introducing WCF. An overview of planning for WCF can be found at this link: http://msdn.microsoft.com/en-us/library/ff649584.aspx. The following are the important differences between WCF and ASMX from an ASP.NET developer’s point of view:

1. ASMX web services are easy to write, configure and consume
2. ASMX web services can only be hosted in IIS
3. ASMX web services can only use HTTP
4. WCF can be hosted inside IIS, a Windows service, a console application, WAS (Windows Process Activation Service), etc.
5. WCF can be used with HTTP, TCP/IP, MSMQ and other protocols.

The detailed differences between ASMX web services and WCF can be found here: http://msdn.microsoft.com/en-us/library/cc304771.aspx. Though WCF is a bigger step for the future, Visual Studio makes it simple to create, publish and consume a WCF service.

In this demonstration, I am going to create a service named SayHello that accepts two parameters: a name and a language code. The service will return a hello greeting to the user name in the corresponding language. So the proposed service usage is as follows:

Caller: SayHello(“Sreeju”, “en”) -> return value -> Hello Sreeju
Caller: SayHello(“???”, “ar”) -> return value -> ????? ???
Caller: SayHello(“Sreeju”, “es”) -> return value -> Hola Sreeju

Note: calling an automated translation service is not the intention of this article. If you are interested, you can find the Bing Translator API and use it in your application: http://www.microsofttranslator.com/dev/

So let us start. First I am going to create a service application that offers the SayHello service. Open Visual Studio 2010, go to File -> New Project, under your preferred language in the templates section select WCF, select WCF Service Application as the project type, give the project a name (I named it HelloService), and click OK so that Visual Studio creates the project for you. In this demonstration, I have used C# as the programming language. Visual Studio will create the necessary files for you to start with. By default it will create a service named Service1.svc and an interface named IService1.cs. The screenshot for the project in Solution Explorer is as follows.

Since I want to demonstrate how to create a new service, I deleted the Service1.svc and IService1.cs files from the project by right clicking each file and selecting Delete. Now there is no service available in the project, so I am going to create one. From Solution Explorer, right click the project and select Add -> New Item. The Add New Item dialog will appear. Select WCF Service from the list, give the name as HelloService.svc, and click on the Add button. Now Visual Studio will create two files, named IHelloService.cs and HelloService.svc. These files are the service definition (IHelloService.cs) and the service implementation (HelloService.svc). Let us examine the IHelloService interface.
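(The original post shows this code as a screenshot. As a reference, the default WCF Service item template in Visual Studio 2010 generates code along these lines, so the two files should look roughly like this; treat the exact boilerplate as an assumption:)

[ServiceContract]
public interface IHelloService
{
    [OperationContract]
    void DoWork();
}

public class HelloService : IHelloService
{
    public void DoWork()
    {
    }
}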
The code states that IHelloService is the service definition and that it provides an operation/method (similar to a web method in ASMX web services) named DoWork(). Any WCF service will have a definition file in the form of an interface that defines the service. Let us see what is inside HelloService.svc. The code shown implements the interface IHelloService. The code is self-explanatory: the HelloService class needs to implement all the methods defined in the service definition.

Let me now shape the service as I require. Open IHelloService.cs in Visual Studio, delete the DoWork() method and add a definition for SayHello(); do not forget to add the OperationContract attribute to the method. The modified IHelloService.cs will look as follows. Now implement the SayHello method in the HelloService.svc.cs file. Here is the code I wrote for the SayHello method.

I am done with the service. Now you can build and run the service by pressing F5 (or selecting Start Debugging from the Debug menu). Visual Studio will host the service and give you a client to test it. The screenshot is as follows: the left pane shows the services available on the server, and on the right side you can invoke the service. To test the SayHello service, double click on it in that window. It will ask you to enter the parameters; then click on the Invoke button. See a sample output below.

Now I am done with the service; the next step is to write a service client. Creating a consumer application involves two steps: first, generate the class and configuration file corresponding to the service; second, create a project that uses the generated class and configuration file.

First I am going to generate the class and configuration file. There is a great tool that ships with Visual Studio named svcutil.exe; this tool will create the necessary class and configuration files for you. Read the documentation for svcutil.exe here: http://msdn.microsoft.com/en-us/library/aa347733.aspx. Open the Visual Studio command prompt; you can find it under Start Menu -> All Programs -> Visual Studio 2010 -> Visual Studio Tools -> Visual Studio Command Prompt. Make sure the service is running in Visual Studio, and note the URL for the service (from the running window, you can right click and choose Copy Address). Now, from the command prompt, enter the svcutil.exe command as follows: I have passed the URL and the /d switch, which sets the directory in which to store the output files (in this case d:\temp). If you are using the Windows drive (in my case it is c:), make sure you open the command prompt with the Run as Administrator option, otherwise you will get a permission error (only on Windows 7 or Windows Vista). The tool has created two files, HelloService.cs and output.config.

Now the next step is to create a new project, use the created files and consume the service. Let us do that now. I am going to add a console application to the current solution: right click the solution name in Solution Explorer and select Add -> New Project. Under Visual C#, select Console Application and give the project a name; I named it TestService. Now navigate to d:\temp, where I generated the files with svcutil.exe, and rename output.config to app.config. The next step is to add both files (d:\temp\helloservice.cs and app.config) to the project. In Solution Explorer, right click the project, select Add -> Add Existing Item, browse to the d:\temp folder, select the two files mentioned before, and click on the Add button. Now you need to add a reference to System.ServiceModel to the project.
From Solution Explorer, right click References under the TestService project and select Add Reference. In the Add Reference dialog, select the .NET tab, select System.ServiceModel, and click OK. Now open Program.cs by double clicking on it and add the code that consumes the service to the Main method. The modified file looks as follows. Right click the TestService project and set it as the startup project, then press F5 to run the project. See the sample output as follows.

Publishing a WCF service under IIS is similar to publishing an ASP.NET application. Publish the application to a folder using the Visual Studio publishing feature, create a virtual directory and mark it as an application. Don’t forget to set the application pool to use ASP.NET version 4. One last thing you need to check is the app.config file you added to the solution. See the client element under the system.serviceModel element: there is an endpoint element with an address attribute that points to the service URL. If you permanently host the service under IIS, you can simply change the address attribute to the corresponding URL and your application will consume the service.

You have seen how easily you can build and consume a WCF service. If you need the solution in zipped format, please post your email below.
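(Since the article’s code appears only in screenshots, here is a minimal end-to-end sketch of the pieces described above. The SayHello bodies, the port number and the generated client class name are assumptions based on the conventions described, not code from the original post.)

IHelloService.cs, the modified contract:

[ServiceContract]
public interface IHelloService
{
    [OperationContract]
    string SayHello(string name, string languageCode);
}

HelloService.svc.cs, one possible implementation:

public class HelloService : IHelloService
{
    public string SayHello(string name, string languageCode)
    {
        // Hard-coded greetings; a real service might call a translation API
        switch (languageCode)
        {
            case "ar": return "مرحبا " + name;   // assumed Arabic greeting
            case "es": return "Hola " + name;
            default: return "Hello " + name;
        }
    }
}

Generating the proxy (the port number will match whatever the development server assigned):

svcutil.exe http://localhost:1234/HelloService.svc /d:d:\temp

Program.cs in the TestService console application, assuming svcutil generated a proxy class named HelloServiceClient:

class Program
{
    static void Main(string[] args)
    {
        HelloServiceClient client = new HelloServiceClient();
        Console.WriteLine(client.SayHello("Sreeju", "en"));
        Console.WriteLine(client.SayHello("Sreeju", "es"));
        client.Close();
        Console.ReadLine();
    }
}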

    Read the article

  • Cannot find System.Web.Script.Service namespace error after upgrading to Visual studio 2010

    - by Gavin
I've just upgraded a VS 2008 project to VS 2010, converting the project but keeping the target as .NET 3.5 (SP1 is installed). My project worked without issue under VS 2008 on another machine. I've added references to System.Web.Extensions.dll, but I'm still getting the following errors from code in the App_Code folder:

1) Cannot find System.Web.Script.Service namespace
2) Type 'System.Web.Script.Services.ScriptService' is not defined.
3) Type 'System.Runtime.Serialization.Json.DataContractJsonSerializer' is not defined.

Anyone have any ideas what the problem might be, as I'm pretty stumped? :(

    Read the article

  • Download and Share Visual Studio Color Schemes

    - by ScottGu
As developers we often spend a large part of our day staring at code within Visual Studio. If you are like me, after a while the default VS text color scheme starts to get a little boring. The good news is that Visual Studio allows you to completely customize the editor background and text colors to whatever you want – allowing you to tweak them to create the experience that is “just right” for your eyes and personality. You can then optionally export/import your color scheme preferences to an XML file via the Tools->Import and Export Settings menu command. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

New website that makes it easy to download and share VS color schemes

Luke Sampson launched the http://studiostyles.info/ site a week ago (built using ASP.NET MVC 2, ASP.NET 4 and VS 2010). Studiostyles.info enables you to easily browse and download Visual Studio color schemes that others have already created. The color schemes work for both VS 2008 and VS 2010 (all versions – including the free VS Express editions). Color schemes are sorted by popularity and voting (you can vote on whether you find each “hot or not”). You can click any of the schemes to see screenshots of it in use for common coding scenarios. You can then download the color settings for either VS 2010 or VS 2008. You can also optionally upload color schemes of your own if you have a good one you want to share with others. If you haven’t visited it yet – check it out: http://studiostyles.info/ And thank you Luke Sampson for building it!

Hope this helps,

Scott

    Read the article

  • How to throw a SqlException(need for mocking)

    - by chobo2
Hi, I am trying to test some exceptions in my project, and one of the exceptions I catch is SqlException. It seems that you can't go new SqlException(), so I am not sure how I can throw the exception, especially without somehow calling the database (and since these are unit tests, it is usually advised not to call the database, since it is slow). I am using NUnit and Moq, and I am not sure how to fake this.

Edit: The answers so far all seem to be based on ADO.NET; I am using LINQ to SQL, so that stuff happens behind the scenes.

Edit @ Matt Hamilton:

System.ArgumentException : Type to mock must be an interface or an abstract or non-sealed class.
at Moq.Mock`1.CheckParameters()
at Moq.Mock`1..ctor(MockBehavior behavior, Object[] args)
at Moq.Mock`1..ctor(MockBehavior behavior)
at Moq.Mock`1..ctor()

It points to the first line, where it tries to mock up:

var ex = new Mock<System.Data.SqlClient.SqlException>();
ex.SetupGet(e => e.Message).Returns("Exception message");

    Read the article

  • LinkDemand error on webserver when using TraceSource

    - by robertpnl
Hi,

On a web server (shared hosting provider) I published a website with an ADO.NET framework model used with MySQL Connector 6.3.1. When I request a page, a SecurityException occurs with this error message: "LinkDemand. The type of the first permission that failed was: System.Security.Permissions.SecurityPermission. The Zone of the assembly that failed was: MyComputer." This exception is raised when the code collects the listeners of a TraceSource:

public class MySqlTrace
{
    private static TraceSource source = new TraceSource("mysql");

    static MySqlTrace()
    {
        foreach (TraceListener listener in source.Listeners) // <-- Exception thrown here
        {
            // ...
        }
    }
}

The web.config doesn't contain any trace data or a system.diagnostics section. My question is: why do I get a LinkDemand security exception while collecting the source listeners? What could be wrong here?

    Read the article

< Previous Page | 461 462 463 464 465 466 467 468 469 470 471 472  | Next Page >