Search Results

Search found 14148 results on 566 pages for '2008'.


  • How can I store HTML in a Doctrine YML fixture

    - by argibson
    I am working with a CMS-type site in Symfony 1.4 (Doctrine 1.2) and one of the things that is frustrating me is not being able to store HTML pages in YML fixtures. Instead I have to create SQL backups of the data if I want to drop and rebuild, which is a bit of a pest when Symfony/Doctrine has a fantastic mechanism for doing exactly this. I could write a mechanism that reads in a set of HTML files for each page and fills the data in that way (or even write it as a task). But before I go down that road I am wondering if there is any way for HTML to be stored in a YML fixture so that Doctrine can simply import it into the database.

    Update: I have tried using symfony doctrine:data-dump and symfony doctrine:data-load; however, despite the dump correctly creating the fixture with the HTML, the load task appears to 'skip' the value of the column with the HTML and enters everything else into the row. In the database the field doesn't show up as 'NULL' but rather empty, so I believe Doctrine is adding the value of the column as ''. Below is a sample of the YML fixture that symfony doctrine:data-dump created. I have tried running symfony doctrine:data-load against various forms of this, including removing all the escaped characters (newlines and quotes, leaving only angle brackets), but it still doesn't work.

        Product_69:
          name: 'My Product'
          Developer: Developer_30
          tagline: 'Text that briefly describes the product'
          version: '2008'
          first_published: ''
          price_code: A79
          summary: ''
          box_image: ''
          description: "<div id=\"featureSlider\">\n <ul class=\"slider\">\n <li class=\"sliderItem\" title=\"Summary\">\n <div class=\"feature\">\n Some text goes in here</div>\n </li>\n </ul>\n </div>\n"
          is_visible: true


  • Looking for Recommendations on Version Control System with Intuitive (.NET) API?

    - by John L.
    I'm working on a project which generates (composite) Microsoft Word documents which are comprised of one or more child documents. There are tens of thousands of permutations of the composite documents, far too many for users to easily manage. Users will need to view/edit the child documents through the app, which hides all of the nasty implementation details. A requirement of the system is that the child documents must be version controlled, and that is what has been tripping me up. I've been torn between using an off-the-shelf solution or rolling my own. At a minimum, the system needs to support get latest, get specific version, add new, rename and possibly delete. I've whiteboarded it enough to realize it won't be a trivial task to create my own.

    As far as commercial systems go, I have VSS and TFS at my disposal. I've played with the TFS API some, but it isn't as intuitive or well documented as I had hoped. I'm not averse to an open source solution (e.g. SVN), but I have less familiarity with them. Which approach or tool would you recommend? Why? Do you have any links to API documentation you would recommend?

    Environment: C#, VS2008, SQL Server 2005/2008, low volume (a few hundred operations per day)
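    Whatever the backing store, the surface area described above is small. Here is a minimal sketch of it as a C# interface the app could implement once per backend (TFS, SVN, or hand-rolled); all names are illustrative and not taken from any particular library:

        using System;

        // Hypothetical abstraction over the child-document store; the app codes
        // against this and stays ignorant of the version-control backend.
        public interface IDocumentVersionStore
        {
            byte[] GetLatest(string documentPath);
            byte[] GetVersion(string documentPath, int version);
            int AddNew(string documentPath, byte[] content); // returns the new version number
            void Rename(string oldPath, string newPath);
            void Delete(string documentPath);
        }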


  • Why does the MSVCRT library generate conflicts at link time?

    - by neuviemeporte
    I am building a project in Visual C++ 2008: an example MFC-based app for a static C++ class library I will be using in my own project soon. While building the Debug configuration, I get the following:

        warning LNK4098: defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library

    After using the recommended option (by adding "msvcrt" to the "Ignore specific library" field in the project linker settings for the Debug configuration), the program links and runs fine. However, I'd like to find out why this conflict occurred, why I have to ignore a critical library, whether I'm to expect problems later if I add the ignore, and what happens if I don't (because the program builds anyway). At the same time, the Release configuration warns:

        warning LNK4075: ignoring '/EDITANDCONTINUE' due to '/OPT:ICF' specification
        warning LNK4098: defaultlib 'MSVCRTD' conflicts with use of other libs; use /NODEFAULTLIB:library

    I'm guessing that the "D" suffix means this is the debug version of the VC++ runtime; no idea why this gets used this time. Anyway, adding "msvcrtd" to the ignore field causes lots of link errors of the form:

        error LNK2001: unresolved external symbol __imp___CrtDbgReportW

    Any insight greatly appreciated.


  • .NET load assembly error after metabase config change

    - by peter
    Hi, yesterday I found out that the mailroot directory had moved to a different location on the C drive. I don't know how this happened, as no configs were changed; this is IIS7 with the IIS6 SMTP service on Windows 2008 R2 Web edition. I wanted to restore it to the default c:\inetpub\mailroot location, so I opened system32/inetsrv/MetaBase.xml, and in the SMTP node was the problem: BadMailDirectory, DropDirectory etc. were all pointing to the wrong location. I changed them back to the default c:\inetpub\mailroot.

    Later I discovered an ASP.NET application was throwing this error:

        An error occurred in the Microsoft .NET Framework while trying to load assembly id 1. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error: System.IO.FileNotFoundException: Could not load file or assembly 'microsoft.sqlserver.types, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I figured it had to do with my changes to the mailroot locations, so I gave ASP.NET access to the mailroot directory and it was working. My question is: how is this possible? Why does ASP.NET require access to the mailroot directory to load assemblies? Thanks


  • LocalAlloc and LocalRealloc usage

    - by PaulH
    I have a Visual Studio 2008 C++ Windows Mobile 6 application where I'm using a FindFirst() / FindNext() style API to get a collection of items. I do not know how many items will be in the list ahead of time, so I would like to dynamically allocate an array for these items. Normally I would use a std::vector<>, but, for other reasons, that's not an option for this application. So, I'm using LocalAlloc() and LocalRealloc(). What I'm not clear on is whether this memory should be marked fixed or moveable. The application runs fine either way; I'm just wondering what's 'correct'.

        int count = 0;
        INFO_STRUCT* info = ( INFO_STRUCT* )LocalAlloc( LHND, sizeof( INFO_STRUCT ) );
        while( S_OK == GetInfo( &info[ count ] ) )
        {
            ++count;
            info = ( INFO_STRUCT* )LocalRealloc( info, sizeof( INFO_STRUCT ) * ( count + 1 ), LHND );
        }
        if( count > 0 )
        {
            // use the data in some interesting way...
        }
        LocalFree( info );

    Thanks, PaulH


  • Header Setup in SOAP with ASP.NET 3.5 WCF

    - by Adam
    I'm pretty new to SOAP so go easy on me. I'm trying to set up a SOAP service that accepts the following header format:

        <soap:Header>
          <wsse:Security>
            <wsse:UsernameToken wsu:Id='SecurityToken-securityToken'>
              <wsse:Username>Username</wsse:Username>
              <wsse:Password>Password</wsse:Password>
              <wsu:Created>Timestamp</wsu:Created>
            </wsse:UsernameToken>
          </wsse:Security>
        </soap:Header>

    The application I'm incorporating this service into is an ASP.NET 3.5 web application and I've already set up a SOAP endpoint using WCF. I've set up a basic service to make sure the WCF works, and it works fine (disregarding the header). I heard that the above format follows WS-Security, so I added wsHttpBinding in the web.config:

        <service name="Nexternal.Service.XMLTools.VNService" behaviorConfiguration="VNServiceBehavior">
          <!-- The first endpoint would be picked up from the config;
               this shows how the config can be overridden with the service host -->
          <endpoint address="" binding="wsHttpBinding" contract="Nexternal.Service.XMLTools.IVNService"/>
        </service>

    I downloaded a test harness (soapUI) and pasted in a test message with the above header, and it came back with a 400 Bad Request error. For what it's worth, I'm running Visual Studio 2008 using IIS7. I feel like I'm going in circles, so any help would be awesome. Thanks in advance.
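    For what it's worth, a wsse:UsernameToken header like the one above usually maps to WCF's message-level UserName credentials rather than to the binding alone. Below is a hedged sketch of hosting with username credentials configured in code; the choice of basicHttpBinding over HTTPS is an assumption (wsHttpBinding with Message security is the other common mapping), and MyValidator stands in for whatever credential check the service actually needs:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        interface IVNService
        {
            [OperationContract]
            string Ping();
        }

        class VNService : IVNService
        {
            public string Ping() { return "pong"; }
        }

        // Hypothetical validator for the incoming UsernameToken credentials.
        class MyValidator : System.IdentityModel.Selectors.UserNamePasswordValidator
        {
            public override void Validate(string userName, string password)
            {
                if (userName != "Username" || password != "Password")
                    throw new FaultException("Unknown username or incorrect password.");
            }
        }

        class HostSketch
        {
            static void Main()
            {
                // UsernameToken carried at the message level, transport secured by HTTPS.
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportWithMessageCredential);
                binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;

                var host = new ServiceHost(typeof(VNService), new Uri("https://localhost/VNService"));
                host.AddServiceEndpoint(typeof(IVNService), binding, "");

                // Route credential checks through the custom validator above.
                host.Credentials.UserNameAuthentication.UserNamePasswordValidationMode =
                    System.ServiceModel.Security.UserNamePasswordValidationMode.Custom;
                host.Credentials.UserNameAuthentication.CustomUserNamePasswordValidator = new MyValidator();

                host.Open();
                Console.ReadLine();
                host.Close();
            }
        }

    With a setup along these lines, a soapUI request carrying the header should authenticate instead of returning 400, assuming the transport really is HTTPS as TransportWithMessageCredential requires.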


  • What am I missing in IIS7?

    - by faded19
    Hello all. OK, here is my dilemma: I have been developing on a shared host at discountasp.net (IIS 6) for some time now. All was going well; however, now that the app is complete, we are moving it to its own dedicated server, which is Windows Server 2008 and IIS 7. I am currently using ASP.NET forms authentication (which again seems to work just fine on IIS 6). The issue seems to occur after I click login: it pops the "Signing In" box, and then an error arises in the JavaScript of Membership.js: "Object Does not Support Membership.js". I verified that the code was making it to:

        membership.BeginLogin(uid, pwd, rememberme);

    and was in fact passing the correct variables.

    Another odd thing I noticed when setting the forms permissions is that when I went to select Users or Roles within the IIS 7 management console, it would take forever and then time out with the following error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server.)

    I am rather weak on the hardware/configuration side of the house, so I am not really sure what the issue is; it's almost as if IIS 7 cannot see the DB. They both reside on the same server, however. If anyone could help point me in the right direction as to how to resolve this, I would be eternally grateful! Thanks in advance, Bryan


  • Bibliography behaves strangely in LyX

    - by Orjanp
    Hi! I have created a Bibliography section in my document written in LyX, which uses a book layout. For some reason the numbering started over again when I added some more entries. The new entries were made some time later than the first ones. I just went down to key-27 and hit Enter; then it started on key-1 again. Does anyone know why it behaves like this? The LyX code is below.

        \begin{thebibliography}{34}
        \bibitem{key-6}Lego mindstorms, http://mindstorms.lego.com/en-us/default.aspx
        \bibitem{key-7}C.A.R. Hoare. Communicating sequential processes. Communications of the ACM, 21(8):666-677, pages 666\textendash{}677, August 1978.
        \bibitem{key-8}C.A.R. Hoare. Communicating sequential processes. Prentice-Hall, 1985.
        \bibitem{key-9}CSPBuilder, http://code.google.com/p/cspbuilder/
        \bibitem{key-10}Rune Møllegård Friborg and Brian Vinter. CSPBuilder - CSP based Scientific Workflow Modelling, 2008.
        \bibitem{key-11}Labview, http://www.ni.com/labview
        \bibitem{key-12}Robolab, http://www.lego.com/eng/education/mindstorms/home.asp?pagename=robolab
        \bibitem{key-13}http://code.google.com/p/pycsp/
        \bibitem{key-14}Paparazzi, http://paparazzi.enac.fr
        \bibitem{key-15}Debian, http://www.debian.org
        \bibitem{key-16}Ubuntu, http://www.ubuntu.com
        \bibitem{key-17}GNU, http://www.gnu.org
        \bibitem{key-18}IVY, http://www2.tls.cena.fr/products/ivy/
        \bibitem{key-19}Tkinter, http://wiki.python.org/moin/TkInter
        \bibitem{key-20}pyGTK, http://www.pygtk.org/
        \bibitem{key-21}pyQT4, http://wiki.python.org/moin/PyQt4
        \bibitem{key-22}wxWidgets, http://www.wxwidgets.org/
        \bibitem{key-23}wxPython GUI toolkit, http://www.wxPython.org
        \bibitem{key-24}Python programming language, http://www.python.org
        \bibitem{key-25}wxGlade, http://wxglade.sourceforge.net/
        \bibitem{key-26}http://numpy.scipy.org/
        \bibitem{key-27}http://www.w3.org/XML/
        \bibitem{key-1}IVY software bus, http://www2.tls.cena.fr/products/ivy/
        \bibitem{key-2}sdas
        \bibitem{key-3}sad
        \bibitem{key-4}sad
        \bibitem{key-5}fsa
        \bibitem{key-6}sad
        \bibitem{key-7}
        \end{thebibliography}


  • Full-duplex communication over the web without Flash sockets

    - by aharon
    A web application I'm helping to develop is faced with a well-known problem: we want to be able to let users know of various events and so forth that can occur at any time, essentially at random, and update their view accordingly. Essentially, we need to allow the server to push requests to individual clients, as opposed to the client asking the server.

    I understand that WebSockets are an effort to address the problem; however, after a bit of looking around into them, I understand that a) very few web browsers currently offer native WebSocket support; b) to get around this, you either use Flash sockets or some sort of AJAX long-polling; c) a special WebSockets server must be used.

    Now, we want to offer our service without Flash. And any servers must have some sort of load-balancing capability, or at least some software that can do load balancing for them. As of 2008, everyone was saying that Comet-based solutions (e.g., Bayeux) were the way to go for this sort of situation. However, the various protocols seem to have not had much work put into them since then, which leads (finally) to the question: is Bayeux-flavored Comet still the right tool for jobs like this? If not, what is?


  • Configuring 32-Bit ASP.NET Application on a 64-Bit IIS Server

    - by Tim
    Hello, I'm trying to install a 32-bit ASP.NET application onto a 64-bit IIS server running on Windows Server 2008. This is a clean installation of the operating system with no other applications installed. As a prerequisite for our installation, we run the 32-bit version of aspnet_regiis -i. It fails with the following message:

        The error indicates that IIS is not installed on the machine. Please install IIS before using this tool.

    Additionally:

    - IIS is definitely installed.
    - The 64-bit version of aspnet_regiis runs cleanly without warnings.
    - "Enable 32-Bit Applications" is set to True in the DefaultAppPool's Advanced Settings.
    - The "IIS Metabase and IIS 6 configuration compatibility" component is installed.

    We have a test VM where this error occurs, as well as a test VM where both the 32-bit and 64-bit versions of aspnet_regiis run without errors. We've had no luck distinguishing the differences between the two test VMs. We have struggled with this issue for several days to no avail. Any suggestions would be greatly appreciated!


  • Can't open WebSphere Portal 7.0 login page after integration with a custom user registry

    - by jack_sparrow
    I am currently working on a project related to WebSphere Portal 7.0 on Windows Server 2008 R2, 64-bit. I am trying to integrate WebSphere Portal with my custom user registry. I have completed all the steps required to implement a custom user registry in Portal as given in the IBM documentation. I am adding my custom repository to the default federated repositories of Portal 7.0. I have made the required changes under the VMM Federated CUR Properties section in wkplc.properties, and I am using configengine.bat to configure Portal with the user registry.

    But even after completing all the steps, when I try to open the Portal login page through http://ip-address:port_of_portal/wps/portal, it throws an exception on the console:

        Error 500: com.ibm.wps.resolver.data.exceptions.URIProcessingIOException: EJCBD0021E: The URI dav:fs-type1/themes/PageBuilder2/theme.html and parameters [['themeURI'=, 'mime-type'= could not be processed: [EJCBD0021E: The URI dav:fs-type1/themes/PageBuilder2/theme.html and parameters [['themeURI'=, 'mime-type'= could not be processed: EJPSG0002E: Requested Member does not exist.uid=portal,o=defaultWIMFileBasedRealm/null]

    and in SystemOut.log:

        EJPSB0005E: Exception occurred during creation of the principal with Name uid=portal,o=defaultWIMFileBasedRealm and Principal Type USER caused by com.ibm.portal.puma.MemberNotFoundException: EJPSG0002E: Requested Member does not exist.uid=portal,o=defaultWIMFileBasedRealm/null

    Here, "portal" is the administrative user in my custom user registry. I am able to access WAS through /ibm/console as user "portal". Please suggest some way to handle this issue.


  • Invalid iPhone Application Binary

    - by Kristopher Johnson
    I'm trying to upload an application to the iPhone App Store, but I get this error message from iTunes Connect:

        The binary you uploaded was invalid. The signature was invalid, or it was not signed with an Apple submission certificate.

    My guess is that it is not properly signed. I have downloaded my App Store distribution certificate, but I can't figure out how to "sign" my application with it. The SDK's documentation about code signing is not very helpful. (FWIW, I can install the app on my iPhone just fine using the development provisioning profile.) However, it is possible that I screwed things up on a more basic level. Here's what I did to try to prepare it for upload:

    1. In Xcode, select the Device|Release target.
    2. Select the target and click the Info button.
    3. Change "Code Signing Identity" to "iPhone Distribution", and change "Code Signing Provisioning Profile" to my App Store distribution profile.
    4. Build.
    5. Go to the directory where the built MyApp.app bundle is, control-click and choose "Compress" to create MyApp.zip.
    6. Upload MyApp.zip to the App Store via iTunes Connect (which resulted in the above error message).

    Can anybody give me any hints?

    Edit: Found someone with the same problem. Unfortunately, he won't tell us how he fixed it.
    http://www.rhonabwy.com/wp/2008/07/18/seattlebus-diary-ongoing-update-saga/#comments
    http://www.rhonabwy.com/wp/2008/07/22/seattlebus-diary-update-is-pending-review/

    (Note: For general information on submitting iPhone applications to the App Store, see Steps to upload an iPhone application to the AppStore.)


  • ASP.NET: Custom MembershipProvider with a custom user table

    - by blahblah
    I've recently started tinkering with ASP.NET MVC, but this question should apply to classic ASP.NET as well. For what it's worth, I don't know very much about forms authentication and membership providers either.

    I'm trying to write my own MembershipProvider which will be connected to my own custom user table in my database. My user table contains all of the basic user information such as usernames, passwords, password salts, e-mail addresses and so on, but also information such as first name, last name and country of residence. As far as I understand, the standard way of doing this in ASP.NET is to create a user table without the extra information and then a "profile" table with the extra information. However, this doesn't sound very good to me, because whenever I need to access that extra information I would have to make one extra database query to get it. I read in the book "Pro ASP.NET 3.5 in C# 2008" that having a separate table for the profiles is not a very good idea if you need to access the profile table a lot and have many different pages in your website.

    Now for the problem at hand... As I said, I'm writing my own custom MembershipProvider subclass and it's going pretty well so far, but now I've come to realize that CreateUser doesn't allow me to create users in the way I'd like. The method only takes a fixed number of arguments, and first name, last name and country of residence are not part of them. So how would I create an entry for the new user in my custom table without this information at hand in CreateUser of my MembershipProvider?
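    One common way around the fixed signature, sketched below rather than taken from any book, is to give the provider an extra overload that carries the additional columns and have the standard override delegate to it with defaults; the DAL call is hypothetical:

        using System.Web.Security;

        public class CustomMembershipProvider : MembershipProvider
        {
            // Extra overload carrying the columns the standard signature lacks.
            public MembershipUser CreateUser(string username, string password, string email,
                string firstName, string lastName, string country, out MembershipCreateStatus status)
            {
                // Hypothetical DAL call inserting a single row into the custom user table:
                // UserTable.Insert(username, Hash(password), email, firstName, lastName, country);
                status = MembershipCreateStatus.Success;
                return GetUser(username, false);
            }

            // Standard override delegates, leaving the extra columns empty; callers that
            // know about the custom table use the overload above instead.
            public override MembershipUser CreateUser(string username, string password, string email,
                string passwordQuestion, string passwordAnswer, bool isApproved, object providerUserKey,
                out MembershipCreateStatus status)
            {
                return CreateUser(username, password, email, "", "", "", out status);
            }

            // ... the many remaining abstract members of MembershipProvider are omitted here.
        }

    The trade-off is that anything driving the provider purely through the base-class contract (for example the built-in CreateUserWizard) will never supply the extra fields, so those columns have to be nullable or filled in afterwards.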


  • Missing line number in stack trace even though the PDB files are included

    - by Farzad
    This is driving me nuts. I have this web service implemented with C# using VS 2008, and I publish it on IIS. I have modified the release build so the PDB files are copied along with the DLLs into the target directory on inetpub. The web.config file also has debug=true. Then I call a web service that throws an exception, and the stack trace does not contain the line numbers. I have no idea what I am missing here; any ideas?

    Additional info: If I run the web app using VS's built-in web server, it works and I get line numbers in the stack trace. But if I copy the same files (PDB and DLL) that the built-in web server is using to IIS, the line numbers are still missing from the stack trace. It seems that there is something related to IIS that ignores the PDB files!

    Update: When I publish to IIS, all the PDB files are published under the bin directory and everything looks fine. But when I go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files" under the specific directory related to my project, I can see that the assembly (.dll) files are all there, but there are no PDB files. This does not happen if I run the project using VS's built-in web server. If I copy the PDB files manually to the temp folder, I can see the line numbers. Any idea why the PDB files are not copied to the temp folder? BTW, when I attach to the worker process, it says "Symbols loaded"!
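    As a quick sanity check that symbols are actually visible to the runtime (independent of what the debugger reports), one option is to inspect a caught exception with file info requested; this is only a diagnostic sketch:

        using System;
        using System.Diagnostics;

        static class SymbolCheck
        {
            // Prints whether file/line info is available for the top frame of an exception.
            public static void Report(Exception ex)
            {
                StackTrace trace = new StackTrace(ex, true); // true = read file info from PDBs
                StackFrame frame = trace.GetFrame(0);
                Console.WriteLine("File: {0}, Line: {1}",
                    frame.GetFileName() ?? "<no pdb found>",
                    frame.GetFileLineNumber()); // 0 means no symbols were located
            }
        }

    If this prints a file and line under IIS, the PDBs are fine and the trace formatting is the problem; if it prints zeros, the shadow-copy directory really is missing the symbols.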


  • Sending mail issues, very confusing

    - by Dejan.S
    Hi, my name is what, my name is who... oops, got carried away. Now this might be a Server Fault question or a Stack Overflow question, but I will go with it here because I don't really know the answer. I've been sending a lot of mail with ASP.NET before and never had problems like this. I have set up a mailing with the following code:

        var list = new List<string> { "mail", "mail", "mail", "mail" };
        var smtp = new SmtpClient("localhost", 25);
        var plainText = txtPlain.Text;
        var htmlText = Server.HtmlDecode(FCKeditor1.Value);

        foreach (var email in list)
        {
            var message = new MailMessage()
            {
                From = new MailAddress("my server mail"),
                ReplyTo = new MailAddress("mail")
            };
            var mailMessage = Server.HtmlDecode(FCKeditor1.Value);
            message.To.Add(email);
            message.Subject = "Hi Enzorit";
            message.Body = mailMessage;
            message.IsBodyHtml = true;
            message.BodyEncoding = System.Text.Encoding.GetEncoding("iso-8859-2");

            var alternateViewHtml = AlternateView.CreateAlternateViewFromString(htmlText, null, MediaTypeNames.Text.Html);
            var alternateViewPlainText = AlternateView.CreateAlternateViewFromString(plainText, null, MediaTypeNames.Text.Plain);
            message.AlternateViews.Add(alternateViewHtml);
            message.AlternateViews.Add(alternateViewPlainText);

            smtp.Send(message);
        }

    Now the issue is that some email clients get just the plain text while some get the HTML. For example, in Hotmail on my computer I get the HTML, but on my iPhone I get the plain one. Why is that?

    And as if that wasn't enough, the mail won't deliver to some addresses, like any .pl address. Here I am thinking it might be a reverse DNS setup issue on my Windows Server 2008; plus, to some company mail servers it becomes spam. I had the same issue with Hotmail, but that was solved when I added the plain view. Has anybody had this problem before? I am very thankful for any answer I get. Thanks.
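    For context on the first symptom: with multipart/alternative content, mail clients are supposed to prefer the last alternative they can render, so the plain view should generally be added first and the HTML view last, and Body/IsBodyHtml is usually left unset when AlternateViews are used. A minimal sketch of that ordering (addresses and server are placeholders):

        using System.Net.Mail;
        using System.Net.Mime;

        class AlternativeOrderSketch
        {
            static void Main()
            {
                MailMessage message = new MailMessage("from@example.com", "to@example.com");
                message.Subject = "Hi";

                // Leave Body unset and order views least- to most-preferred:
                // plain first, HTML last, so HTML-capable clients pick the HTML part.
                message.AlternateViews.Add(AlternateView.CreateAlternateViewFromString(
                    "plain text body", null, MediaTypeNames.Text.Plain));
                message.AlternateViews.Add(AlternateView.CreateAlternateViewFromString(
                    "<b>html body</b>", null, MediaTypeNames.Text.Html));

                new SmtpClient("localhost", 25).Send(message);
            }
        }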


  • Securely erasing a file using simple methods?

    - by Jason
    Hello, I am using C# with .NET Framework 2.0, and I have a question relating to file shredding. My target operating systems are Windows 7, Windows Vista, and Windows XP, and possibly Windows Server 2003 or 2008, but I'm guessing they should behave the same as the first three.

    My goal is to securely erase a file. I don't believe using File.Delete is secure at all: I read somewhere that the operating system simply marks the raw hard-disk data for deletion when you delete a file; the data is not erased at all. That's why there exist so many working methods to recover supposedly "deleted" files. I also read that this is why it's much more useful to overwrite the file, because then the data on disk actually has to be changed. Is this true? Is this generally what's needed? If so, I believe I can simply write the file full of 1's and 0's a few times.

    I've read:

    http://www.codeproject.com/KB/files/NShred.aspx
    http://blogs.computerworld.com/node/5756
    http://blogs.computerworld.com/node/5687
    http://stackoverflow.com/questions/4147775/securely-deleting-a-file-in-c-net
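    A minimal sketch of the overwrite-then-delete idea on .NET 2.0 follows. It deliberately ignores filesystem-level complications (NTFS journaling, shadow copies, SSD wear-leveling, alternate data streams), so treat it as a starting point rather than a guarantee:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class FileShredder
        {
            // Overwrites the file's contents in place several times, then deletes it.
            // Usage: FileShredder.Shred(@"C:\temp\secret.txt", 3);
            public static void Shred(string path, int passes)
            {
                long length = new FileInfo(path).Length;
                byte[] buffer = new byte[4096];
                RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();

                using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Write,
                    FileShare.None, buffer.Length, FileOptions.WriteThrough))
                {
                    for (int pass = 0; pass < passes; pass++)
                    {
                        fs.Position = 0;
                        long written = 0;
                        while (written < length)
                        {
                            rng.GetBytes(buffer); // random data beats fixed 1s and 0s
                            int chunk = (int)Math.Min(buffer.Length, length - written);
                            fs.Write(buffer, 0, chunk);
                            written += chunk;
                        }
                        fs.Flush(); // push each pass to the OS before starting the next
                    }
                }
                File.Delete(path);
            }
        }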


  • How do I cast <T> to varbinary and still be able to perform a CONVERT on the SQL side?

    - by Biff MaGriff
    Hello, I'm writing an application that will allow a user to define custom quizzes and then allow another user to respond to the questions. Each question of the quiz has a corresponding datatype. All the responses to all the questions are stored vertically in my [Response] table. I currently use two fields to store the response:

        // Response schema
        ResponseID    int
        QuizPersonID  int
        QuestionID    int
        ChoiceID      int             // maps to Choice table, used for drop-down lists
        ChoiceValue   varbinary(MAX)  // used to store a user-entered value

    I'm using .NET 3.5, C# and SQL Server 2008. I'm thinking that I would want to store different datatypes in the same field and then, in my SQL report proc, CONVERT to the proper datatype. I'm thinking this is ideal because I only have to check one field. I'm also thinking it might be more trouble than it is worth. I think my other options are to store the data as strings in the DB (yuck), or to have a column for each datatype I might use.

    So what I would like to know is: how would I format my datatypes in C# so that they can be converted properly in SQL? What is the performance hit for converting in SQL? Should I just make a whole whack of columns for each datatype?
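    On the first question, the tricky part is byte order: SQL Server's CONVERT reads integer varbinary values big-endian, while BitConverter on x86 produces little-endian bytes. Here is a sketch of one direction, for int only and against the schema above (strings and dates each need their own byte layout, which is part of why this design gets fiddly):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class ResponseWriter
        {
            // Stores an int answer so that CONVERT(int, ChoiceValue) recovers it in the proc.
            // Assumes 'conn' is an already-open connection to the quiz database.
            public static void SaveIntResponse(SqlConnection conn, int questionId, int value)
            {
                byte[] bytes = BitConverter.GetBytes(value);
                if (BitConverter.IsLittleEndian)
                    Array.Reverse(bytes); // SQL Server expects the bytes big-endian

                using (SqlCommand cmd = new SqlCommand(
                    "INSERT INTO Response (QuestionID, ChoiceValue) VALUES (@q, @v)", conn))
                {
                    cmd.Parameters.Add("@q", SqlDbType.Int).Value = questionId;
                    cmd.Parameters.Add("@v", SqlDbType.VarBinary, 4).Value = bytes;
                    cmd.ExecuteNonQuery();
                }
            }
        }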


  • How do I do nested transactions in NHibernate?

    - by Gavin Schultz-Ohkubo
    Can I do nested transactions in NHibernate, and how do I implement them? I'm using SQL Server 2008, so support is definitely in the DBMS. I find that if I try something like this:

        using (var outerTX = UnitOfWork.Current.BeginTransaction())
        {
            using (var nestedTX = UnitOfWork.Current.BeginTransaction())
            {
                ... do stuff
                nestedTX.Commit();
            }
            outerTX.Commit();
        }

    then by the time it comes to outerTX.Commit() the transaction has become inactive, and it results in an ObjectDisposedException on the session AdoTransaction. Are we therefore supposed to create nested NHibernate sessions instead? Or is there some other class we should use to wrap around the transactions (I've heard of TransactionScope, but I'm not sure what that is)? I'm now using Ayende's UnitOfWork implementation (thanks Sneal). Forgive any naivety in this question; I'm still new to NHibernate. Thanks!

    EDIT: I've discovered that you can use TransactionScope, such as:

        using (var transactionScope = new TransactionScope())
        {
            using (var tx = UnitOfWork.Current.BeginTransaction())
            {
                ... do stuff
                tx.Commit();
            }
            using (var tx = UnitOfWork.Current.BeginTransaction())
            {
                ... do stuff
                tx.Commit();
            }
            transactionScope.Complete();
        }

    However, I'm not all that excited about this, as it locks us in to using SQL Server, and I've also found that if the database is remote then you have to worry about having MSDTC enabled... one more component to go wrong. Nested transactions are so useful and easy to do in SQL that I kind of assumed NHibernate would have some way of emulating the same...
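    For what it's worth, TransactionScope itself nests through the ambient transaction, which is the closest .NET analogue to SQL's nested BEGIN TRAN. A minimal sketch, independent of NHibernate:

        using System;
        using System.Transactions;

        class NestedScopeSketch
        {
            static void Main()
            {
                using (TransactionScope outer = new TransactionScope())
                {
                    // Required (the default) joins the ambient transaction created by
                    // the outer scope, so the two scopes commit or roll back together.
                    using (TransactionScope inner = new TransactionScope(TransactionScopeOption.Required))
                    {
                        // ... do stuff
                        inner.Complete(); // only a vote; nothing commits yet
                    }
                    outer.Complete(); // the ambient transaction commits here
                }
            }
        }

    TransactionScopeOption.RequiresNew instead of Required gives a genuinely independent inner transaction, which may be closer to what "nested" means in a given case.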


  • First-chance exception at std::set destructor

    - by bartek
    Hi, I have a strange exception in my class destructor: first-chance exception reading location 0x00000.

        class DispLst {
            // For fast instance existence test
            std::set< std::string > instances;
            [...]
        };

        DispLst::~DispLst(){
            this->clean();
            DeleteCriticalSection( &instancesGuard );
        }   // <---- here the instances destructor raises the exception

    Call stack (the template parameters were lost in the paste):

        X.exe!std::_Tree,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ,0 ::begin() Line 556 + 0xc bytes C++
        X.exe!std::_Tree,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ,0 ::_Tidy() Line 1421 + 0x64 bytes C++
        X.exe!std::_Tree,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ,0 ::~_Tree,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ,0 () Line 541 C++
        X.exe!std::set,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ::~set,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator () + 0x2b bytes C++
        X.exe!DispLst::~DispLst() Line 82 + 0xf bytes C++

    The exact place of the error in xtree:

        void _Tidy()
        {   // free all storage
            erase(begin(), end());                  // <------------------- HERE
            this->_Alptr.destroy(&_Left(_Myhead));
            this->_Alptr.destroy(&_Parent(_Myhead));
            this->_Alptr.destroy(&_Right(_Myhead));
            this->_Alnod.deallocate(_Myhead, 1);
            _Myhead = 0, _Mysize = 0;
        }

        iterator begin()
        {   // return iterator for beginning of mutable sequence
            return (_TREE_ITERATOR(_Lmost()));      // <---------------- HERE
        }

    What is going on? I'm using Visual Studio 2008.


  • Error while installing Microsoft SharePoint 2010 on Windows 7 machine

    - by Brian Roisentul
    I've installed Microsoft SharePoint 2010 on my Windows 7 64-bit machine, modifying the config.xml file to accomplish this. Once it's installed, I run the Configuration Wizard to create a new site and it throws the following exception:

        An exception of type System.IO.FileNotFoundException was thrown. Additional exception information: Could not load file or assembly 'Microsoft.IdentityModel, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

        System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.IdentityModel, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
        File name: 'Microsoft.IdentityModel, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
           at Microsoft.SharePoint.Administration.SPFarm.CurrentUserIsAdministrator(Boolean allowContentApplicationAccess)
           at Microsoft.SharePoint.Administration.SPConfigurationDatabase.Microsoft.SharePoint.Administration.ISPPersistedStoreProvider.DoesCurrentUserHaveWritePermission(SPPersistedObject persistedObject)
           at Microsoft.SharePoint.Administration.SPPersistedObject.BaseUpdate()
           at Microsoft.SharePoint.Administration.SPFarm.Update()
           at Microsoft.SharePoint.Administration.SPConfigurationDatabase.RegisterDefaultDatabaseServices(SqlConnectionStringBuilder connectionString)
           at Microsoft.SharePoint.Administration.SPConfigurationDatabase.Provision(SqlConnectionStringBuilder connectionString)
           at Microsoft.SharePoint.Administration.SPFarm.Create(SqlConnectionStringBuilder configurationDatabase, SqlConnectionStringBuilder administrationContentDatabase, IdentityType identityType, String farmUser, SecureString farmPassword, SecureString masterPassphrase)
           at Microsoft.SharePoint.PostSetupConfiguration.ConfigurationDatabaseTask.CreateOrConnectConfigDb()
           at Microsoft.SharePoint.PostSetupConfiguration.ConfigurationDatabaseTask.Run()
           at Microsoft.SharePoint.PostSetupConfiguration.TaskThread.ExecuteTask()

    Note: I don't have the 64-bit Microsoft SQL Server 2008 installed, but SharePoint 2010 seems to have installed its necessary components.


  • SQL Server Full-Text Search with CONTAINSTABLE is very slow when used in a JOIN

    - by Bob
    Hello, I am using SQL 2008 full-text search and I am having serious issues with performance depending on how I use CONTAINS or CONTAINSTABLE. Here are samples (table1 has about 5000 records and there is a covering index on table1 which has all the fields in the where clause; I tried to simplify the statements, so forgive me if there are syntax issues):

    Scenario 1:

        select * from table1 as t1
        where t1.field1 = 90
          and t1.field2 = 'something'
          and exists (select top 1 * from containstable(table1, *, 'something') as t2
                      where t2.[key] = t1.id)

    results: 10 seconds (very slow)

    Scenario 2:

        select * from table1 as t1
        join containstable(table1, *, 'something') as t2 on t2.[key] = t1.id
        where t1.field1 = 90
          and t1.field2 = 'something'

    results: 10 seconds (very slow)

    Scenario 3:

        declare @tbl table(id uniqueidentifier primary key)

        insert into @tbl
        select [key] from containstable(table1, *, 'something')

        select * from table1 as t1
        where t1.field1 = 90
          and t1.field2 = 'something'
          and exists (select id from @tbl as tbl where id = t1.id)

    results: fraction of a second (super fast)

    Bottom line: it seems that if I use CONTAINSTABLE in any kind of join or where-clause condition of a select statement that also has other conditions, the performance is really bad. In addition, if you look at the profiler, the number of reads from the database goes through the roof. But if I first do the full-text search and put the results in a table variable and use that variable, everything goes super fast and the number of reads is also much lower. In the "bad" scenarios it seems to somehow get stuck in a loop which causes it to read many times from the database, but of course I don't understand why.

    Now the questions: first of all, why is this happening? And second, how scalable are table variables? What if the search results in tens of thousands of records? Is it still going to be fast? Any ideas? Thanks


  • Impact of ordering of correlated subqueries within a projection

    - by Michael Petito
    I'm noticing something a bit unexpected with how SQL Server (SQL Server 2008 in this case) treats correlated subqueries within a select statement. My assumption was that a query plan should not be affected by the mere order in which subqueries (or columns, for that matter) are written within the projection clause of the select statement. However, this does not appear to be the case. Consider the following two queries, which are identical except for the ordering of the subqueries within the CTE:

        --query 1: subquery for Color is second
        WITH vw AS
        (
            SELECT p.[ID],
                   (SELECT TOP(1) [FirstName] FROM [Preference]
                    WHERE p.ID = ID AND [FirstName] IS NOT NULL
                    ORDER BY [LastModified] DESC) [FirstName],
                   (SELECT TOP(1) [Color] FROM [Preference]
                    WHERE p.ID = ID AND [Color] IS NOT NULL
                    ORDER BY [LastModified] DESC) [Color]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

        --query 2: subquery for Color is first
        WITH vw AS
        (
            SELECT p.[ID],
                   (SELECT TOP(1) [Color] FROM [Preference]
                    WHERE p.ID = ID AND [Color] IS NOT NULL
                    ORDER BY [LastModified] DESC) [Color],
                   (SELECT TOP(1) [FirstName] FROM [Preference]
                    WHERE p.ID = ID AND [FirstName] IS NOT NULL
                    ORDER BY [LastModified] DESC) [FirstName]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

    If you look at the two query plans, you'll see that an outer join is used for each subquery and that the order of the joins is the same as the order in which the subqueries are written. There is a filter applied to the result of the outer join for Color, to filter out rows where the color is not 'Gray'. (It's odd to me that SQL would use an outer join for the Color subquery, since I have a non-null constraint on the result of the Color subquery, but OK.) Most of the rows are removed by the Color filter. The result is that query 2 is significantly cheaper than query 1, because fewer rows are involved with the second join.

    All reasons for constructing such a statement aside, is this expected behavior? Shouldn't SQL Server opt to move the filter as early as possible in the query plan, regardless of the order in which the subqueries are written?


  • Are there size limitations to the .NET Assembly format?

    - by McKAMEY
    We ran into an interesting issue that I've not experienced before. We have a large-scale production ASP.NET 3.5 SP1 Web Application Project in Visual Studio 2008 SP1 which gets compiled and deployed using a Web Deployment Project. Everything has worked fine for the last year, until after a check-in yesterday the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special, and the errors are coming from areas of the app that were not even changed.

    Using Reflector we inspected the offending methods, to find that there were garbage strings in the code (which .NET Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines, so it does not appear to be hardware related. Further inspection showed that those garbage strings did not exist in the assemblies used as inputs to aspnet_merge.exe during deployment.

    aspnet_merge.exe / Web Deployment Project "Output Assemblies" options:

    - Merge all outputs to a single assembly
    - Merge each individual folder output to its own assembly
    - Merge all pages and control outputs to a single assembly
    - Create a separate assembly for each page and control output

    In the web deployment project properties, if we set the merge options to the first option ("Merge all outputs to a single assembly") we experience the issue, yet all of the other options work perfectly!

    My question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any assembly format / aspnet_merge.exe gurus know about any such limitations. Seems to me like a 25 MB assembly, while big, isn't outrageous.


  • How to create a high quality icon for my Windows application?

    - by Patrick Klug
    If you are running Windows with a higher DPI setting, you will notice that most application icons on the desktop look terrible. Even high-profile application icons such as Google Chrome's look terrible, while for example the Firefox, Skype and MS Office icons look sharp. I suspect that most icons look blurry because a lower-resolution icon is scaled up rather than a higher-resolution icon being used.

    I want to give my application a high-quality icon and can't seem to convince Windows to use the higher-resolution icon. I have created a multi-resolution icon with the free icon editor IcoFX. The icon is provided in 16x16, 24x24, 32x32, 48x48, 128x128 and 256x256 (!) (all in 32-bit including alpha channel), yet Windows seems to use the 128x128 version of the icon on the desktop and scale it up, which looks terrible. (I am using Windows 7, 64-bit; the icon is placed by means of setting up a shortcut in the MSI, created via a Visual Studio 2008 Setup Project, and pointing it to the .ico file that contains the multi-resolution icon.)

    I have tried removing the 128x128 icon, but to no avail. Interestingly, in Windows Explorer the icon looks great, even when using the Extra Large Icons setting. How can I create a high-quality desktop icon that looks great at higher DPI settings on Windows?


  • Problem with Unit testing of ASP.NET project (NullReferenceException when running the test)

    - by Alex
    Hi, I'm trying to create a bunch of MS Visual Studio unit tests for my n-tiered web app, but for some reason I can't run those tests and I get the following error: "Object reference not set to an instance of an object."

    What I'm trying to do is test my data access layer, where I use a LINQ data context class to execute a certain function and return a result. However, during debugging I found out that all the tests fail as soon as they get to the LINQ data context class, and it has something to do with the connection string, but I can't figure out what the problem is. The debugging of tests fails here (the second line):

        public EICDataClassesDataContext() :
            base(global::System.Configuration.ConfigurationManager.ConnectionStrings["EICDatabaseConnectionString"].ConnectionString, mappingSource)
        {
            OnCreated();
        }

    And my test is as follows:

        [TestMethod()]
        public void OnGetCustomerIDTest()
        {
            FrontLineStaffDataAccess target = new FrontLineStaffDataAccess(); // TODO: Initialize to an appropriate value
            string regNo = "jonh"; // TODO: Initialize to an appropriate value
            int expected = 10; // TODO: Initialize to an appropriate value
            int actual;
            actual = target.OnGetCustomerID(regNo);
            Assert.AreEqual(expected, actual);
        }

    The method which I call from the DAL is:

        public int OnGetCustomerID(string regNo)
        {
            using (LINQDataAccess.EICDataClassesDataContext dataContext = new LINQDataAccess.EICDataClassesDataContext())
            {
                IEnumerable<LINQDataAccess.GetCustomerIDResult> sProcCustomerIDResult = dataContext.GetCustomerID(regNo);
                int customerID = sProcCustomerIDResult.First().CustomerID;
                return customerID;
            }
        }

    So basically everything fails as soon as it reaches the first line of the DAL method, when it tries to instantiate the LINQ data access class... I've spent around 10 hours trying to troubleshoot the problem but no result... I would really appreciate any help!

    UPDATE: Finally I've fixed this!!!! :) I don't know why, but for some reason in the app.config file the connection to my database was as follows:

        AttachDbFilename=|DataDirectory|\EICDatabase.MDF

    So what I did is I just changed the path, and instead of |DataDirectory| I put the actual path where my MDF file sits, i.e.

        C:\Users\1\Documents\Visual Studio 2008\Projects\EICWebSystem\EICWebSystem\App_Data\EICDatabase.mdf

    After I had done that, it worked! But it's still a bit unclear what the problem was... probably an incorrect path to the database? My web.config of the ASP.NET project contains the |DataDirectory|\EICDatabase.MDF path, though.
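    On the |DataDirectory| point: in a test host, that substitution resolves against the test run's app domain rather than the web project's App_Data folder, which is consistent with the symptom above. A hedged alternative to hard-coding the path is to set the data directory per app domain in MSTest assembly setup; the relative path below is an assumption about the solution layout:

        using System;
        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class TestSetup
        {
            [AssemblyInitialize]
            public static void Init(TestContext context)
            {
                // Point |DataDirectory| at the web project's App_Data for this test run,
                // so connection strings keep working unchanged in both hosts.
                string appData = Path.GetFullPath(@"..\..\..\EICWebSystem\App_Data");
                AppDomain.CurrentDomain.SetData("DataDirectory", appData);
            }
        }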

