Search Results

Search found 4650 results on 186 pages for 'multi lingual'.

  • 4.0/WCF: Best approach for bi-directional message bus?

    - by TomTom
    Just a technology update, now that .NET 4.0 is out. I write an application that communicates to the server through what is basically a message bus (instead of method calls). This is based on the internal architecture of the application (which is multi-threaded, passing the messages around). There are a limited number of messages to go from the client to the server, quite a lot more from the server to the client. Most of those can be handled via a separate specialized mechanism, but at the end we talk of possibly 10-100 small messages per second going from the server to the client. The client is supposed to operate under "internet conditions". This means possibly home end users behind standard NAT devices (i.e. typical DSL routers) - a firewalled, secure and thus "open" network cannot be assumed. I want to have as little latency and as little overhead for the communication as possible. What is the technologically best way to handle the message bus callback? I have no problem regularly calling the server for message delivery if something needs to be sent... ...but what are my options to handle the messages from the server to the client? How does WsDualHttp work, especially under a NAT scenario? Just as a note: polling is most likely out - the main problem here is that I would have a significant overhead OR a significant delay, and neither is really wanted. Technically I would love some sort of streaming approach, where the server can write messages to a stream as it generates them and they get sent to the client as they come. Not sure this is doable with WCF, though (if not, I may actually decide to handle the whole message part outside of WCF and just do control / login / setup / destruction via WCF).
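    (A hedged sketch of the duplex option, since the question is about pushing messages from server to client: WSDualHttpBinding opens a second, inbound HTTP channel back to the client, which generally does not survive NAT/DSL routers without port forwarding, whereas a duplex contract over NetTcpBinding pushes callbacks down the single client-initiated TCP connection and therefore usually works from behind NAT. The contract and type names below are made up for illustration, not taken from the question.)

      using System.Runtime.Serialization;
      using System.ServiceModel;

      [DataContract]
      public class BusMessage
      {
          [DataMember] public string Topic { get; set; }
          [DataMember] public string Payload { get; set; }
      }

      // Duplex contract: the client calls Send, the server pushes via the callback.
      [ServiceContract(CallbackContract = typeof(IMessageBusCallback))]
      public interface IMessageBus
      {
          [OperationContract(IsOneWay = true)]
          void Send(BusMessage message);       // client -> server
      }

      public interface IMessageBusCallback
      {
          [OperationContract(IsOneWay = true)]
          void Deliver(BusMessage message);    // server -> client, pushed as messages are generated
      }

      // Server side, inside a service operation: capture the callback channel once,
      // then call Deliver(...) on it whenever a new message is produced.
      // var callback = OperationContext.Current.GetCallbackChannel<IMessageBusCallback>();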

    Read the article

  • Problem Linking Boost Filesystem Library in Microsoft Visual C++

    - by Scott
    Hello. I am having trouble getting my project to link to the Boost (version 1.37.0) Filesystem lib file in Microsoft Visual C++ 2008 Express Edition. The Filesystem library is not a header-only library. I have been following the Getting Started on Windows guide posted on the official Boost web page. Here are the steps I have taken: I used bjam to build the complete set of lib files using: bjam --build-dir="C:\Program Files\boost\build-boost" --toolset=msvc --build-type=complete I copied the /libs directory (located in C:\Program Files\boost\build-boost\boost\bin.v2) to C:\Program Files\boost\boost_1_37_0\libs. In Visual C++, under Project Properties > Additional Library Directories I added these paths: C:\Program Files\boost\boost_1_37_0\libs C:\Program Files\boost\boost_1_37_0\libs\filesystem\build\msvc-9.0express\debug\link-static\threading-multi I added the second one out of desperation. It is the exact directory where libboost_system-vc90-mt-gd-1_37.lib resides. In Configuration Properties > C/C++ > General > Additional Include Directories I added the following path: C:\Program Files\boost\boost_1_37_0 Then, to put the icing on the cake, under Tools > Options > VC++ Directories > Library files, I added the same directories mentioned in step 3. Despite all this, when I build my project I get the following error: fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-gd-1_37.lib' Additionally, here is the code that I am attempting to compile, as well as a screen shot of the aforementioned directory where the (assumedly correct) lib file resides: #include "boost/filesystem.hpp" // includes all needed Boost.Filesystem declarations #include <iostream> // for std::cout using namespace boost::filesystem; // for ease of tutorial presentation; // a namespace alias is preferred practice in real code using namespace std; int main() { cout << "Hello, world!" << endl; return 0; } Can anyone help me out? Let me know if you need to know anything else. As always, thanks in advance.

    Read the article

  • Can I spread out a long running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus] I have a stored procedure in SQL Server that gets and decrypts a block of data. ( Credit cards in this case. ) Most of the time, the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. ( Well, 59377ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load ) When I watch the process, I see that SQL is only using a single proc to perform the whole process, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half, ( top 50%, bottom 50% ), and spread the load, as a gross hack? ( just spit-balling here ) My stored proc: USE [Commerce] GO /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId] @companyId UNIQUEIDENTIFIER, @DecryptionKey NVARCHAR (MAX) AS SET NoCount ON DECLARE @cardId uniqueidentifier DECLARE @tmpdecryptedCardData VarChar(MAX); DECLARE @decryptedCardData VarChar(MAX); DECLARE @tmpTable as Table ( CardId uniqueidentifier, DecryptedCard NVarChar(Max) ) DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR Select cardId from CreditCards where companyId = @companyId and Active=1 order by addedBy desc --2 OPEN creditCards --3 FETCH creditCards INTO @cardId -- prime the cursor WHILE @@Fetch_Status = 0 BEGIN --OPEN creditCards DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey)) FROM CreditCardData where cardid = @cardId order by valueOrder OPEN creditCardData FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor WHILE @@Fetch_Status = 0 BEGIN print 'CreditCardData' print @tmpdecryptedCardData set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData print '@decryptedCardData' print @decryptedCardData; FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next END CLOSE creditCardData DEALLOCATE creditCardData insert into @tmpTable (CardId, DecryptedCard) values ( @cardId, @decryptedCardData ) set @decryptedCardData = '' FETCH NEXT FROM creditCards INTO @cardId -- fetch next END select CardId, DecryptedCard FROM @tmpTable CLOSE creditCards DEALLOCATE creditCards
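    (Purely to sketch the "break the calls in half" idea floated above, not the procedure's real interface: if the proc were given a hypothetical extra @Partition parameter that restricts which cards it processes, the application could invoke it on two connections at once and let SQL Server schedule the two batches on different cores. The method name and the @Partition parameter below are assumptions for illustration; the two result columns match the proc's final SELECT of CardId and DecryptedCard.)

      using System;
      using System.Collections.Concurrent;
      using System.Collections.Generic;
      using System.Data;
      using System.Data.SqlClient;
      using System.Threading.Tasks;

      static List<KeyValuePair<Guid, string>> GetCardsInParallel(
          string connectionString, Guid companyId, string decryptionKey)
      {
          var results = new ConcurrentBag<KeyValuePair<Guid, string>>();
          // Two partitions -> two connections -> two schedulers on the server.
          Parallel.ForEach(new[] { 0, 1 }, partition =>
          {
              using (var conn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand("dbo.GetAllCreditCardsByCustomerId", conn))
              {
                  cmd.CommandType = CommandType.StoredProcedure;
                  cmd.Parameters.AddWithValue("@companyId", companyId);
                  cmd.Parameters.AddWithValue("@DecryptionKey", decryptionKey);
                  cmd.Parameters.AddWithValue("@Partition", partition); // hypothetical extra parameter
                  conn.Open();
                  using (var reader = cmd.ExecuteReader())
                      while (reader.Read())
                          results.Add(new KeyValuePair<Guid, string>(
                              reader.GetGuid(0), reader.GetString(1)));
              }
          });
          return new List<KeyValuePair<Guid, string>>(results);
      }

    Whether this helps at all depends on whether the DecryptByCert work really is CPU-bound per row; a set-based rewrite of the nested cursors would be worth trying first.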

    Read the article

  • Compiling and linking libcurl to create a standalone DLL

    - by Haraldo
    Hi, I've managed to compile a DLL with the necessary linked libraries (*.lib) and with CURL_STATICLIB set in the preprocessor section, among other settings. I'm using the "libcurl-7.19.3-win32-ssl-msvc.zip" package and compiling with VS 2008 Express. This has been the first version I managed to get compiled properly with no link issues etc. The problem I have now is that my DLL needs libcurl.dll to function, and this is not OK. My DLL needs to be independent. I have no idea how to implement this. I've taken all day just to get what I've got compiled. I've got the runtime library set to Multi-threaded DLL (debug/release) respectively under C/C++ > Code Generation. I've a number of preprocessor definitions set - CURL_STATICLIB being one of them. Configuration Type is set to Dynamic Library. Use of MFC is set to Use MFC in a static library. Additional Library Directories is set to the lib folders (debug/release) respectively. I've noticed there is a curllib_static.lib file which I've tried instead of curllib.lib as an additional dependency, but it only compiles with the latter. This is driving me nuts! So I guess I need some guidance as to how to make my DLL completely static so it doesn't have any dependencies. I notice my DLL is currently dependent on: CURLLIB.DLL MSVCR90D.DLL As I'm pretty new to C++ it could be a setting I'm missing in VS 2008, but I'm not sure. One person said I should be using a static library with *.a files (libcurl.a) etc., but when I do this I get link errors which I haven't been able to resolve. Any guidance here would be much appreciated.

    Read the article

  • Database sharing/versioning

    - by DarkJaff
    Hi everyone, I have a question but I'm not sure of the word to use. My problem: I have an application using a database to store information. The database can be in Access (local) or on a server (SQL Server or Oracle). We support these 3 kinds of databases. We want to give the user the possibility to do what I think we can call versioning. Let me explain: we have a database 1. This is the master. We want to be able to create a database 2 that will be the same thing as database 1, but we can give it to someone else. They each work on their own side, adding, modifying and deleting records in this very complex database. After that, we want database 1 to include the changes from database 2, but with the possibility to dismiss some of the changes. For your information, our application is already multi-user, so why don't we just use this multi-user support and forget about versioning? It's because sometimes we need to give a copy of the database to another company on another site and they can't connect to our server. They work on their side and then we want to merge. Is there anyone here with experience with this type of requirement? We have a lot of ideas but most of them require a LOT of work, massive modification to the database or to the existing queries. This is a 2-million-line and growing C++ app, so rewriting it is not possible! Thanks for any ideas that you may give us! J-F

    Read the article

  • Reformat SQLGeography polygons to JSON

    - by James
    I am building a web service that serves geographic boundary data in JSON format. The geographic data is stored in an SQL Server 2008 R2 database using the geography type in a table. I use the [ColumnName].ToString() method to return the polygon data as text. Example output: POLYGON ((-6.1646509904325884 56.435153006374627, ... -6.1606079906751 56.4338050060666)) MULTIPOLYGON (((-6.1646509904325884 56.435153006374627 0 0, ... -6.1606079906751 56.4338050060666 0 0))) Geographic definitions can take the form of either an array of lat/long pairs defining a polygon or, in the case of multiple definitions, an array of polygons (multipolygon). I have the following regex that converts the output to JSON objects contained in multi-dimensional arrays depending on the output. Regex latlngMatch = new Regex(@"(-?[0-9]{1}\.\d*)\s(\d{2}.\d*)(?:\s0\s0,?)?", RegexOptions.Compiled); private string ConvertPolysToJson(string polysIn) { return this.latlngMatch.Replace(polysIn.Remove(0, polysIn.IndexOf("(")) // remove POLYGON or MULTIPOLYGON .Replace("(", "[") // convert to JSON array syntax .Replace(")", "]"), // same as above "{lng:$1,lat:$2},"); // reformat lat/lng pairs to JSON objects } This is actually working pretty well and converts the DB output to JSON on the fly in response to an operation call. However, I am no regex master and the calls to String.Replace() also seem inefficient to me. Does anyone have any suggestions/comments about the performance of this?
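    (One possible direction, sketched rather than tested against the service above: because the WKT grammar here is so small, a single pass with a StringBuilder can do the bracket conversion and the pair rewriting in one go, avoiding the chained String.Replace() calls and the regex. WktToJson is a made-up name, and the code assumes exactly the POLYGON/MULTIPOLYGON shapes shown, with an optional trailing "0 0" per point.)

      using System;
      using System.Text;

      static string WktToJson(string wkt)
      {
          var sb = new StringBuilder(wkt.Length);
          for (int i = wkt.IndexOf('('); i < wkt.Length; i++)   // skip the POLYGON/MULTIPOLYGON keyword
          {
              char c = wkt[i];
              if (c == '(') sb.Append('[');
              else if (c == ')') sb.Append(']');
              else if (c == ',') sb.Append(',');
              else if (c == '-' || char.IsDigit(c))
              {
                  // read one "lng lat [0 0]" group up to the next ',' or ')'
                  int end = i;
                  while (end < wkt.Length && wkt[end] != ',' && wkt[end] != ')') end++;
                  string[] parts = wkt.Substring(i, end - i)
                                      .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                  sb.Append("{lng:").Append(parts[0]).Append(",lat:").Append(parts[1]).Append('}');
                  i = end - 1;                                  // the loop increment steps past the group
              }
              // bare spaces between tokens are simply dropped
          }
          return sb.ToString();
      }

    Whether this actually beats the compiled regex would need measuring against real column values; the main gain is that nothing is scanned more than once.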

    Read the article

  • Rolling Back a Transaction with MySQL Connector in VB.net

    - by Jonathan
    Hey all- I have one multi-row INSERT statement (300 or so sets of values) that I would like to commit to the MySQL database in an all-or-nothing fashion. insert into table VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9); In some cases, a set of values in the command will not meet the criteria of the table (duplicate key, for example). When that happens I do not want any of the previous sets added to the database. I've implemented this with the following code; however, my rollback command doesn't appear to be making a difference. I've used this documentation: http://dev.mysql.com/doc/refman/5.0/es/connector-net-examples-mysqltransaction.html Dim transaction As MySqlTransaction = sqlConnection.BeginTransaction() sqlCommand = New MySqlCommand(insertStr, sqlConnection, transaction) Try sqlCommand.ExecuteNonQuery() Catch ex As Exception writeToLog("EXCEPTION: " & ex.Message & vbNewLine) writeToLog("Could not execute " & sqlCmd & vbNewLine) Try transaction.Rollback() writeToLog("All statements were rolled back." & vbNewLine) Return False Catch rollbackEx As Exception writeToLog("EXCEPTION: " & rollbackEx.Message & vbNewLine) writeToLog("All statements were not rolled back." & vbNewLine) Return False End Try End Try transaction.Commit() I get the DUPLICATE KEY exception thrown, no Rollback exception thrown, and every set of values up to the duplicate key committed to the database. What am I doing wrong? Thanks- Jonathan

    Read the article

  • Where and how to start on C# and .Net Framework ?

    - by Rachel
    Currently, I have been working as a PHP developer for approximately 1 year now and I want to learn about C# and the .NET Framework. I do not have any experience with the .NET Framework and C#, and there is also no firm basis as to why I am going for C# and the .NET Framework vs Java or any other programming language; this decision is merely from a career and job-opportunities point of view. So my question is about: Is my decision wise to go for the C# and .NET Framework route after working for some time as a PHP Developer? What are the good resources which I can refer to and learn from to get knowledge on C# and the .NET Framework? How should I go about learning C# and the .NET Framework? What all technologies should I be learning OR have experience with to be considered a C#/.NET Developer? I am mentioning some technologies; please add or suggest one if I am missing any. Technologies: C# - THE LANGUAGE, GUI APPLICATION DEVELOPMENT, WINDOWS CONTROL LIBRARY, DELEGATES, DATA ACCESS WITH ADO.NET, MULTI THREADING, ASSEMBLIES, WINDOWS SERVICES; VB - INTRODUCTION TO VISUAL STUDIO .NET, WINDOWS CONTROL LIBRARY, DATA ACCESS WITH ADO.NET; ASP.NET - WEB TECHNOLOGIES, CONTROLS, VALIDATION CONTROL, STATE MANAGEMENT, CACHING, ASP.NET CONFIGURATION, ADO.NET, ASP.NET TRACING & SECURITY IN ASP.NET, XML PROGRAMMING, WEB SERVICES, CRYSTAL REPORTS, SSRS (SQL Server Reporting Services), MS-Reports; LINQ - .NET Language-Integrated Query, LINQ to SQL: SQL Integration; WCF - Windows Communication Foundation: What Is Windows Communication Foundation?, Fundamental Windows Communication Foundation Concepts, Windows Communication Foundation Architecture; WPF - Windows Presentation Foundation: Getting Started (WPF), Application Development, WPF Fundamentals. What are your thoughts and suggestions on this and, from a job and market perspective, what areas of C#/.NET development should I put my focus on? I know this is a very subjective and long question but advice would be highly appreciated. Thanks.

    Read the article

  • Request for the permission of type 'System.Web.AspNetHostingPermission' when compiling web site

    - by ahsteele
    I have been using Windows 7 for a while but have not had to work with a particular legacy intranet application since my upgrade. Unfortunately, this application is set up as an ASP.NET Website project hosted on a remote server. When I have the website open in Visual Studio 2008 and try to debug it: Request for the permission of type 'System.Web.AspNetHostingPermission' failed To resolve this issue on Windows Vista machines I would change the machine's .NET Security Configuration to trust the local intranet. I believe this configuration utility relied upon mscorcfg.msc, which from some cursory research appears to be a part of the .NET 2.0 SDK. I have tried to follow the instructions from this Microsoft Support article, running the command below, to no avail. Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on Presently, I have the following .NET and ASP.NET components installed on my machine: Microsoft .NET Compact Framework 2.0 SP2 Microsoft .NET Compact Framework 3.5 Microsoft .NET Framework 4 Client Profile Microsoft .NET Framework 4 Extended Microsoft .NET Framework 4 Multi-Targeting Pack Microsoft ASP.NET MVC 1.0 Microsoft ASP.NET MVC 2 Microsoft ASP.NET MVC 2 - Visual Studio 2008 Tools Microsoft ASP.NET MVC 2 - Visual Studio 2010 Tools Do I need to install the .NET 2.0 SDK? Am I issuing the caspol command incorrectly? Is there something else that I am missing?

    Read the article

  • Would it be possible to have a UTF-8-like encoding limited to 3 bytes per character?

    - by dan04
    UTF-8 requires 4 bytes to represent characters outside the BMP. That's not bad; it's no worse than UTF-16 or UTF-32. But it's not optimal (in terms of storage space). There are 13 byte values (C0-C1 and F5-FF) that are never used. And there are multi-byte sequences that are never used, such as the ones corresponding to "overlong" encodings. If these had been available to encode characters, then more of them could have been represented by 2-byte or 3-byte sequences (of course, at the expense of making the implementation more complex). Would it be possible to represent all 1,114,112 Unicode code points by a UTF-8-like encoding with at most 3 bytes per character? If not, what is the maximum number of characters such an encoding could represent? By "UTF-8-like", I mean, at minimum: The bytes 0x00-0x7F are reserved for ASCII characters. Byte-oriented find / index functions work correctly. You can't find a false positive by starting in the middle of a character like you can in Shift-JIS.
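    (A back-of-the-envelope count, under the assumption - implied by the "no false positive mid-character" requirement, as in UTF-8 itself - that continuation bytes and lead bytes must be disjoint sets: the 128 byte values above 0x7F get split into 2-byte leads, 3-byte leads and continuation bytes, so the reachable code points are 128 + leads2·cont + leads3·cont². A small brute force over the splits puts the ceiling at roughly 310,000, well short of 1,114,112, so under that assumption the answer to the first question would be no.)

      using System;

      class MaxThreeByteEncoding
      {
          static void Main()
          {
              long best = 0;
              for (int cont = 0; cont <= 128; cont++)          // continuation byte values
                  for (int lead2 = 0; lead2 <= 128 - cont; lead2++)
                  {
                      int lead3 = 128 - cont - lead2;          // everything else leads a 3-byte sequence
                      long total = 128                         // single-byte ASCII
                                 + (long)lead2 * cont          // 2-byte sequences
                                 + (long)lead3 * cont * cont;  // 3-byte sequences
                      if (total > best) best = total;
                  }
              Console.WriteLine(best);                         // 310803 on this model
          }
      }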

    Read the article

  • MySql scoping problem with correlated subqueries

    - by Rolf
    Hi, I have this MySQL query; it works: SELECT nom ,prenom ,(SELECT GROUP_CONCAT(category_en) FROM (SELECT DISTINCT category_en FROM categories c WHERE id IN (SELECT DISTINCT category_id FROM m3allems_to_categories m2c WHERE m3allem_id = 37) ) cS ) categories ,(SELECT GROUP_CONCAT(area_en) FROM (SELECT DISTINCT area_en FROM areas c WHERE id IN (SELECT DISTINCT area_id FROM m3allems_to_areas m2a WHERE m3allem_id = 37) ) aSq ) areas FROM m3allems m WHERE m.id = 37 The result is: nom prenom categories areas Man Multi Carpentry,Paint,Walls Beirut,Baalbak,Saida It works correctly, but only when I hardcode the id that I want (37) into the query. I want it to work for all entries in the m3allem table, so I try this: SELECT nom ,prenom ,(SELECT GROUP_CONCAT(category_en) FROM (SELECT DISTINCT category_en FROM categories c WHERE id IN (SELECT DISTINCT category_id FROM m3allems_to_categories m2c WHERE m3allem_id = m.id) ) cS ) categories ,(SELECT GROUP_CONCAT(area_en) FROM (SELECT DISTINCT area_en FROM areas c WHERE id IN (SELECT DISTINCT area_id FROM m3allems_to_areas m2a WHERE m3allem_id = m.id) ) aSq ) areas FROM m3allems m And I get an error: Unknown column 'm.id' in 'where clause' Why? From the MySql manual: 13.2.8.7. Correlated Subqueries [...] Scoping rule: MySQL evaluates from inside to outside. So... does this not work when the subquery is in a SELECT section? I did not read anything about that. Does anyone know? What should I do? It took me a long time to build this query... I know it's a monster query but it gets what I want in a single query, and I am so close to getting it to work! Can anyone help?

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure!! I was wondering if there was a definitive list somewhere regarding what is and is not supported by SQL Azure in regard to SQL Server 2008? I have had a look through Google but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html Common Language Runtime (CLR) Database file placement Database mirroring Distributed queries Distributed transactions Filegroup management Global temporary tables Spatial data and indexes SQL Server configuration options SQL Server Service Broker System tables Trace Flags which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure: XML types (the MSDN page does mention large custom types - I guess it may include this?? even if the data schema is really small?) Multi-part views I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • Linking problems using libcurl with Visual C++ 2005: "unresolved external symbol __imp__curl_easy_setopt"

    - by user88595
    Hi, I am planning to use libcurl in my project. I had downloaded the library source, built and integrated it in a small POC application. I am able to build and run the application without any issues with the generated libcurl.dll and libcurl_imp.lib files. Now when I integrate the same library in my project I am getting linker errors. 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_setopt 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_perform 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_cleanup 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_global_init 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_init I have researched and tried all manner of workarounds like adding CURL_STATICLIB definitions, additional libraries, changing to /MT, even copying the libs to the release directory, but nothing seems to work. As far as I can see the only difference between approach #1 and #2 in my steps is that #1 is a console application using the libcurl.dll while in my main project this is another DLL which is trying to link to libcurl.dll. Would that necessitate any change in approach? Can I use the same generated multi-threaded DLL (/MD) file for both (tried /MT also with no success)? Any other ideas? Following are the linker options. -------------------------------------------------Working------------------------------------------------- /OUT:"C:\SampleFTP\Release\SampleFTP.exe" /INCREMENTAL:NO /NOLOGO /LIBPATH:"C:\SampleFTP\SampleFTP\Release" /MANIFEST /MANIFESTFILE:"Release\SampleFTP.exe.intermediate.manifest" /DEBUG /PDB:"c:\SampleFTP\release\SampleFTP.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /LTCG /MACHINE:X86 /ERRORREPORT:PROMPT libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib -------------------------------------------------Working------------------------------------------------- ----------------------------------------------NotWorking------------------------------------------------- /OUT:".......\nt\Win32\Release/foo__tests.dll" /INCREMENTAL:NO /NOLOGO /LIBPATH:"C:\FullLibPath\libcurl_libs" /LIBPATH:"......\nt\Win32\Release" /DLL /MANIFEST /MANIFESTFILE:".\foo_tests\Win32\Release\foo_tests.dll.intermediate.manifest" /DEBUG /PDB:".......\nt\Win32\Release/foo_tests.pdb" /OPT:REF /OPT:ICF /LTCG /IMPLIB:".......\nt\Win32\Release/foo_tests.lib" /MACHINE:X86 /ERRORREPORT:PROMPT odbc32.lib odbccp32.lib util_process.lib wsock32.lib Version.lib libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib "......\nt\win32\release\otherlib1.lib" "......\nt\win32\release\otherlib2.lib" ----------------------------------------------NotWorking-------------------------------------------------

    Read the article

  • What's the compelling reason to upgrade to Visual Studio 2010 from VS2008?

    - by Cheeso
    Are there new features in Visual Studio 2010 that are must-haves? If so, which ones? For me, the big draws for VS2008 as compared to VS2005 were LINQ, .NET Framework multitargeting, WCF (REST + Syndication), and general devenv.exe reliability. Granted, some of these features are framework things, and not tool things. For the purposes of this discussion, I'm willing to combine them into one bucket. What is the list of must-have features for VS2010 versus VS2008? Are there any? I am particularly interested in C#. Update: I know how to google, so I can get the official list from Microsoft. I guess what I really wanted was the assessment from people using it as to which things are really notable. Microsoft went on for 3 pages about 2008/3.5 features, and many people sort of boiled it down to LINQ and a few other things. What is that short list for VS2010? Summary so far, what people think is cool or compelling: Visual Studio engine: multi-monitor support; new extensibility model based on WPF, prettier and more usable; new TFS stuff, incl. automated test tools; parallel debugging. .NET Framework: parallel extensions for .NET. C# 4.0: generic variance; optional and named params; easier interop with non-managed environments, like COM or Javascript. VB 10.0: collection and array literals / initializers; automatic properties; anonymous methods / statement lambdas. I read up on these at Zander's blog. He described these and other features. Nobody on this list said anything about: Visual Studio engine: F# support; Javascript code-completion; JQuery is now included; UML; better Sharepoint capabilities; C++ moves to msbuild project files.
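    (To make the C# 4.0 items in that summary concrete - a tiny standalone snippet, not taken from any of the answers, showing optional/named parameters and the new generic covariance:)

      using System;
      using System.Collections.Generic;

      class Demo
      {
          // Optional and named parameters (new in C# 4.0).
          static void Connect(string host, int port = 80, bool useSsl = false)
          {
              Console.WriteLine("{0}:{1} ssl={2}", host, port, useSsl);
          }

          static void Main()
          {
              Connect("example.com");                    // port and useSsl take their defaults
              Connect("example.com", useSsl: true);      // named argument skips 'port'

              // Generic covariance: IEnumerable<out T> lets a sequence of strings
              // be used where a sequence of objects is expected.
              IEnumerable<string> names = new List<string> { "a", "b" };
              IEnumerable<object> objects = names;
              foreach (object o in objects) Console.WriteLine(o);
          }
      }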

    Read the article

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images: Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format. Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later. In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are: Storing images in a way that allows for rapid random access (tiling) Storing downsampled images for rapid zooming (pyramid) Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image) Storing important metadata, including associated images like a slide's label and thumbnail Support for lossy storage What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.
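    (As an aside on why the tile + pyramid layout gives rapid random access: locating the data for a viewport is just integer arithmetic. A generic sketch, not tied to TIFF or to any of the formats above; the function name and the halving-per-level assumption are illustrative only.)

      using System;

      static void LocateTile(long x, long y, double zoom, int tileSize, int levelCount,
                             out int level, out int tileCol, out int tileRow)
      {
          // Level 0 is full resolution; each level up halves both dimensions.
          level = (int)Math.Floor(Math.Log(1.0 / zoom, 2));
          level = Math.Max(0, Math.Min(levelCount - 1, level));
          long lx = x >> level;                 // coordinates within the chosen level
          long ly = y >> level;
          tileCol = (int)(lx / tileSize);
          tileRow = (int)(ly / tileSize);
      }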

    Read the article

  • Cocoa-Created PDF Not Rendering Correctly

    - by Matthew Roberts
    I've created a PDF on the iPad, but the problem is that when a piece of text runs longer than one line, the content just goes off the page. This is my code: void CreatePDFFile (CGRect pageRect, const char *filename) { CGContextRef pdfContext; CFStringRef path; CFURLRef url; CFMutableDictionaryRef myDictionary = NULL; path = CFStringCreateWithCString (NULL, filename, kCFStringEncodingUTF8); url = CFURLCreateWithFileSystemPath (NULL, path, kCFURLPOSIXPathStyle, 0); CFRelease (path); myDictionary = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); NSString *foos = @"Title"; const char *text = [foos UTF8String]; CFDictionarySetValue(myDictionary, kCGPDFContextTitle, CFSTR(text)); CFDictionarySetValue(myDictionary, kCGPDFContextCreator, CFSTR("Author")); pdfContext = CGPDFContextCreateWithURL (url, &pageRect, myDictionary); CFRelease(myDictionary); CFRelease(url); CGContextBeginPage (pdfContext, &pageRect); CGContextSelectFont (pdfContext, "Helvetica", 12, kCGEncodingMacRoman); CGContextSetTextDrawingMode (pdfContext, kCGTextFill); CGContextSetRGBFillColor (pdfContext, 0, 0, 0, 1); NSString *body = @"text goes here"; const char *text = [body UTF8String]; CGContextShowTextAtPoint (pdfContext, 30, 750, text, strlen(text)); CGContextEndPage (pdfContext); CGContextRelease (pdfContext);} The issue lies in how I write the text to the page: I can't seem to specify a multi-line "text view".

    Read the article

  • django powering multiple shops from one code base on a single domain

    - by imanc
    Hey, I am new to django and python and am trying to figure out how to modify an existing app to run multiple shops through a single domain. Django's sites middleware seems inappropriate in this particular case because it manages different domains, not sites run through the same domain, e.g.: domain.com/uk domain.com/us domain.com/es etc. Each site will need translated content - and minor template changes. The solution needs to be flexible enough to allow for easy modification of templates. The forms will also need to vary a bit, e.g. minor variances in fields and validation for each country-specific shop. I am thinking along the lines of the following as a solution and would love some feedback from experienced django-ers. In short: same codebase, but separate country-specific urls files, separate templates and a separate database. Create a middleware class that does IP localisation, determines the country based on the URL and creates a database connection, e.g. /au/ will point to the au-specific database and so on. In the root urls.py have routes that point to a separate country-specific routing file, e.g. (r'^au/',include('urls_au')), (r'^es/',include('urls_es')). Use a single template directory but in that directory have a localised directory structure, e.g. /base.html and /uk/base.html, and write a custom template loader that looks for local templates first (or have a separate directory for each shop and set the template directory path in middleware). Use the django internationalisation to manage translation strings throughout. Slight variances in forms and models (e.g. ZA has an ID field, France has 'door code' and 'floor' etc.): I am unsure how to handle these variations, but I suspect the tables will contain all fields but allowing nulls and the model will have all fields but allowing nulls. The forms will be modified slightly for each shop. Anyway, I am keen to get feedback on the best way to go about achieving this multi-site solution. It seems like it would work, but feels a bit "hackish" and I wonder if there's a more elegant way of getting this solution to work. Thanks, imanc

    Read the article

  • Why won't my styles show in a Dynamics CRM 4 IFRAME?

    - by Dan Crowther
    I have created a web page (ASP.NET) that includes a stylesheet to mimic Dynamics CRM styles. This is to be used in a CRM IFRAME (within a form). The stylesheet is referenced as follows: <head id="Head1" runat="server"> <link href="Styles.css" rel="stylesheet" type="text/css" /> </head> When I load the page in Visual Studio, all is well. When I load it in CRM, none of the styles are shown and no images are displayed. If I browse directly to the image, I get a 404 error. However, the pages function correctly. I have set read permissions for "Everyone" on the server to see if that was causing a problem but it didn't help. I also tried putting a plain HTML page in the folder and that won't load either - again a 404. The page is installed in the ISV folder ..../isv/MyProject. Can anyone help? EDIT This is on a multi-tenancy system. On my test company (testcompany) if I browse to http://crm/testcompany/isv/MyProject/MyPage.aspx, the page is loaded (without styles and images). If I browse to http://crm/testcompany/isv/MyProject/TestImage.gif, the image is not shown. If I browse to http://crm/isv/MyProject/TestImage.gif, the image is shown. Does this suggest a problem with the server setup and the way CRM messes around with virtual directories?

    Read the article

  • Java - How to detect that the internet connection has got disconnected through a Java desktop application

    - by Yatendra Goel
    I am developing a Java desktop application that accesses the internet. It is a multi-threaded application; each thread does the same work (meaning each thread is an instance of the same Thread class). Now, as all the threads need the internet connection to be active, there should be some mechanism that detects whether an internet connection is active or not. Q1. How to detect whether the internet connection is active or not? Q2. Where to implement this internet-status-check-mechanism code? Should I start a separate thread that checks the internet status regularly and notifies all the threads when the status changes from one state to another? Or should I let each thread check the internet status itself? Q3. This should be a very common issue, as every application accessing the internet has to deal with this problem. So how do other developers usually deal with this problem? Q4. If you could give me a reference to a good demo application that addresses this issue then it would greatly help me.

    Read the article

  • spring 3 uploadify giving 404 Error

    - by Rajkumar
    I am using Spring 3 and implementing Uploadify. The problem is, the files are uploading properly but it is giving HTTP Error 404 on completion of the upload. I tried every possible solution, but none of them works. The files are uploaded. Values are stored in the DB properly; it is only that I am getting HTTP Error 404. Any help is appreciated and thanks in advance. The JSP page $(function() { $('#file_upload').uploadify({ 'swf' : 'scripts/uploadify.swf', 'fileObjName' : 'the_file', 'fileTypeExts' : '*.gif; *.jpg; *.jpeg; *.png', 'multi' : true, 'uploader' : '/photo/savePhoto', 'fileSizeLimit' : '10MB', 'uploadLimit' : 50, 'onUploadStart' : function(file) { $('#file_upload').uploadify('settings', 'formData', {'trip_id' :'1', 'trip_name' :'Sample Trip', 'destination_trip' :'Mumbai','user_id' :'1','email' :'[email protected]','city_id' :'12'}); }, 'onQueueComplete' : function(queueData) { console.log('queueData : '+queueData); window.location.href = "trip/details/1"; } }); }); The controller @RequestMapping(value="photo/{action}", method=RequestMethod.POST) public String postHandler(@PathVariable("action") String action, HttpServletRequest request) { if(action.equals("savePhoto")) { try{ MultipartHttpServletRequest multipartRequest = (MultipartHttpServletRequest)request; MultipartFile file = multipartRequest.getFile("the_file"); String trip_id = request.getParameter("trip_id"); String trip_name = request.getParameter("trip_name"); String destination_trip = request.getParameter("destination_trip"); String user_id = request.getParameter("user_id"); String email = request.getParameter("email"); String city_id = request.getParameter("city_id"); photo.savePhoto(file,trip_id,trip_name,destination_trip,user_id,email,city_id); photo.updatetrip(photo_id,trip_id); }catch(Exception e ){e.printStackTrace();} } return ""; } The Spring config <bean class="org.springframework.web.multipart.commons.CommonsMultipartResolver" id="multipartResolver"> <property name="maxUploadSize" value="10000000"/> </bean>

    Read the article

  • How can I apply a PSSM efficiently?

    - by flies
    I am fitting position-specific scoring matrices (PSSMs, aka position-specific weight matrices). The fit I'm using is like simulated annealing, where I perturb the PSSM, compare the prediction to experiment and accept the change if it improves agreement. This means I apply the PSSM millions of times per fit; performance is critical. In my particular problem, I'm applying a PSSM for an object of length L (~8 bp) at every position of a DNA sequence of length M (~30 bp) (so there are M-L+1 valid positions). I need an efficient algorithm to apply a PSSM. Can anyone help improve performance? My best idea is to convert the DNA into some kind of a matrix so that applying the PSSM is matrix multiplication. There are efficient linear algebra libraries out there (e.g. BLAS), but I'm not sure how best to turn an M-length DNA sequence into an M x 4 matrix and then apply the PSSM at each position. The solution needs to work for higher order/dinucleotide terms in the PSSM - presumably this means representing the sequence-matrix for mono-nucleotides and separately for dinucleotides. My current solution iterates over each position m, then over each letter in the word from m to m+L-1, adding the corresponding term in the matrix. I'm storing the matrix as a multi-dimensional STL vector, and profiling has revealed that a lot of the computation time is just accessing the elements of the PSSM (with similar performance bottlenecks accessing the DNA sequence). If someone has an idea besides matrix multiplication, I'm all ears.
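    (A sketch of the flat-array route rather than matrix multiplication - shown in C# purely for illustration, the same layout works with a plain C++ array: encode the sequence to 0..3 indices once, store the PSSM as a single contiguous array of length L*4, and each lookup becomes one add with no nested-vector indirection. Dinucleotide terms could reuse the trick with a second flat array indexed by 16*j + 4*b1 + b2, though that indexing scheme is an assumption about how the higher-order terms are parameterised.)

      // Sketch only: assumes the sequence contains only A/C/G/T and that
      // pssm has length L*4, laid out row-major as pssm[position*4 + base].
      static double[] ScoreAllPositions(string dna, double[] pssm, int L)
      {
          // Map bases to 0..3 once, outside the scoring loop.
          var idx = new int[dna.Length];
          for (int i = 0; i < dna.Length; i++)
              idx[i] = "ACGT".IndexOf(char.ToUpperInvariant(dna[i]));

          int positions = dna.Length - L + 1;
          var scores = new double[positions];
          for (int m = 0; m < positions; m++)
          {
              double s = 0.0;
              for (int j = 0; j < L; j++)
                  s += pssm[j * 4 + idx[m + j]];   // one flat lookup per letter
              scores[m] = s;
          }
          return scores;
      }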

    Read the article

  • Advice on a DB that can be uploaded to a website by a smart client for collecting survey feedback

    - by absfabs
    Hello, I'm hoping you can help. I'm looking for a zero-config multi-user database that my WinForms application can easily upload to a webserver folder (together with 1 or 2 classic ASP pages) and am looking for some suggestions/recommendations. The idea is that the database will be used to collect feedback entered by people filling in the ASP pages. The pages will write to the database using javascript. The database will subsequently be downloaded again for processing once the responses are in. In summary: It will mostly run in MS Windows environments. I have a modest budget for this and do not mind paying for such a database. No runtime licensing costs. Should be xcopy-deployable - once uploaded to a website folder it should be operational. It should not have a dotnet CLR dependency. It should support a reasonable level of concurrent access. Average respondent count would be around 20-30 but one never knows. Should be a reasonable size so that uploads/downloads to and from the site will be reasonably fast. Would appreciate your suggestions/comments. Many thanks, Abz. To clarify - this is a desktop commercial application for feedback management in a vertical market. It uses SQL Server as the backing store. The application currently provides feedback management from email and paper feedback. I now want to add web feedback capability. Getting users to make their SQL servers accessible to a website is not an option at this time, as I want to make getting up and running as painless as possible. I intend to release a web based implementation of the software in the near future but for now am looking at the above as a pragmatic way to provide web based feedback collection.

    Read the article

  • Lookup site column not saving/storing metadata for Office 2007 documents?

    - by Greg Hurlman
    I'm having this issue on several server environments. We have a list at the site collection root. There is a site column created as a multi-value lookup on that list's Title field. This site column is used in document libraries in subsites as a required field. When we upload anything but an Office 2007 document, the user is presented with the document metadata fill-in screen (EditForm.aspx?Mode=Upload), the user fills in the appropriate data (including picking a value(s) for this lookup), and clicks "check in" - the document is checked in as expected, with the lookup field's value filled in. With an Office 2007 document, this fails. The user-selected values for the lookup field never make it to the server - no errors are thrown, but the field is not saved with the document. We have an event listener on these document libraries, and if we inspect the incoming SPListItem in the event listener method before a single line of our code has run, we see that the value for the lookup field is null. It smells like a SharePoint bug to me - but before I go calling Microsoft, has anyone seen this & worked around it? Edit: the only entry I see in the SP trace logs relating to the problem: CMS/Publishing/8ztg/Medium/Got List Item Version, but item was null

    Read the article

  • Simple multilingual CMS?

    - by Christoffer
    Hi, I have been searching for a while now for a dead simple CMS with multi-language support. The ideal candidate is very lean and offers the possibility to set up different languages for different domains. It's OK if the language support is provided by a plugin/extension. For example, I want example.com to point to English and example.fr should be French. With different URI-mappings for SEO. It can be developed in any of PHP, Ruby or Python and has to be open source. Any tips? Thank you EDIT / MORE DETAILS What I want is a CMS that is as simple to use and grasp for a client as Radiant is, but with tabs on each resource that can translate articles to different languages. Languages have to be able to use multiple domains, one for each language. I want to easily use the same article for more than one language as well as have articles (e.g. blog posts or news stories) that are only connected to one language. The CMS should be very light in core functionality (like Radiant, unlike Drupal/Joomla) but be easily extendable with plugins.

    Read the article
