Search Results

Search found 22224 results on 889 pages for 'point of sale'.

Page 681/889 | < Previous Page | 677 678 679 680 681 682 683 684 685 686 687 688  | Next Page >

  • Make an ActiveX control work without a form?

    - by Earlz
    We are using Topaz Signature pads. They provide their APIs in the form of an ActiveX control which is meant to be put on a WinForms control. The way our project will work, we do not want to have a form (at least not a visible one). We just want the signature ActiveX control to capture an image in the background.

        static AxSigPlus sig = new AxSIGPLUSLib.AxSigPlus();

        public static void Begin()
        {
            ((System.ComponentModel.ISupportInitialize)(sig)).BeginInit();
            sig.Name = "sig";
            sig.Location = new System.Drawing.Point(0, 0);
            sig.Size = new System.Drawing.Size(0, 0);
            sig.Enabled = true;
            sig.TabletState = 1; // error here
            sig.SigCompressionMode = 0;
        }

    I get an error at the marked line. The exception is: Exception of type 'System.Windows.Forms.AxHost+InvalidActiveXStateException' was thrown. What do I do to solve this problem? Would it just be easier to create a new hidden form and put the control on it so it's invisible?
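    A hidden host form is the usual workaround for InvalidActiveXStateException, because an AxHost-based control needs a created window handle before its COM properties can be touched. The sketch below only illustrates that idea; the HiddenSigHost class and the handle-forcing lines are assumptions about how the Topaz control behaves, not something taken from the SigPlus documentation.

        using System.Windows.Forms;

        // Hypothetical host: a form that is never shown, but whose handle is
        // created so the hosted ActiveX control reaches a valid state before use.
        class HiddenSigHost : Form
        {
            public AxSIGPLUSLib.AxSigPlus Sig { get; private set; }

            public HiddenSigHost()
            {
                ShowInTaskbar = false;

                Sig = new AxSIGPLUSLib.AxSigPlus();
                ((System.ComponentModel.ISupportInitialize)Sig).BeginInit();
                Controls.Add(Sig);                              // gives the AxHost a parent
                ((System.ComponentModel.ISupportInitialize)Sig).EndInit();

                // Accessing Handle forces handle creation even though the form is never
                // shown; this is what moves the hosted ActiveX control into a usable state.
                var formHandle = this.Handle;
                var sigHandle = Sig.Handle;

                Sig.TabletState = 1;                            // should no longer throw once hosted
                Sig.SigCompressionMode = 0;
            }
        }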

    Read the article

  • Compiler optimization causing the performance to slow down

    - by aJ
    I have one strange problem. I have the following piece of code:

        template<class index, class policy>
        inline int CBase<index,policy>::func(const A& test_in, int* srcPtr, int* dstPtr)
        {
            int width = test_in.width();
            int height = test_in.height();
            double d = 0.0; // here is the problem
            for (int y = 0; y < height; y++)
            {
                // Pointer initializations
                // multiplication involving y
                // ex: int z = someBigNumber*y + someOtherBigNumber;
                for (int x = 0; x < width; x++)
                {
                    // multiplication involving x
                    // ex: int z = someBigNumber*x + someOtherBigNumber;
                    if (someCondition)
                    {
                        // floating point calculations
                    }
                    *dstPtr++ = array[*srcPtr++];
                }
            }
        }

    The inner loop executes nearly 200,000 times and the entire function takes 100 ms to complete (profiled using AQTimer). I found an unused variable, double d = 0.0;, outside the outer loop and removed it. After this change, the method suddenly takes 500 ms for the same number of executions (5 times slower). This behaviour is reproducible on different machines with different processor types (Core2, dual-core processors). I am using the VC6 compiler with optimization level O2. The following other compiler options are used: -MD -O2 -Z7 -GR -GX -G5 -X -GF -EHa. I suspected compiler optimizations and removed /O2; after that the function behaves normally and takes 100 ms, as with the old code. Could anyone throw some light on this strange behaviour? Why should compiler optimization slow down performance when I remove an unused variable? Note: The assembly code (before and after the change) looked the same.

    Read the article

  • Making Firefox render canvas text the same as CSS text

    - by Dan Forys
    I've been experimenting with the canvas tag and Javascript. I've made a page that takes Tweets from the Twitter public timeline and animates them into view. It works by using a canvas element in the background for the animation. When the animation is complete, it creates a div element with the same text over the top. I do this so that the tweet text is selectable and links are clickable. Now, in Safari, Chrome and even Opera, the canvas text and div text look almost exactly the same. Yet in Firefox, the size of the text is different enough to make it 'jump' at the point it changes into the div. Does anyone know how to make Firefox render the text the same on the canvas element and the div using CSS? Or is this a rendering inconsistency with the engine? I have put the page on my website if you want to see what I mean. Now for the code. The CSS I'm using for rendering the div contains:

        line-height: 21px;
        font-weight: 100;
        font-family: Georgia, "New Century Schoolbook", "Nimbus Roman No9 L", serif;
        font-size: 20px;

    For rendering on the canvas I'm using:

        this.context.font = this.scale + 'px Georgia';
        this.context.fillStyle = "white";
        this.context.strokeStyle = 'white';
        this.context.fillText(this.text, 0, 0);
        this.context.strokeText(this.text, 0, 0);

    where this.scale is an animated scale factor that finishes at 20px exactly. So, to recap, I'm using the same font and ending up at the same px size, yet Firefox renders the text differently between canvas and CSS. (edit) Here's a screenshot example: the first line is the text animating in using canvas, the second line is the resulting div.

    Read the article

  • Can't change pivot table's Access data source - bug in Excel 2000 SP3?

    - by Ron West
    I have a set of Excel 2000 SP3 worksheets that have Pivot Tables that get data from an Access 2000 SP3 database created by a contractor who left our company. Unfortunately, he did all his work in his private area on the company (Novell) network, and now that he has left us, the drive spec has been deleted and is invalid. We were able to get the database files restored to our network area by our IT Service Desk people, but we now have to re-link everything to point to our group area instead of the now-nonexistent private area. If I follow the advice given elsewhere on this site (open the wizard, click 'Back' to get to 'Step 2 of 3', click 'Get Data...'), I get a message that the old filespec is an invalid path and that I need to check that the path name is valid and that I am connected to the server on which the file resides. I then click OK and get a Login dialog with a 'Database...' button on the right. I click this and get a 'Select Database' dialog which allows me to choose the appropriate database in its correct new location. I then click OK, which takes me back to the 'Login' screen. I can confirm that it has accepted my new location by clicking on 'Database...' as before: the NEW location is still shown. So far so good. But if I then click OK, I get two unhelpful messages. First I get one saying that Excel 'Could not use '|'; file already in use.', although no other files are in use. Clicking OK takes me back to the 'Login' dialog. Clicking OK again gives me the same message as before, telling me that the OLD filespec is invalid (as if I hadn't changed anything), but clicking the 'Database...' button shows that the correct (NEW) database location is still selected. Can anyone tell me a way of using VBA to change the link information without having to spend hours fighting the PivotTable Wizard, preferably similar to the way you update an Access TableDef:

        db.TableDefs(strLinkName).Connect = strNewLink
        db.TableDefs(strLinkName).RefreshLink

    Thanks!
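    The connection string of each pivot cache can be rewritten in code instead of through the wizard; in VBA that is a loop over ThisWorkbook.PivotCaches() assigning .Connection and calling .Refresh. The sketch below shows the same idea from C# via Excel interop, purely as an illustration of the object model: the path handling and the shape of the connection string are assumptions, and it has not been verified against Excel 2000 specifically.

        using System.Text.RegularExpressions;
        using Excel = Microsoft.Office.Interop.Excel;

        static class PivotRelinker
        {
            // Repoint every pivot cache in the workbook at the relocated .mdb file.
            // Inspect cache.Connection first to see the exact format your caches use.
            public static void Repoint(Excel.Workbook wb, string newMdbPath)
            {
                foreach (Excel.PivotCache cache in wb.PivotCaches())
                {
                    string conn = (string)cache.Connection;
                    if (conn.Contains(".mdb"))
                    {
                        // Assumes an ODBC-style string containing "DBQ=<path to mdb>"
                        cache.Connection = Regex.Replace(conn, @"DBQ=[^;]+", "DBQ=" + newMdbPath);
                        cache.Refresh();
                    }
                }
            }
        }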

    Read the article

  • Large Reports for MSRS

    - by Greg Lorenz
    I have a report that needs to be able to render a very large number of pages (about 4500 in this instance) in a web browser. The total time needed to finish on the report server, from start time to end time, is about 30 minutes for the instance that I am looking at. Does anyone know what options exist for handling the rendering of such a large report in a web browser? In terms of looking into how this can be resolved, I have already performed the following tasks. The report gets its data from a database table that already has the data flattened, to the point that the TimeDataRetrieval on the report server is 17812, or about 18 seconds. The report itself has been reformatted to use the least expensive report objects it can in order to render the data in the correct format; it basically consists of a table with about 4 nested tables and that's it. We were trying to accomplish this on a 2005 report server but continued to run into memory issues that were not feasible for our clients. In response to that we moved this onto a 2008 report server to take advantage of the fact that it uses the file system instead of memory, and finally were able to get this to work without running out of the available memory, but of course it takes much longer.

    Read the article

  • Debugging a release-only Flash problem

    - by Fire Lancer
    I've got an Adobe Flash 10 program that freezes in certain cases, but only when running under a release version of the Flash player. With the debug version, the application works fine. What are the best approaches to debugging such issues? I considered installing the release player on my computer and trying to set up some kind of non-graphical method of output (I guess there's some way to write a log file or similar?), however I see no way to have both the release and debug versions installed anyway :(

    EDIT: OK, I managed to replace my version of the Flash player with the release version, and no freeze... so what I know so far is:

                     Debug     Release
        Vista 32:    works     works
        XP Pro 32:   works*    freeze

        * I gave them the debug players I had to test this

    Hmm, seeming less and less like an error in my code and more like a bug in the player (10.0.45.2 in all cases)... At the very least I'd like to see the call stack at the point it freezes. Is there some way to do that without requiring them to install various bits and pieces, e.g. by letting Flash write out a log.txt or something with a "trace"-like function I can insert in the code in question?

    EDIT2: I just gave the swf to another person with XP 32-bit, same results :(

    Read the article

  • Subselecting with MDX

    - by Vince
    Greetings stack overflow community. I've recently started building an OLAP cube in SSAS2008 and have gotten stuck. I would be grateful if someone could at least point me in the right direction. Situation: two fact tables in the same cube. FactCalls holds information about calls made by subscribers, FactTopups holds topup data. Both tables have numerous common dimensions, one of them being the Subscriber dimension.

        FactCalls            FactTopups
        SubscriberKey        SubscriberKey
        CallDuration         DateKey
        CallCost             Topup Value
        ...

    What I am trying to achieve is to be able to build FactCalls reports based on distinct subscribers that have topped up their accounts within the last 7 days. What I am basically looking for is an MDX equivalent to SQL's:

        select * from FactCalls
        where SubscriberKey in
        ( select distinct SubscriberKey from FactTopups where ... );

    I've tried creating a degenerate dimension for both tables containing SubscriberKey and doing:

        Exist( [Calls Degenerate].[Subscriber Key].Children,
               [Topups Degenerate].[Subscriber Key].Children )

    without success. Kind regards, Vince

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them. But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues:

    1. The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority.

    2. I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, because TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me.

    Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?
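    For a single connection, the dedicated blocking thread described above is already a reasonable design. The sketch below is only a minimal illustration of that pattern (one high-priority receive thread handing raw chunks, in order, to one consumer delegate); the class name and buffer size are invented for the example.

        using System;
        using System.Net.Sockets;
        using System.Threading;

        // One blocking receive thread per connection: chunks arrive in order and are
        // handed to a single consumer delegate, mirroring the setup in the question.
        class TickReceiver
        {
            private readonly Socket _socket;

            public TickReceiver(Socket connectedSocket)
            {
                _socket = connectedSocket;
            }

            public void Start(Action<byte[]> onBytes)
            {
                var t = new Thread(() =>
                {
                    var buffer = new byte[64 * 1024];
                    while (true)
                    {
                        int n = _socket.Receive(buffer);   // blocks until data arrives
                        if (n == 0) break;                 // remote side closed the connection
                        var chunk = new byte[n];
                        Buffer.BlockCopy(buffer, 0, chunk, 0, n);
                        onBytes(chunk);                    // still on the receive thread
                    }
                });
                t.IsBackground = true;
                t.Priority = ThreadPriority.Highest;       // as in the question
                t.Start();
            }
        }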

    Read the article

  • What is the PIXELFORMATDESCRIPTOR parameter in SetPixelFormat() used for?

    - by Mads Elvheim
    Usually when setting up OpenGL contexts, I've simply filled out a PIXELFORMATDESCRIPTOR structure with the necessary information and called ChoosePixelFormat(), followed by a call to SetPixelFormat() with the matching pixelformat returned from ChoosePixelFormat(). Then I've simply passed in the initial descriptor without giving much thought to why. But now I use wglChoosePixelFormatARB() instead of ChoosePixelFormat(), because I need some extended traits like sRGB and multisampling. It takes an attribute list of integers, just like XLib/GLX on Linux, not a PIXELFORMATDESCRIPTOR structure. So, do I really have to fill in a descriptor for SetPixelFormat() to use? What does SetPixelFormat() use the descriptor for when it already has the pixelformat index? Why do I have to specify the same pixelformat attributes in two different places? And which one takes precedence: the attribute list to wglChoosePixelFormatARB(), or the PIXELFORMATDESCRIPTOR attributes passed to SetPixelFormat()? Here are the function prototypes, to make the question clearer:

        /* Finds a best match based on a PIXELFORMATDESCRIPTOR, and returns the pixelformat index */
        int ChoosePixelFormat(HDC hdc, const PIXELFORMATDESCRIPTOR *ppfd);

        /* Finds a best match based on an attribute list of integers and floats, and returns a list
           of indices of matches, with the best matches at the head. Also supports extended
           pixelformat traits like sRGB color space, floating-point framebuffers and multisampling. */
        BOOL wglChoosePixelFormatARB(HDC hdc, const int *piAttribIList, const FLOAT *pfAttribFList,
                                     UINT nMaxFormats, int *piFormats, UINT *nNumFormats);

        /* Sets the pixelformat based on the pixelformat index */
        BOOL SetPixelFormat(HDC hdc, int iPixelFormat, const PIXELFORMATDESCRIPTOR *ppfd);

    Read the article

  • NAnt not running NUnit tests

    - by ctford
    I'm using NUnit 2.5 and NAnt 0.85 to compile a .NET 3.5 library. Because NAnt 0.85 doesn't support .NET 3.5 out of the box, I've added an entry for the 3.5 framework to NAnt.exe.config. 'MyLibrary' builds, but when I hit the "test" target to execute the NUnit tests, none of them seem to run:

        [nunit2] Tests run: 0, Failures: 0, Not run: 0, Time: 0.012 seconds

    Here are the entries in my NAnt.build file for building and running the tests:

        <target name="build_tests" depends="build_core">
            <mkdir dir="Target" />
            <csc target="library" output="Target\Test.dll" debug="true">
                <references>
                    <include name="Target\MyLibrary.dll"/>
                    <include name="Libraries\nunit.framework.dll"/>
                </references>
                <sources>
                    <include name="Test\**\*.cs" />
                </sources>
            </csc>
        </target>

        <target name="test" depends="build_tests">
            <nunit2>
                <formatter type="Plain" />
                <test assemblyname="Target\Test.dll" />
            </nunit2>
        </target>

    Is there some versioning issue I need to be aware of? Test.dll runs fine in the NUnit GUI. The testing dll is definitely being found, because if I move it I get the following error: "Failure executing test(s). If you assembly is not build using NUnit 2.2.8.0... Could not load file or assembly 'Test' or one of its dependencies..." I would be grateful if anyone could point me in the right direction or describe a similar situation they have encountered.

    Edit: I have since tried it with NAnt 0.86 beta 1, and the same problem occurs.

    Read the article

  • Is there a good tutorial for figuring out what a website is doing so your program can do the same thing?

    - by brian d foy
    Is there a good guide or tutorial for people who need to programmatically interact with dynamic websites? There's been a rash of Perl questions about that lately, and I haven't found a good resource to point people toward. I'm asking not because I need one but because I don't want to waste my time writing it if it already exists. Although I'm most interested in Perl, the extra tools and techniques are mostly the same. Typically, I see these problems in people's questions:

        - Handling, setting, and saving cookies
        - Finding and interacting with forms
        - Handling JavaScript inside your user-agent, especially things like onLoad, onSubmit, and Ajax
        - Using HTTP sniffer tools
        - Using web developer plugins in interactive browsers
        - Interacting with the DOM, screen scraping, etc.

    If there's no good tutorial, I'll add it to my list of things to do (unless someone else wants to do it :). Along the way, if you don't have a suggestion for an existing tutorial, please suggest the things that you think should be in a new one, including links, your favorite tools, and your own user-agent development experiences. I don't care about the particular language you use.

    Read the article

  • What are the best open-source software non-profits for making financial contributions and/or facilitating useful work?

    - by Jason S
    I'm not a great programmer myself (my main job is more electrical engineering) and have never really helped out with any open source projects, but I've benefited greatly from free and/or open-source software (MySQL, OpenOffice, Firefox, Apache, PHP, Java, etc.) and at some point would like to make some modest financial contributions to help keep this stuff going. I'm wondering, what are the best non-profits to make financial contributions? I'm aware of:

        - Open Source Initiative (founded 10 years ago by several prominent figures including programmer and "The Cathedral and the Bazaar" author Eric S. Raymond)
        - Free Software Foundation
        - Mozilla Foundation
        - Apache Foundation

    Anyone have a particular favorite? Ideally I'd like to give money to a non-profit that would foster some of the smaller but promising open-source and/or free software projects. The big projects like Firefox and Apache are already well-established. There are a few small individual shareware programs I've already paid for directly. But it's those middle-ground projects that I would really like my contributions to support. (one that comes to mind is a good GUI for Subversion or Mercurial.) It's one thing for a single person to donate a little $$ to a small project. It's another for a foundation or something to give larger grants to projects that give a good bang for the buck. Conservation organizations like The Nature Conservancy, or the Trust for Public Lands, have really honed this approach, but I'm not really sure if there's an equivalent model in software-land.

    Read the article

  • Collection is empty when it arrives on the client

    - by digiduck
    One of my entities has an EntitySet<CityDto> property with [Composition], [Include] and [Association] attributes. I populate this collection in my domain service, but when I check its contents when it is received on the client, the collection is empty. I am using Silverlight 4 RTM as well as RIA Services 1.0 RTM. Any ideas what I am doing wrong? Here is the code on my service side:

        public class RegionDto
        {
            public RegionDto()
            {
                Cities = new EntitySet<CityDto>();
            }

            [Key]
            public int Id { get; set; }
            public string Name { get; set; }

            [Include]
            [Composition]
            [Association("RegionDto_CityDto", "Id", "RegionId")]
            public EntitySet<CityDto> Cities { get; set; }
        }

        public class CityDto
        {
            [Key]
            public int Id { get; set; }
            public int RegionId { get; set; }
            public string Name { get; set; }
        }

        [EnableClientAccess()]
        public class RegionDomainService : LinqToEntitiesDomainService<RegionEntities>
        {
            public IEnumerable<RegionDto> GetRegions()
            {
                var regions = (ObjectContext.Regions
                    .Select(x => new RegionDto { Id = x.ID, Name = x.Name })).ToList();

                foreach (var region in regions)
                {
                    var cities = (ObjectContext.Cities
                        .Where(x => x.RegionID == region.Id)
                        .Select(x => new CityDto { Id = x.ID, Name = x.Name })).ToList();

                    foreach (var city in cities)
                    {
                        region.Cities.Add(city);
                    }
                }

                // each region's Cities collection is populated at this point
                // however when the client receives it, the Cities collections are all empty
                return regions;
            }
        }

    Read the article

  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com. I have a system (see the about page on the site for details) where:

        - I need to output a ranked list, with confidences, of categories that match a particular feature vector
        - the binary feature vectors are a list of site IDs and whether this session detected a hit
        - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
        - categories are a large, non-closed set (user IDs)
        - my total feature space is approximately 50 million items (URLs)
        - for any given test, I can only query approx. 0.2% of that space
        - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
        - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
        - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
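    One standard greedy heuristic for this kind of "which probe next" problem is expected information gain: for each candidate site, estimate how much the entropy of the posterior over users would drop if that site were queried, and query the top few. The sketch below is only an illustration of that calculation under invented names; it assumes you can estimate P(hit | user, site) from history, which is itself the part the question wants to keep behind cheap SQL.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ProbePicker
        {
            static double Entropy(IEnumerable<double> p)
            {
                return -p.Where(x => x > 0).Sum(x => x * Math.Log(x, 2));
            }

            // posterior[u]        : current probability that the visitor is user u
            // hitGivenUser[s][u]  : estimated P(hit on site s | user u)
            // Returns candidate sites sorted by expected drop in posterior entropy.
            public static List<KeyValuePair<string, double>> Rank(
                Dictionary<string, double> posterior,
                Dictionary<string, Dictionary<string, double>> hitGivenUser)
            {
                double h0 = Entropy(posterior.Values);
                var gains = new Dictionary<string, double>();

                foreach (var site in hitGivenUser)
                {
                    Dictionary<string, double> pHit = site.Value;
                    double pYes = posterior.Sum(kv => kv.Value * pHit[kv.Key]);
                    double pNo = 1.0 - pYes;

                    // Posterior entropy under each outcome (weights renormalised by the outcome probability).
                    double hYes = pYes > 0
                        ? Entropy(posterior.Select(kv => kv.Value * pHit[kv.Key] / pYes))
                        : 0.0;
                    double hNo = pNo > 0
                        ? Entropy(posterior.Select(kv => kv.Value * (1.0 - pHit[kv.Key]) / pNo))
                        : 0.0;

                    // Expected information gain of querying this site.
                    gains[site.Key] = h0 - (pYes * hYes + pNo * hNo);
                }
                return gains.OrderByDescending(g => g.Value).ToList();
            }
        }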

    Read the article

  • Find the flaws in the concept...

    - by Trindaz
    A web based web browser. Sounds silly, right? Here's a use case. All comments about what could go wrong are very much wanted, as is hearing from anyone who has tried and failed at this.

        1. User goes to www.theBrowser.com and logs in with credentials specific to theBrowser.com.
        2. User tells theBrowser what their username and password for various sites are.
        3. User goes to theBrowser.com/?uri=somesite.com.
        4. theBrowser sends off the http request with the user's log in details, then sends the http response back to the user.

    This lets theBrowser do weird and wonderful functions like change colours / style sheets / etc. to every site that gets passed through it. From a technical standpoint, storing usernames and passwords and passing them along is not a challenge for one user, but if there were a few, I'd have to use some kind of server based browser software to store a session per user logged in at theBrowser.com. How could I do that? Will I have to start from scratch? Obviously privacy and security are issues. Would theBrowser.com be too great a risk, even if users are fully warned? Cheers, Dave

    Read the article

  • C# Entity Framework MySQL slow Count() queries

    - by Matthew M.
    Hello, I'm having a serious issue with MySQL and Entity Framework 4.0. I have dropped a table onto the EF designer surface, and everything seems OK. However, when I perform a query in the following fashion:

        using (entityContext dc = new entityContext())
        {
            int numRows = dc.myTable.Count();
        }

    the query that is generated looks something like this:

        SELECT `GroupBy1`.`A1` AS `C1`
        FROM (
            SELECT Count(1) AS `A1`
            FROM (
                SELECT `pricing table`.`a`, `pricing table`.`b`, `pricing table`.`c`,
                       `pricing table`.`d`, `pricing table`.`e`, `pricing table`.`f`,
                       `pricing table`.`g`, `pricing table`.`h`, `pricing table`.`i`
                FROM `pricing table` AS `pricing table`
            ) AS `Extent1`
        ) AS `GroupBy1`

    As should be evident, this is an excruciatingly unoptimized query: it is selecting every single row! This is not optimal, nor is it even possible for me to use MySQL + EF at this point. I have tried both the MySQL 6.3.1 provider [that was fun to install] and DevArt's dotConnect for MySQL, and both produce the same results. This table has 1.5 million records and the query takes 6-11 s to execute! What am I doing wrong? Is there any way to optimize this [and other queries] to produce sane code like SELECT COUNT(*) FROM table? Generating the same query using SQL Server takes virtually no time and produces sane code. Help! Thanks! Matthew
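    As a stopgap, while the provider's SQL generation is what it is, the count can be pushed to the server as raw SQL. This sketch assumes the EF4 designer generated an ObjectContext-derived context (so ExecuteStoreQuery is available) and reuses the table name from the question purely for illustration:

        using System.Linq;

        static class CountWorkaround
        {
            public static long CountPricingRows()
            {
                using (var dc = new entityContext())
                {
                    // Bypass LINQ-to-Entities translation and let MySQL run a plain COUNT(*).
                    // MySQL returns COUNT(*) as a 64-bit value, hence long rather than int.
                    return dc.ExecuteStoreQuery<long>(
                        "SELECT COUNT(*) FROM `pricing table`").First();
                }
            }
        }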

    Read the article

  • Long running stateful service in .NET

    - by Asaf R
    Hi, I need to create a service in .NET that maintains (inner) state in-memory, spawns multiple threads and is generally long-running. There are a lot of options:

        - Good old Windows Service
        - Windows Communication Foundation (WCF)
        - Windows Workflow Foundation

    I really don't know which to choose. Most of the functionality is in a library used by this service, so the service itself is rather simple. On one hand, it's important the service host is as close to "simply working" as possible, which excludes Windows Service. On the other hand, it's important that the service is not taken down by the host just because there's no external activity, which makes WCF kind o' "scary". As for WF, its strongest selling point is the ability to create processes as, um..., workflows, which is something I don't need nor want. To sum it up, the plethora of Microsoft technologies got me a bit confused. I'd appreciate help regarding the pros and cons of each solution (or others I've failed to mention) for the problem of a stateful, long-running service in .NET. Thanks, Asaf. P.S., I'm using .NET 4.

    EDIT: What I mean by the host "simply working" is, for example, that the service I create be reactivated if it crashes. I guess the reason for this question is that I've created Windows Services in the past (I think it was in plain C++ with the Win32 API), and I don't want to miss out on something simpler if there is such a thing. Thanks for all the replies thus far! Asaf.
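    For reference, the plain Windows Service shape is quite small, and restart-after-crash is a recovery setting on the service (the Recovery tab in the Services console, or sc failure) rather than something the code has to provide. A minimal sketch, with all names invented:

        using System.ServiceProcess;
        using System.Threading;

        // Minimal long-running, stateful service skeleton. The worker thread owns
        // the in-memory state; OnStop signals it to finish and waits for it.
        public class StatefulService : ServiceBase
        {
            private Thread _worker;
            private volatile bool _stopping;

            public static void Main()
            {
                ServiceBase.Run(new StatefulService());
            }

            protected override void OnStart(string[] args)
            {
                _worker = new Thread(Run) { IsBackground = false };
                _worker.Start();
            }

            protected override void OnStop()
            {
                _stopping = true;
                _worker.Join();
            }

            private void Run()
            {
                while (!_stopping)
                {
                    // long-running, stateful work goes here
                    Thread.Sleep(1000);
                }
            }
        }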

    Read the article

  • PostgreSQL: Rolling back a transaction within a plpgsql function?

    - by jamieb
    Coming from the MS SQL world, I tend to make heavy use of stored procedures. I'm currently writing an application that uses a lot of PostgreSQL plpgsql functions. What I'd like to do is roll back all INSERTs/UPDATEs contained within a particular function if I get an exception at any point within it. I was originally under the impression that each function is wrapped in its own transaction and that an exception would automatically roll back everything. However, that doesn't seem to be the case. I'm wondering if I ought to be using savepoints in combination with exception handling instead? But I don't really understand the difference between a transaction and a savepoint well enough to know if this is the best approach. Any advice please?

        CREATE OR REPLACE FUNCTION do_something( _an_input_var int )
        RETURNS bool AS $$
        DECLARE
            _a_variable int;
        BEGIN
            INSERT INTO tableA (col1, col2, col3)
            VALUES (0, 1, 2);

            INSERT INTO tableB (col1, col2, col3)
            VALUES (0, 1, 'whoops! not an integer');

            -- The exception will cause the function to bomb, but the values
            -- inserted into "tableA" are not rolled back.
            RETURN True;
        END;
        $$ LANGUAGE plpgsql;

    Read the article

  • Glitchy, stuttery iPhone game loop

    - by Adam
    This is a problem I've been trying to solve for a few days now, and I've looked at the various solutions on stackoverflow and nothing has really seemed to work for me. I'm making an iPhone game with OpenGL ES graphics and accelerometer input; at this point it's very simple, but the rendering is already pretty bad... it stutters and seems to jump back or forward in time. It doesn't happen a lot, but it happens enough to be a problem. I mean, who wants to play a game where a bullet gets magically transported into the player, and then it's game over? No one. So far:

        - I've tried using NSTimer for the game loop
        - I've tried using a separate thread (with a frame rate and continuously)
        - I've tried using different frame rates, from 30FPS to 60FPS (it seems to have a max frame rate around 45FPS, but no problems at 30FPS)
        - I've tried using timeIntervalSince1970 and CFGetAbsoluteTime to measure loop time, with no noticeable difference

    Anyone have any ideas on what is the best way to get this looking better? One of the posts I've read suggested running the simulation at a fixed frame rate and then just rendering as fast as possible; does that seem like a good idea?
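    The fixed-timestep idea mentioned at the end is the usual cure for this kind of temporal jitter: the simulation always advances in constant steps, and rendering simply draws the most recent state as often as the device allows. The accumulator pattern below is a language-neutral sketch (written in C# purely for illustration; the same structure drops into an Objective-C game loop), with every name invented:

        using System.Diagnostics;

        class FixedStepLoop
        {
            const double DT = 1.0 / 30.0;          // simulation step: 30 updates per second

            bool running = true;

            void UpdateSimulation(double dt) { /* bullets, collisions, accelerometer input */ }
            void Render()                    { /* draw the latest simulated state */ }

            public void Run()
            {
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalSeconds;
                double accumulator = 0.0;

                while (running)
                {
                    double now = clock.Elapsed.TotalSeconds;
                    accumulator += now - previous;
                    previous = now;

                    while (accumulator >= DT)      // catch up in whole, fixed-size steps
                    {
                        UpdateSimulation(DT);
                        accumulator -= DT;
                    }

                    Render();                      // render once per pass, as fast as possible
                }
            }
        }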

    Read the article

  • Decrypting a string in C# which was encrypted with openssl in php 5.3.2

    - by panny
    Maybe someone can clear this up for me; I have been surfing on this a while now. I used openssl from the console to create a root certificate for me (privatekey.pem, publickey.pem, mycert.pem, mycertprivatekey.pfx); see the end of this text on how. The problem is still to get a string encrypted on the PHP side to be decrypted on the C# side with RSACryptoServiceProvider. Any ideas?

    PHP side. I used the publickey.pem to read it into PHP:

        $server_public_key = openssl_pkey_get_public(file_get_contents("C:\publickey.pem"));
        // rsa encrypt
        openssl_public_encrypt("123", $encrypted, $server_public_key);

    and the privatekey.pem to check if it works:

        openssl_private_decrypt($encrypted, $decrypted,
            openssl_get_privatekey(file_get_contents("C:\privatekey.pem")));

    Coming to the conclusion that encryption/decryption works fine on the PHP side with these openssl root certificate files.

    C# side. In the same manner I read the keys into a .NET C# console program:

        X509Certificate2 myCert2 = new X509Certificate2();
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        try
        {
            myCert2 = new X509Certificate2(@"C:\mycertprivatekey.pfx");
            rsa = (RSACryptoServiceProvider)myCert2.PrivateKey;
        }
        catch (Exception e) { }

        string t = Convert.ToString(rsa.Decrypt(rsa.Encrypt(test, false), false));

    Coming to the point that encryption/decryption works fine on the C# side with these openssl root certificate files.

    Key generation on unix:

        1) openssl req -x509 -nodes -days 3650 -newkey rsa:1024 -keyout privatekey.pem -out mycert.pem
        2) openssl rsa -in privatekey.pem -pubout -out publickey.pem
        3) openssl pkcs12 -export -out mycertprivatekey.pfx -in mycert.pem -inkey privatekey.pem -name "my certificate"
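    What the question never shows is the C# side decrypting bytes that actually came from PHP. A minimal sketch of that hand-off, assuming the PHP side base64-encodes the output of openssl_public_encrypt before sending it across (openssl_public_encrypt uses PKCS#1 v1.5 padding by default, hence the false OAEP argument here):

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        static class PhpRsaBridge
        {
            // base64Cipher: base64_encode($encrypted) produced by the PHP snippet above.
            public static string DecryptFromPhp(string base64Cipher)
            {
                // Private key exported as mycertprivatekey.pfx in the question.
                var cert = new X509Certificate2(@"C:\mycertprivatekey.pfx");
                var rsa = (RSACryptoServiceProvider)cert.PrivateKey;

                byte[] cipher = Convert.FromBase64String(base64Cipher);

                // false = PKCS#1 v1.5 padding, matching openssl_public_encrypt's default.
                byte[] plain = rsa.Decrypt(cipher, false);
                return Encoding.UTF8.GetString(plain);
            }
        }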

    Read the article

  • WPF - How to properly reference a class from XAML

    - by Andy T
    OK, this is a super super noob question, one that I'm almost embarrassed to ask... I want to reference a class in my XAML file. It's a DataTemplateSelector for selecting the right edit template for a DataGrid column. Anyway, I've written the class into my code-behind and added the local namespace to the top of the XAML, but when I try to reference the class from the XAML, it tells me the class does not exist in the local namespace. I must be missing something really really simple, but I just can't understand it... Here's my code.

    XAML:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:tk="http://schemas.microsoft.com/wpf/2008/toolkit"
                xmlns:local="clr-namespace:CustomFields"
                xmlns:col="clr-namespace:System.Collections;assembly=mscorlib"
                xmlns:sys="clr-namespace:System;assembly=mscorlib"
                x:Class="CustomFields.MainWindow"
                x:Name="Window"
                Title="Define Custom Fields"
                Width="425" Height="400" MinWidth="425" MinHeight="400">
            <Window.Resources>
                <ResourceDictionary>
                    <local:RangeValuesEditTemplateSelector>
                        blah blah blah...
                    </local:RangeValuesEditTemplateSelector>
                </ResourceDictionary>
            </Window.Resources>

    C#:

        namespace CustomFields
        {
            /// <summary>
            /// Interaction logic for MainWindow.xaml
            /// </summary>
            public partial class MainWindow : Window
            {
                public MainWindow()
                {
                    this.InitializeComponent();
                    // Insert code required on object creation below this point.
                }
            }

            public class RangeValuesEditTemplateSelector : DataTemplateSelector
            {
                public RangeValuesEditTemplateSelector()
                {
                    MessageBox.Show("hello");
                }
            }
        }

    Any ideas what I'm doing wrong? I thought this should be as simple as 1-2-3... Thanks! AT
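    For context, a selector hosted in Window.Resources normally carries an x:Key and overrides SelectTemplate, and the XAML editor often keeps reporting "does not exist in namespace" until the project has actually been rebuilt so the compiled type is visible to the designer. The fragment below is only a sketch of the usual selector shape; the template properties and the RangeField type are invented for illustration, not taken from the question.

        using System.Windows;
        using System.Windows.Controls;

        public class RangeField { /* hypothetical stand-in for the column's data item */ }

        public class RangeValuesEditTemplateSelector : DataTemplateSelector
        {
            // Templates would be assigned from XAML, e.g.
            // <local:RangeValuesEditTemplateSelector x:Key="rangeSelector" RangeTemplate="{StaticResource ...}" />
            public DataTemplate RangeTemplate { get; set; }
            public DataTemplate DefaultTemplate { get; set; }

            public override DataTemplate SelectTemplate(object item, DependencyObject container)
            {
                // Pick a template based on the item the column is editing.
                return item is RangeField ? RangeTemplate : DefaultTemplate;
            }
        }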

    Read the article

  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system whereby some contain source-control check-in comments and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        -- Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but could be a disaster with others, e.g. JavaScript files. So really, my query is: what was the motivation and concept behind providing this functionality in the first instance? Does anyone actually find these comments useful? Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it with PVCS, VSS and Subversion (Subversion Keyword Substitution), however I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.

    Read the article

  • boost::binding that which is already bound

    - by PaulH
    I have a Visual Studio 2008 C++ application that does something like this:

        template< typename Fcn >
        inline void Bar( Fcn fcn ) // line 84
        {
            fcn();
        };

        template< typename Fcn >
        inline void Foo( Fcn fcn )
        {
            // this works fine
            Bar( fcn );

            // this fails to compile
            boost::bind( Bar, fcn )();
        };

        void main()
        {
            SYSTEM_POWER_STATUS_EX status = { 0 };
            Foo( boost::bind( ::GetSystemPowerStatusEx, &status, true ) ); // line 160
        }

    *The call to GetSystemPowerStatusEx() is just for demonstration. Insert your favorite call there and the behavior is the same.

    When I go to compile this, I get 84 errors. I won't post them all unless asked, but they start with this:

        1>.\MyApp.cpp(99) : error C2896: 'boost::_bi::bind_t<_bi::dm_result<MT::* ,A1>::type,boost::_mfi::dm<M,T>,_bi::list_av_1<A1>::type> boost::bind(M T::* ,A1)' : cannot use function template 'void Bar(Fcn)' as a function argument
        1>    .\MyApp.cpp(84) : see declaration of 'Bar'
        1>    .\MyApp.cpp(160) : see reference to function template instantiation 'void Foo<boost::_bi::bind_t<R,F,L>>(Fcn)' being compiled
        1>    with
        1>    [
        1>        R=BOOL,
        1>        F=BOOL (__cdecl *)(PSYSTEM_POWER_STATUS_EX,BOOL),
        1>        L=boost::_bi::list2<boost::_bi::value<_SYSTEM_POWER_STATUS_EX *>,boost::_bi::value<bool>>,
        1>        Fcn=boost::_bi::bind_t<BOOL,BOOL (__cdecl *)(PSYSTEM_POWER_STATUS_EX,BOOL),boost::_bi::list2<boost::_bi::value<_SYSTEM_POWER_STATUS_EX *>,boost::_bi::value<bool>>>
        1>    ]

    If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH

    Read the article

  • How to import data to SAP

    - by Mehmet AVSAR
    Hi, As a complete stranger in the town of SAP, I want to transfer my own application's (mobile salesforce automation) data to SAP. My application has records of customers, stocks, inventory, invoices (and waybills), cheques, payments, collections, stock transfer data, etc. I have an additional database which holds matchings of records, i.e. a customer with ID 345 in my application has key 120-035-0223 in SAP. Every record, for sure, has to know its counterpart, including parameters. After searching Google and the SAP help site for a day, I discovered that it's going to be a bit more pain than I expected. The SAP site especially does not give even a clue on it; at least, I couldn't find one. We transferred our data to some other ERP systems, some of which wanted XML files, while others exposed their APIs. My point is: is SQL Server's SSIS an option for me? I hope it is, so I can fight on my own territory. Since client requests would vary a lot, I count flexibility as the most important criterion. Also, I want to transfer as much data as I can. Any help is appreciated. Regards,

    Read the article

  • MVC 2 Data annotations problem

    - by Louzzzzzz
    Hi. Going mad now. I have an MVC solution that I've upgraded from MVC 1 to 2. It all works fine.... except the validation! Here's some code. In the controller:

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.Linq;
        using System.Security.Principal;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Security;
        using System.Web.UI;
        using MF.Services.Authentication;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        namespace MF.Controllers
        {
            // basic viewmodel
            public class LogOnViewData
            {
                [Required]
                public string UserName { get; set; }

                [Required]
                public string Password { get; set; }
            }

            [HandleError]
            public class AccountController : Controller
            {
                [HttpPost]
                public ActionResult LogOn(LogOnViewData lvd, string returnUrl)
                {
                    if (ModelState.IsValid)
                    {
                        // do stuff - IsValid is always true
                    }
                }
            }
        }

    The ModelState is always valid. The model is being populated correctly, however. Therefore, if I leave both username and password blank and post the form, the model state is still valid. Argh! Extra info: using StructureMap for IoC. Previously, before upgrading to MVC 2, I was using the MS data annotation library, so I had this in my global.asax.cs:

        ModelBinders.Binders.DefaultBinder = new Microsoft.Web.Mvc.DataAnnotations.DataAnnotationsModelBinder();

    I have removed that now. I'm sure I'm doing something really basic and wrong. If someone could point it out that would be marvellous. Cheers
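    One way to narrow this down is to validate the view model directly with the .NET 4 DataAnnotations validator, bypassing MVC's model binder entirely; if this reports the two "required" failures while ModelState stays happy, the attributes themselves are fine and the problem sits in binder or assembly registration left over from the MVC 1 setup. A diagnostic sketch, with the class name invented:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        static class AnnotationCheck
        {
            // Returns false (and two ValidationResults) when the [Required] attributes
            // are being honoured; returns true if they are not even being seen.
            public static bool Run()
            {
                var model = new MF.Controllers.LogOnViewData();   // UserName and Password left null
                var results = new List<ValidationResult>();
                return Validator.TryValidateObject(
                    model, new ValidationContext(model, null, null), results, true);
            }
        }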

    Read the article

< Previous Page | 677 678 679 680 681 682 683 684 685 686 687 688  | Next Page >