Search Results

Search found 10285 results on 412 pages for 'cpu architecture'.


  • Extremely slow MFMailComposeViewControllerDelegate

    - by Jeff B
    I have a bit of a strange problem. I am trying to send in-app email, and I am also using Cocos2d. It works, in so far as I get the mail window and I can send mail, but it is extremely slow: it seems to only accept touches every second or so. I checked the CPU usage, and it is quite low. I paused my director, so nothing else should be happening. Any ideas? I am pulling my hair out. I looked at some examples and did the following. Made my scene the mail delegate:

        @interface MyLayer : CCLayer <MFMailComposeViewControllerDelegate> {
            ...
        }

    And implemented the following method in the scene:

        -(void) showEmailWindow: (id) sender {
            [[CCDirector sharedDirector] pause];
            MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init];
            picker.mailComposeDelegate = self;
            [picker setSubject: @"My subject here"];
            NSString *emailBody = @"<h1>Here is my email!</h1>";
            [picker setMessageBody:emailBody isHTML:YES];
            [myMail presentModalViewController:picker animated:NO];
            [picker release];
        }

    I also implemented mailComposeController:didFinishWithResult:error: to handle the callback.
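
    A minimal sketch of what the delegate side can look like, assuming the cocos2d-iphone CCDirector of that era (which exposes stopAnimation/startAnimation) and the myMail view controller from the question. A common cause of a sluggish UIKit view presented over a cocos2d scene is that pause only stops scheduled updates while the OpenGL draw loop keeps running at full frame rate, so stopping the animation before presenting the composer - and restarting it in the callback - is worth trying:

        // Before presenting the composer, stop the render loop as well as the scheduler:
        //   [[CCDirector sharedDirector] pause];
        //   [[CCDirector sharedDirector] stopAnimation];

        - (void)mailComposeController:(MFMailComposeViewController *)controller
                  didFinishWithResult:(MFMailComposeResult)result
                                error:(NSError *)error
        {
            // myMail is the view controller the composer was presented from (as in the question).
            [myMail dismissModalViewControllerAnimated:YES];

            // Resume cocos2d's render loop and scheduled updates.
            [[CCDirector sharedDirector] startAnimation];
            [[CCDirector sharedDirector] resume];
        }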

    Read the article

  • Compile MonoDevelop 4.2.3

    - by user2942643
    I need help. I'm trying to compile the MonoDevelop code, but when I run "./configure" it tells me that I need a newer version of mono installed, even though I already have one:

        [raven@localhost ~]$ mono -V
        Mono JIT compiler version 3.2.8 (tarball Fri May 30 08:15:47 CDT 2014)
        Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
            TLS:           __thread
            SIGSEGV:       altstack
            Notifications: epoll
            Architecture:  amd64
            Disabled:      none
            Misc:          softdebug
            LLVM:          supported, not enabled.
            GC:            sgen

        [raven@localhost ~]$ cd /home/raven/Downloads/monodevelop-4.2.3
        [raven@localhost monodevelop-4.2.3]$ ./configure
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
        checking for gawk... gawk
        checking whether make sets $(MAKE)... yes
        checking how to create a ustar tar archive... gnutar
        checking whether to enable maintainer-specific portions of Makefiles... no
        checking for mono... /usr/local/bin/mono
        checking for gmcs... /usr/local/bin/gmcs
        checking for pkg-config... /usr/bin/pkg-config
        configure: error: You need mono 3.0.4 or newer
        [raven@localhost monodevelop-4.2.3]$
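
    A plausible cause worth checking (an assumption, not something the output above proves): configure verifies the Mono version through pkg-config's mono.pc file rather than through the mono binary, so a Mono built from source into /usr/local can be found as an executable while its .pc file stays invisible to pkg-config. Pointing PKG_CONFIG_PATH at the install prefix before re-running configure is a quick way to test this:

        # Assumes Mono was installed under the /usr/local prefix; adjust if not.
        export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
        pkg-config --modversion mono   # should now report 3.2.8
        ./configure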

    Read the article

  • Loading a RSA private key from memory using libxmlsec

    - by ereOn
    Hello, I'm currently using libxmlsec in my C++ software and I am trying to load an RSA private key from memory. To do this, I searched through the API and found the function xmlSecCryptoAppKeyLoadMemory. It takes binary data, a size, a key format and several PEM-callback-related parameters. When I call the function, it just gets stuck, uses 100% of the CPU time and never returns. Quite annoying, because I have no way of finding out what is wrong. Here is my code:

        d_xmlsec_dsig_context->signKey = xmlSecCryptoAppKeyLoadMemory(
            reinterpret_cast<const xmlSecByte*>(data),
            static_cast<xmlSecSize>(datalen),
            xmlSecKeyDataFormatBinary,
            NULL,
            NULL,
            NULL
        );

    data is a const char* pointing to the raw bytes of my RSA key (produced with i2d_RSAPrivateKey() from OpenSSL) and datalen is the size of data. My test private key doesn't have a passphrase, so I decided not to use the callbacks for the time being. Has someone already done something similar? Do you see anything that I could change or test to make progress on this problem? I only discovered the library yesterday, so I might be missing something obvious here; I just can't see it. Thank you very much for your help.
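
    One thing that may be worth testing (a guess, not a verified fix): i2d_RSAPrivateKey() produces a DER-encoded PKCS#1 key, so asking xmlsec to parse the buffer as DER, rather than as the generic binary format, is the more natural match. A minimal variation of the call above, assuming the xmlSecKeyDataFormatDer constant from xmlsec1's key-data format enum:

        // Same call, but telling xmlsec the buffer is DER-encoded.
        d_xmlsec_dsig_context->signKey = xmlSecCryptoAppKeyLoadMemory(
            reinterpret_cast<const xmlSecByte*>(data),
            static_cast<xmlSecSize>(datalen),
            xmlSecKeyDataFormatDer,   // instead of xmlSecKeyDataFormatBinary
            NULL, NULL, NULL);

    Alternatively, writing the key out with PEM_write_bio_RSAPrivateKey() on the OpenSSL side and loading it with xmlSecKeyDataFormatPem would rule the encoding out entirely.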

    Read the article

  • python / sqlite - database locked despite large timeouts

    - by Chris Phillips
    Hi, I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds:

        cx = sqlite.connect("database.sql", timeout=30.0)

    I think I can see some evidence of the timeouts, in that I occasionally get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped in the middle of my curses-formatted output screen, but never a delay that gets remotely near the 30-second timeout; despite that, one or the other script keeps crashing, again and again. I'm running RHEL 5.4 on a 64-bit, 4-CPU IBM HS21 blade, and have heard some mention of issues with multi-threading, but I'm not sure whether that is relevant here. The packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of Red Hat's official provisions is not a great option for me - possible, but not desirable due to the environment in general. I previously had autocommit=1 on in both scripts, but have since disabled it in both; I am now calling cx.commit() in the inserting script and not committing in the select script. Ultimately, as I only ever have one script actually making any modifications, I don't really see why this locking should ever happen. I have noticed that this gets significantly worse over time as the database grows larger. It was recently at 13 MB with 3 equal-sized tables, which was about 1 day's worth of data; creating a new file has significantly improved this, which seems understandable, but the timeout ultimately just doesn't seem to be being obeyed. Any pointers very much appreciated. Thanks, Chris
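
    The sketch below is only an illustration of the usual way to keep SQLite's locking out of the way (it assumes the pysqlite module imported as sqlite, as in the question, and made-up table and column names): keep the writer's transactions short and commit immediately, and have the reader fetch its full result set and then end its transaction rather than holding a cursor open, since a reader sitting inside an open read transaction holds a shared lock that can block the writer no matter how generous the timeout is.

        # Writer: one short transaction per batch of inserts.
        cx = sqlite.connect("database.sql", timeout=30.0)
        cur = cx.cursor()
        cur.executemany("INSERT INTO samples (ts, value) VALUES (?, ?)", rows)
        cx.commit()             # releases the write lock right away

        # Reader: pull everything into memory, then end the transaction.
        cx = sqlite.connect("database.sql", timeout=30.0)
        cur = cx.cursor()
        cur.execute("SELECT ts, value FROM samples WHERE ts > ?", (since,))
        data = cur.fetchall()   # don't iterate the cursor lazily while the writer runs
        cx.commit()             # or cx.rollback(); either way the shared lock is dropped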

    Read the article

  • Maintaining a Python web application: heavier vs lighter framework?

    - by Tiberiu Ana
    Five-plus years from now, you are hired to support and extend a data-centric web application written in Python that hasn't been kept up to date. Would you prefer that it had been written in the then-current version of Django/Pylons, using the available standard components, or kept minimal with something like CherryPy/web.py and a few library dependencies?

    Heavy framework. Advantages: standard approach to application design and structure, as encouraged by the framework; less application code to worry about. Disadvantages: requires learning the framework to understand how things work; broken things in an old version of the framework are difficult to fix; upgrading to a new version is potentially difficult due to changing APIs; finding relevant documentation/help is potentially difficult due to changing APIs.

    Light framework. Advantages: most application code is directly "visible"; only the needed features are implemented; the architecture should be simpler to understand; less need to upgrade external dependencies; easier to upgrade external dependencies. Disadvantages: some reinventing of the wheel; non-standard design and structure (with the associated unique issues and bugs).

    I will update the list with any helpful answers.

    Read the article

  • Error displaying a WinForm in Design mode with a custom control on it.

    - by George
    I have a UserControl that is part of a class library. I reference this project from my solution, which adds a control from the referenced project to my toolbox. I add the control to a form. Everything looks good; I compile it all and run. Perfect... But when I close the form with the control on it and re-open it, I get the error below. The code continues to run. It may have something to do with namespaces: the original namespace was simply "Design", and this was ambiguous and conflicting, so I decided to rename it. I think that's when my problems began.

        To prevent possible data loss before loading the designer, the following errors must be resolved: 2 Errors

        Could not find type 'Besi.Winforms.HtmlEditor.Editor'. Please make sure that the assembly
        that contains this type is referenced. If this type is a part of your development project,
        make sure that the project has been successfully built using settings for your current
        platform or Any CPU.

        The variable 'Editor1' is either undeclared or was never assigned.
        BesiAdmin  frmOrder.Designer.vb  Line:775  Column:1
            at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.Error(IDesignerSerializationManager manager, String exceptionText, String helpLink)
            at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeExpression(IDesignerSerializationManager manager, String name, CodeExpression expression)
            at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeExpression(IDesignerSerializationManager manager, String name, CodeExpression expression)
            at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeStatement(IDesignerSerializationManager manager, CodeStatement statement)

    Read the article

  • Cloud-aware programming and help choosing a good framework

    - by Shoaibi
    How can I write a cloud-aware application, i.e. an application that takes advantage of being deployed on a cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences? Are there any design changes? What steps do I need to take if I am to migrate an existing application to being cloud-aware? Also, I am about to implement a web application idea which would need features like security, performance and caching, and which, more importantly, needs to be free. I have been comparing some frameworks and found that Django has the lowest RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites can stop responding under a huge load of connections. Other frameworks that I have seen or know of are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails... So I would leave this to your opinion as well: suggest a good free framework based on my needs. Finally, thanks for reading the essay ;)

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer. The Data Layer would access the database. The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer. The GUI Layer would be a means of communication between the Logic Layer and the user. Now, thinking about this design, I can see that most of the business rules could be implemented in SQL CLR and stored in SQL Server: I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution.

    Pros: the business logic runs close to the data, meaning less network traffic; all data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons: scattering of the business logic (some part is here, some part is there); a questionable design solution that may run into unknown problems; difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
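
    For reference when weighing this up, the smallest unit of such a design looks roughly like the sketch below - illustration only, with made-up table and procedure names, using the standard SQL CLR hosting attribute from Microsoft.SqlServer.Server and the in-process context connection:

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public static class LineProcessing
        {
            // Deployed with CREATE ASSEMBLY and CREATE PROCEDURE ... EXTERNAL NAME.
            [SqlProcedure]
            public static void ProcessRawLines()
            {
                // "context connection=true" runs on the hosting session: no network hop,
                // which is the main "pro" listed above.
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand(
                        "SELECT Id, RawLine FROM dbo.RawImport WHERE Processed = 0", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // regex matching and the other business rules would go here
                        }
                    }
                }
            }
        }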

    Read the article

  • How do I compress a Json result from ASP.NET MVC with IIS 7.5

    - by Gareth Saul
    I'm having difficulty making IIS 7 correctly compress a Json result from ASP.NET MVC. I've enabled static and dynamic compression in IIS. I can verify with Fiddler that normal text/html and similar responses are compressed. Viewing the request, the accept-encoding gzip header is present. The response has the mimetype "application/json", but is not compressed. I've identified that the issue appears to relate to the MimeType. When I include mimeType="*/*", I can see that the response is correctly gzipped. How can I get IIS to compress WITHOUT using a wildcard mimeType? I assume that this issue has something to do with the way that ASP.NET MVC generates content type headers. The CPU usage is well below the dynamic throttling threshold. When I examine the trace logs from IIS, I can see that it fails to compress due to not finding a matching mime type.

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files" noCompressionForProxies="false">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
          <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/json" enabled="true" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/atom+xml" enabled="true" />
            <add mimeType="application/xaml+xml" enabled="true" />
            <add mimeType="application/json" enabled="true" />
          </staticTypes>
        </httpCompression>
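
    One likely explanation, worth verifying in Fiddler before changing anything (it is an assumption about this particular setup): ASP.NET MVC's JsonResult sends "Content-Type: application/json; charset=utf-8", and IIS matches the compression mimeType entries against that full string, so the bare "application/json" entry never matches. Adding the charset-qualified variant alongside it, then recycling IIS, is a narrower fix than the wildcard:

        <dynamicTypes>
          <add mimeType="text/*" enabled="true" />
          <add mimeType="message/*" enabled="true" />
          <add mimeType="application/x-javascript" enabled="true" />
          <add mimeType="application/json" enabled="true" />
          <!-- assumption: MVC emits "application/json; charset=utf-8", so list that exact string too -->
          <add mimeType="application/json; charset=utf-8" enabled="true" />
        </dynamicTypes>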

    Read the article

  • IIS, Web services, Time out error

    - by Eduard
    Hello, we have a problem with an ASP.NET web application that uses the web services of another system. I'll describe our architecture: we have a web application and a Windows service that use the same web services. The Windows service runs all the time and sends information to these web services once an hour. The web application is designed to let users send the same information manually. The problem is that when a user tries to send information manually from the web application, .NET sometimes throws the exception "The operation has timed out". At the same time, the Windows service successfully sends all the necessary information to these web services. The IT staff that supports these web services asserts that there was no request at all from our web application at that time. After we restart IIS (iisreset), everything starts to work fine again. This situation repeats all the time. There is no anti-virus or firewall on the server. My suspicion is that there is something wrong with IIS - patches, configuration or whatever. The only specific thing is that some requests can last 2 minutes (the web service response wait time). We have tried to reproduce this situation on our local test servers, but everything works fine there. OS: Windows Server 2003 R2. .NET: 3.5.
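
    One well-known cause that fits these symptoms - client-side timeouts on requests that, according to the remote side, never arrived, while slow two-minute calls are in flight - is the outbound connection limit in System.Net: by default an ASP.NET process opens only a couple of concurrent HTTP connections per remote host, so long-running calls can exhaust them and later calls time out while queued locally. This is a hypothesis to check rather than a diagnosis, but raising the limit in web.config is cheap to try:

        <!-- web.config of the web application; assumes the remote web services are called over HTTP via System.Net -->
        <system.net>
          <connectionManagement>
            <!-- the default is 2 connections per host for a server process -->
            <add address="*" maxconnection="24" />
          </connectionManagement>
        </system.net>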

    Read the article

  • When long polling, why are my other requests taking so long?

    - by Pascal
    The client makes 2 concurrent requests: one which takes 60 seconds (long polling) and another which is NOT long polling and is supposed to return right away. It does return right away when I'm not doing long polling, but as soon as I start long polling with the other request, the other one takes forever to execute. Firebug shows that the request is waiting for 10-50 seconds. On the server, I profiled ALL requests from the moment the PHP script starts to the time it returns to the client, and it shows that each one only took 300 ms or less. This problem started at about the same time I started doing long polling (with the other XHR requests). I'm using jQuery for both requests. The server shows that it is under very light load: CPU and memory are less than 2%, and 8 processes are running out of a pool of 15 (it doesn't seem to deviate much from that number 8, even when I run more AJAX requests). I guess each process can run multiple AJAX threads concurrently. I made sure to EXIT from all processes as soon as they're done executing. I don't see how the process pool could have run out if there are still 7 unused processes listed under prstat -J. Also, the problem happens somewhat intermittently. Firefox should be able to handle 2 concurrent AJAX requests. I don't get what the problem is.
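
    If both requests belong to the same PHP session (an assumption - the question doesn't say), the usual culprit is PHP's file-based session handler, which holds an exclusive lock on the session file for the whole request; the long-polling request then blocks every other request from that browser until it finishes. The standard workaround is to release the lock in the long-polling script as soon as the session has been read, for example:

        <?php
        session_start();                  // read whatever session state the poll needs
        $userId = $_SESSION['user_id'];   // hypothetical value used below

        session_write_close();            // release the session lock before the long wait

        // ... long-polling wait loop using $userId; the quick request is no longer blocked ...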

    Read the article

  • Better way of looping to detect changes

    - by Dremation
    As of now I'm using a while(true) loop to detect changes in memory. The problem with this is that it kills the application's performance. I have a list of 30 pointers that need to be checked for changes as rapidly as possible, without incurring a huge performance loss. Does anyone have ideas on this?

        memScan = new Thread(ScanMem);

        public static void ScanMem()
        {
            int i = addy.Length;
            while (true)
            {
                Thread.Sleep(30000); // I do this to cut down on CPU usage
                for (int j = 0; j < i; j++)
                {
                    string[] values = addy[j].Split(new char[] { Convert.ToChar(",") });
                    //MessageBox.Show(values[2]);
                    try
                    {
                        if (Memory.Scanner.getIntFromMem(hwnd, (IntPtr)Convert.ToInt32(values[0], 16), 32).ToString() != values[1].ToString())
                        {
                            // Ok, it changed, let's do our work
                            if (Globals.Working) return;
                            SomeFunction("Results: " + values[2].ToString(), "Memory");
                            Globals.Working = true;
                        }
                    }
                    catch { }
                }
            }
        }

    Read the article

  • Data sharing amongst JPA Entities

    - by Nick
    Setup: I have a simple web app that has a handful of forms, each on a separate page. These forms represent patient data. There is a one-to-one relationship between the patient and all of these forms/entities. Each form maps directly to a db table and a JPA entity - maybe not the best architecture, but it works and is simple. Question: if form/entity A and form/entity B share a common chunk of data (one or more fields), what is the best way to handle that in JPA? I.e. if the data gets inserted via form A, I need it to show up in form B as existing data, and vice versa. In other words, it is logical for both entities to contain that data. I believe I will have to move the common data into its own entity and define the relationships that way, but I have tried many different ways and none gets me all the way there, at least with basic JPA. Can this be done through pure JPA relationships, or will I have to write a bunch of code to make this happen manually? Not looking for code specifically, just the correct way to model this data. Thanks.
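
    Although the question asks for the modelling approach rather than code, the shape being described is easier to see in annotations. A minimal sketch - entity and field names are invented for illustration - of pulling the shared chunk into its own entity that both forms reference:

        import javax.persistence.*;

        @Entity
        public class SharedVitals {          // the common chunk of data (its own table)
            @Id @GeneratedValue
            private Long id;
            private String bloodType;        // example of a shared field
        }

        @Entity
        public class FormA {                 // each entity in its own source file
            @Id @GeneratedValue
            private Long id;

            // Both forms point at the same SharedVitals row, so data entered
            // through either form is visible through the other.
            @OneToOne(cascade = CascadeType.ALL)
            @JoinColumn(name = "shared_vitals_id")
            private SharedVitals vitals;
        }

        @Entity
        public class FormB {
            @Id @GeneratedValue
            private Long id;

            @OneToOne(cascade = CascadeType.ALL)
            @JoinColumn(name = "shared_vitals_id")
            private SharedVitals vitals;
        }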

    Read the article

  • Repackaging Jasper-Reports into an application specific OSGi bundle, legal or not?

    - by Chris
    Hi, I wanted to ask a (probably silly) question regarding the packaging of existing open-source components as OSGi bundles (more specifically Jasper Reports). I have an application that I am converting from a monolithic jar-hell type architecture to something more modular, and OSGi is my weapon of choice. There are various modules I have in mind, one of which is a reporting module. My own reporting module will be a jar file containing my code that should reference a Jasper Reports bundle. Trouble is, Jasper Reports depends on far, far too many libraries and is quite monolithic in its own right. I therefore wish to build my own Jasper Reports bundle, but this is where I start getting confused about the legality of repackaging. I don't plan to re-compile, but I do plan to re-bundle, removing known items that I do not require. Can anyone offer advice on whether I am permitted to repackage (not recompile or extend) open-source libraries into OSGi bundles without falling foul of the 'derivative works' clause of the LGPL? I noticed that Groovy seems to offer some monolithic jars that include all dependencies, and it actually goes so far as to re-arrange the packages of its dependencies so that there are no namespace conflicts. This seems to me to be a violation of the license, but if anyone can reassure me that this is legal then I would feel safer about my less intrusive custom bundling of Jasper Reports. Thanks for your time, Chris

    Read the article

  • Place the business logic in Java Beans?

    - by Lirik
    I was reading this page and I found the following statement:

        MVC in Java Server Pages
        Now that we have a convenient architecture to separate the view, how can we leverage that? Java Server Pages (JSP) becomes more interesting because the HTML content can be separated from the Java business objects. JSP can also make use of Java Beans. The business logic could be placed inside Java Beans. If the design is architected correctly, a Web Designer could work with HTML on the JSP site without interfering with the Java developer.

    Interestingly, my textbook offers the following quote:

        In the MVC architecture... the original request is always handled by a servlet. The servlet invokes the business logic and data access code and creates beans to represent the results (that's the model). Then, the servlet decides which Java Server Page is appropriate to present those particular results and forwards the request there (the JSP is the view). The servlet decides what business logic code applies and which JSP should present the results (the servlet is the controller).

    The two statements seem slightly contradictory. What is the best way to use beans: should we place business logic in them, or should we only place results in them? Are there ways in which beans are inadequate for representing a model?
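
    The textbook's arrangement is easier to compare with the first quote when written out. A compact sketch, with class, path and parameter names invented for illustration: the servlet is the controller, the bean only carries results, and the JSP is the view.

        // The model: a bean that holds results, not business logic.
        public class ReportBean implements java.io.Serializable {
            private double total;
            public double getTotal() { return total; }
            public void setTotal(double total) { this.total = total; }
        }

        // The controller: invokes the business logic, fills the bean, forwards to the view.
        public class ReportServlet extends javax.servlet.http.HttpServlet {
            protected void doGet(javax.servlet.http.HttpServletRequest req,
                                 javax.servlet.http.HttpServletResponse resp)
                    throws javax.servlet.ServletException, java.io.IOException {
                ReportBean bean = new ReportBean();
                bean.setTotal(computeTotal(req.getParameter("month")));  // business logic lives here, or in a service this calls
                req.setAttribute("report", bean);
                req.getRequestDispatcher("/WEB-INF/report.jsp").forward(req, resp);  // the JSP is the view
            }

            private double computeTotal(String month) { return 0.0; /* placeholder */ }
        }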

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when that happens. One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer is not dependent upon System.Web (or any other implementation/environment dependency), as it will be consumed in different environments - nor do we want our websites communicating directly with databases. I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool. Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think would better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac
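
    A minimal sketch of the "DomainMembershipProvider"-style split described above, with invented names: the domain model sees only an interface, and the System.Web.Security call sites are confined to an infrastructure assembly that is allowed to take that dependency.

        // Domain layer: no reference to System.Web anywhere.
        public interface IMembershipRepository
        {
            bool ValidateCredentials(string userName, string password);
            DomainUser FindByUserName(string userName);
        }

        public class DomainUser
        {
            public string UserName { get; set; }
            public string Email { get; set; }
        }

        // Infrastructure layer: wraps the existing ASP.NET Membership Provider.
        public class MembershipProviderRepository : IMembershipRepository
        {
            public bool ValidateCredentials(string userName, string password)
            {
                return System.Web.Security.Membership.ValidateUser(userName, password);
            }

            public DomainUser FindByUserName(string userName)
            {
                var user = System.Web.Security.Membership.GetUser(userName);
                return user == null
                    ? null
                    : new DomainUser { UserName = user.UserName, Email = user.Email };
            }
        }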

    Read the article

  • Modularity in Flex

    - by Fernando
    I'm working on a pretty big application for Flex/AIR. We are using GraniteDS and Tide to interact with the model on our Java EE server. I've been reading about modularization and Modules in Flex. The application has already been built, and I'm trying to figure out a way to re-design some classes and parts. From what I've read so far, I understand a Module is a separate swf which can be loaded dynamically. Most of the tutorials/documentation are oriented towards Flash "programmers" who are using Flex or AIR, rather than towards real developers, so that makes online resources harder to use. What I can't understand - yet - is how to encapsulate ActionScript classes or MXML views under such a module. I've separated some of the code into libraries; for example, the generated code from Granite is in a "server" library. But I would like to separate parts of the logic with its Moderators, Controllers and Views. Are modules the way to go? Is there a "modules for dummies" or "Head First Flex Modules for programmers"-style tutorial that would give me a better perspective from which to build my architecture? When should I choose libraries and when should I choose modules? I'm using Flex 3.5, and a migration to Flex 4 is far in the future, so no Flex 4 answers please, thanks!

    Read the article

  • Infinite loop in regex in java

    - by carpediem
    Hello, my purpose is to match these kinds of URLs: url.com, my.url.com, my.extended.url.com, a.super.extended.url.com, and so on. So I decided to build the regex to require a letter or a number at the start and end of the URL, and to allow an unbounded number of "subdomains" made of alphanumeric characters followed by a dot. For example, in "my.extended.url.com", the "m" of "my" matches the first class of the regex, the "m" of "com" matches the last class, and "y.", "extended." and "url." match the middle group. Using the pattern and subject in the code below, I expect the find method to return false because this URL must not match, but instead it uses 100% of the CPU and seems to be stuck in an infinite loop.

        String subject = "www.association-belgo-palestinienne-be";
        Pattern pattern = Pattern.compile("^[A-Za-z0-9]\\.?([A-Za-z0-9_-]+\\.?)*[A-Za-z0-9]\\.[A-Za-z]{2,6}");
        Matcher m = pattern.matcher(subject);
        System.out.println(" Start");
        boolean hasFind = m.find();
        System.out.println(" Finish : " + hasFind);

    This only prints "Start". I can't reproduce the problem using regex testers. Is this normal? Is the problem coming from my regex? Could it be due to my Java version (1.6.0_22-b04 / JVM 64-bit 17.1-b03)? Thanks in advance for helping.
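
    The hang is catastrophic backtracking rather than a JVM problem: in ([A-Za-z0-9_-]+\.?)* the dot is optional, so the long dot-free run "association-belgo-palestinienne-be" can be carved into adjacent [A-Za-z0-9_-]+ pieces in an exponential number of ways, and the engine tries them all at each starting position before giving up; online regex testers usually cap the step count, which is why it doesn't reproduce there. A sketch of one way to remove the ambiguity (an illustration, not the only possible fix): make the dot inside the repeated group mandatory, so a run of label characters can only be split at real dots, and anchor the end.

        String subject = "www.association-belgo-palestinienne-be";
        Pattern safe = Pattern.compile("^[A-Za-z0-9][A-Za-z0-9_-]*(\\.[A-Za-z0-9_-]+)*\\.[A-Za-z]{2,6}$");
        System.out.println(safe.matcher(subject).find());                    // false, returns immediately
        System.out.println(safe.matcher("a.super.extended.url.com").find()); // true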

    Read the article

  • How do people know so much about programming?

    - by Luciano
    I see people in these forums with a lot of points, so I assume they know about a lot of different programming topics. When I was young I knew BASIC (Commodore) and Turbo Pascal (PC). Then in college I learnt about C, memory management, the x86 instruction set, loop invariants, graphs, db query optimization, oop, functional programming, lambda calculus, Prolog, concurrency, polymorphism, Newton's method, simplex, backtracking, dynamic programming, heuristics, NP-completeness, LR, LALR, neural networks, static & dynamic typing, Turing, Gödel, and more in between. Then in industry I started with Java several years ago and learnt about it and its variety of frameworks, and also design patterns, architecture patterns, web development, server development, mobile development, tdd, bdd, uml, use cases, bug trackers, process management, people management if you are a tech lead, profiling, security concerns, etc. I have started to forget what I learnt in college... And then there is the stuff I don't know yet, like Python, .NET, Perl, and JVM stuff like Groovy or Scala. Of course Google is a must for rapid documentation access, for finding out whether a problem has been solved already and how, and for keeping informed about new stuff through blogs and places like this one. It's just too much, or I just have a bad memory... how do you guys manage it?

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.
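
    For what it's worth, the "off-peak batch" idea mentioned above usually takes the shape of a summary table that is rebuilt (or incrementally refreshed) overnight, so the daytime queries read pre-aggregated rows instead of grouping and averaging 90-million-row tables per request. A rough illustration in generic SQL, with invented table and column names:

        -- Nightly job: rebuild the pre-aggregated rollup once, off-peak.
        DELETE FROM txn_daily_summary;

        INSERT INTO txn_daily_summary (account_id, txn_date, txn_count, total_amount, avg_amount)
        SELECT account_id,
               CAST(txn_ts AS DATE),
               COUNT(*),
               SUM(amount),
               AVG(amount)
        FROM   transactions
        GROUP  BY account_id, CAST(txn_ts AS DATE);

        -- Daytime reporting queries join against txn_daily_summary,
        -- touching thousands of rows instead of tens of millions.

    DB2's materialized query tables (and their counterparts, Oracle's materialized views and SQL Server's indexed views) give much the same effect with the refresh managed by the database itself.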

    Read the article

  • refactor LINQ TO SQL custom properties that instantiate datacontext

    - by Thiago Silva
    I am working on an existing ASP.NET MVC app that started small and has grown over time to the point of needing a good re-architecture and refactoring. One thing that I am struggling with is that we've got partial classes for the L2S entities so we could add some extra properties, but these properties create a new data context and query the DB for a subset of data. This is roughly equivalent to doing the following in SQL, which is not a very good way to write this query as opposed to using joins:

        SELECT tbl1.stuff,
               (SELECT nestedValue FROM tbl2 WHERE tbl2.Foo = tbl1.Bar),
               tbl1.moreStuff
        FROM tbl1

    So, in short, here's what we've got in some of our partial entity classes:

        public partial class Ticket
        {
            public StatusUpdate LastStatusUpdate
            {
                get
                {
                    // this static method call returns a new DataContext but needs to be refactored
                    var ctx = OurDataContext.GetContext();
                    var su = Compiled_Query_GetLastUpdate(ctx, this.TicketId);
                    return su;
                }
            }
        }

    We've got some functions that create a compiled query, but the issue is that we also have some DataLoadOptions defined in the DataContext, and because we instantiate a new data context when getting these nested properties, we get the exception "Compiled queries across DataContexts with different LoadOptions not supported". The first DataContext comes from a DataContextFactory that we implemented during the refactoring, but this second one is just hanging off the entity property getter. We're implementing the Repository pattern in the refactoring process, so we must stop doing stuff like the above. Does anyone know of a good way to address this issue?
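
    One direction that fits the repository refactoring already under way - a sketch with invented member names, not a drop-in fix - is to stop the entity from owning a DataContext at all: delete the property and let a repository, which receives the one factory-created context (and therefore one consistent set of DataLoadOptions), answer the question instead.

        using System;
        using System.Data.Linq;
        using System.Linq;

        public class TicketRepository
        {
            private readonly OurDataContext _ctx;   // supplied by the existing DataContextFactory

            public TicketRepository(OurDataContext ctx)
            {
                _ctx = ctx;
            }

            // The compiled query is only ever executed against contexts built by the factory,
            // all sharing the same LoadOptions, which avoids the exception quoted above.
            // StatusUpdates, TicketId and CreatedOn are invented names for illustration.
            private static readonly Func<OurDataContext, int, StatusUpdate> LastUpdateQuery =
                CompiledQuery.Compile((OurDataContext ctx, int ticketId) =>
                    ctx.StatusUpdates
                       .Where(su => su.TicketId == ticketId)
                       .OrderByDescending(su => su.CreatedOn)
                       .FirstOrDefault());

            public StatusUpdate GetLastStatusUpdate(int ticketId)
            {
                return LastUpdateQuery(_ctx, ticketId);
            }
        }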

    Read the article

  • Unit tests for deep cloning

    - by Will Dean
    Let's say I have a complex .NET class, with lots of arrays and other class object members. I need to be able to generate a deep clone of this object - so I write a Clone() method, and implement it with a simple BinaryFormatter serialize/deserialize - or perhaps I do the deep clone using some other technique which is more error prone and I'd like to make sure is tested. OK, so now (ok, I should have done it first) I'd like write tests which cover the cloning. All the members of the class are private, and my architecture is so good (!) that I haven't needed to write hundreds of public properties or other accessors. The class isn't IComparable or IEquatable, because that's not needed by the application. My unit tests are in a separate assembly to the production code. What approaches do people take to testing that the cloned object is a good copy? Do you write (or rewrite once you discover the need for the clone) all your unit tests for the class so that they can be invoked with either a 'virgin' object or with a clone of it? How would you test if part of the cloning wasn't deep enough - as this is just the kind of problem which can give hideous-to-find bugs later?
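
    One approach that works without adding public accessors - a sketch only, assuming the class is [Serializable] (which the BinaryFormatter-based Clone() already requires) and using invented helper names - is to compare serialized snapshots: snapshot the original, clone it, mutate the clone through whatever public surface exists, and assert that the original's snapshot is unchanged. That is exactly the "not deep enough" failure mode made visible.

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;
        using NUnit.Framework;   // assumption: NUnit, but any test framework works the same way

        [TestFixture]
        public class CloneTests
        {
            private static byte[] Snapshot(object graph)
            {
                using (var ms = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(ms, graph);
                    return ms.ToArray();
                }
            }

            [Test]
            public void Mutating_the_clone_does_not_touch_the_original()
            {
                var original = ComplexThing.CreateFullyPopulated();   // hypothetical builder for a fully populated graph
                byte[] before = Snapshot(original);

                var clone = original.Clone();
                clone.MutateEverythingReachable();                    // hypothetical helper that dirties nested state

                Assert.That(Snapshot(original), Is.EqualTo(before),
                    "Clone still shares state with the original somewhere in the object graph.");
            }
        }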

    Read the article

  • SSL with private key on an HSM

    - by Jason
    I have a client-server architecture in my application that uses SSL. Currently, the private key is stored in CAPI's key store location. For security reasons, I'd like to store the key in a safer place, ideally a hardware security module (HSM) that is built for this purpose. Unfortunately, with the private key stored on such a device, I can't figure out how to use it in my application. On the server, I am simply using the SslStream class and the AuthenticateAsServer(...) call. This method takes an X509Certificate object that has its private key loaded, but since the private key is stored in a secure (e.g. non-exportable) location on the HSM, I don't know how to do this. On the client, I am using an HttpWebRequest object and then using the ClientCertificates property to add my client authentication certificate, but I have the same problem here: how do I get the private key? I know there are some HSMs that act as SSL accelerators, but I don't really need an accelerator. Also, these products tend to have special integration with web servers such as IIS and Apache, which I'm not using. Any ideas? The only thing I can think of would be to write my own SSL library that would allow me to hand off the signing portion of the transaction to the HSM, but this seems like a huge amount of work.
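
    A minimal sketch of the usual pattern on Windows, assuming the HSM vendor supplies a CAPI/CNG provider and the certificate in the machine store is associated with the HSM-held key: X509Certificate2 then only carries a handle to the key, SslStream delegates the handshake's signing operation to the provider, and the key material never has to be exportable.

        using System.Net.Security;
        using System.Net.Sockets;
        using System.Security.Cryptography.X509Certificates;

        static X509Certificate2 FindServerCert(string thumbprint)
        {
            var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadOnly);
            try
            {
                // Certificate whose private key lives on the HSM-backed provider.
                var matches = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
                return matches[0];   // a real implementation would check matches.Count first
            }
            finally
            {
                store.Close();
            }
        }

        static void Serve(TcpClient client, X509Certificate2 serverCert)
        {
            using (var ssl = new SslStream(client.GetStream(), false))
            {
                // Signing happens inside the key's provider; nothing is exported.
                ssl.AuthenticateAsServer(serverCert,
                    false,                                              // no client certificate required
                    System.Security.Authentication.SslProtocols.Tls,
                    false);                                             // no revocation check in this sketch
                // ... read/write over ssl ...
            }
        }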

    Read the article

  • Figuring out the performance limitation of an ADC on a PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a microcontroller like PIC for an analog-to-digital application. This would be preferable to using external A/D chips. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right - would appreciate a check! Here's the simplest example: PIC10F220 is the simplest possible PIC with an ADC. Runs at a clock speed of 8 MHz. Has an instruction cycle of 0.5 us (4 clock steps per instruction). So:

    Taking Tacq = 6.06 us (acquisition time for ADC, assume chip temp. = 50*C) [datasheet p34]
    Taking Fosc = 8 MHz (? clock speed)
    Taking divisor = 4 (4 clock steps per CPU instruction)
    This gives TAD = 0.5 us (TAD = 1/(Fosc/divisor))
    Conversion time is 13*TAD [datasheet p31]
    This gives conversion time 6.5 us
    ADC duration is then 12.56 us (? Tacq + 13*TAD)
    Assuming at least 2 instructions for load/store: this is another 1 us (0.5 us per instruction)
    Which would give a max sampling rate of 73.7 ksps (1/13.56 us)
    Supposing 8 more instructions for real-time processing: this is another 4 us
    Thus, total ADC/handling time = 17.56 us (12.56 us + 1 us + 4 us)
    So the expected upper sampling rate is 56.9 ksps.
    Nyquist frequency for this sampling rate is therefore about 28 kHz.

    If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the data sheet? Any pointers would be much appreciated! AKE
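
    The arithmetic itself reproduces consistently; a few lines of Python (used purely as a calculator, with the datasheet figures exactly as quoted above) confirm the intermediate values:

        # Parameters as quoted from the PIC10F220 datasheet in the question.
        t_acq   = 6.06e-6           # ADC acquisition time at 50 C, in seconds
        f_osc   = 8e6               # oscillator frequency, Hz
        divisor = 4                 # clock steps per instruction
        t_instr = divisor / f_osc   # 0.5 us instruction cycle

        t_ad        = 1 / (f_osc / divisor)   # TAD = 1/(Fosc/divisor) = 0.5 us
        t_convert   = 13 * t_ad               # 6.5 us
        t_adc_total = t_acq + t_convert       # 12.56 us
        t_loadstore = 2 * t_instr             # 1.0 us
        t_process   = 8 * t_instr             # 4.0 us

        t_total = t_adc_total + t_loadstore + t_process   # 17.56 us
        print(1 / (t_adc_total + t_loadstore))            # ~73.7 ksps
        print(1 / t_total)                                # ~56.9 ksps
        print(1 / t_total / 2)                            # Nyquist, ~28.5 kHz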

    Read the article

  • Are SharePoint site templates really less performant than site definitions?

    - by Jim
    So, it seems in the SharePoint blogosphere that everybody just copies and pastes the same bullet points from other blogs. One bullet point I've seen is that SharePoint site templates are less performant than site definitions because site definitions are stored on the file system. Is that true? It seems odd that site templates would be less performant. It's my understanding that all site content lives in a database, whether you use a site template or a site definition. A site template is applied once to the database, and from then on the site should not care whether the content was created using a site template or not. So, does anybody have an architectural reason why a site template would be less performant than a site definition?

    Edit: links to the blogs that say there is a performance difference:

    From MSDN: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance."
    From DevX: "However, user templates in SharePoint can lead to performance problems and may not be the best approach if you're trying to create a set of reusable templates for an entire organization."
    From IT Footprint: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance. Templates in the database are compiled and executed every time a page is rendered."
    From Branding SharePoint: "Custom site definitions hold the following advantages over custom templates: Data is stored directly on the Web servers, so performance is typically better."

    At a minimum, I think the above articles are incomplete, and I think several are misleading based on what I know of SharePoint's architecture. I read another blog post that argued against the performance differences, but I can't find the link.

    Read the article
