Search Results

Search found 3307 results on 133 pages for 'distributed datasource'.


  • System.Web.Caching vs. Enterprise Library Caching Block

    - by ESV
    For a .NET component that will be used in both web applications and rich client applications, there seem to be two obvious options for caching: System.Web.Caching or the Enterprise Library Caching Block. What do you use? Why?

    System.Web.Caching: is this safe to use outside of web apps? I've seen mixed information, but I think the answer is maybe-kind-of-not-really:

      - a KB article warns against non-web-app use under 1.0 and 1.1
      - the 2.0 page has a comment that indicates it's OK: http://msdn.microsoft.com/en-us/library/system.web.caching.cache(VS.80).aspx
      - Scott Hanselman is creeped out by the notion
      - the 3.5 page includes a warning against such use
      - Rob Howard encouraged use outside of web apps

    I don't expect to use one of its highlights, SqlCacheDependency, but the addition of CacheItemUpdateCallback in .NET 3.5 seems like a Really Good Thing.

    Enterprise Library Caching Application Block:

      - other blocks are already in use, so the dependency already exists
      - cache persistence isn't necessary; regenerating the cache on restart is OK

    Some cache items should always be available, but refreshed periodically. For these items, getting a callback after an item has been removed is not very convenient. It looks like a client will have to just sleep and poll until the cache item is repopulated.

    Memcached for Win32 + .NET client: what are the pros and cons when you don't need a distributed cache?
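    For reference, a minimal sketch of the CacheItemUpdateCallback pattern mentioned above. The names (PriceCache, LoadPrices, "PriceList") are invented for illustration; the API calls are the standard System.Web.Caching ones from .NET 3.5. Because the callback fires before the old entry is evicted, the item can be refreshed in place instead of being polled for:

      using System;
      using System.Web;
      using System.Web.Caching;

      static class PriceCache   // hypothetical name, not from the question
      {
          public static void Seed()
          {
              HttpRuntime.Cache.Insert(
                  "PriceList",                     // cache key
                  LoadPrices(),                    // initial value
                  null,                            // no CacheDependency
                  DateTime.UtcNow.AddMinutes(10),  // absolute expiration
                  Cache.NoSlidingExpiration,
                  OnUpdate);                       // invoked *before* eviction

          }

          // CacheItemUpdateCallback: supply a replacement value so readers
          // never observe a cache miss for this key.
          static void OnUpdate(string key, CacheItemUpdateReason reason,
              out object value, out CacheDependency dependency,
              out DateTime absoluteExpiration, out TimeSpan slidingExpiration)
          {
              value = LoadPrices();
              dependency = null;
              absoluteExpiration = DateTime.UtcNow.AddMinutes(10);
              slidingExpiration = Cache.NoSlidingExpiration;
          }

          static object LoadPrices() { return new object(); /* placeholder */ }
      }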


  • nth ugly number

    - by Anil Katti
    Numbers whose only prime factors are 2, 3 or 5 are called ugly numbers. Example: 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, ... (1 can be considered as 2^0.)

    I am working on finding the nth ugly number. Note that these numbers are extremely sparsely distributed as n gets large.

    I wrote a trivial program that computes whether a given number is ugly or not. For n > 500 it became super slow. I tried using memoization, based on the observation that ugly_number * 2, ugly_number * 3 and ugly_number * 5 are all ugly. Even with that it is slow. I tried using some properties of log, since that would reduce this problem from multiplication to addition, but no luck yet. Thought of sharing this with you all. Any interesting ideas?

    Using a concept similar to the Sieve of Eratosthenes (thanks Anon):

      for (int i(2), uglyCount(0); ; i++) {
          if (i % 2 == 0) continue;
          if (i % 3 == 0) continue;
          if (i % 5 == 0) continue;
          uglyCount++;
          if (uglyCount == n - 1) break;
      }

    i is the nth ugly number. Even this is pretty slow. I am trying to find the 1500th ugly number.
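    For comparison, the usual fast approach (not from the original post) generates ugly numbers in order rather than testing candidates: every ugly number after 1 is 2, 3 or 5 times an earlier ugly number, so three merge pointers suffice. A sketch in C#:

      // Classic three-pointer dynamic programming; O(n) time, O(n) space.
      static long NthUgly(int n)
      {
          long[] ugly = new long[n];
          ugly[0] = 1;
          int i2 = 0, i3 = 0, i5 = 0;   // next index to multiply by 2, 3, 5
          for (int k = 1; k < n; k++)
          {
              long next = Math.Min(ugly[i2] * 2,
                          Math.Min(ugly[i3] * 3, ugly[i5] * 5));
              ugly[k] = next;
              // Advance every pointer that produced 'next', so duplicates
              // such as 6 = 2*3 = 3*2 are emitted only once.
              if (next == ugly[i2] * 2) i2++;
              if (next == ugly[i3] * 3) i3++;
              if (next == ugly[i5] * 5) i5++;
          }
          return ugly[n - 1];   // NthUgly(1500) returns effectively instantly
      }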


  • wcf configuration for this code

    - by user208081
    I have the following code and would like to convert a lot of it into configuration settings for WCF. As you can see, the code is using WSHttpBinding. I appreciate any help on this.

      try
      {
          // Provides a unique network address that a client uses to communicate with a service endpoint.
          EndpointAddress endpointAddress = new EndpointAddress(new Uri(FAXServiceSettings.Default.FAXReceiveServiceURL));

          // Specify the protocols, transports, and message encoders used for communication between the client and the service.
          // WSHttpBinding represents an interoperable binding that supports distributed transactions and secure, reliable sessions.
          // Specifically, SOAP message security is enabled for secure transmission of the message content.
          WSHttpBinding clientBinding = new WSHttpBinding(SecurityMode.Message);
          clientBinding.OpenTimeout = TimeSpan.FromSeconds(FAXServiceSettings.Default.FAXReceiveServiceOpenTimeoutInSeconds);
          clientBinding.SendTimeout = TimeSpan.FromSeconds(FAXServiceSettings.Default.FAXReceiveServiceOpenTimeoutInSeconds);

          // Use the ChannelFactory to enable the creation of channels to the binding and endpoint.
          using (ChannelFactory<IReceiveFAX> channelFactory = new ChannelFactory<IReceiveFAX>(clientBinding, endpointAddress))
          {
              // Creates a channel of a specified type to a specified endpoint address.
              IReceiveFAX channel = channelFactory.CreateChannel();
              if (channel != null)
              {
                  try
                  {
                      // Submit the FaxSchedule instance for routing.
                      channel.SubmitFAXForRouting(CreateNewFaxScheduleContainerInstance());

                      // Explicitly close the channel using the IClientChannel interface.
                      CloseChannel(channel as IClientChannel);
                  }
                  finally
                  {
                      // Explicitly dispose of the channel using the IDisposable interface.
                      DisposeOfChannel(channel as IDisposable);
                      channel = null;
                  }
              }

              // Close causes a CommunicationObject to gracefully transition from any state, other than the
              // Closed state, into the Closed state, allowing any unfinished work (for example, sending
              // buffered messages) to complete before returning.
              channelFactory.Close();
          }
      }
      catch
      {
          throw;
      }

    Pratik
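    For what it's worth, a sketch of the app.config equivalent; the binding and endpoint names here are invented, and the address and timeout values would carry over from FAXServiceSettings in the code above:

      <!-- Sketch only: "FaxClientBinding" and "FaxReceiveEndpoint" are assumed names. -->
      <system.serviceModel>
        <bindings>
          <wsHttpBinding>
            <binding name="FaxClientBinding"
                     openTimeout="00:00:30"
                     sendTimeout="00:00:30">
              <security mode="Message" />
            </binding>
          </wsHttpBinding>
        </bindings>
        <client>
          <endpoint name="FaxReceiveEndpoint"
                    address="http://example.com/FAXReceiveService"
                    binding="wsHttpBinding"
                    bindingConfiguration="FaxClientBinding"
                    contract="IReceiveFAX" />
        </client>
      </system.serviceModel>

    The factory can then be built from configuration, e.g. new ChannelFactory<IReceiveFAX>("FaxReceiveEndpoint"), which removes the binding and address construction from the code.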


  • Problem with Tapestry palette's arrow icons in IE8

    - by JellyHead
    I'm using Tapestry to create pages for a web app, and have been using the palette component to add/delete items to/from a group. The page looks great in Firefox (Tapestry seems biased towards Firefox), but my customers will all be using Internet Explorer (any version from 6, 7, or 8), and in IE8 the disabled arrow buttons look awful. In Firefox they are faded, using an opacity setting of 25%, but this doesn't work in IE8, and instead you get a faded image with an ugly black border around it.

    In tapestry-core's stylesheet (default.css), you have the following for a disabled arrow button:

      DIV.t-palette-controls BUTTON[disabled] IMG {
          filter: alpha(opacity = 25);
          -moz-opacity: .25;
      }

    These are clearly out of date, as -moz-opacity is no longer supported by Firefox (use opacity: .25 instead). The problem is with filter: alpha(opacity = 25);. If I remove this, the arrows look fine in IE8, but they are not faded. I got the magic instruction

      -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(opacity=25)";

    from various websites, but putting this in does not work either - the arrow icons are ugly again. The icon itself (distributed with Tapestry) just seems to be a regular PNG, but I'm not an expert on image formats, so maybe there's a problem there? Anyone else had this problem?
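    For reference, the commonly suggested cross-browser ordering is sketched below. Note that the black fringe is a known side effect of IE's alpha filter on PNGs with full alpha transparency, so this shows the standard rules rather than a guaranteed fix for the border itself:

      DIV.t-palette-controls BUTTON[disabled] IMG {
          opacity: .25;                                                        /* standards browsers */
          -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=25)";   /* IE8 standards mode */
          filter: alpha(opacity=25);                                           /* IE6/7 */
      }

    A workaround often mentioned for the fringe is to give IE an 8-bit (palette-transparency) PNG or GIF version of the icon, since the filter handles those without the black edges.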


  • SVN Workflow - Chicken Before the Egg - Before merging V1 with V2, I need code from V1 to work on V2

    - by Jake
    Hi,

    Our distributed team (3 internal devs and 3+ external devs) uses SVN to manage our codebase for a web site. We have a branch for each minor version (4.1.0, 4.1.1, 4.1.2, etc...). We have a trunk which we merge each version into when we do a release and publish to our site.

    An example of the problem we are having is this: a new feature, let's call it "Ability to Create A Project", is added to 4.1.1. Another feature which depends on the one in 4.1.1, called "Ability to Add Tasks to Projects", is scheduled to go in 4.1.2.

    So, on Monday, we say 4.1.1 is 'closed' and needs to be tested. Our remote developers usually will start working on features/tickets for 4.1.2 at this point. Throughout the week we will test 4.1.1, fix any bugs, and commit them back to 4.1.1. Then, on Friday or so, we will tag 4.1.1, merge it with trunk, and finally merge it with 4.1.2.

    But for the 4-5 days we are testing, 4.1.2 doesn't have the code from 4.1.1 that some of the new features for 4.1.2 depend on. So a dev who is adding the "Ability to Add Tasks To Projects" feature doesn't have the "Ability to Create a Project" feature to build upon, and has to do some file-copy shenanigans to be able to keep working on it.

    What could/should we do to smooth out this process?

    P.S. Apologies if this question has been asked before - I did search but couldn't find what I'm looking for.
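    One possible smoothing step, offered as an assumption rather than from the post: with Subversion 1.5+ merge tracking, 4.1.1 can be merged down into 4.1.2 repeatedly during the test week, so 4.1.2 developers always have the stabilizing code. The repository URL below is a placeholder:

      # Run in a working copy of the 4.1.2 branch. Merge tracking
      # (svn:mergeinfo) remembers which 4.1.1 revisions have already been
      # pulled in, so this is safe to repeat daily during the test week.
      svn merge http://svn.example.com/repo/branches/4.1.1 .
      svn commit -m "Sync latest 4.1.1 fixes into 4.1.2"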


  • Converting from CVS to SVN to Hg, does 'hg convert' require pointing to an SVN checkout or just a repository?

    - by Terry
    As part of migrating full CVS history to Hg, I've used cvs2svn to create an SVN repo in a local directory. Its first-level directory structure is:

      2010-04-21  09:39 AM    <DIR>          .
      2010-04-21  09:39 AM    <DIR>          ..
      2010-04-21  09:39 AM    <DIR>          locks
      2010-04-21  09:39 AM    <DIR>          hooks
      2010-04-21  09:39 AM    <DIR>          conf
      2010-04-21  09:39 AM               229 README.txt
      2010-04-21  11:45 AM    <DIR>          db
      2010-04-21  09:39 AM                 2 format
                     2 File(s)            231 bytes

    After setting up hg and the convert extension and attempting the convert, I get the following:

      C:\>hg convert file://localhost/Users/terry/Desktop/repoSVN
      assuming destination repoSVN-hg
      initializing destination repoSVN-hg repository
      file://localhost/Users/terry/Desktop/repoSVN does not look like a CVS checkout
      file://localhost/Users/terry/Desktop/repoSVN does not look like a Git repo
      file://localhost/Users/terry/Desktop/repoSVN does not look like a Subversion repo
      file://localhost/Users/terry/Desktop/repoSVN is not a local Mercurial repo
      file://localhost/Users/terry/Desktop/repoSVN does not look like a darcs repo
      file://localhost/Users/terry/Desktop/repoSVN does not look like a monotone repo
      file://localhost/Users/terry/Desktop/repoSVN does not look like a GNU Arch repo
      file://localhost/Users/terry/Desktop/repoSVN does not look like a Bazaar repo
      file://localhost/Users/terry/Desktop/repoSVN does not look like a P4 repo
      abort: file://localhost/Users/terry/Desktop/repoSVN: missing or unsupported repository

    I have TortoiseHg installed. For info, hg version reports:

      Mercurial Distributed SCM (version 1.4.3)

    This version of Mercurial seems to have some svn bindings, if library.zip in the install is to be believed. Do I need to do a checkout and point hg convert at it for this to work properly?
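    One observation, offered as an assumption rather than a confirmed diagnosis: hg convert is normally pointed at the repository directory itself (a checkout is not required), and a cascade of "does not look like..." messages like the one above is typically what convert prints when a source type cannot be loaded, which for Subversion sources usually means the Subversion Python bindings are not importable by the Python running hg. A plain local path is also worth trying:

      hg convert C:\Users\terry\Desktop\repoSVN repoSVN-hg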


  • Maven: compile aspectj project containing Java 1.6 source

    - by gmale
    What I want to do is fairly easy. Or so you would think. However, nothing is working properly.

    Requirement: using Maven, compile a Java 1.6 project with the AspectJ compiler.

    Note: our code cannot compile with javac. That is, it fails compilation if aspects are not woven in (because we have aspects that soften exceptions).

    Questions (based on the failed attempts below):

      1) How do you get Maven to run the aspectj:compile goal directly, without ever running compile:compile?
      2) How do you specify a custom compilerId that points to your own ajc compiler?

    Thanks for any and all suggestions. These are the things I've tried that have led to my problem/questions.

    Attempt 1 (fail): specify AspectJ as the compiler for the maven-compiler-plugin:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
          <compilerId>aspectj</compilerId>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>org.codehaus.plexus</groupId>
            <artifactId>plexus-compiler-aspectj</artifactId>
            <version>1.8</version>
          </dependency>
        </dependencies>
      </plugin>

    This fails with the error:

      org.codehaus.plexus.compiler.CompilerException: The source version was not recognized: 1.6

    No matter what version of the plexus compiler I use (1.8, 1.6, 1.3, etc.), this doesn't work. I actually read through the source code and found that this compiler does not like source code above Java 1.5.

    Attempt 2 (fail): use the aspectj-maven-plugin attached to the compile and test-compile goals:

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>aspectj-maven-plugin</artifactId>
        <version>1.3</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>test-compile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>

    This fails when running either

      mvn clean test-compile
      mvn clean compile

    because it attempts to execute compile:compile before running aspectj:compile. As noted above, our code doesn't compile with javac - the aspects are required. So mvn would need to skip the compile:compile goal altogether and run only aspectj:compile.

    Attempt 3 (works, but unacceptable): use the same configuration above but instead run:

      mvn clean aspectj:compile

    This works, in that it builds successfully, but it's unacceptable in that we need to be able to run the compile goal and the test-compile goal directly (m2eclipse auto-build depends on those goals). Moreover, running it this way would require that we spell out every goal we want along the way (for instance, we need resources distributed, tests run, test resources deployed, etc.).


  • Why is the TransactionScope operation not valid?

    - by Cragly
    I have a routine which uses a recursive loop to insert items into a SQL Server 2005 database. The first call, which initiates the loop, is enclosed within a transaction using TransactionScope. When I first call ProcessItem, the myItem data gets inserted into the database as expected. However, when ProcessItem is called from either ProcessItemLinks or ProcessItemComments, I get the following error:

      "The operation is not valid for the state of the transaction"

    I am running this in debug with VS 2008 on Windows 7, and have the MSDTC running to enable distributed transactions. The code below isn't my production code, but it is set out exactly the same. AddItemToDatabase is a method on a class I cannot modify; it uses a standard ExecuteNonQuery() which creates a connection, then closes and disposes it once completed. I have looked at other postings on here and the internet and still cannot resolve this issue. Any help would be much appreciated.

      using (TransactionScope processItem = new TransactionScope())
      {
          foreach (Item myItem in itemsList)
          {
              ProcessItem(myItem);
          }
          processItem.Complete();
      }

      private void ProcessItem(Item myItem)
      {
          AddItemToDatabase(myItem);
          ProcessItemLinks(myItem);
          ProcessItemComments(myItem);
      }

      private void ProcessItemLinks(Item myItem)
      {
          foreach (Item link in myItem.Links)
          {
              ProcessItem(link);
          }
      }

      private void ProcessItemComments(Item myItem)
      {
          foreach (Item comment in myItem.Comments)
          {
              ProcessItem(comment);
          }
      }

    Here is the top part of the stack trace. Unfortunately I can't show the build-up to this point, as it's company-sensitive information which I cannot disclose. Hope this is enough information.

      at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
      at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
      at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
      at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
      at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
      at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
      at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
      at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
      at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
      at System.Data.SqlClient.SqlConnection.Open()
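    One thing worth ruling out, offered as a guess rather than a diagnosis: TransactionScope defaults to a one-minute timeout, and a deep recursive insert can exceed it; once the transaction has aborted, the next connection enlistment fails with exactly this message. A sketch of raising the timeout (same usings as the quoted code, from System.Transactions):

      // Sketch: same loop as above, but with an explicit TransactionOptions.
      // Note that machine.config caps the effective timeout at maxTimeout,
      // which is 10 minutes by default.
      TransactionOptions options = new TransactionOptions
      {
          IsolationLevel = IsolationLevel.ReadCommitted,
          Timeout = TimeSpan.FromMinutes(10)
      };

      using (TransactionScope processItem =
          new TransactionScope(TransactionScopeOption.Required, options))
      {
          foreach (Item myItem in itemsList)
          {
              ProcessItem(myItem);
          }
          processItem.Complete();
      }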


  • Parsing string, with Boost Spirit 2, to fill data in user defined struct

    - by Surya
    I'm using Boost.Spirit, as distributed with Boost 1.42.0, with VS2005. My problem is this: I have a string delimited with commas, whose first 3 fields are strings and the rest are numbers, like this:

      String1,String2,String3,12.0,12.1,13.0,13.1,12.4

    My rules are like this:

      qi::rule<string::iterator, qi::skip_type> stringrule = *(char_ - ',');
      qi::rule<string::iterator, qi::skip_type> myrule =
          repeat(3)[*(char_ - ',') >> ','] >> (double_ % ',');

    I'm trying to store the data in a structure like this:

      struct MyStruct
      {
          vector<string> stringVector;
          vector<double> doubleVector;
      };

      MyStruct var;

    I've wrapped it in BOOST_FUSION_ADAPT_STRUCT to use it with Spirit:

      BOOST_FUSION_ADAPT_STRUCT(
          MyStruct,
          (vector<string>, stringVector)
          (vector<double>, doubleVector))

    My parse function parses the line and returns true, and after

      qi::phrase_parse(iterBegin, iterEnd, myrule, boost::spirit::ascii::space, var);

    I'm expecting var.stringVector and var.doubleVector to be properly filled, but that is not the case. What is going wrong?

    Thanks in advance,
    Surya


  • Non-Relational Database Design

    - by Ian Varley
    I'm interested in hearing about design strategies you have used with non-relational "nosql" databases - that is, the (mostly new) class of data stores that don't use traditional relational design or SQL (such as Hypertable, CouchDB, SimpleDB, Google App Engine datastore, Voldemort, Cassandra, SQL Data Services, etc.). They're also often referred to as "key/value stores", and at base they act like giant distributed persistent hash tables.

    Specifically, I want to learn about the differences in conceptual data design with these new databases. What's easier, what's harder, what can't be done at all? Have you come up with alternate designs that work much better in the non-relational world? Have you hit your head against anything that seems impossible? Have you bridged the gap with any design patterns, e.g. to translate from one to the other? Do you even do explicit data models at all now (e.g. in UML), or have you chucked them entirely in favor of semi-structured / document-oriented data blobs? Do you miss any of the major extra services that RDBMSes provide, like relational integrity, arbitrarily complex transaction support, triggers, etc.?

    I come from a SQL relational DB background, so normalization is in my blood. That said, I get the advantages of non-relational databases for simplicity and scaling, and my gut tells me that there has to be a richer overlap of design capabilities. What have you done?

    FYI, there have been StackOverflow discussions on similar topics here:

      - the next generation of databases
      - changing schemas to work with Google App Engine
      - choosing a document-oriented database


  • How to completely wipe a previous ClickOnce installation?

    - by Dabblernl
    I have a curious problem: my app is distributed through ClickOnce. I recently installed three new clients at a new location. They worked. After an update, however, all old clients worked fine, but the three new clients did not. As my code is swallowing an exception somewhere, I have been unable thus far to pinpoint where the error lies. When I XCopy the latest version of the app to the desktop of the three new client computers, the program works fine. So I thought uninstalling and reinstalling the program from the download location should fix the problem, but it does not!

    I can think of two explanations:

      1. The new location has some firewall/virus scanner in place that doesn't like the latest version of my app when it is run from a standard ClickOnce directory, but allows execution from the desktop.
      2. Some old settings (the app uses user-scoped and app-scoped settings) remain in effect after the uninstall. When I find and check the user.config file for the app, however, I find no incorrect settings there.

    Thus far, I have been unable to reproduce the error on any other machine. How can I solve this!?
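    For the stale-state theory, two commonly cited manual wipes, offered with the caveat that the first clears the ClickOnce store for every ClickOnce app on the machine, not just this one:

      rem Wipe the per-user ClickOnce application store (Vista/7 path shown;
      rem on XP it lives under "%USERPROFILE%\Local Settings\Apps\2.0").
      rmdir /s /q "%LOCALAPPDATA%\Apps\2.0"

      rem For online-only (non-installed) ClickOnce apps there is also:
      rundll32 dfshim.dll CleanOnlineAppCache

    User-scoped settings can survive an uninstall, which would be consistent with the second explanation.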


  • How to build n-layered web architecture with PHP?

    - by Alex
    I have a description and design of a website and I need to redesign it to allow for new requirements. The website's purpose is the offering of government contracts and bidding opportunities for different businesses. I'm dealing with a 3-tier-architecture PHP website comprising the user-interface tier (the client's web browser), the business logic layer (an Apache web server with a PHP engine in it, and a couple of applications running within the web server as well) and a database layer (a local MySQL database).

    Now, I need to redesign it to support a distributed n-tier architecture and specify how I would go about it. After long hours of research I came to this solution: the business logic should be separated into presentation and purely business logic tiers to allow for an n-layer architecture (user interface, presentation tier, business logic and data tier). I have decided to use PHP just for the presentation (since the original existing website is in PHP) and use it within the Apache web server. In the business logic I want to use J2EE implementation technology instead of implementing it in PHP (i.e. using the Zend app server and Smarty templates), because J2EE can provide many more essential container services which are needed for the business logic, its robustness, its maintainability, and the different critical business operations which will be carried out by the government's website. So, particularly, I want to use the JBoss app server with EJB business objects in it, which would provide all the business logic in Java and would interact with the database and so forth. In order to connect PHP on the web server with Java on the app server I'm going to use the PHP/Java Bridge API (or maybe Quercus or SOAP is better?). Finally, I have my data tier with MySQL, which will communicate with the business logic via JDBC. The payment system application in the app server is going to use SOAP to talk to the credit card company.

    From your professional point of view, does it sound like a good way of redesigning the original website to allow for an n-tier architecture, considering the specifics of the website and the criticality of its operations (a payment system is included in it)? Or would you personally prefer to use PHP business objects for the business logic as well, instead of J2EE? If you have any wiser recommendation or some alternative, please let me know what is right or wrong in my current solution.

    Hope to hear your professional advice very soon (I'm new to the area of web development).

    Thanks in advance


  • Which Database to choose?

    - by Sundar
    I have the following criteria:

      1. The database should be protected with a username and password. It should not be possible to copy the database file and use it elsewhere, like you can with MS Access.
      2. There will be no central database server. Each machine will run its own database server locally, and the user will initiate synchronization. The concept is inspired by distributed version control systems like Git, so it should have good replication support.
      3. Strong consistency is not needed. Users will synchronize each other's databases when they need to. In case of conflicts it should be possible to find the conflict and present it (from the application) to the user for fixing.
      4. Revisions of data, if available, would be good, e.g. the entire history of changes to an invoice.

    I explored document-oriented databases and am inclined towards one, but I don't know what to choose. The database is small; it will not reach even 1GB in the next few years (say 3 years). Please feel free to suggest any database which you think might be suitable. Any pointers are highly appreciated.

    Thanks in advance.


  • Has Twisted changed its dependencies?

    - by cdecker
    Hi all,

    I'm currently working on a Python/Twisted project which is to be distributed and tested on PlanetLab. For some reason my code was working on Friday, and now that I wanted to test a minor change it refuses to work at all:

      Traceback (most recent call last):
        File "acn_a4/src/node.py", line 6, in <module>
          from twisted.internet.protocol import DatagramProtocol
        File "/usr/lib/python2.5/site-packages/Twisted-10.0.0-py2.5-linux-i686.egg/twisted/__init__.py", line 18, in <module>
          from twisted.python import compat
        File "/usr/lib/python2.5/site-packages/Twisted-10.0.0-py2.5-linux-i686.egg/twisted/python/compat.py", line 146, in <module>
          import operator
        File "/home/cdecker/dev/acn/acn_a4/src/operator.py", line 7, in <module>
        File "/home/cdecker/acn_a4/src/node.py", line 6, in <module>
          from twisted.internet.protocol import DatagramProtocol
        File "/usr/lib/python2.5/site-packages/Twisted-10.0.0-py2.5-linux-i686.egg/twisted/internet/protocol.py", line 20, in <module>
          from twisted.python import log, failure, components
        File "/usr/lib/python2.5/site-packages/Twisted-10.0.0-py2.5-linux-i686.egg/twisted/python/log.py", line 19, in <module>
          from twisted.python import util, context, reflect
        File "/usr/lib/python2.5/site-packages/Twisted-10.0.0-py2.5-linux-i686.egg/twisted/python/util.py", line 5, in <module>
          import os, sys, hmac, errno, new, inspect, warnings
        File "/usr/lib/python2.5/inspect.py", line 32, in <module>
          from operator import attrgetter
      ImportError: cannot import name attrgetter

    And since I'm pretty new to Python I have no idea what could have caused this problem. All suggestions are welcome :-)


  • Managing common components with Fossil CVS

    - by Larry Lustig
    I'm a Fossil (and version-control configuration) novice attempting to create and manage a set of distributed Fossil repositories for a Delphi project. I have the following directory tree:

      Projects
        Some Project
        Delphi Components
          LookupListView
        Some Client
          Some Project For Client
          Some Other Project For Client
            Source Code
            Project Resources
            Project Database

    I am setting up Fossil version control in order to version and share Projects\Some Client\Some Other Project For Client\Source Code, which contains Delphi 2010 source for a database project. This project makes use of Projects\Delphi Components\LookupListView, which is a Delphi component. I need this code to be included in the versioning system for my project. I will, in theory, need to include it in other Fossil repositories in the future as well. If I create my Fossil repository at the Source Code or Some Other Project For Client level, I cannot add any code above that level to my repository.

    What is the proper way to deal with this? The two solutions that occur to me are:

      1. Creating a separate repository for LookupListView and making sure that everyone who uses a repository for a project that references it "knows" that they must also get the current version of this project as well. This seems to defeat the purpose of being able to obtain a complete, current version of the project with a single checkout. The problem is magnified because there are other common component dependencies in this project.

      2. Establishing my Fossil repository in the Projects directory, so I can check in files from various subfolders. This seems to involve an awful lot of extra path-typing when doing adds, and also imposes my directory structure (Some Client\Some Other Project For Client\Source) on the other users of the repository -- in this case, the actual client.

    Any suggestions appreciated.


  • Optimized Publish/Subscribe JMS Broker Cluster and Conflicting Posts on StackOverflow for the Answer

    - by Gene
    Hi,

    I am looking to build a publish/subscribe distributed messaging framework that can manage huge volumes of message traffic with some intelligence at the broker level. I don't know if there's a topology that describes this, but this is the model I'm going after:

    EXAMPLE MODEL A

      A) There are two running message brokers (ideally all on localhost if possible, for easier demo-ing): Broker-A and Broker-B.

      B) Each broker will have 2 listeners and 1 publisher:

        [subscriber A1, subscriber A2, publisher A1] <-- Broker-A <-- Broker-B <-- [publisher B1, subscriber B1, subscriber B2]

    IF a message X is published to Broker-A and there are no subscribers for it among the listeners on Broker-B (via criteria in message selectors or broker routing rules), then that message X will never be published to Broker-B. ELSE, Broker-A will publish the message to Broker-B, where one of the Broker-B listeners/subscribers/services is expecting that message based on the subscription criteria.

    Is clustering the correct approach? At first, I concluded that the "broker clustering" concept is what I needed to support this. However, as I have come to understand it, the typical use of clustering entails either:

      - message redundancy across all brokers, or
      - the Competing Consumers pattern,

    and neither of these satisfies the requirement in EXAMPLE MODEL A.

    What is the correct approach? My question is, does anyone know of a JMS implementation that supports the model I described? I scanned through all the stackoverflow post titles for the search "JMS and Cluster" and found these two informative, but seemingly conflicting, posts.

    Says EXAMPLE MODEL A is/should be implicitly supported:
    http://stackoverflow.com/questions/2255816/jms-consumer-with-activemq-network-of-brokers
    "this means you pick a broker, connect to it, and let the broker network sort it out amongst themselves. In theory."

    Says EXAMPLE MODEL A IS NOT supported:
    http://stackoverflow.com/questions/2017520/how-does-a-jms-topic-subscriber-in-a-clustered-application-server-recieve-message
    "All the instances of PropertiesSubscriber running on different app servers WILL get that message."

    Any suggestions would be greatly appreciated. Thanks very much for reading my post,

    Gene
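    For what it's worth, the behavior described in EXAMPLE MODEL A matches the demand forwarding done by ActiveMQ's network of brokers: a message crosses the bridge only when the remote broker has a matching active subscription. A configuration sketch, with invented broker names and ports:

      <!-- activemq.xml sketch for Broker-A; names and ports are assumptions. -->
      <broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker-a">
        <transportConnectors>
          <transportConnector uri="tcp://localhost:61616"/>
        </transportConnectors>
        <networkConnectors>
          <!-- Demand-forwarding bridge to Broker-B: messages are forwarded
               only when broker-b has a live subscriber whose selector matches. -->
          <networkConnector uri="static:(tcp://localhost:61617)" duplex="true"/>
        </networkConnectors>
      </broker>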


  • PCA extended face recognition

    - by cMinor
    The state of the art says that we can use PCA to perform face recognition, like this, this or this.

    I am working on a project that involves training a classifier to detect a person who is wearing glasses, a hat, or even a mustache. The purpose of doing this is to detect when a person who has robbed a bank or store, or committed some other sort of crime (we have their image in a database), enters a certain place (historically we know these guys have robbed, so we should take care to avoid problems).

    We came first to have a distributed database with all the images of criminals; then I thought to have a layer classifying these criminals by accessories like hats, mustaches or anything that hides their face, etc... Then, to apply that knowledge to detect when a particular or suspect person enters a commercial place. (In practice, when someone is going to rob, they are not always using an accessory...)

    What do you think about this idea of doing PCA to first detect the principal components of the face and then the components of an accessory? I was thinking that maybe a probabilistic approach is better, so we can compute the probability that the criminal is the person who entered a place and call the respective authorities.


  • How can one set up a version control system on a local network, without a server?

    - by Andrew
    Edit: OK, so I learned that I guess I need distributed source control. However, are there any UI-based ones, and do they allow you to merge with other users on the network?

    This is kind of a two-part question, so here it goes. I want to start developing a web application at home (with multiple developers). However, I don't have a dedicated server nor want to pay for one. So first, I don't know which version control system to use for this case, as at work we mostly have TFS set up, so I am not too familiar with what's out there. What are the best free CVS/SVN tools out there?

    Second, is it possible to somehow set up the CVS/SVN where there is no dedicated server and both clients store up to one week of the source code from the last check-in? Also, it would be helpful if it could integrate with Visual Studio; again, this isn't that important at all.

    Problem: there are five users; one is a server.

      Server connected: all OK.
      Server disconnected: no one can share.

    What I am looking for:

      No server: users still have versioning based on the version id of the last check-in. Users must check all versions on the network to make sure they aren't outdated, based on their last version id. If not, check in; otherwise merge/get latest. If they are up to date, check in and set the current version id +1.


  • Make WebStart Java desktop application to start on system startup on Windows and Mac

    - by parxier
    I developed a small cross-platform (Windows and Mac) SWT desktop application. It is distributed with WebStart. So far so good, everything works. I've got a new requirement to make my app start on system startup (with no user interaction). What is the best way to accomplish that?

    In the JNLP file I've got this:

      <shortcut online="false">
        <desktop/>
        <menu submenu="CompanyName"/>
      </shortcut>

    On Windows, WebStart creates a desktop link [app_name].lnk which points to javaws.exe, with some Java cache file as a parameter with a funny name like ..\Sun\Java\Deployment\cache\6.0\4\2c0a6a781-213476. I can possibly programmatically find that link on the user's machine by name... erm... and then copy it into the user's Startup folder. I can see a problem here, though, as the user can disable WebStart desktop shortcut creation altogether.

    On Mac, WebStart pops up a dialog to prompt the user for the location where to create an [app_name].app file that launches the application (the user is allowed to change the link name there!). On Mac I don't even know where the Startup folder is located (and it seems to be much more complex there).

    Is there a Java library out there that abstracts the start-app-on-system-startup concept on different platforms, as SWT does for GUI abstraction?


  • Version control for PHP Development

    - by Vinod Mohan
    Hello,

    I have been hearing a lot about the advantages of using a version control system and would like to try one. I was doing freelance web development in PHP for the past 2 years; two months back I hired two more programmers to help me, and I will be hiring one more person soon. We maintain 4 websites, all of which are my own, which are continuously being edited by one of us. I learned PHP by myself and have never worked in any other firms. Hence I am new to version control, unit testing and all.

    Currently, we have development servers on our workstations. When we edit a particular section of a site, we download the code for that particular section (say /news/ or /movies/ or /wallpapers/) from the production server to the dev server, make the changes locally and upload them to the production server (no code review / testing). Because of this, our dev server is always out of date with our prod server. Occasionally, this also creates problems when one of us forgets to download the latest copy from prod and overwrites the last change. I know this is very very foolish, but currently our prod server is the only copy that has all the updates and latest changes.

    Can anyone please suggest which is the best version control system for me? I am more interested in distributed version control, since we don't have a central backup for all our code. I read about Mercurial and Git and found that Mercurial is used in several large open source projects by Mozilla, Sun, Symbian, etc. So which one do you think is better for me? Not only version control - if there are any other packages that I can use to make my current setup better, please mention that too :)

    Thanks and Regards,
    Vinod Mohan


  • What encoding does InstallShield expect non-latin-alphabet string table entries to use?

    - by DNS
    I work on an app that gets distributed via a single installer containing multiple localizations. The build process includes a script that updates the .ism string table with translations for each supported language. This works fine for languages like French and German. But when testing the installer in, e.g., Japanese, the text shows up as a series of squares. It's unlikely to be a font problem, since the InstallShield-supplied strings show up fine; only the string table entries are mangled. So the problem seems to be that the strings are in the wrong encoding.

    The .ism is in XML format, with UTF-8 declared as its encoding, so I assumed the strings needed to be UTF-8 encoded as well. Do they actually need to use the encoding of the target platform? Is there any concern, then, about targets having different encodings, e.g. Chinese systems using one GB encoding versus another? What is the right thing to do here?


  • Bundle a Python app as a single file to support add-ons or extensions?

    - by Brandon Craig Rhodes
    There are several utilities - all with different procedures, limitations, and target operating systems - for getting a Python package and all of its dependencies and turning them into a single binary program that is easy to ship to customers:

      http://wiki.python.org/moin/Freeze
      http://www.pyinstaller.org/
      http://www.py2exe.org/
      http://svn.pythonmac.org/py2app/py2app/trunk/doc/index.html

    My situation goes one step further: third-party developers will be wanting to write plug-ins, extensions, or add-ons for my application. It is, of course, a daunting question how users on platforms like Windows would most easily install plugins or add-ons in such a way that my app can easily discover that they have been installed. But beyond that basic question is another: how can a third-party developer bundle their extension with whatever libraries the extension itself needs (which might be binary modules, like lxml) in such a way that the plugin's dependencies become available for import at the same time that the plugin becomes available?

    How can this be approached? Will my application need its own plug-in area on disk and its own plug-in registry to make this tractable? Or are there general mechanisms, that I could avoid writing myself, that would allow an app that is distributed as a single executable to look around and find plugins that are also installed as single files?


  • NonStop ODBC: how the connections (ODBC servers) are assigned to CPUs?

    - by Vladimir Dyuzhev
    We have an ODBC pool running on a NonStop server. The pool is connected to SQL/MX. This pool is used by a few external Java applications, each of which has a JDBC pool connected to the ODBC pool (e.g. 14 connections per application).

    With time (after a few application recycles) we see an imbalance between CPUs: some have 8 ODBC processes running, some only 5. That leads to CPU time imbalance too. Up to this point we assumed that a CPU is assigned to each ODBC process in round-robin fashion. That would keep the number of ODBC processes more or less equally distributed. It's not the case, though.

    Is there any information on how the ODBC pool decides which CPU to choose for every newly allocated process? Does it look at CPU load? Available memory? Something else?

    Sadly, even HP's own people (available to us, that is) couldn't answer those questions with certainty. :-(


  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into 2 parts: the Front End, which holds all the objects except tables, and the Back End, which holds the tables. The MSDN page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time there are any changes made to the Front End, they need to be deployed to every user machine.

    My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption, etc.?

    Thank you

    EDIT: Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case?

    This is partly a follow-up to my previous question, From SQL Server to MS Access 2007.


  • Node.js/ v8: How to make my own snapshot to accelerate startup

    - by Anand
    I have a node.js (v0.6.12) application that starts by evaluating a JavaScript file, startup.js. It takes a long time to evaluate startup.js, and I'd like to 'bake it in' to a custom build of Node if possible.

    The v8 source directory distributed with Node, node/deps/v8/src, contains a SConscript that can almost be used to do this. On line 302, we have

      LIBRARY_FILES = '''
      runtime.js
      v8natives.js
      array.js
      string.js
      uri.js
      math.js
      messages.js
      apinatives.js
      date.js
      regexp.js
      json.js
      liveedit-debugger.js
      mirror-debugger.js
      debug-debugger.js
      '''.split()

    Those JavaScript files are present in the same directory. Something in the build process apparently evaluates them, takes a snapshot of state, and saves it as a byte string in node/out/Release/obj/release/snapshot.cc (on Mac OS).

    Some customization of the startup snapshot is possible by altering the SConscript. For example, I can change the definition of the builtin Date.toString by altering date.js. I can even add new global variables by adding startup.js to the list of library files, with contents global.test = 1.

    However, I can't put just any JavaScript code in startup.js. If it contains Date.toString = 1;, an error results even though the code is valid at the node repl:

      Build failed:  -> task failed (err #2):
        {task: libv8.a SConstruct -> libv8.a}
      make: *** [program] Error 1

    And it obviously can't make use of code that depends on libraries Node adds to v8. global.underscore = require('underscore'); causes the same error.

    I'd ideally like a tool, customSnapshot, where customSnapshot startup.js evaluates startup.js with node and then dumps a snapshot to a file, snapshot.cc, which I can put into the node source directory. I can then build node and tell it not to rebuild the snapshot.

