Search Results

Search found 3050 results on 122 pages for 'soa cluster'.

Page 114/122

  • Maintain order of messages via proxies to app servers

    - by David Turner
    Hi, I am receiving messages from a third party via an HTTP POST, and it is important that the order in which the messages hit our infrastructure is maintained through the load balancers and proxies until they reach our application server. Quick diagram (proxies are in place due to security requirements):

        [ACE load balancer] - [2 proxies] - [Application Servers]

    or maybe:

        [ACE load balancer] - [2 proxies] - [ACE load balancer] - [Application Servers]

    My idea was to set up the load balancers in active-passive mode to force all messages through one proxy, and then have both proxies hit a second load balancer, also configured active-passive, so that everything reaches one application server. While the above is not ideal, it does give me resilience, and once a message is in my app servers I enter a stateless world and can load balance across both nodes of my cluster. However, I am concerned that even a single proxy could send messages out of order: if two messages are received very close together, message 2 might get processed faster than message 1. Is this possible? Likely? Is there a simple open source proxy (mod_proxy?) that can be configured to just pass messages through and guarantee to forward them in the order they are received? If so, which one? Links to how I should configure it would be great, and in fact any links to articles about avoiding out-of-order messages using hardware would be gratefully received. Thanks. PS: for those who are interested, the app is a Java Spring Integration application currently running on an application server.
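    One mitigation worth sketching, under a big assumption the post does not confirm (that the third party attaches, or can be made to attach, an increasing sequence number to each message): a small reorder buffer on the application server releases messages strictly in sequence even if the proxies race, which removes the ordering burden from the load balancers entirely. A minimal Java sketch, with hypothetical names throughout:

        import java.util.SortedMap;
        import java.util.TreeMap;

        public class ReorderBuffer {
            private long nextSeq = 1;
            private final SortedMap<Long, String> pending = new TreeMap<Long, String>();

            // Accept a message in whatever order the network delivers it;
            // deliver() is only ever called in strict sequence order.
            public synchronized void accept(long seq, String message) {
                pending.put(seq, message);
                while (!pending.isEmpty() && pending.firstKey() == nextSeq) {
                    deliver(pending.remove(pending.firstKey()));
                    nextSeq++;
                }
            }

            private void deliver(String message) {
                System.out.println("processing in order: " + message);
            }

            public static void main(String[] args) {
                ReorderBuffer buf = new ReorderBuffer();
                buf.accept(2, "second"); // arrives early, held back
                buf.accept(1, "first");  // releases both, in order
            }
        }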


  • shell scripting: search/replace & check file exists

    - by johndashen
    I have a Perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like:

        #!/bin/sh
        PATTERN="[A-Z]*0[1-2][a-j]";   # this matches foo in all cases
        todo=`ls *.xml | grep -o "$PATTERN"`;
        isdone=`ls *.txt | grep -o "$PATTERN"`;
        whatsleft=todo - isdone;       # what's the unix magic?
        # tack on the .xml suffix with sed or something
        # and then call the job server:
        jobserve E "$whatsleft";

    and I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?). As a bonus question, is there a way to do lookahead search in bash grep? To clarify, the simplest way to do what I'm asking is (in pseudocode):

        for i in `/bin/ls *.xml`
        do
            replace the xml suffix with txt
            if [that file exists]
                add to whatsleft list
            fi
        done


  • Problem with JMX query of Coherence node MBeans visible in JConsole

    - by Quinn Taylor
    I'm using JMX to build a custom tool for monitoring remote Coherence clusters at work. I'm able to connect just fine and query MBeans directly, and I've acquired nearly all the information I need. However, I've run into a snag when trying to query MBeans for specific caches within a cluster, which is where I can find stats about the total number of gets/puts, the average time for each, etc. The MBeans I'm trying to access programmatically are visible when I connect to the remote process using JConsole, and have names like this:

        Coherence:type=Cache,service=SequenceQueue,name=SEQ%GENERATOR,nodeId=1,tier=back

    It would be more flexible if I could dynamically grab all type=Cache MBeans for a particular node ID without specifying all the caches. I'm trying to query them like this:

        QueryExp specifiedNodeId = Query.eq(Query.attr("nodeId"), Query.value(nodeId));
        QueryExp typeIsCache = Query.eq(Query.attr("type"), Query.value("Cache"));
        QueryExp cacheNodes = Query.and(specifiedNodeId, typeIsCache);
        ObjectName coherence = new ObjectName("Coherence:*");
        Set<ObjectName> cacheMBeans = mBeanServer.queryNames(coherence, cacheNodes);

    However, regardless of whether I use queryMBeans() or queryNames(), the query returns a Set containing:

        - 0 objects if I pass the arguments shown above
        - 0 objects if I pass null for the first argument
        - all MBeans in the Coherence:* domain (112) if I pass null for the second argument
        - every single MBean (128) if I pass null for both arguments

    The first two results are the unexpected ones, and suggest a problem in the QueryExp I'm passing, but I can't figure out what the problem is. I even tried passing just typeIsCache or specifiedNodeId as the second parameter (with either coherence or null as the first parameter), and I always get 0 results. I'm pretty green with JMX, so any insight on what the problem is would be appreciated. (FYI, the monitoring tool will be run on Java 5, so things like JMX 2.0 won't help me at this point.)
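    A guess worth testing, offered as an assumption rather than a confirmed diagnosis: Query.attr() matches MBean attributes (values exposed through the MBean's management interface), while nodeId and type in the name above are ObjectName key properties, which would explain why both attribute-based terms match nothing. Key properties are normally matched with an ObjectName pattern rather than a QueryExp; a minimal sketch:

        import java.util.Set;
        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;

        public class CacheMBeanLookup {
            // Match the key properties (type, nodeId) in the ObjectName itself;
            // the trailing ",*" allows any other key properties (service, name, tier).
            public static Set<ObjectName> cacheMBeansForNode(
                    MBeanServerConnection mbs, int nodeId) throws Exception {
                ObjectName pattern =
                        new ObjectName("Coherence:type=Cache,nodeId=" + nodeId + ",*");
                return mbs.queryNames(pattern, null);
            }
        }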


  • I've installed NetBeans 6.8 on my Mac OS X MacBook and the logs say it cannot be run, any ideas?

    - by codezealot
    I've installed NetBeans 6.8 on my MacBook, and the installation results indicated success. However, every single time I attempt to run the application it shuts down. I monitored the process and noticed the following entries in the console, which imply the application cannot be found:

        3/19/10 10:20:20 PM [0x0-0x22022].org.netbeans.ide.baseide.200912041610[22168] /Applications/NetBeans/NetBeans 6.8.app/Contents/MacOS/netbeans: line 57: dirname: command not found
        3/19/10 10:20:20 PM [0x0-0x22022].org.netbeans.ide.baseide.200912041610[22168] Cannot read cluster file: /../etc/netbeans.clusters
        3/19/10 10:20:20 PM com.apple.launchd.peruser.501[77] ([0x0-0x22022].org.netbeans.ide.baseide.200912041610[22168]) Exited with exit code: 1

    I started researching how to set the default JDK for use by NetBeans, and found repeated use of the following command line entry (from http://wiki.netbeans.org/JDKVersionAndMacOS):

        netbeans --jdkhome /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home

    When I attempt the command line above, I receive "netbeans: command not found". So do I:

        1) need to create a command called "netbeans" that points to my install location for NetBeans 6.8? If so, how do I do that?
        2) how do I get to the netbeans.conf file for NetBeans 6.8? Does one even exist for it?

    It gets even more interesting: the above happens with Eclipse as well. Yippie.


  • How to perform spatial partitioning in n-dimensions?

    - by kevin42
    I'm trying to design an implementation of Vector Quantization as a C++ template class that can handle different types and dimensions of vectors (e.g. 16-dimensional vectors of bytes, or 4-dimensional vectors of doubles, etc.). I've been reading up on the algorithms, and I understand most of it: here and here. I want to implement the Linde-Buzo-Gray (LBG) algorithm, but I'm having difficulty figuring out the general algorithm for partitioning the clusters. I think I need to define a plane (hyperplane?) that splits the vectors in a cluster so there is an equal number on each side of the plane. [edit to add more info] This is an iterative process, but I think I start by finding the centroid of all the vectors, then use that centroid to define the splitting plane, get the centroid of each of the sides of the plane, and continue until I have the number of clusters needed for the VQ algorithm (iterating to optimize for less distortion along the way). The animation in the first link above shows it nicely. My questions are: What is an algorithm to find the plane once I have the centroid? How can I test a vector to see if it is on either side of that plane?
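    For the side test, a worked form (standard geometry; the particular choice of normal is an assumption on my part, not part of the LBG description above): a hyperplane through the centroid c with normal vector w is the set of points x satisfying w · (x − c) = 0. A vector v lies on the positive side exactly when w · (v − c) > 0, on the negative side when w · (v − c) < 0, and on the plane itself when the dot product is 0, so classifying each vector costs one subtraction and one dot product. Any nonzero w gives a valid plane through c; for a roughly equal split, one common choice is the direction of maximum variance of the cluster (the first principal component), since splitting perpendicular to the longest axis of the data tends to separate it well.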


  • Hazelcast Distributed Executor Service KeyOwner

    - by János Veres
    I have a problem understanding the concept of Hazelcast Distributed Execution. It is said to be able to perform the execution on the owner instance of a specific key. From the documentation:

        <T> Future<T> submitToKeyOwner(Callable<T> task, Object key)

        Submits task to owner of the specified key and returns a Future representing that task.
        Parameters:
            task - task
            key - key
        Returns:
            a Future representing pending completion of the task

    I believe I'm not alone in having built a cluster with multiple maps which might actually use the same key for different purposes, holding different objects (e.g. something along the lines of the following setup):

        IMap<String, ObjectTypeA> firstMap = hazelcastInstance.getMap("firstMap");
        IMap<String, ObjectTypeA_AppendixClass> secondMap = hazelcastInstance.getMap("secondMap");

    To me it seems quite confusing what the documentation says about the owner of a key. My real frustration is that I don't know WHICH key (in which map) it refers to. The documentation also gives a demo of this approach:

        import com.hazelcast.core.Member;
        import com.hazelcast.core.Hazelcast;
        import com.hazelcast.core.IExecutorService;
        import java.util.concurrent.Callable;
        import java.util.concurrent.Future;
        import java.util.Set;
        import com.hazelcast.config.Config;

        public void echoOnTheMemberOwningTheKey(String input, Object key) throws Exception {
            Callable<String> task = new Echo(input);
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IExecutorService executorService = hz.getExecutorService("default");
            Future<String> future = executorService.submitToKeyOwner(task, key);
            String echoResult = future.get();
        }

    Here's a link to the documentation site: Hazelcast MultiHTML Documentation 3.0 - Distributed Execution. Did any of you guys figure out in the past which key it refers to?
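    For what it's worth, here is a minimal runnable sketch built on an assumption that seems consistent with Hazelcast's partitioning model, but which you should verify against the docs: ownership is a property of the key alone, not of any map. The key is hashed to a partition, the member owning that partition executes the task, and every map that uses that key stores its entry in that same partition, so submitToKeyOwner never needs to know which map you had in mind:

        import java.io.Serializable;
        import java.util.concurrent.Callable;
        import java.util.concurrent.Future;
        import com.hazelcast.core.Hazelcast;
        import com.hazelcast.core.HazelcastInstance;
        import com.hazelcast.core.IExecutorService;

        public class KeyOwnerDemo {
            // The callable must be serializable so it can travel to the key owner.
            static class Echo implements Callable<String>, Serializable {
                private final String input;
                Echo(String input) { this.input = input; }
                public String call() { return "echo: " + input; }
            }

            public static void main(String[] args) throws Exception {
                HazelcastInstance hz = Hazelcast.newHazelcastInstance();
                IExecutorService exec = hz.getExecutorService("default");
                // "orderId-42" hashes to one partition; firstMap.get("orderId-42")
                // and secondMap.get("orderId-42") would both live on its owner.
                Future<String> f = exec.submitToKeyOwner(new Echo("hello"), "orderId-42");
                System.out.println(f.get());
                hz.shutdown();
            }
        }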


  • Retrieve a list of the most popular GET param variations for a given URL?

    - by jamtoday
    I'm working on building intelligence around link propagation, and because I need to deal with many short URL services where a reverse lookup from an exact URL address is required, I need to be able to resolve multiple approximate versions of the same URL. An example would be a URL like:

        http://www.example.com?ref=affil&hl=en&ct=0

    Of course, changing GET params can in certain circumstances refer to a completely different page, especially if the GET params in question refer to a profile or content ID. But a quick parse of the page would quickly determine how similar the pages were to each other. Using a bit of machine learning, it could quickly become clear which GET params don't affect the content of the pages returned for a given site. I'm assuming a service where you send a URL and get back a list of very similar URLs could only be offered by the likes of Google or Yahoo (or Twitter), but they don't seem to offer this feature, and I haven't found any other services that do. If you know of any services that do cluster together groups of almost identical URLs in the aforementioned way, please let me know. My bounty is a hug.
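    To make the idea concrete, here is a sketch of the canonicalization step implied above (my own illustration, not a known service or API): parse the query string, drop the params that the learning step has flagged as not affecting content, and sort the survivors, so that equivalent URLs collapse to a single lookup key. The IGNORED set here is a hypothetical example:

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;
        import java.util.TreeMap;

        public class UrlCanonicalizer {
            // Params learned to not affect page content (hypothetical values).
            private static final Set<String> IGNORED =
                new HashSet<String>(Arrays.asList("ref", "hl", "ct"));

            public static String canonicalize(String url) {
                int q = url.indexOf('?');
                if (q < 0) return url;
                String base = url.substring(0, q);
                // TreeMap sorts the surviving params, so ordering differences collapse too.
                Map<String, String> params = new TreeMap<String, String>();
                for (String pair : url.substring(q + 1).split("&")) {
                    String[] kv = pair.split("=", 2);
                    if (!IGNORED.contains(kv[0])) {
                        params.put(kv[0], kv.length > 1 ? kv[1] : "");
                    }
                }
                if (params.isEmpty()) return base;
                StringBuilder sb = new StringBuilder(base).append('?');
                for (Map.Entry<String, String> e : params.entrySet()) {
                    if (sb.charAt(sb.length() - 1) != '?') sb.append('&');
                    sb.append(e.getKey()).append('=').append(e.getValue());
                }
                return sb.toString();
            }

            public static void main(String[] args) {
                // Prints http://www.example.com (all three params are ignorable).
                System.out.println(canonicalize("http://www.example.com?ref=affil&hl=en&ct=0"));
            }
        }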


  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input/help on a classification problem. If anyone has any references I can read to help me solve my problem even better, please share. I have a classification problem with four discrete and very well separated classes. However, my input is continuous and has a high frequency (50 Hz), since it's a real-time problem. The circles represent the clusters of the classes, the blue line the decision boundary, and Class 5 equals the neutral/resting "do nothing" class. This class is the rejected class. The problem is that when I move from one class to another I trigger a lot of false positives during the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (the neutral class) to class 1, I first see a lot of 3's before getting to the 1 class. Ideally, I would want my decision boundary to look like the one in the picture below, where the rejected class is Class 5: it has a higher decision boundary than the other classes, to avoid misclassification during transitions. I am currently implementing my algorithms in Matlab using naive Bayes, kNN, and optimized SVMs. Question: What is the best/common way to handle an abstain/rejected class? Should I use fuzzy logic or a loss function, or should I include the resting cluster in the training?
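    One common baseline, stated here as a suggestion rather than something from the post: give the classifier a reject option by thresholding its confidence. For an input x, let y* = argmax_k P(k | x); output y* only if max_k P(k | x) >= t, and otherwise output the resting class 5. Sweeping the threshold t trades transition false positives against responsiveness, and the loss-function view of choosing t (misclassification cost versus rejection cost) is exactly Chow's rule for classification with a reject option. For kNN, a rough stand-in for the posterior is the fraction of the k neighbours voting for the winning class; for SVMs, a calibrated score (e.g. Platt scaling) plays that role. Temporal smoothing (requiring the same label for several consecutive 50 Hz frames before switching) is a cheap complementary fix for the transition artifacts.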


  • SSRS2005 timeout error

    - by jaspernygaard
    Hi. I've been running around in circles for the last 2 days, trying to figure out a problem in our customer's live environment. I figured I might as well post it here, since Google gave me very limited information on the error message (5 results, to be exact). The error boils down to a timeout when requesting a certain report in SSRS 2005, when a certain parameter is used. The deployment scenario is: Machine #1 running Reporting Services (SQL 2005, W2K3, IIS6); Machine #2 running the data warehouse database (SQL 2005, W2K3), which is the data source for #1. Both machines are running on the same VM cluster and LAN. The report requests a fairly simple SP, let's call it sp(param $a, param $b). When requested with param $a filled, it executes correctly. When using param $b, it times out after the global timeout period has passed. If I run the stored procedure with param $b directly from SQL Management Studio on #2, it returns the results perfectly fine (within 3-4 s). I've profiled the data warehouse database on #2, and when param $b is used, the query from the Reporting Service to the database never reaches #2. The error message that I get upon timeout, when using param $b and invoking the report directly from the SSRS web interface, is: "An error has occurred during report processing. Cannot read the next data row for the data set DataSet. A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user." The ExecutionLog for SSRS doesn't give me much information besides the error message rsProcessingAborted. I'm running out of ideas for how to nail this problem, so I would greatly appreciate any comments, suggestions or ideas. Thanks in advance!


  • Is it possible to write map/reduce jobs for Amazon Elastic MapReduce using .NET?

    - by Chris
    Is it possible to write map/reduce jobs for Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce/) using .NET languages? In particular I would like to use C#. Preliminary research suggests not. The above URL's marketing text suggests you have a "choice of Java, Ruby, Perl, Python, PHP, R, or C++", without mentioning .NET languages. This Amazon thread (http://developer.amazonwebservices.com/connect/thread.jspa?messageID=136051 -- "Support for C# / F# map/reducers") explicitly says that "currently Amazon Elastic MapReduce does not support Mono platform or languages such as C# or F#." The above suggests that it can't be done. I'm wondering if there are any workarounds, though. For example, can I modify the Elastic MapReduce machine image for my account, and install Mono on there? An alternative, suggested by Amazon FAQs "Using Other Software Required by Your Jar" (http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/index.html?CHAP_AdvancedTopics.html) and "How to Use Additional Files and Libraries With the Mapper or Reducer" (http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/index.html?addl_files.html), is to make the first step of the Map/Reduce job be to install Mono on the local instance. That sounds kind of inefficient, but maybe it could work? Maybe a saner alternative would be to try to forgo the convenience of Elastic MapReduce, and manually set up my own Hadoop cluster on EC2. Then I assume I could install Mono without difficulty.


  • SQL Server 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. SQL Mail operates perfectly on both instances. DB Mail operates perfectly on the newly installed instance. On the upgraded instance, DB Mail does not send any mail. Of course, I am not positive that the fact this instance was upgraded has anything to do with the issue, but it might. The configuration of my DB Mail profile and account looks identical to my functioning instance. In the configuration of the 'Alerts' tab in the SQL Agent properties I have tried selecting both DB Mail and SQL Mail, to no avail. Both instances use the same SMTP server with the same authentication (domain with DB engine account). All messages sent via sp_send_dbmail, and those sent via the 'test email' option, are visible in the sysmail_allitems queue and remain there as 'unsent'. The send_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain\myuser' and 'activation successful'. Selecting from the ExternalMailQueue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent, bouncing the entire instance, and moving the other functioning instance to the other node in the cluster. Any thoughts? Thanks.


  • Debugging ActionMailer

    - by Trip
    I have ActionMailer set up. Emails are not being sent, and there are no errors. Where can I start my search to debug this?

        class Notifier < ActionMailer::Base
          default_url_options[:host] = APP_DOMAIN

          def email_blast(user, subject, message)
            subject    subject
            from       NOTIFIER_EMAIL
            recipients user.email
            sent_on    Time.zone.now
            body       :user => user.first_name + ' ' + user.last_name, :message => message
          end
        end

    I do get a line in my log saying the email was sent, just no actual email goes through. The reason this broke is that I switched from a cluster to a solo box and some server settings were overwritten; I suspect that is probably why it no longer works. Does anyone know what specific server settings I would have to look at? UPDATE: I found this in my production.rb; this code was originally here when it worked:

        ActionMailer::Base.delivery_method = :sendmail
        config.action_mailer.default_url_options = { :host => "75.101.153.93" }

    Again, I believe there must be something missing on my server. I did a 'which sendmail' and it returned /usr/bin/sendmail, so I added this:

        config.action_mailer.raise_delivery_errors = false
        config.action_mailer.perform_deliveries = true
        config.action_mailer.sendmail_settings = {
          :location  => '/usr/bin/sendmail',
          :arguments => '-i -t'
        }

    I redeployed, restarted the server, and tested it. No emails were sent, but the production.log said something was sent:

        Processing MediaController#create_a_video (for 173.161.167.41 at 2010-06-03 11:58:13) [GET]
          Parameters: {"action"=>"create_a_video", "controller"=>"media", "organization_id"=>"470", "_"=>"1275591493194"}
        Sent mail to [email protected]
        Rendering media/create_a_video
        Completed in 128ms (View: 51, DB: 1) | 200 OK [http://invent.hqchannel.com/organizations/470/media/create_a_video?_=1275591493194]


  • Hadoop reduce task gets hung

    - by user806098
    I set up a Hadoop cluster with 4 nodes. When running a MapReduce task, the map task finishes quickly, while the reduce task hangs at 27%. I checked the log; it shows that the reduce task fails to fetch map output from the map nodes. The job tracker log of the master shows messages like this:

        2011-06-27 19:55:14,748 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201106271953_0001_r_000000_0' to tip task_201106271953_0001_r_000000, for tracker 'tracker_web30.bbn.com.cn:localhost/127.0.0.1:56476'

    And the name node log of the master shows messages like this:

        2011-06-27 14:00:52,898 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310, call register(DatanodeRegistration(202.106.199.39:50010, storageID=DS-1989397900-202.106.199.39-50010-1308723051262, infoPort=50075, ipcPort=50020)) from 192.168.225.19:16129: error: java.io.IOException: verifyNodeRegistration: unknown datanode 202.106.199.39:50010

    However, neither "web30.bbn.com.cn" nor 202.106.199.39 is a slave node. I think such IPs/domains appear because Hadoop fails to resolve a node (first against the intranet DNS server), then goes to a higher-level DNS server, and later to the top; it still fails, and then the "junk" IPs/domains are returned. But I checked my config, and it goes like this:

        /etc/hosts:
            127.0.0.1       localhost.localdomain localhost
            ::1             localhost6.localdomain6 localhost6
            192.168.225.16  master
            192.168.225.66  slave1
            192.168.225.20  slave5
            192.168.225.17  slave17

        conf/core-site.xml:
            hadoop.tmp.dir  = /root/hadoop_tmp/hadoop_${user.name}
            fs.default.name = hdfs://master:54310
            io.sort.mb      = 1024

        conf/hdfs-site.xml:
            dfs.replication = 3

        masters:
            master

        slaves:
            master
            slave1
            slave5
            slave17

    Also, all firewalls (iptables) are turned off, and ssh between each pair of nodes is OK. So I don't know where exactly the error comes from. Please help. Thanks a lot.


  • How to keep your unit test Arrange step simple and still guarantee DDD invariants?

    - by ian31
    DDD recommends that domain objects should be in a valid state at any time. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts so that they are initialized in a valid state. However, this seems to complicate the task of creating simple, isolated unit tests a lot. Let's assume we have a BookRepository that contains Books. A Book has: an Author, a Category, and a list of Bookstores you can find the book in. These are required attributes: a book has to have an author, a category, and at least one book store you can buy the book from. There's likely to be a BookFactory, since a Book is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes. Now we want to unit test a method of the BookRepository that returns all the Books. To test whether the method returns the books, we have to set up a test context (the Arrange step in AAA terms) where some Books are already in the Repository. If the only tool at our disposal to create Book objects is the Factory, the unit test now also uses and is dependent on the Factory, and indirectly on Category, Author and Store, since we need those objects to build up a Book and then place it in the test context. Would you consider this a dependency in the same way that in a Service unit test we would be dependent on, say, a Repository that the Service would call? How would you solve the problem of having to re-create a whole cluster of objects in order to be able to test a simple thing? How would you break that dependency and get rid of all these attributes we don't need in our test? By using mocks or stubs? If you mock up the things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?
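    One widely used answer to the Arrange-step pain is a test data builder that wraps the Factory behind sensible defaults, so each test states only what it cares about. The sketch below is hedged: Book, BookFactory and the attribute types are minimal stand-ins for the classes described above, not real code from any project:

        import java.util.Collections;
        import java.util.List;

        class Book {
            final String author;
            final String category;
            final List<String> stores;
            Book(String author, String category, List<String> stores) {
                this.author = author; this.category = category; this.stores = stores;
            }
        }

        class BookFactory {
            // Enforces the invariants described above: author, category,
            // and at least one store are required.
            static Book create(String author, String category, List<String> stores) {
                if (author == null || category == null || stores == null || stores.isEmpty())
                    throw new IllegalArgumentException("Book invariant violated");
                return new Book(author, category, stores);
            }
        }

        class BookBuilder {
            private String author = "Default Author";
            private String category = "Default Category";
            private List<String> stores = Collections.singletonList("Default Store");

            BookBuilder byAuthor(String author) { this.author = author; return this; }
            BookBuilder inCategory(String category) { this.category = category; return this; }
            BookBuilder soldAt(List<String> stores) { this.stores = stores; return this; }

            // Delegates to the real Factory, so the invariants still hold;
            // the test just no longer has to spell out every attribute.
            Book build() { return BookFactory.create(author, category, stores); }

            public static void main(String[] args) {
                Book book = new BookBuilder().byAuthor("Evans").build(); // one-line Arrange
                System.out.println(book.author + " / " + book.category);
            }
        }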



  • Need help implementing this algorithm with Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I'm searching for someone to implement it for me, but it's true I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so the teacher told me I have to do it like this. EDIT (I did not clarify this in the previous version): the data set is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought that I would first do the standard implementation and then it would be more or less easy to do it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, shortly, is the pseudocode of my algorithm:

        LIST termList   (there is a method that creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is a folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that MapReduce has difficulty writing to many outputs? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please help.
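    In case it helps to see the shape of it, here is a hedged sketch of the map side (the class and property names are my own inventions; it assumes the org.apache.hadoop.mapreduce API and that the term list is small enough to ship in the job configuration, whereas a real job might use the DistributedCache instead). Each mapper receives lines of the CheckedFile.txt files and emits (term, line) pairs, so the grouping into output files becomes the reducer's grouping by key, and something like MultipleOutputs can then write one file per term or per folder:

        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Set;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        public class TermMatchMapper extends Mapper<LongWritable, Text, Text, Text> {
            private final Set<String> terms = new HashSet<String>();

            @Override
            protected void setup(Context context) {
                // Hypothetical: the comma-separated term list was placed in the
                // configuration under "termmatch.terms" when the job was set up.
                String termList = context.getConfiguration().get("termmatch.terms", "");
                for (String t : termList.split(",")) {
                    if (t.length() > 0) terms.add(t);
                }
            }

            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                String[] words = line.toString().split("\\s+");
                // Mirror of the pseudocode: match on the third word of the line.
                if (words.length >= 3 && terms.contains(words[2])) {
                    context.write(new Text(words[2]), line);
                }
            }
        }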


  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents, but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents: in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know that until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.


  • CORBA on Mac OS X (Cocoa)

    - by user8472
    I am currently looking into different ways to support distributed model objects (i.e., a computational model that runs on several different computers) in a project that initially focuses on Mac OS X (using Cocoa). As far as I know, there is the possibility of using the class cluster around NSProxy, but there also seem to be implementations of CORBA around with Objective-C support. At a later time there may be a need to also support/include Windows machines. In that case I would need to use something like GNUstep on the Windows side (which may be an option, if it works well), or come up with a combination of both technologies, or write something manually (which is, of course, the least desirable option). My questions are: If you have experience with both technologies (Cocoa's native infrastructure vs. CORBA), can you point out some key features/issues of either approach? Is it possible to use GNUstep with Cocoa in the way explained above? Is it possible (and reasonably feasible, i.e. simpler than writing a network layer manually) to communicate among all Mac OS clients using Cocoa's technology and with Windows clients through CORBA?


  • Simplest distributed persistent key/value store that supports primary key range queries

    - by StaxMan
    I am looking for a properly distributed (i.e. not just sharded) and persistent (not bounded by the available memory on a single node, or cluster of nodes) key/value ("NoSQL") store that supports range queries by primary key. So far, the closest such system is Cassandra, which does the above. However, it adds support for other features that are not essential for me. So while I like it (and will consider using it, of course), I am trying to figure out whether there might be other mature projects that implement just what I need. Specifically, the only aspect of the value I need is to access it as a blob. For the key, however, I need range queries (as in, accessing values in order, limited by start and/or end values). While values can have structure, there is no need to use that structure for anything on the server side (client-side data binding, flexible value/content types and so on are fine). For an added bonus, Cassandra-style storage (journaled, all sequential writes) seems quite optimal for my use case. To help filter out answers, I have investigated some alternatives within the general domain, like Voldemort (key/value, but no ordering) and CouchDB (just sharded, more batch-oriented), and I am aware of systems that are not quite distributed while otherwise qualifying (BDB variants, Tokyo Cabinet itself (not sure whether Tyrant might qualify), and Redis (an in-memory store only)).
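    For concreteness, the operation meant here is the one a local sorted map already gives you; a tiny illustration of the single-node semantics (java.util.TreeMap standing in for the distributed store, purely to pin down the contract):

        import java.util.SortedMap;
        import java.util.TreeMap;

        public class RangeQueryDemo {
            public static void main(String[] args) {
                SortedMap<String, byte[]> store = new TreeMap<String, byte[]>();
                store.put("user:100", new byte[] {1});
                store.put("user:150", new byte[] {2});
                store.put("user:200", new byte[] {3});
                // Range query by primary key: every entry with
                // "user:100" <= key < "user:200", returned in key order.
                // A qualifying distributed store must honour this across partitions.
                for (String key : store.subMap("user:100", "user:200").keySet()) {
                    System.out.println(key); // user:100, user:150
                }
            }
        }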


  • Developing on both Windows & Linux machines simultaneously

    - by Jamie
    Sorry for the bad title (I couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands, etc.), so obviously I can't do development on my Windows machine, and I don't wish to code on the dev machine, svn commit, and then svn update on the Linux machine. Is there a way for any changes I make on my dev machine to be quickly mirrored to the Linux machine? SVN is not a very quick alternative, and of course some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (and a bit slow, too). As fellow developers, I would like to know if you've been in a similar situation and how you've resolved it. On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications, etc. from the Linux machine, because it's a cluster 'master' machine and therefore has quite a special configuration. Thanks, guys! EDIT: I've also thought about having web services on the Linux machine and then just calling them from code, thus separating the platform development dependency. What do you think about that too? Thanks.


  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    A small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs XML and an engine (Angel2D) that inputs Lua tables. Most of this is straightforward. However, Tiled outputs in pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between these two is needed (for example, 32 px = 1 gu). Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor were selected such that most numbers output were small and whole (or fractions of small whole numbers), because that makes them easier to work with (and the whole point of the OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably? My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often led to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which led to the largest number of 1's possible, but often led to very long floats for background images. My current approach takes the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple. Note that while I could easily just pass along the numbers I am given, or pick some fixed factor, or use one of the conversions I specified above, I am looking for a method to reliably scale this list of integers to small, whole numbers or simple fractions, because this would most likely be unsurprising to the end user; this is not a one-off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make more sense for the sizes of entities to cluster around this.
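    One way to blunt the bad-apple problem, sketched here as a suggestion (the tolerance value, the candidate set, and all names are my own assumptions): instead of requiring a factor that divides every value, which a single outlier can force down to 1, accept the largest candidate that divides at least some fraction of them. Taking candidates from the divisors of the smallest value keeps the search cheap, at the cost of failing when the smallest value is itself the outlier:

        import java.util.Arrays;

        public class ConversionFactor {
            // Largest divisor of the smallest input that divides at least
            // ceil(tolerance * n) of the inputs; falls back to 1.
            public static int robustFactor(int[] pixels, double tolerance) {
                int[] sorted = pixels.clone();
                Arrays.sort(sorted);
                int smallest = sorted[0];
                int needed = (int) Math.ceil(pixels.length * tolerance);
                for (int d = smallest; d >= 1; d--) {
                    if (smallest % d != 0) continue; // only divisors of the smallest value
                    int count = 0;
                    for (int p : pixels) {
                        if (p % d == 0) count++;
                    }
                    if (count >= needed) return d;
                }
                return 1;
            }

            public static void main(String[] args) {
                int[] offsets = {32, 64, 96, 160, 33}; // 33 is the single bad apple
                System.out.println(robustFactor(offsets, 0.8)); // prints 32, not 1
            }
        }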


  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold backup server for a SQL Server Express instance running a single database? I have a SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small, and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs, copy them across the network, and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.


  • How to make ActionController::Base.asset_host and Base.relative_url_root independent?

    - by GSP
    In our Intranet environment we have a decree that common assets (stylesheets, images, etc.) should be served from the Apache root, while Rails apps run from a "sub directory" (a proxy to a Mongrel cluster). In other words:

        <%= stylesheet_link_tag '/common' %>
        # <link href="http://1.1.1.1/stylesheets/common.css" />
        <%= link_to 'Home', :controller => 'home', :action => 'index' %>
        # <a href="http://1.1.1.1/myapp/" />

    What I would like to do is define the asset host and the relative URL root in my configuration, like this:

        ActionController::Base.asset_host = "http://1.1.1.1"
        ActionController::Base.relative_url_root = "/myapp"

    But when I do this, the relative_url_root value gets appended to the asset_host value (e.g. <link href="http://1.1.1.1/myapp/stylesheets/common.css">). Is there a way to stop this? And, more importantly, is there a best practice for how to deploy a Rails app to an environment like this? (BTW, I don't want to simply hard-code my asset paths, since the development environment does not match the test and production environments.)

    Environment: Rails 2.3.4, Ruby 1.8.7, Red Hat Linux 5.4


  • Time with and without OpenMP

    - by was
    I have a question. I tried to improve a well-known algorithm in C, the Fox algorithm for matrix multiplication (the version without OpenMP is at http://web.mst.edu/~ercal/387/MPI/ppmpi_c/chap07/fox.c). The initial program used only MPI, and I tried to insert OpenMP into the matrix multiplication method in order to improve the computation time. (This program runs on a cluster whose computers have 2 cores, so I created 2 threads.) The problem is that there is no difference in time with and without OpenMP; I observed that with OpenMP the time is sometimes equal to or even greater than the time without it. I tried to multiply two 600x600 matrices.

        void Local_matrix_multiply(
                LOCAL_MATRIX_T* local_A /* in  */,
                LOCAL_MATRIX_T* local_B /* in  */,
                LOCAL_MATRIX_T* local_C /* out */) {
            int i, j, k;
            chunk = CHUNKSIZE; // 100

            #pragma omp parallel shared(local_A, local_B, local_C, chunk, nthreads) private(i,j,k,tid) num_threads(2)
            {
                /*
                tid = omp_get_thread_num();
                if (tid == 0) {
                    nthreads = omp_get_num_threads();
                    printf("O Pollaplasiamos pinakwn ksekina me %d threads\n", nthreads);
                }
                printf("Thread %d use the matrix: \n", tid);
                */
                #pragma omp for schedule(static, chunk)
                for (i = 0; i < Order(local_A); i++)
                    for (j = 0; j < Order(local_A); j++)
                        for (k = 0; k < Order(local_B); k++)
                            Entry(local_C,i,j) = Entry(local_C,i,j)
                                               + Entry(local_A,i,k) * Entry(local_B,k,j);
            } // end pragma omp parallel
        } /* Local_matrix_multiply */


  • What kind of library to use for display of graphical objects and right-click context menus

    - by Gopal
    Hi all. Goal: to develop a web-based NMS interface which displays a network topology (e.g. switches, routers, links, end hosts). Each node should be 'movable' (draggable to an appropriate place manually, or with its best location computed algorithmically). I should be able to zoom into the network graph (say, if there are many clusters of nodes and I want to concentrate on a particular cluster). I should be able to right-click any node or link and get a context menu (e.g. 'show routing table', 'show interfaces', 'show bandwidth utilization graph', etc.). The data for this network topology will be fetched by making calls to an Apache-based web server, where backend scripts in Python will fetch the appropriate data and send it via JSON to the web client. Question: I am assuming that some sort of JavaScript library/framework will be most appropriate for this - jQuery, Dojo, MooTools, etc. (I've never used any of these before). Which of these would be most recommended for this sort of thing? Which would be easiest to learn (say, in a month's time)? Thanks for any tips.

