Search Results

Search found 5206 results on 209 pages for 'dr rocket mr socket'.


  • fastest (low latency) method for Inter Process Communication between Java and C/C++

    - by Bastien
    Hello, I have a Java app connecting through a TCP socket to a "server" developed in C/C++. Both the app and the server run on the same machine, a Solaris box (but we're considering migrating to Linux eventually). The data exchanged is simple messages (login, login ACK, then the client asks for something and the server replies), each around 300 bytes long. Currently we're using sockets and all is OK, but I'm looking for a faster (lower-latency) way to exchange data using IPC methods. I've been researching the net and came up with references to the following technologies: shared memory, pipes, and queues. However, I couldn't find a proper analysis of their respective performance, nor how to implement them in both Java and C/C++ (so that they can talk to each other), except maybe pipes, which I can imagine how to do. Can anyone comment on the performance and feasibility of each method in this context? Any pointer or link to useful implementation information? Thanks for your help.
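
    As an illustration of the shared-memory route (not part of the original question), below is a minimal C sketch of the native side: one fixed 300-byte slot in a memory-mapped file, with a sequence counter the reader polls. The file path, slot layout, and polling scheme are assumptions for the example; the Java side could map the same file with FileChannel.map() into a MappedByteBuffer and watch the counter.

        /* Shared-memory writer sketch (illustrative layout, not a full protocol).
           Build with: cc -o shm_writer shm_writer.c */
        #include <fcntl.h>
        #include <stdint.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define MSG_SIZE 300

        struct slot {
            volatile uint64_t seq;   /* bumped after each write; the reader polls it */
            char payload[MSG_SIZE];  /* one fixed-size message */
        };

        int main(void)
        {
            /* A regular file both processes can map; Java maps the same path. */
            int fd = open("/tmp/ipc_demo", O_CREAT | O_RDWR, 0600);
            if (fd < 0 || ftruncate(fd, sizeof(struct slot)) < 0)
                return 1;

            struct slot *s = mmap(NULL, sizeof(*s), PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
            if (s == MAP_FAILED)
                return 1;

            char msg[MSG_SIZE] = "login";
            memcpy(s->payload, msg, MSG_SIZE);
            __sync_synchronize();    /* publish the payload before the counter bump */
            s->seq++;                /* reader sees a new seq and reads the payload */

            munmap(s, sizeof(*s));
            close(fd);
            return 0;
        }

    Shared memory avoids a trip through the kernel per message once the mapping is set up, which is why it usually wins on latency; the cost is that you design the framing and signalling yourself (the busy-poll on seq above is the crudest possible choice).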

    Read the article

  • How do I set up a fully featured small business network?

    - by JoshReedSchramm
    This has the potential to be a very large question, but I recently acquired a few rack-mount servers and the hardware necessary to run them. Unfortunately I'm a programmer with very little understanding of how to set up a good working network, so I'm hoping someone here might be able to help. What I want to do is run a domain with a series of subdomains which would all be externally accessible. The setup would live inside my home, and my internet connection is your run-of-the-mill cable modem (which means a dynamic IP). I want to be able to set up a couple of sites, specifically: www.mycompany.com (mycompany.com with no subdomain would redirect to this), build.mycompany.com (for my continuous integration server), ruby.mycompany.com (for Ruby projects), win.mycompany.com (for Windows projects), etc. Additionally, this is still my home network, so our personal machines need to be able to get on via wifi with at least the same security we have now through an out-of-the-box router from Best Buy. I'm thinking I need a DNS server and a DHCP server, and one of those would run either No-IP or DynDNS to accommodate the dynamic IP. I don't necessarily need mail, but it might be helpful to have some sort of mail server I could use for testing; it doesn't need to get out to the greater internet though. So how do I set up this kind of network? tl;dr: I need to know how to set up a standard office-style network in my home, off a normal consumer-level cable modem connection.

    Read the article

  • Testing install procedure of a program requiring administrative privileges

    - by Lucas Meijer
    I'm trying to write automated tests to ensure that the installer for my program works okay. The program can be installed for all users (requires admin privileges) or for the current user (does not require admin privileges). The program can also auto-update itself, which in some cases requires admin privileges and in some cases doesn't. I'm looking for a way to have an automated test click "Yes, Allow" on the UAC dialogs, so I can write tests for all the different scenarios on many different operating systems, and be confident when I make changes to the installer that I didn't break anything. Obviously, the installer process itself cannot do this. However, I control the complete machine and could easily start some sort of daemon process with administrative rights that the test program could make a socket connection to, to request "please click OK on the UAC now".

    Read the article

  • PPP connection with RAS dialer in C++

    - by user312054
    I have a Windows Mobile application running on Windows CE 5.0. I have been informed by the people supplying the hardware for the unit that I need to create a socket, which I have done successfully, and then dial out to the internet over a PPP connection using a RAS dialer. Our old code uses an APN to dial out, so I need to create the above connection with an APN. I am having trouble finding examples of this. Can someone point me to some examples for this situation?
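
    For what it's worth, the usual starting point is the Win32/CE RAS API (ras.h): fill in a RASDIALPARAMS and call RasDial against a pre-created connection entry. The sketch below is illustrative only; the entry name, the empty credentials, and the "~GPRS!<apn>" phone-number convention for GPRS entries are assumptions to check against your platform's documentation, not guaranteed behaviour of the device.

        /* Hedged sketch: synchronous RAS dial on Windows CE / Windows Mobile. */
        #include <windows.h>
        #include <ras.h>
        #include <raserror.h>
        #include <tchar.h>

        static HRASCONN dial_gprs(void)
        {
            RASDIALPARAMS params;
            HRASCONN hConn = NULL;

            memset(&params, 0, sizeof(params));
            params.dwSize = sizeof(params);                  /* required by RasDial */
            _tcscpy(params.szEntryName, _T("My GPRS"));      /* name of a RAS entry on the device (assumed) */
            _tcscpy(params.szPhoneNumber, _T("~GPRS!apn.example.com")); /* APN via assumed convention */
            _tcscpy(params.szUserName, _T(""));
            _tcscpy(params.szPassword, _T(""));

            /* No notifier: RasDial blocks until the PPP link is up or the dial fails. */
            DWORD rc = RasDial(NULL, NULL, &params, 0, NULL, &hConn);
            if (rc != 0) {
                if (hConn)                                   /* partial connections must be hung up */
                    RasHangUp(hConn);
                return NULL;
            }
            return hConn;   /* sockets created after this point go out over the PPP link */
        }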

    Read the article

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore etc. etc. spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of: Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file. Any advice here, or an idea of where to look for a better solution? FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason, as I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log of the reasons for these errors, and even the "--debug-output=true" flag does nothing. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of: Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file. Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that presents the same benefits as Duplicati/Duplicity but actually works across platforms?

    Read the article

  • Mono ASP.NET COM Reference

    - by Benny
    I am sure this is a very dumb question to be asking for such a platform as Mono, but I am really stuck with .NET on one of my only remaining projects on MS platforms and would like to move away from it. The only problem is that the web site is dependent on a COM library that is simply a socket wrapper enforcing a messaging protocol. I could reverse the code (I actually made a 10k line attempt) but there's nothing better than the original if it works. Is there any way to reference a tlb export on Mono? Any advice would be greatly appreciated. Thanks in advance!

    Read the article

  • non blocking tcp connect with epoll

    - by doccarcass
    My Linux application performs a non-blocking TCP connect syscall and then uses epoll_wait to detect completion of the three-way handshake. Sometimes epoll_wait returns with both POLLOUT and POLLERR revents set for the same socket descriptor. I would like to understand what's going on at the TCP level. I'm not able to reproduce it on demand. My guess is that between two calls to epoll_wait inside my event loop we had a SYN+ACK/ACK/FIN sequence, but again I'm not able to reproduce it. Any clue? Regards, Seb
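
    As an aside (not from the original question), the standard way to find out what actually happened to the connect is to read the pending error off the socket with SO_ERROR once epoll reports the descriptor ready; POLLERR alongside POLLOUT usually means exactly that such a queued error exists. A minimal C sketch:

        #include <errno.h>
        #include <stdint.h>
        #include <sys/epoll.h>
        #include <sys/socket.h>

        /* Call when epoll_wait() reports the connecting socket as ready.
           Returns 0 if the handshake completed, otherwise the pending
           socket error (e.g. ECONNREFUSED, ETIMEDOUT, ECONNRESET). */
        static int check_connect_result(int fd, uint32_t revents)
        {
            int err = 0;
            socklen_t len = sizeof(err);

            if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0)
                return errno;              /* getsockopt itself failed */

            if (err != 0)
                return err;                /* connect failed; err says why */

            if (revents & (EPOLLERR | EPOLLHUP))
                return ECONNRESET;         /* error flag with nothing queued: assume reset */

            return (revents & EPOLLOUT) ? 0 : EINPROGRESS;
        }

    Logging the value returned here the next time the POLLOUT+POLLERR combination shows up should reveal which TCP-level failure (refused, reset, timed out) is actually occurring.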

    Read the article

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source-control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this post (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr>

    <introduction> I am programming something I have never done before, and there might already be tools out there that do the exact thing I am programming, but at this point I am having too much fun to care, so I am still going to do it from scratch even if there is a tool for this. Anyway, I am working on a Ruby on Rails app and need a certain feature. Basically I want each entry in a table of mine, say a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be editable by any registered user and also to keep track of different submissions in a version control system. The simplest solution I could think of is to keep track of the text body and the diff patch history of its different versions as objects in Ruby and then serialize it, preferably in human-readable form (so I'll most likely use YAML), for hand editing if that's ever needed due to corruption by a software bug or a mistake made by an admin doing some version editing.

    So at first I just dove head first into this feature, only to find that generating a diff patch efficiently is more difficult than I thought. So I did some research and came across some ideas; some I have implemented already and some I have not. It all pretty much revolves around the longest common subsequence problem, as you will already know if you have done anything with diff or diff-like features, and optimizing the function that solves it. Currently I truncate the compared versions of the text body from the beginning and end until non-matching lines are found. Then I solve the problem using a comparison matrix, but instead of incrementing the value stored in a cell when a matching line is found, like in most longest common subsequence algorithms I have seen examples of, I increment when I have a non-matching line, so as to calculate edit distance instead of longest common subsequence. As far as I can tell the two approaches are essentially two sides of the same coin, so either could be used to derive an answer. It then back-traces through the comparison matrix and notes where there was an increment and in which adjacent cell (West, Northwest, or North), to determine that line's diff entry, and assumes all other lines to be unchanged.

    Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started worrying about optimizing at least enough that a spammer who somehow knew how I implemented the version control system and knew my worst-case-scenario entry still wouldn't be able to hit the server that badly. After some searching and reading of research papers and articles on the internet, I've come across several optimizations that seem decent, but all seem to have pros and cons, and I am having a hard time deciding how the pros and cons balance out in this situation. So are the ones listed here worth it? I have listed them with their known pros and cons. </introduction>

    <optimizations>

    1. Chop the compared sequences into multiple chunks of subsequences by splitting where lines are unchanged, and then truncate the runs of unchanged lines at the beginning and end of each chunk. Then solve the edit distance of each subsequence. Pro: changes the time growth as the changed area gets bigger from quadratic to something closer to linear. Con: figuring out where to split already seems to require solving edit distance, except now you don't care how it changed. It would be fine if this were solvable by something closer to Hamming distance, but a single insertion throws that off.

    2. Use a cryptographic hash function to convert all sequence elements into integers while ensuring uniqueness, then solve the edit distance comparing the hash integers instead of the sequence elements themselves. Pro: comparing two integers is faster than comparing two strings, so a slight performance gain is made on every comparison, which can add up to a lot overall. Con: a cryptographic hash function takes time to convert all the sequence elements and may cost more than the integer comparisons gain back. You could use the built-in hash function for a string, but that does not guarantee uniqueness.

    3. Use lazy evaluation to calculate only the three centre-most diagonals of the comparison matrix, and calculate additional diagonals only as needed; then also use this approach to possibly remove the need, on some comparisons, to look at all three adjacent cells, as described here. Pro: can turn an algorithm that always takes O(n * m) time into one where that is only the worst case, the best case becomes practically linear, and the average case sits somewhere between the two. Con: it is an algorithm I've only seen implemented in functional programming languages, and I am having a difficult time working out how to convert it into Ruby based on how it is described at the site linked to above.

    4. Write a C module, do the hard work at the native level in C, and make a Ruby wrapper for it so Ruby can make all the calls it needs. Pro: I have to imagine that evaluating something like this in C could be a LOT faster. Con: I have no idea how Rails handles apps whose Ruby code has C extensions, and it hurts the portability of the app.

    5. (An optimization for after the edit distance is solved.) Store additional combined diffs alongside the ones produced for each version, forming a delta-tree data structure with the most recently made diff as the root node, so that getting to any version takes worst-case O(log n) time instead of O(n). Pro: would make going back to an old version a lot faster. Con: with every new commit the delta-tree would get a new root node, which costs time to reorganize the tree for an operation that is carried out far more often than going back a version, not to mention the unlikelihood that it will be to an old version.

    </optimizations>

    So are these things worth the effort?
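
    For reference (and purely as an illustration, in C rather than Ruby to keep it compact), the core computation the question describes — line-based edit distance with the unchanged prefix and suffix trimmed off first — looks roughly like the sketch below. Note that it returns only the distance; the full matrix is still needed if you also want to back-trace the actual diff entries.

        #include <stdlib.h>
        #include <string.h>

        static size_t min3(size_t a, size_t b, size_t c)
        {
            size_t m = a < b ? a : b;
            return m < c ? m : c;
        }

        /* a[0..n) and b[0..m) are arrays of lines (NUL-terminated strings). */
        size_t line_edit_distance(char **a, size_t n, char **b, size_t m)
        {
            /* Truncate the unchanged prefix and suffix first, as in the question. */
            while (n && m && strcmp(a[0], b[0]) == 0) { a++; b++; n--; m--; }
            while (n && m && strcmp(a[n - 1], b[m - 1]) == 0) { n--; m--; }

            /* Classic DP over a single row: O(n*m) time, O(m) space. */
            size_t *row = malloc((m + 1) * sizeof(*row));
            if (row == NULL)
                return (size_t)-1;
            for (size_t j = 0; j <= m; j++) row[j] = j;

            for (size_t i = 1; i <= n; i++) {
                size_t prev = row[0];           /* value of cell (i-1, j-1) */
                row[0] = i;
                for (size_t j = 1; j <= m; j++) {
                    size_t cur = row[j];
                    size_t cost = strcmp(a[i - 1], b[j - 1]) ? 1 : 0;
                    row[j] = min3(row[j] + 1,       /* delete  (North)     */
                                  row[j - 1] + 1,   /* insert  (West)      */
                                  prev + cost);     /* keep/replace (NW)   */
                    prev = cur;
                }
            }
            size_t d = row[m];
            free(row);
            return d;
        }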

    Read the article

  • Fast single thread comet server, possible?

    - by Pepijn
    I recently encountered a few cases where a server would distribute an event stream that contains the exact same data for all listeners, such as a 'recent activity' box. It occurred to me that it is quite strange and inefficient to have a server like Apache run a thread processing and querying the database for every single comet stream that carries the same data. What I would do for those global (not per-user) streams is run a single thread that continuously emits data, and a new (green) thread for every new request that outputs the headers and then 'merges' into the main thread. Is it possible for one thread to serve multiple sockets, or for multiple clients to listen to the same socket? An example (o = event):

        o = event      # threads   received
        |  a b         # 3
        o / /          # 3
        |/_/           # 1
        o  c           # 2         a, b
        | /
        o/             # 2         a, b
        o              # 1         a, b, c
        |              # connection b closed
        o              # 1         a, c

    Does something like this exist? Would it work? Is it possible to do? Disclaimer: I'm not a server expert.
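
    To the narrow question — yes, a single thread can serve many client sockets; that is what readiness-multiplexing calls such as select()/epoll are for. Below is a minimal C sketch (not the asker's design; the port number and the once-a-second "event" are made up for illustration) in which one thread accepts listeners and writes the same event to all of them.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int listener = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = { .sin_family = AF_INET,
                                        .sin_port = htons(8080),
                                        .sin_addr.s_addr = htonl(INADDR_ANY) };
            int clients[FD_SETSIZE], nclients = 0;

            bind(listener, (struct sockaddr *)&addr, sizeof(addr));
            listen(listener, 16);

            for (;;) {
                fd_set rfds;
                struct timeval tv = { 1, 0 };          /* wake up once a second */
                FD_ZERO(&rfds);
                FD_SET(listener, &rfds);

                /* Accept any newly connected listener without blocking forever. */
                if (select(listener + 1, &rfds, NULL, NULL, &tv) > 0 &&
                    FD_ISSET(listener, &rfds) && nclients < FD_SETSIZE)
                    clients[nclients++] = accept(listener, NULL, NULL);

                /* One "event" generated once, written to every connected client. */
                const char *event = "data: tick\n\n";
                for (int i = 0; i < nclients; i++)
                    if (write(clients[i], event, strlen(event)) < 0) {
                        close(clients[i]);
                        clients[i--] = clients[--nclients];   /* drop dead client */
                    }
            }
        }

    Whether this fits inside an existing server like Apache is a different matter; the usual answer is to move the comet endpoint to an event-driven server and keep Apache for the rest.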

    Read the article

  • Xen PV packet loss

    - by Delphinator
    I'm having some serious issues with packet loss on one of my servers. The server is a somewhat old (P4-era) machine running Debian Squeeze and Xen 4.0. There are two domUs running on it (both also Debian Squeeze), a gateway and a fileserver. Unfortunately the processor has no virtualization extensions, so only PV can be used. While investigating why our network seems to be slower than it should be, I found some pretty bad packet loss (~25%). After further investigation and several experiments I did a measurement between the dom0 and one of the domUs:

        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size:  110 KByte (default)
        ------------------------------------------------------------
        ------------------------------------------------------------
        Client connecting to dom0, UDP port 5001
        Sending 1470 byte datagrams
        UDP buffer size:  110 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.1.2(domU) port 33817 connected with 192.168.1.100(dom0) port 5001
        [  4] local 192.168.1.2(domU) port 5001 connected with 192.168.1.100(dom0) port 48606
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  46.3 MBytes  38.7 Mbits/sec
        [  3] Sent 33020 datagrams
        [  3] Server Report:
        [  3]  0.0-10.0 sec  46.2 MBytes  38.6 Mbits/sec   0.030 ms    89/33019 (0.27%)
        [  3]  0.0-10.0 sec  1 datagrams received out-of-order
        [  4]  0.0-10.2 sec  43.0 MBytes  35.3 Mbits/sec  13.074 ms 11575/42256 (27%)

    tl;dr: 27% packet loss from dom0 to domU with 50Mbit UDP packets. The same thing happens from anywhere in the network. The problem gets better at lower bandwidths (0.047% at 5Mbit) and worse at higher ones (59% at 200Mbit). I did increase the CPU weight of the dom0, there is no swapping going on, and no actual networking hardware is involved. I never expected Xen (or anything related to it) to drop packets, and I'm completely clueless what to try next.

    Read the article

  • Fast way to test if a port is in use using Python

    - by directedition
    I have a Python server that listens on a couple of sockets. At startup, I try to connect to these sockets before listening, so I can be sure that nothing else is already using those ports. This adds about three seconds to my server's startup (which is about 0.54 seconds without the test) and I'd like to trim it down. Since I'm only testing localhost, I think a timeout of about 50 milliseconds is more than ample. Unfortunately, the socket.setdefaulttimeout(50) method doesn't seem to work for some reason. How can I trim this down?

    Read the article

  • Loading and storing encryption keys from a config source

    - by Hassan Syed
    I am writing an application which has an authenticity mechanism using HMAC-SHA1, plus a CBC-Blowfish pass over the data for good measure. This requires two keys and one ivec. I have looked at Crypto++ but the documentation is very poor (for example the HMAC documentation), so I am going old-school and using OpenSSL. What's the best way to generate and load these keys using library functions and tools? I don't require a secure socket, therefore an X.509 certificate probably does not make sense, unless of course I am missing something. So, do I need to write my own config file, or is there any infrastructure in OpenSSL for this? If so, could you direct me to some documentation or examples of it?
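
    OpenSSL does not impose a config format for raw symmetric keys, so one plausible approach — a sketch under assumptions: the file name, key sizes, and name=hex format are illustrative choices, not an OpenSSL convention — is to generate the keys with the library's CSPRNG and store them hex-encoded yourself:

        #include <stdio.h>
        #include <openssl/rand.h>

        /* Write one key as "name=hexbytes" on its own line. */
        static void write_hex(FILE *f, const char *name, const unsigned char *buf, size_t len)
        {
            fprintf(f, "%s=", name);
            for (size_t i = 0; i < len; i++)
                fprintf(f, "%02x", buf[i]);
            fputc('\n', f);
        }

        int main(void)
        {
            unsigned char hmac_key[20], bf_key[16], ivec[8];   /* SHA-1 HMAC, Blowfish key, CBC IV */

            /* RAND_bytes() pulls from OpenSSL's seeded CSPRNG; check the return value. */
            if (RAND_bytes(hmac_key, sizeof(hmac_key)) != 1 ||
                RAND_bytes(bf_key, sizeof(bf_key)) != 1 ||
                RAND_bytes(ivec, sizeof(ivec)) != 1)
                return 1;

            FILE *f = fopen("keys.conf", "w");
            if (!f) return 1;
            write_hex(f, "hmac_key", hmac_key, sizeof(hmac_key));
            write_hex(f, "blowfish_key", bf_key, sizeof(bf_key));
            write_hex(f, "cbc_ivec", ivec, sizeof(ivec));
            fclose(f);
            return 0;
        }

    Loading is the reverse: read each line, hex-decode into the fixed-size buffers, and keep the file's permissions tight (e.g. 0600), since anyone who can read it can forge the HMAC. Build with -lcrypto.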

    Read the article

  • PHPMailer, fsockopen(), possible Apache issue?

    - by danp
    I'm using PHPMailer to send out site contact emails. In development, the script works perfectly with the GMail service over SMTP. However, in production, inside the client's DMZ, it appears unable to connect to the SMTP service they have there. I have connected to the same service using telnet to port 25, so I know for sure it exists and is available to the server. Are there any circumstances where PHP might not be able to open a socket connection (fsockopen)...? The PHP extension openssl is loaded and OK. The error is "Unable to connect to SMTP service". Thanks!

    Read the article

  • Integrating Jython and CPython

    - by eric.frederich
    I am about to begin a project where I will likely use PyQt or Pyside. I will need to interface with a buggy 3rd party piece of server software that provides C++ and Java APIs. The Java APIs are a lot easier to use because you get Exceptions where with the C++ libraries you get segfaults. Also, the Python bindings to the Java APIs are automatic with Jython whereas the Python bindings for the C++ APIs don't exist. So, how would a CPython PyQt client application be able to communicate with these Java APIs? How would you go about it? Would you have another separate Java process on the client that serializes / pickles objects and communicates with the PyQt process over a socket? I don't want to re-invent the wheel... is there some sort of standard interface for these types of things? Some technology I should look into? RPC, Corba, etc? Thanks, ~Eric

    Read the article

  • Why is this the output of this python program?

    - by Andrew Moffat
    Someone from #python suggested that it's searching for module "herpaderp" and finding all the modules listed as it's searching. If this is the case, why doesn't it list every module on my system before raising ImportError? Can someone shed some light on what's happening here?

        import sys

        class TempLoader(object):
            def __init__(self, path_entry):
                if path_entry == 'test':
                    return
                raise ImportError
            def find_module(self, fullname, path=None):
                print fullname, path
                return None

        sys.path.insert(0, 'test')
        sys.path_hooks.append(TempLoader)
        import herpaderp

    output:

        16:00:55 $> python wtf.py
        herpaderp None
        apport None
        subprocess None
        traceback None
        pickle None
        struct None
        re None
        sre_compile None
        sre_parse None
        sre_constants None
        org None
        tempfile None
        random None
        __future__ None
        urllib None
        string None
        socket None
        _ssl None
        urlparse None
        collections None
        keyword None
        ssl None
        textwrap None
        base64 None
        fnmatch None
        glob None
        atexit None
        xml None
        _xmlplus None
        copy None
        org None
        pyexpat None
        problem_report None
        gzip None
        email None
        quopri None
        uu None
        unittest None
        ConfigParser None
        shutil None
        apt None
        apt_pkg None
        gettext None
        locale None
        functools None
        httplib None
        mimetools None
        rfc822 None
        urllib2 None
        hashlib None
        _hashlib None
        bisect None
        Traceback (most recent call last):
          File "wtf.py", line 14, in <module>
            import herpaderp
        ImportError: No module named herpaderp

    Read the article

  • How to get the quickfix timestamp?

    - by yves Baumes
    I've seen in the QuickFIX Doxygen documentation that it generates a UTC timestamp as soon as it has received a FIX message from a socket. Have a look at ThreadedSocketConnection::processStream(): it then calls m_pSession->next( msg, UtcTimeStamp() ); I would like to get that timestamp, because I need it to screen network and QuickFIX library latencies. I didn't find a way to get it from the FixApplication::fromApp() callback or the 'Log::onIncoming()' callback. As I am a newbie with QuickFIX, I would like to know if I missed something in the QuickFIX documentation. Has anybody ever done that before? Of course there are other solutions, but for homogeneity with the other market access applications I maintain, I would prefer to avoid them. For instance, I would prefer not to modify the QuickFIX source code, and I would like to avoid re-writing the application logic that QuickFIX provides me, using QuickFIX only for message decoding.

    Read the article

  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating an older version), and it always returns:

        Reading ./localconfig...
        Checking for       DBD-mysql (v4.00)    ok: found v4.005
        Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        There was an error connecting to MySQL:
        Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225.

    MySQL Version:

        [root@bugzilla-core TMP]# mysql --version
        mysql  Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1

    And mysql_config:

        [root@bugzilla-core TMP]# mysql_config
        Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS]
        Options:
          --cflags         [-I/data01/mysql-5.0.60/include -g]
          --include        [-I/data01/mysql-5.0.60/include]
          --libs           [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc]
          --libs_r         [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc]
          --socket         [/tmp/mysql.sock]
          --port           [0]
          --version        [5.0.60sp1]
          --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc]

    Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped. I'm not sure where to go from here. Scouring the 'webs hasn't returned anything fruitful. Any ideas?

    Read the article

  • Private staff network within public network

    - by pianohacker
    I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a secure and simple way. Security is a little tricky; the staff and patron networks need to be separated, for security reasons. Even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate; even if I set up enough VMs so they didn't share any servers, they need to use the same two printers at the very least. Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network, and has the same Windows server connected on a different port, providing DNS and DHCP, and another, faster DSL modem (separate connections are very useful, especially as we heavily depend on some cloud-hosted software). tl;dr: We have a public network, and a NATed staff network within it. My question is; is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget. (My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network was broken rather than asking about the DHCP problem and being told my network was broken.)

    Read the article

  • How to free virtual memory?

    - by Mehdi Amrollahi
    I have a crawler application (written in C#) that downloads pages from the web. The application keeps using more virtual memory, even though I dispose every object and even call GC.Collect(). It has 10 threads, and each thread has a socket that downloads pages. I use the Dispose method and even call GC.Collect() in my application, but within 3 hours the application is using 500 MB of virtual memory (500 MB of private bytes in Process Explorer). Then my system hangs and I have to restart my PC. Is there any way to free this virtual memory? Thanks.

    Read the article

  • Data Integration/EAI Project Lessons Learned

    - by Greg Harman
    Have you worked on a significant data or application integration project? I'm interested in hearing what worked for you and what didn't and how that affected the project both during and after implementation (i.e. during ongoing operation, maintenance and expansion). In addition to these lessons learned, please describe the project by including a quick overview of: The data sources and targets. Specifics are not necessary, but I'd like to know general technology categories e.g. RDBMS table, application accessed via a proprietary socket protocol, web service, reporting tool. The overall architecture of the project as related to data flows. Different human roles in the project (was this all done by one engineer? Did it include analysts with a particular expertise?) Any third-party products utilized, commercial or open source.

    Read the article

  • Installing PostGIS on Windows

    - by Cornflake
    I've installed PostgreSQL and PostGIS, and now I'm trying to follow these instructions: http://docs.djangoproject.com/en/dev/ref/contrib/gis/install/#spatialdb-template But I keep getting the following error, both in the command prompt and in Cygwin:

        C:\Users\Home>createdb -E UTF8 template_postgis
        createdb: could not connect to database postgres: could not connect to server: No such file or directory
                Is the server running locally and accepting
                connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    And I know PostgreSQL is running, because I'm using it right now! Installing open source applications can sometimes be so frustrating... I'll be very grateful for your help!

    Read the article

  • Ruby: writing a network redirector

    - by Shyam
    Hi, I would like to research protocols such as HTTP. As I am learning Ruby, I would like to write a program that works as a "gateway": I would connect to its port, for example 8080, and the program should forward my request to the real host and send back the answers. The idea of my design is something like this:

        class EchoProxy
          def run
            # run a listening socket on port 8080
            myinfiniteloop
          end

          def myinfiniteloop
            # continually run this loop unless the app is terminated
            puts traffic
          end
        end

    Some pointers in the right direction would be great! Thank you for your comments, answers and feedback!

    Read the article

  • realloc()ing memory for a buffer used in recv()

    - by Hristo
    I need to recv() data from a socket and store it into a buffer, but I need to make sure I get all of the data, so I have things in a loop. To make sure I don't run out of room in my buffer, I'm trying to use realloc to resize the memory allocated to the buffer. So far I have:

        // receive response
        int i = 0;
        int amntRecvd = 0;
        char *pageContentBuffer = (char*) malloc(4096 * sizeof(char));
        while ((amntRecvd = recv(proxySocketFD, pageContentBuffer + i, 4096, 0)) > 0) {
            i += amntRecvd;
            realloc(pageContentBuffer, 4096 + sizeof(pageContentBuffer));
        }

    However, this doesn't seem to be working properly, since Valgrind is complaining "valgrind: the 'impossible' happened:". Any advice as to how this should be done properly? Thanks, Hristo
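
    For reference, a corrected sketch of that loop, under one reading of what is going wrong: realloc's return value is discarded, sizeof(pageContentBuffer) is the size of the pointer rather than of the allocation, and the recv size never accounts for how much has already been stored. The names follow the question; the doubling growth strategy is just one choice.

        size_t capacity = 4096;
        size_t used = 0;
        char *pageContentBuffer = malloc(capacity);
        ssize_t amntRecvd;

        /* Keep reading until the peer closes the connection (recv returns 0) or errors. */
        while ((amntRecvd = recv(proxySocketFD, pageContentBuffer + used,
                                 capacity - used, 0)) > 0) {
            used += (size_t)amntRecvd;
            if (capacity - used < 4096) {                  /* running low: grow the buffer */
                char *tmp = realloc(pageContentBuffer, capacity * 2);
                if (tmp == NULL) {                         /* realloc failed; old block is still valid */
                    free(pageContentBuffer);
                    pageContentBuffer = NULL;
                    break;
                }
                pageContentBuffer = tmp;                   /* realloc may move the block */
                capacity *= 2;
            }
        }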

    Read the article

  • Multi-Threading - Cleanup strategy at program end

    - by weismat
    What is the best way to finish a multi-threaded application in a clean way? I start several socket connections from the main thread on separate sockets, wait until the end of my business day in the main thread, and currently use System.Environment.Exit(0) to terminate. This leads to an unhandled exception in one of the children. Should I stop the threads from the list? I have been reluctant to implement any real stopping in the children yet, so I am wondering about best practice. The sockets are all wrapped nicely, with proper destructors for logging out and closing, but it still leads to errors.

    Read the article

  • Is there a dictionary about common programming vocabulary?

    - by _simon_
    When I need a name for a new class that extends the behaviour of an existing class, I usually have a hard time coming up with one. For example, if I have a class MyClass, then the new class could be named something like MyClassAdapter, MyClassCalculator, MyClassDispatcher, MyClassParser,... The new name should of course represent the behaviour of the class and would ideally be the same as the design pattern in which it is used (Adapter, Decorator, Factory,...). But since we don't overuse design patterns, this is not always the solution :) So, do you know of a dictionary or a list of common words that we can use to represent the behaviour of a class, containing a short description of the expected behaviour? Some examples: replicator, shadow, token, acceptor, worker, mapper, driver, bucket, socket, validator, wrapper, parser, verifier,... You could also look at such a list as a cheat sheet of metaphors with which you can better understand your problem domain.

    Read the article
