Search Results

Search found 5215 results on 209 pages for 'multi targeting'.


  • Java - How to detect that the internet connection has been disconnected from a Java desktop application

    - by Yatendra Goel
    I am developing a Java desktop application that accesses the internet. It is a multi-threaded application; each thread does the same work (each thread is an instance of the same Thread class). Since all the threads need an active internet connection, there should be some mechanism that detects whether the connection is active or not.
    Q1. How do I detect whether the internet connection is active or not?
    Q2. Where should this internet-status check be implemented? Should I start a separate thread that checks the status regularly and notifies all the other threads when it changes from one state to another, or should each thread check the status itself?
    Q3. This should be a very common issue, as every application that accesses the internet has to deal with it. So how do other developers usually deal with this problem?
    Q4. If you could give me a reference to a good demo application that addresses this issue, it would greatly help me.
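    A minimal sketch of one common approach to Q1/Q2 - a single background thread that probes a well-known host with a short timeout and exposes the result to the worker threads. The probe host, port and interval are illustrative assumptions, not anything from the question:
      import java.io.IOException;
      import java.net.InetSocketAddress;
      import java.net.Socket;

      public class ConnectivityMonitor implements Runnable {
          // Illustrative probe target; any reliably reachable host/port will do.
          private static final String PROBE_HOST = "www.google.com";
          private static final int PROBE_PORT = 80;
          private volatile boolean online = true;

          public boolean isOnline() { return online; }   // worker threads read this flag

          private static boolean canReach(String host, int port) {
              try (Socket socket = new Socket()) {
                  socket.connect(new InetSocketAddress(host, port), 3000); // 3-second timeout
                  return true;
              } catch (IOException e) {
                  return false;
              }
          }

          @Override
          public void run() {
              while (!Thread.currentThread().isInterrupted()) {
                  online = canReach(PROBE_HOST, PROBE_PORT);
                  // notify interested threads here if the state has changed
                  try {
                      Thread.sleep(10000); // re-check every 10 seconds
                  } catch (InterruptedException e) {
                      Thread.currentThread().interrupt();
                  }
              }
          }
      }
    Started once with new Thread(new ConnectivityMonitor()).start(), it keeps the status check in one place instead of having every worker thread probe the network on its own.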

    Read the article

  • Advice on a DB that can be uploaded to a website by a smart client for collecting survey feedback

    - by absfabs
    Hello, I'm hoping you can help. I'm looking for a zero-config multi-user database that my WinForms application can easily upload to a webserver folder (together with one or two classic ASP pages), and I'm looking for suggestions/recommendations. The idea is that the database will be used to collect feedback entered by people filling in the ASP pages. The pages will write to the database using JavaScript. The database will subsequently be downloaded again for processing once the responses are in. In summary:
    - It will mostly run in MS Windows environments.
    - I have a modest budget for this and do not mind paying for such a database, but there must be no runtime licensing costs.
    - It should be xcopy-deployable - once uploaded to a website folder it should be operational.
    - It should not have a .NET CLR dependency.
    - It should support a reasonable level of concurrent access. The average respondent count would be around 20-30, but one never knows.
    - It should be a reasonable size so that uploads/downloads to and from the site will be reasonably fast.
    I would appreciate your suggestions/comments. Many thanks, Abz.
    To clarify - this is a commercial desktop application for feedback management in a vertical market. It uses SQL Server as the backing store. The application currently provides feedback management from email and paper feedback; I now want to add web feedback capability. Getting users to make their SQL Servers accessible to a website is not an option at this time, as I want to make getting up and running as painless as possible. I intend to release a web-based implementation of the software in the near future, but for now I am looking at the above as a pragmatic way to provide web-based feedback collection.

    Read the article

  • spring 3 uploadify giving 404 Error

    - by Rajkumar
    I am using Spring 3 and implementing Uploadify. The problem is that the files upload properly, but I get an HTTP 404 error when the upload completes. I have tried every possible solution, but none of them works. The files are uploaded and the values are stored in the DB properly; only the HTTP 404 remains. Any help is appreciated, and thanks in advance.
    The JSP page:
      $(function() {
        $('#file_upload').uploadify({
          'swf'             : 'scripts/uploadify.swf',
          'fileObjName'     : 'the_file',
          'fileTypeExts'    : '*.gif; *.jpg; *.jpeg; *.png',
          'multi'           : true,
          'uploader'        : '/photo/savePhoto',
          'fileSizeLimit'   : '10MB',
          'uploadLimit'     : 50,
          'onUploadStart'   : function(file) {
            $('#file_upload').uploadify('settings', 'formData', {
              'trip_id' : '1', 'trip_name' : 'Sample Trip', 'destination_trip' : 'Mumbai',
              'user_id' : '1', 'email' : '[email protected]', 'city_id' : '12'
            });
          },
          'onQueueComplete' : function(queueData) {
            console.log('queueData : ' + queueData);
            window.location.href = "trip/details/1";
          }
        });
      });
    The controller:
      @RequestMapping(value = "photo/{action}", method = RequestMethod.POST)
      public String postHandler(@PathVariable("action") String action, HttpServletRequest request) {
        if (action.equals("savePhoto")) {
          try {
            MultipartHttpServletRequest multipartRequest = (MultipartHttpServletRequest) request;
            MultipartFile file = multipartRequest.getFile("the_file");
            String trip_id = request.getParameter("trip_id");
            String trip_name = request.getParameter("trip_name");
            String destination_trip = request.getParameter("destination_trip");
            String user_id = request.getParameter("user_id");
            String email = request.getParameter("email");
            String city_id = request.getParameter("city_id");
            photo.savePhoto(file, trip_id, trip_name, destination_trip, user_id, email, city_id);
            photo.updatetrip(photo_id, trip_id);
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
        return "";
      }
    The Spring config:
      <bean class="org.springframework.web.multipart.commons.CommonsMultipartResolver" id="multipartResolver">
        <property name="maxUploadSize" value="10000000"/>
      </bean>
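    One thing worth checking (an assumption, not a confirmed diagnosis): the handler returns "" as a view name, so after a successful upload Spring may try to resolve an empty view and answer with a 404 even though the work succeeded. A minimal sketch of writing the response body directly instead, using Spring 3's @ResponseBody; the class name and mapping below are illustrative:
      import org.springframework.stereotype.Controller;
      import org.springframework.web.bind.annotation.RequestMapping;
      import org.springframework.web.bind.annotation.RequestMethod;
      import org.springframework.web.bind.annotation.ResponseBody;
      import org.springframework.web.multipart.MultipartFile;
      import org.springframework.web.multipart.MultipartHttpServletRequest;

      @Controller
      public class PhotoUploadController {

          // Respond with a plain body instead of an (empty) view name, on the
          // assumption that the 404 comes from view resolution after the upload.
          @RequestMapping(value = "/photo/savePhoto", method = RequestMethod.POST)
          @ResponseBody
          public String savePhoto(MultipartHttpServletRequest request) {
              MultipartFile file = request.getFile("the_file");
              String tripId = request.getParameter("trip_id");
              // ... persist the file and its metadata as in the original handler ...
              return "ok"; // Uploadify only needs a 200 response to mark success
          }
      }
    If the 404 persists with this in place, the other usual suspect is the relative redirect in onQueueComplete ("trip/details/1"), which resolves against the current page's path and can itself 404.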

    Read the article

  • Lookup site column not saving/storing metadata for Office 2007 documents?

    - by Greg Hurlman
    I'm having this issue in several server environments. We have a list at the site collection root. There is a site column created as a multi-value lookup on that list's Title field. This site column is used in document libraries in subsites as a required field. When we upload anything but an Office 2007 document, the user is presented with the document metadata fill-in screen (EditForm.aspx?Mode=Upload), the user fills in the appropriate data (including picking a value or values for this lookup), and clicks "check in" - the document is checked in as expected, with the lookup field's value filled in. With an Office 2007 document, this fails. The user-selected values for the lookup field never make it to the server - no errors are thrown, but the field is not saved with the document. We have an event listener on these document libraries, and if we inspect the incoming SPListItem in the event listener method before a single line of our code has run, we see that the value for the lookup field is null. It smells like a SharePoint bug to me - but before I go calling Microsoft, has anyone seen this and worked around it?
    Edit: the only entry I see in the SP trace logs relating to the problem: CMS/Publishing/8ztg/Medium/Got List Item Version, but item was null

    Read the article

  • MSTest unit test passes by itself, fails when other tests are run

    - by Sarah Vessels
    I'm having trouble with some MSTest unit tests that pass when I run them individually but fail when I run the entire unit test class. The tests exercise some code that SLaks helped me with earlier, and he warned me that what I was doing wasn't thread-safe. However, now my code is more complicated and I don't know how to go about making it thread-safe. Here's what I have:
      public static class DLLConfig {
        private static string _domain;

        public static string Domain {
          get {
            return _domain = AlwaysReadFromFile
              ? readCredentialFromFile(DOMAIN_TAG)
              : _domain ?? readCredentialFromFile(DOMAIN_TAG);
          }
        }
      }
    And my test is simple:
      string expected = "the value I know exists in the file";
      string actual = DLLConfig.Domain;
      Assert.AreEqual(expected, actual);
    When I run this test by itself, it passes. When I run it alongside all the other tests in the test class (which perform similar checks on different properties), actual is null and the test fails. I note this is not a problem with a property whose type is a custom enum type; maybe I'm having this problem with the Domain property because it is a string? Or maybe it's a multi-threading issue with how MSTest works?

    Read the article

  • Why isn't this simple PHP/MySQL code working?

    - by Sammy
    I am very new to PHP/MySQL and this is causing me to lose hair. I am trying to build a multi-level site navigation. In this part of my script I am readying the sub and parent categories coming from a form for insertion into the database:
      // get child categories
      $catFields = $_POST['categories'];
      if (is_array($catFields)) {
        $categories = $categories;
        for ($i = 0; $i < count($catFields); $i++) {
          $categories = $categories . $catFields[$i];
        }
      }
      // get parent category
      $select = mysql_query("SELECT parent FROM categories WHERE id = $categories");
      while ($return = mysql_fetch_assoc($select)) {
        $parentId = $return['parent'];
      }
    The first part of my script works fine: it grabs all the categories that the user has chosen to assign to a post by checking the checkboxes in a form and readies them for insertion into the database. But the second part does not work and I can't understand why. I am trying to match a category with a parent that is stored in its own table, but it returns nothing even though the categories all have parents. Can anyone tell me why this is?
    P.S. The $categories variable contains the sub category id.

    Read the article

  • Simple multilingual CMS?

    - by Christoffer
    Hi, I have been searching for a while now for a dead-simple CMS with multi-language support. The ideal candidate is very lean and offers the possibility to set up different languages for different domains. It's OK if the language support is provided by a plugin/extension. For example, I want example.com to point to English and example.fr to French, with different URI mappings for SEO. It can be developed in any of PHP, Ruby or Python and has to be open source. Any tips? Thank you.
    EDIT / MORE DETAILS: What I want is a CMS that is as simple for a client to use and grasp as Radiant is, but with tabs on each resource for translating articles into different languages. Languages have to be able to use multiple domains, one for each language. I want to easily use the same article for more than one language, as well as have articles (e.g. blog posts or news stories) that are only connected to one language. The CMS should be very light in core functionality (like Radiant, unlike Drupal/Joomla) but easily extendable with plugins.

    Read the article

  • Search and replace hundreds of strings in tens of thousands of files?

    - by C Johnson
    I am looking into changing the file names of hundreds of files in a (C/C++) project that I work on. The problem is that our software has tens of thousands of files that include (i.e. #include) these hundreds of files that will get changed. This looks like a maintenance nightmare. If I do this I will be stuck in UltraEdit for weeks, rolling hundreds of regexes by hand like so:
      ^\#include.*["<\\/]stupid_name.*$
    replaced with
      #include <dir/new_name.h>
    Such drudgery would be worse than peeling hundreds of potatoes in a sunken submarine in the Antarctic with a spoon. I think it would rather be ideal to put the inputs and outputs into a table like so:
      stupid_name.h   <->  <dir/new_name.h>
      stupid_nameb.h  <->  <dir/new_nameb.h>
      stupid_namec.h  <->  <dir/new_namec.h>
    and feed this into a regular expression engine / tool / app / etc.
    My ultimate question: is there a tool that will do that?
    Bonus question: is it multi-threaded?
    I looked at quite a few search-and-replace topics here on this website and found lots of standard queries that asked a variant of the standard question - replace one term in N files - as opposed to my question: replace N terms in N files. Thanks in advance for any replies.
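    I don't know of a canonical off-the-shelf tool for the N-terms-in-N-files case, but the table-driven idea is small enough to script. A rough, single-threaded Java sketch that walks a source tree and rewrites matching #include lines from a rename map - the paths, extensions and map entries are only illustrative:
      import java.io.IOException;
      import java.nio.charset.StandardCharsets;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;
      import java.util.regex.Pattern;
      import java.util.stream.Collectors;
      import java.util.stream.Stream;

      public class IncludeRenamer {
          public static void main(String[] args) throws IOException {
              // Mapping table: old header name -> replacement include target (example entries).
              Map<String, String> renames = new HashMap<>();
              renames.put("stupid_name.h", "<dir/new_name.h>");
              renames.put("stupid_nameb.h", "<dir/new_nameb.h>");

              Path root = Paths.get(args[0]);   // project root passed on the command line
              try (Stream<Path> walk = Files.walk(root)) {
                  List<Path> sources = walk
                          .filter(p -> p.toString().matches(".*\\.(c|cc|cpp|h|hpp)$"))
                          .collect(Collectors.toList());
                  for (Path file : sources) {
                      List<String> lines = Files.readAllLines(file, StandardCharsets.UTF_8);
                      boolean changed = false;
                      for (int i = 0; i < lines.size(); i++) {
                          for (Map.Entry<String, String> r : renames.entrySet()) {
                              // Any #include whose target ends with the old header name.
                              Pattern p = Pattern.compile(
                                      "^\\s*#include\\s*[\"<].*" + Pattern.quote(r.getKey()) + "[\">]");
                              if (p.matcher(lines.get(i)).find()) {
                                  lines.set(i, "#include " + r.getValue());
                                  changed = true;
                              }
                          }
                      }
                      if (changed) {
                          Files.write(file, lines, StandardCharsets.UTF_8);
                      }
                  }
              }
          }
      }
    Running it against a copy of the tree and diffing the result is the obvious safety net; parallelising the per-file loop would be straightforward if throughput ever mattered.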

    Read the article

  • compile Boost as static Universal binary lib

    - by Albert
    I want to have a static universal binary lib of Boost (preferably the latest stable version, that is 1.43.0, or newer). I found many Google hits with similar problems and possible solutions; however, most of them seem outdated, and none of them really worked. Right now, I am trying:
      sudo ./bjam --toolset=darwin --link=static --threading=multi \
        --architecture=combined --address-model=32_64 \
        --macosx-version=10.4 --macosx-version-min=10.4 \
        install
    That compiles and installs fine. However, the produced binaries seem broken:
      az@ip245 47 (openlierox) %file /usr/local/lib/libboost_signals.a
      /usr/local/lib/libboost_signals.a: current ar archive random library
      az@ip245 49 (openlierox) %lipo -info /usr/local/lib/libboost_signals.a
      input file /usr/local/lib/libboost_signals.a is not a fat file
      Non-fat file: /usr/local/lib/libboost_signals.a is architecture: x86_64
      az@ip245 48 (openlierox) %otool -hv /usr/local/lib/libboost_signals.a
      Archive : /usr/local/lib/libboost_signals.a
      /usr/local/lib/libboost_signals.a(trackable.o):
      Mach header
            magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
      MH_MAGIC_64  X86_64       ALL  0x00   OBJECT     3       1536 SUBSECTIONS_VIA_SYMBOLS
      /usr/local/lib/libboost_signals.a(connection.o):
      Mach header
            magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
      MH_MAGIC_64  X86_64       ALL  0x00   OBJECT     3       1776 SUBSECTIONS_VIA_SYMBOLS
      /usr/local/lib/libboost_signals.a(named_slot_map.o):
      Mach header
            magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
      MH_MAGIC_64  X86_64       ALL  0x00   OBJECT     3       1856 SUBSECTIONS_VIA_SYMBOLS
      /usr/local/lib/libboost_signals.a(signal_base.o):
      Mach header
            magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
      MH_MAGIC_64  X86_64       ALL  0x00   OBJECT     3       1776 SUBSECTIONS_VIA_SYMBOLS
      /usr/local/lib/libboost_signals.a(slot.o):
      Mach header
            magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
      MH_MAGIC_64  X86_64       ALL  0x00   OBJECT     3       1616 SUBSECTIONS_VIA_SYMBOLS
    Every object file is x86_64 only, so the archive is not a fat binary. Any suggestion how to get that correct?

    Read the article

  • Webcrawler, feedback?

    - by Jan Kuboschek
    Hey folks, every once in a while I have the need to automate data-collection tasks from websites. Sometimes I need a bunch of URLs from a directory, sometimes I need an XML sitemap (yes, I know there is lots of software for that, and online services). Anyway, as a follow-up to my previous question I've written a little webcrawler that can visit websites.
    - Basic crawler class to easily and quickly interact with one website.
    - Override "doAction(String URL, String content)" to process the content further (e.g. store it, parse it).
    - Concept allows for multi-threading of crawlers: all class instances share processed and queued lists of links.
    - Instead of keeping track of processed links and queued links within the object, a JDBC connection could be established to store links in a database.
    - Currently limited to one website at a time; however, it could be expanded upon by adding an externalLinks stack and adding to it as appropriate.
    JCrawler is intended to be used to quickly generate XML sitemaps or parse websites for your desired information. It's lightweight. Is this a good/decent way to write the crawler, provided the limitations above?
      http://pastebin.com/VtgC4qVE - Main.java
      http://pastebin.com/gF4sLHEW - JCrawler.java
      http://pastebin.com/VJ1grArt - HTMLUtils.java
    Thanks for your feedback in advance! :)

    Read the article

  • Returning large collections from WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up to the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items) I exceed the default 64k. I'm almost to the point of returning my own class which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize so that I can implement paging on the client side - but this seems like a lot of code. The other option is to jack up maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues.
    For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data-processing applications - .NET console apps and maybe some other non-UI apps. For the UI applications I would like to keep them responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and push it back up to the service.

    Read the article

  • Cross-Platform Language + GUI Toolkit for Prototyping Multimedia Applications

    - by msutherl
    I'm looking for a language + GUI toolkit for rapidly prototyping utility applications for multimedia installations. I've been working with Max/MSP/Jitter for many years, but I'd like to add a text-based language to my 'arsenal' for tasks apart from 'content production'. (When it comes to actual media synthesis, my choices are clear [SuperCollider + MSP for audio, Jitter + Quartz + openFrameworks for video].) I'm looking for something that maintains some of the advantages of Max, but is lower-level, faster, more cross-platform (Linux support), and text-based. Integration with powerful sound/video libraries is not a requirement.
    Some requirements:
    - Cross-platform (at least OSX and Linux, Windows is a plus)
    - Fast and easy cross-platform GUIs with no platform-specific modification
    - GUI code separated from backend code as much as possible
    - Good for interfacing with external serial devices (micro-controllers)
    - Good network support (UDP/TCP)
    - Good libraries for multi-media (video, sound, OSC) are a plus
    - Asynchronous synchronous UNIX integration is a plus
    The options that come to mind:
    - AS3/Flex (not a fan of AS3 or the idea of running in the Flash Player)
    - openFrameworks (C++ framework, perhaps a bit too low-level [looking for fast development time] and biased toward video work)
    - Java w/ Processing libraries (like openFrameworks, just slower)
    - Python + Qt (is Qt appropriate for rapid prototyping?)
    - Python + another GUI toolkit
    - SuperCollider + Swing (yucky GUI development)
    - Java w/ SWT
    Any other options? What do you recommend?

    Read the article

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both pure DataSet and DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record; almost a poor man's workflow system. The recordsets are broad (in terms of the columns) and deep (in terms of number of records). We observed that DataSet performs much better, but runs into constraints as the number of records increases: at around 100K+ we start seeing System.OutOfMemoryException on a 4G machine with processModel configured to run with memoryLimit set to 85. Since this is a multi-threaded app, there can be multiple threads processing different queries and building different DataSets, so we run into the exception even sooner in that case. DataReader, on the other hand, works but is a lot slower and hits other constraints: if there is some sort of disconnect it has to start over again, or it leaves open connections on the DB side, and in the worst case it takes down the service completely, etc. So we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
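    For what it's worth, the usual shape of such a hybrid - stream the rows like a reader, but materialise them in fixed-size batches so memory stays bounded - looks roughly like the JDBC-flavoured sketch below; the batch size and query handling are illustrative, and the same pattern maps onto a DataReader filling a small DataTable per batch:
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.util.ArrayList;
      import java.util.List;
      import javax.sql.DataSource;

      public class BatchedReader {
          // Illustrative batch size: small enough to bound memory, big enough to amortise per-row cost.
          private static final int BATCH_SIZE = 1000;

          public void process(DataSource ds, String sql) throws SQLException {
              try (Connection con = ds.getConnection();
                   PreparedStatement ps = con.prepareStatement(sql)) {
                  ps.setFetchSize(BATCH_SIZE);            // hint the driver to stream rather than buffer everything
                  try (ResultSet rs = ps.executeQuery()) {
                      List<String> batch = new ArrayList<>(BATCH_SIZE);
                      while (rs.next()) {
                          batch.add(rs.getString(1));     // only the current batch is ever in memory
                          if (batch.size() == BATCH_SIZE) {
                              handleBatch(batch);         // per-record workflow steps go here
                              batch.clear();
                          }
                      }
                      if (!batch.isEmpty()) {
                          handleBatch(batch);
                      }
                  }
              }
          }

          private void handleBatch(List<String> batch) {
              // placeholder for the additional tasks performed per record
          }
      }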

    Read the article

  • Problem when getting pageContent of an unavailable URL in Java

    - by tiendv
    I have code to get the page content from a URL:
      import java.io.BufferedReader;
      import java.io.IOException;
      import java.io.InputStreamReader;
      import java.net.URL;
      import java.net.URLConnection;

      public class GetPageFromURLAction extends Thread {
          public String stringPageContent;
          public String targerURL;

          public String getPageContent(String targetURL) throws IOException {
              String returnString = "";
              URL urlString = new URL(targetURL);
              URLConnection openConnection = urlString.openConnection();
              String temp;
              BufferedReader in = new BufferedReader(new InputStreamReader(openConnection.getInputStream()));
              while ((temp = in.readLine()) != null) {
                  returnString += temp + "\n";
              }
              in.close();
              // String nohtml = sb.toString().replaceAll("\\<.*?>", "");
              return returnString;
          }

          public String getStringPageContent() { return stringPageContent; }
          public void setStringPageContent(String stringPageContent) { this.stringPageContent = stringPageContent; }
          public String getTargerURL() { return targerURL; }
          public void setTargerURL(String targerURL) { this.targerURL = targerURL; }

          @Override
          public void run() {
              try {
                  this.stringPageContent = this.getPageContent(targerURL);
              } catch (IOException e) {
                  e.printStackTrace();
              }
          }
      }
    Sometimes I receive an HTTP error of 405 or 403 and the result string is null. I have tried checking permission to connect to the URL with:
      URLConnection openConnection = urlString.openConnection();
      openConnection.getPermission()
    but it usually returns null. Does that mean I don't have permission to access the link? I have tried stripping off the query portion of the URL with:
      String nohtml = sb.toString().replaceAll("\\<.*?>", "");
    where sb is a StringBuilder, but it doesn't seem to strip off the whole query substring. In an unrelated question, I'd like to use threads here because I must retrieve many URLs; how can I create a multi-threaded client to improve the speed?
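    On the threading question at the end, a minimal sketch using a fixed thread pool so many URLs can be fetched concurrently without spawning one thread per URL; the pool size and example URLs are arbitrary:
      import java.util.ArrayList;
      import java.util.Arrays;
      import java.util.List;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Future;

      public class PageFetcher {
          public static void main(String[] args) throws Exception {
              // Placeholder URLs, for illustration only.
              List<String> urls = Arrays.asList("http://example.com/a", "http://example.com/b");
              ExecutorService pool = Executors.newFixedThreadPool(8); // at most 8 downloads in flight

              // One fetch task per URL, reusing the getPageContent() method from the question.
              List<Future<String>> results = new ArrayList<Future<String>>();
              for (String url : urls) {
                  results.add(pool.submit(() -> new GetPageFromURLAction().getPageContent(url)));
              }

              for (Future<String> f : results) {
                  try {
                      String page = f.get();       // blocks until that particular page has been fetched
                      System.out.println(page.length());
                  } catch (Exception e) {
                      e.printStackTrace();         // a 403/405 or timeout fails only this one URL
                  }
              }
              pool.shutdown();
          }
      }
    A failed URL then surfaces as an exception from its own Future instead of silently nulling out the whole run.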

    Read the article

  • drupal form textfield #default_value not working

    - by alvin.ng
    I am working on a custom module with a multi-page form on Drupal 6. I found that #default_value is not working when my '#type' = 'textfield'. However, when '#type' = 'textarea', it displays correctly with the '#default_value' specified. Basically, I wrote a FormFactory to return different form definitions ($form) based on the POST parameters received. Initially it returns the directory listing; the user then selects from radio buttons until a specific directory contains an XML file, at which point it becomes an edit form. The edit form has text fields displaying the data (#default_value) from the XML file; however, the type 'textarea' works here rather than 'textfield'. How can I make my '#default_value' work in this case? Below is the non-working field definition:
      $form['pageset']['newsTitle'] = array(
        '#type' => 'textfield',
        '#title' => 'News Title',
        '#default_value' => "{$element->newsTitle}",
        '#rows' => 1,
        '#required' => TRUE,
      );
    Then I changed it to a textarea, as shown below, to make it work:
      $form['pageset']['newsTitle'] = array(
        '#type' => 'textarea',
        '#title' => 'News Title',
        '#default_value' => "{$element->newsTitle}",
        '#rows' => 1,
        '#required' => TRUE,
      );

    Read the article

  • Handling data update/freshness issue in web-app in general (or GWT specifically)

    - by edwin.nathaniel
    In general, how do you handle user updates and data freshness from the user's point of view (a UI issue) in web apps? For example, in a multi-user web app (like project management):
    - Users log in to a "virtual" space.
    - People can update project names, etc.
    How do you handle a situation such that:
    - user-A and user-B load a project with title "Project StackOverflow";
    - user-B updates the title to "Project StackExchange";
    - user-A updates the title after user-B's update, to "Project Basecamp"?
    The question I'm asking is from the user perspective (UI) and not about transactional operation. What do most people do in this situation? What would you do after user-B updates the title in user-A's view? What happens when user-A tries to update the title after user-B has finished his/her update?
    - Do you inform user-A that the title has changed and he/she has to reload the page?
    - Do you go ahead and change the title and let user-B keep stale data?
    - Do you do some sort of application-level "locking" mechanism (if someone is updating, nobody else can)?
    - Or do you fix the application workflow (who has the access to change things, etc.)?
    What would be the simplest solution that does not annoy the user with more dialogs/warning messages? I've encountered this particular problem frequently in a GWT app specifically, where domain models are being passed around and refreshing the whole app/client side isn't the optimal solution in my mind (since it means the whole "loading"/initialization phase must be executed again in this specific environment). Maybe the answer is to stay away from GWT? :) Love to hear options, solutions, and advice from you. Thanks
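    Whatever the UI decides to show, the usual server-side building block here (GWT or not) is optimistic locking: each record carries a version number, and an update only succeeds if the client still holds the current version - which is exactly the signal the UI needs to warn, merge or reload. A rough JDBC sketch; the table and column names are invented for illustration:
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;

      public class ProjectDao {
          /**
           * Renames a project only if the caller saw the latest version.
           * Returns false when someone else updated the row in the meantime,
           * so the UI layer can tell the user the title changed under them.
           */
          public boolean renameProject(Connection con, long projectId,
                                       String newTitle, long expectedVersion) throws SQLException {
              String sql = "UPDATE project SET title = ?, version = version + 1 "
                         + "WHERE id = ? AND version = ?";
              try (PreparedStatement ps = con.prepareStatement(sql)) {
                  ps.setString(1, newTitle);
                  ps.setLong(2, projectId);
                  ps.setLong(3, expectedVersion);
                  return ps.executeUpdate() == 1;   // 0 updated rows means a concurrent edit won
              }
          }
      }
    On a failed update the RPC layer can hand user-A's client the fresh title and version, and the client then chooses between a "reload" prompt and an explicit "overwrite anyway" action.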

    Read the article

  • jQuery: sort tr with an empty td below tr with text in the td

    - by PavelBY
    I have HTML and jQuery for sorting my table (there is also non-standard sorting, with multiple tbody elements). My code can be found here: http://jsfiddle.net/GuRxj/
    As you can see there, the td cells with prices (labelled in Russian) are sorted ascending (but tech-as is not - why? that is a question too). I need to send the tr rows with prices to the top of this tbody (now they are at the bottom), while sending the empty-price tr rows to the bottom. How do I do this? Part of the JS:
      $('.prcol').click(function(e) {
        var $sort = this;
        var $table = $('#articles-table');
        var $rows = $('tbody.analogs_art > tr', $table);
        $rows.sort(function(a, b) {
          var keyA = $('td:eq(3)', a).text();
          var keyB = $('td:eq(3)', b).text();
          if (keyA.length > 0 && keyB.length > 0) {
            if ($($sort).hasClass('asc')) {
              console.log("bbb");
              return (keyA > keyB) ? 1 : 0;
            } else {
              console.log(keyA + "-" + keyB);
              return (keyA > keyB) ? 1 : 0;
            }
          }
        });
        $.each($rows, function(index, row) {
          //console.log(row);
          $table.append(row);
          //$("table.123").append(row);
        });
        e.preventDefault();
      });

    Read the article

  • Can you recommend a full-text search engine?

    - by Jen
    Can you recommend a full-text search engine? (Preferably open source.) I have a database of many (though relatively short) HTML documents. I want users to be able to search this database by entering one or more search words in my C++ desktop application. Hence, I'm looking for a fast full-text search solution to integrate with my app. Ideally, it should:
    - Skip common words, such as the, of, and, etc.
    - Support stemming, i.e. a search for run also finds documents containing runner, running and ran.
    - Be able to update its index in the background as new documents are added to the database.
    - Be able to provide search-word suggestions (like Google Suggest).
    - Have a well-documented API.
    To illustrate, assume the database has just two documents:
      Document 1: This is a test of text search.
      Document 2: Testing is fun.
    The following words should be in the index: fun, search, test, testing, text. If the user types t in the search box, I want the application to be able to suggest test, testing and text (ideally, the application should be able to query the search engine for the 10 most common search words starting with t). A search for testing should return both documents.
    Other points:
    - I don't need multi-user support.
    - I don't need support for complex queries.
    - The database resides on the user's computer, so the indexing should be performed locally.
    Can you suggest a C or C++ based solution? (I've briefly reviewed CLucene and Xapian, but I'm not sure if either will address my needs, especially querying the search word indexes for the suggest feature.)
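    On the suggest feature in particular, the underlying idea in any of these engines is a prefix range scan over the sorted term dictionary, ranked by how often each term occurs. A tiny Java sketch of that idea - illustrative only, since in practice the chosen engine's own term enumeration would replace the TreeMap:
      import java.util.ArrayList;
      import java.util.List;
      import java.util.Map;
      import java.util.TreeMap;

      public class Suggester {
          // term -> frequency, kept sorted so every prefix maps to a contiguous key range
          private final TreeMap<String, Integer> termFreq = new TreeMap<>();

          public void addTerm(String term) {
              termFreq.merge(term, 1, Integer::sum);
          }

          // Returns up to `limit` indexed terms starting with `prefix`, most frequent first.
          public List<String> suggest(String prefix, int limit) {
              // All keys in [prefix, prefix + '\uffff') share the prefix.
              Map<String, Integer> range = termFreq.subMap(prefix, prefix + Character.MAX_VALUE);
              List<Map.Entry<String, Integer>> entries = new ArrayList<>(range.entrySet());
              entries.sort((a, b) -> b.getValue() - a.getValue());
              List<String> out = new ArrayList<>();
              for (Map.Entry<String, Integer> e : entries) {
                  if (out.size() == limit) {
                      break;
                  }
                  out.add(e.getKey());
              }
              return out;
          }
      }
    Feeding it the five terms above and calling suggest("t", 10) returns test, testing and text, which is the behaviour described in the question.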

    Read the article

  • DB2 Driver Connection Hanging in Glassfish Connection Pool

    - by Ant
    We have an intermittent issue around DB2 used from a Glassfish connection pool. What happens is this: under situations where the database (DB2 on z/OS) is under stress, our application (which is a multi-threaded application using connections to DB2 via a Glassfish connection pool) stops doing anything. The following are observed:
    1) Looking at the server using JConsole, we can see a thread waiting indefinitely in the DB2 driver's getConnection() method. We can also see that it has gained a lock on a Vector within the driver. Several other threads are also calling the getConnection() method in the driver and are hanging, waiting for the lock on the Vector to be released.
    2) Looking at the database itself, we can see that there are connections from the Glassfish server open and waiting to be used.
    It seems that there is some sort of mismatch between the connection pool on Glassfish and the connections actually open to DB2. Has anyone come across this issue before, or something similar? If you need any more information that I haven't provided, then please let me know!
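    Not a diagnosis, but one cheap thing to rule out is connections that are borrowed and never returned on error paths - that slowly starves the pool and produces exactly this picture of connections open on the DB side while nothing progresses. A minimal usage pattern that always returns the connection to the Glassfish pool (the JNDI name and query are illustrative):
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import javax.naming.InitialContext;
      import javax.naming.NamingException;
      import javax.sql.DataSource;

      public class Db2Worker {
          public int countRows() throws NamingException, SQLException {
              // JNDI name is illustrative; use whatever the pool is registered under.
              DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDb2Pool");
              Connection con = ds.getConnection();
              try {
                  PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM SYSIBM.SYSDUMMY1");
                  try {
                      ResultSet rs = ps.executeQuery();
                      rs.next();
                      return rs.getInt(1);
                  } finally {
                      ps.close();
                  }
              } finally {
                  con.close();   // always hands the connection back to the pool, even on error
              }
          }
      }
    Glassfish's pool-level connection validation and leak-detection settings are also worth reviewing, though the exact options depend on the server version.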

    Read the article

  • Can I use POSIX signals in my Perl program to create event-driven programming?

    - by Shiftbit
    Are there any POSIX signals that I could utilize in my Perl program to create event-driven programming? Currently I have a multi-process program that is able to cross-communicate, but my parent thread is only able to listen to one child at a time.
      foreach (@proc) {
        sysread(${$_}{'read'}, my $line, 100);   # problem here
        chomp($line);
        print "Parent hears: $line\n";
      }
    The problem is that the parent sits in a continual wait state until it receives a signal from the first child before it can continue on. I am relying on 'pipe' for my intercommunication. My current solution is very similar to: http://stackoverflow.com/questions/2558098/how-can-i-use-pipe-to-facilitate-interprocess-communication-in-perl
    If possible I would like to rely on a $SIG{...} event or any non-CPAN solution.
    Update: As Jonathan Leffler mentioned, kill can be used to send a signal:
      kill USR1 => $$;   # send myself a SIGUSR1
    My solution will be to send a USR1 signal to my child process. This event tells the parent to listen to the particular child.
    Child:
      kill USR1 => $parentPID if ($customEvent);
      syswrite($parentPipe, $msg, $buffer);
      # select $parentPipe; print $parentPipe $msg;
    Parent:
      $SIG{USR1} = sub {
        # get child pid?
        sysread($array[$pid]{'childPipe'}, $msg, $buffer);
      };
    But how do I get the source/child pid that signaled the parent? Have the child identify itself in its message? What happens if two children signal USR1 at the same time?

    Read the article

  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.dbapi. The confusing thing is, which database to pick? The database will have a Twisted app that is mostly making inserts and updates and relatively few selects, and then other strictly read-only clients that are accessing the database directly, making selects. (The read-only users are not necessarily selecting the data that the Twisted app is inserting; it's not as though the database is being used as a message queue.) My understanding - which I'd like corrected/advised - is that:
    - Postgres is a great DB, but all the Python bindings - and there is a confusing maze of them - are abandonware.
    - There is psycopg2, but it makes a lot of noise about doing its own connection pooling and such; does this co-exist gracefully/usefully/transparently with the Twisted async database connection pooling?
    - SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck in the usage pattern I envisage.
    - MySQL - after the Oracle takeover, who'd want to adopt it now or adopt a fork?
    Is there anything else out there?

    Read the article

  • Monotouch threads, GC, WCF

    - by cvista
    Hi. This is a question about best practices, I guess, but it applies directly to my current MonoTouch project. I'm using WCF services to communicate with the server. To do this I do the following:
      services.MethodToCall(params);
    and the async callback:
      services.OnMethodToCallCompleted += delegate {
        // do stuff and ting
      };
    This can lead to issues if you're not careful, in that variables defined within the scope of the async callback can sometimes be cleaned up by the GC, and this can cause crashes. So I am making it a practice to declare these outside of the scope of the callback unless I am 100% sure they are not needed. Now - when doing stuff and ting implies changing the UI, I wrap it all in an InvokeOnMainThread call. I guess wrapping everything in this would slow the main thread down and defeat the point of having multiple threads. Even though I'm being careful about all this, I am still getting crashes and I have no idea why! I am certain it has something to do with threads, scope and all that. Now - the only thing I can think of outside of updating the UI that may need to happen inside of InvokeOnMainThread is that I have a singleton 'Database' class. This is based on the version 5 code from this thread: http://www.yoda.arachsys.com/csharp/singleton.html
    So now if the service method returns data that needs to be added/updated in the Database class, I also wrap this inside an InvokeOnMainThread call. Still getting random crashes. So... my question is this: I am new to thick-client dev - I'm coming from a web dev perspective where we don't need to worry about threads so much :) Aside from what I have mentioned, are there any other things I should be aware of? Is the above stuff correct? Or am I misunderstanding something?
    Cheers, w://

    Read the article

  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy:
      MyApp/
        __init__.py
        Application/
          __init__.py
          Module1
          Module2
          ...
        Domain/
          __init__.py
          Module1
          Module2
          ...
        UI/
          __init__.py
          Module1
          Module2
          ...
    I want to be able to do the following:
    - Run test code in a module's "if main" block when the module imports from other modules in the same directory.
    - Have one or more test modules in each subpackage that run unit tests on the modules in that subpackage.
    - Have a set of unit tests that reside someplace reasonable but outside the subpackages - either in a sibling package, at the top-level package, or outside the top-level package (though all these might end up doing is running the tests in each subpackage).
    - "Enter" the structure from any of the three subpackage levels, e.g. run code that just uses Domain modules; run code that just uses Application modules, where Application uses code from both Application and Domain modules; and run code from UI that uses code from both UI and Application. For instance, Application test code would import Application modules but not Domain modules.
    - After developing the bulk of the code without subpackages, continue developing and testing after organizing the modules into this hierarchy.
    I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring - in other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing, but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down. Sorry for the long essay, but I think it fairly represents the struggles a lot of people have had adapting to the new relative import mechanisms.

    Read the article

  • How to use multifieldquery and filters in Lucene.net

    - by Khotu Nam
    I want to perform a multi-field search on a Lucene.Net index but filter the results based on one of the fields. Here's what I'm currently doing. To index the fields, the definitions are:
      doc.Add(new Field("id", id.ToString(), Field.Store.YES, Field.Index.UN_TOKENIZED));
      doc.Add(new Field("title", title, Field.Store.NO, Field.Index.TOKENIZED));
      doc.Add(new Field("summary", summary, Field.Store.NO, Field.Index.TOKENIZED, Field.TermVector.YES));
      doc.Add(new Field("description", description, Field.Store.NO, Field.Index.TOKENIZED, Field.TermVector.YES));
      doc.Add(new Field("distribution", distribution, Field.Store.NO, Field.Index.UN_TOKENIZED));
    When I perform the search I do the following:
      MultiFieldQueryParser parser = new MultiFieldQueryParser(new string[] { "title", "summary", "description" }, analyzer);
      parser.SetDefaultOperator(QueryParser.Operator.AND);
      Query query = parser.Parse(text);

      BooleanQuery bq = new BooleanQuery();
      TermQuery tq = new TermQuery(new Term("distribution", distribution));
      bq.Add(tq, BooleanClause.Occur.MUST);
      Filter filter = new QueryFilter(bq);

      Hits hits = searcher.Search(query, filter);
    However, the result is always 0 hits. What am I doing wrong?
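    For comparison, the same filtered multi-field search sketched against the Java Lucene 3.x API, which the .NET port mirrors closely; the comment marks the usual gotcha - TermQuery bypasses the analyzer, so the filter term must match the untokenized field value exactly:
      import org.apache.lucene.analysis.standard.StandardAnalyzer;
      import org.apache.lucene.index.Term;
      import org.apache.lucene.queryParser.MultiFieldQueryParser;
      import org.apache.lucene.queryParser.QueryParser;
      import org.apache.lucene.search.Filter;
      import org.apache.lucene.search.IndexSearcher;
      import org.apache.lucene.search.Query;
      import org.apache.lucene.search.QueryWrapperFilter;
      import org.apache.lucene.search.TermQuery;
      import org.apache.lucene.search.TopDocs;
      import org.apache.lucene.util.Version;

      public class FilteredSearch {
          public TopDocs search(IndexSearcher searcher, String text, String distribution) throws Exception {
              StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);
              MultiFieldQueryParser parser = new MultiFieldQueryParser(
                      Version.LUCENE_30, new String[] { "title", "summary", "description" }, analyzer);
              parser.setDefaultOperator(QueryParser.AND_OPERATOR);
              Query query = parser.parse(text);

              // TermQuery is NOT analyzed: the term text must equal the indexed,
              // untokenized "distribution" value character for character (case included).
              Filter filter = new QueryWrapperFilter(new TermQuery(new Term("distribution", distribution)));

              return searcher.search(query, filter, 10);
          }
      }
    If the distribution value stored at index time differs from the one used at search time only in case or surrounding whitespace, a filter like this silently matches nothing, which would explain the constant 0 hits.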

    Read the article

  • UTF8 charset, diacritical elements, conversion problems - and Zend Framework form escaping

    - by imanc
    Hey, I am writing a webapp in ZF and am having serious issues with UTF-8. It uses multilingual content through Zend_Form, and it seems that ZF heavily escapes all of these characters: basically it just won't show a field if there are diacritical characters such as 'é', and if I use the HTML entity equivalent (e.g. &eacute;) it gets escaped again, so that the user sees the literal text '&eacute;'. Zend_Form allows for non-escaped data, but trying to use this is confusing, and it seems it would need to be used all over the place. So, I have been told that if the page and the text are in UTF-8, no conversion to HTML entities is required. Is this true? And if that is true, then how do I convert the source text to UTF-8? I am comfortable setting up Apache so that it sends a default UTF-8 charset header, and also adding the charset meta tag to the HTML, but doing this I am still getting messed-up encoding. I have also tried opening the translation CSV file in TextWrangler on OSX as UTF-8, but it has done nothing. Thanks! L

    Read the article
