Search Results

Search found 6169 results on 247 pages for 'future proof'.


  • How effective can this Tagging solution be?

    - by Pablo
    I'm working on an image sharing site and want to implement tagging for the images. I've read questions #20856 and #2504150, but I have a few concerns with the approach they describe. Linking an image to a tag looks easy; getting images back by tag is not. It's hard because you have to fetch the image-to-tag relations from one table and then build a big query with a bunch of OR clauses (one OR per image).

    Before I even researched the tagging topic, I started testing the following method, using these tables as an example:

        Table: Image
        Columns: ItemID, Title, Tags

        Table: Tag
        Columns: TagID, Name

    The Tags column in the Image table holds a string of TagIDs from the Tag table, delimited by dashes (-). For example, -65-25-105- links an image to the tags with TagID 65, 25, and 105. With this method I find it easier to get images by tag: I can look up the TagID with one query and get all the matching images with another simple query, like:

        SELECT * FROM Image WHERE Tags LIKE '%-65-%'

    So if I use this method for tagging, how effective is it? Is querying with LIKE '%-65-%' a slow process? What problems might I face in the future?
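    For reference, the usual alternative to the delimited-string column is a junction table, which turns the tag lookup into an indexed join rather than a LIKE scan over every row. A minimal sketch of that idea (Python with sqlite3 purely for illustration; the table and column names echo the example above but are assumptions, not the asker's real schema):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Image (ItemID INTEGER PRIMARY KEY, Title TEXT);
            CREATE TABLE Tag   (TagID  INTEGER PRIMARY KEY, Name  TEXT);
            -- Junction table: one row per image/tag pair, indexable both ways.
            CREATE TABLE ImageTag (
                ItemID INTEGER REFERENCES Image(ItemID),
                TagID  INTEGER REFERENCES Tag(TagID),
                PRIMARY KEY (ItemID, TagID)
            );
            CREATE INDEX idx_imagetag_tag ON ImageTag (TagID);
        """)
        conn.execute("INSERT INTO Image VALUES (1, 'Sunset')")
        conn.execute("INSERT INTO Tag VALUES (65, 'beach')")
        conn.execute("INSERT INTO ImageTag VALUES (1, 65)")

        # One indexed join instead of LIKE '%-65-%' (which forces a full scan,
        # since a leading-wildcard LIKE cannot use an index).
        rows = conn.execute("""
            SELECT Image.* FROM Image
            JOIN ImageTag ON ImageTag.ItemID = Image.ItemID
            WHERE ImageTag.TagID = 65
        """).fetchall()
        print(rows)  # -> [(1, 'Sunset')]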

    Read the article

  • Where to Store the Protection Trial Info for Software Protection Purpose

    - by Peter Lee
    It might be a duplicate of other questions, but I swear that I googled a lot and searched StackOverflow.com a lot, and I cannot find the answer to my question: in a C#.NET application, where should I store the trial-protection info, such as the expiration date and the number of times the software has been used?

    I understand that all kinds of software-protection strategies can be cracked by a sophisticated hacker (because they can almost always get around the expiration-checking step). What I'm going to do is just protect it in a reasonable manner, so that a "common"/"advanced" user cannot defeat it.

    OK, to prove that I have googled and searched StackOverflow.com a lot, here are all the possible strategies I found:

    1. A Registry entry. First, some users might not have access even to read the Registry. Second, if we put the trial info in a Registry entry, the user can always find out where it is by comparing the Registry before and after the software installation, and then simply change it. You might say we should encrypt the trial info, and yes we can do that. But what if the user just changes the system date before installing? You might say we should also store a last-used date; if something looks wrong, the last-used date can work as a guard. But what if the user just uninstalls the software, deletes all Registry entries related to it, and reinstalls? I have no idea how to deal with this. Please help.

    2. A plain file. There are a few places to put it: 2.a) a simple XML file under the software installation path; 2.b) the configuration file. Again, the user can just uninstall the software, remove these plain file(s), and reinstall.

    3. The software itself. If we put the trial info in the software itself (only the expiration date; we cannot embed the number of used times), it is still susceptible to the cases mentioned above. Furthermore, it's not even cool to do so.

    4. A trial product key. This works like a licensing process: we put the trial info into an RSA-signed string. However, it requires too many steps before a user can even try the software (they might lose patience): 4.a) the user downloads the software; 4.b) the user sends an email requesting a trial product key, providing a user name (or email) or hardware info; 4.c) the server receives the request, RSA-signs it, and sends it back to the user; 4.d) the user can now use the software, subject to the expiration date and number of used times. The server keeps a record of the user's username or hardware info, so a second trial request will be rejected. (Is it even legal to collect hardware info?) In a word, the user has to do one extra step just to try the software, which is not cool (thinking of myself as a user).

    NOTE: this question is not about licensing; it's about where to store the TRIAL info. After the trial expires, the user should ask for a license (CD key/product key). I'm going to use an RSA signature bound to the user's hardware for that.
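    Whichever location is chosen, the "reasonable manner" tamper-resistance usually comes from signing the trial record so a casual edit is detectable. A minimal sketch of that idea (Python for illustration only; the secret, field names, and storage location are assumptions, and a determined user who extracts the embedded secret can of course still forge records):

        import hmac, hashlib, json

        SECRET = b"embedded-app-secret"  # hypothetical; shipped inside the binary

        def sign_trial(expires, uses_left):
            """Serialize the trial record and append an HMAC over it."""
            payload = json.dumps({"expires": expires, "uses_left": uses_left})
            sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
            return payload + "|" + sig

        def load_trial(blob):
            """Return the record if the signature checks out, else None."""
            payload, _, sig = blob.rpartition("|")
            good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, good):
                return None  # record was edited by hand
            return json.loads(payload)

        blob = sign_trial("2025-01-31", 30)
        print(load_trial(blob))                      # valid -> dict
        print(load_trial(blob.replace("30", "99")))  # tampered -> None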

    Read the article

  • Adjacency list creation, out of memory error

    - by p1
    Hello, I am trying to create an adjacency list to store a graph. The implementation works fine when storing 100,000 records. However, when I tried to store around 1 million records I ran into an OutOfMemoryError:

        Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOfRange(Arrays.java:3209)
            at java.lang.String.<init>(String.java:215)
            at java.io.BufferedReader.readLine(BufferedReader.java:331)
            at java.io.BufferedReader.readLine(BufferedReader.java:362)
            at liarliar.main(liarliar.java:39)

    The following is my implementation:

        HashMap<String, ArrayList<String>> adj =
            new HashMap<String, ArrayList<String>>(num);
        while ((str = in.readLine()) != null) {
            StringTokenizer Tok = new StringTokenizer(str);
            name = (String) Tok.nextElement();
            cnt = Integer.valueOf(Tok.nextToken());
            ArrayList<String> templist = new ArrayList<String>(cnt);
            while (cnt > 0) {
                templist.add(in.readLine());
                cnt--;
            }
            adj.put(name, templist);
        }
        // done creating the adjacency list

    I am wondering if there is a better way to implement the adjacency list. Also, I know the number of nodes right at the beginning, and in the future I flatten the list as I visit nodes. Any suggestions? Thanks
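    Since the node count is known up front, one common way to shrink a graph like this is to intern each node name into a small integer id and keep the neighbor lists as integer arrays instead of per-node lists of strings. A sketch of the idea (Python for illustration; the input format is assumed to match the one above: a name and neighbor count on one line, then one neighbor name per line):

        from array import array

        def read_graph(lines):
            """Map each node name to a small int id; store neighbors as int arrays."""
            ids = {}                 # name -> id (each distinct string kept once)
            adj = []                 # adj[id] = array of neighbor ids

            def intern(name):
                if name not in ids:
                    ids[name] = len(ids)
                    adj.append(array("i"))   # 4 bytes per edge, not a string each
                return ids[name]

            it = iter(lines)
            for header in it:
                name, cnt = header.split()
                node = intern(name)
                for _ in range(int(cnt)):
                    adj[node].append(intern(next(it).strip()))
            return ids, adj

        ids, adj = read_graph(["a 2", "b", "c", "b 1", "c"])
        print(ids)              # {'a': 0, 'b': 1, 'c': 2}
        print(adj[ids["a"]])    # array('i', [1, 2])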

    Read the article

  • How to use iPhone SDK Private APIs

    - by eagle
    I haven't found a comprehensive list of the steps required to use a private API from the iPhone library. In particular, I would like to know: how to get the header files, if they are even required; how to get the code to compile (when I simply add the header, it complains that the functions aren't defined); and what resources one can use to learn about private APIs from other users' experiences (e.g. http://iphonedevwiki.net/, which has a few). I've read in other places that people recommend using class-dump to get the headers. Are there any alternative methods? I've noticed that there are some repositories of iPhone private SDKs; what are the most up-to-date resources you would recommend? Most of the previous questions about documentation of private APIs have linked to Erica Sadun's website, which doesn't seem to have the documentation anymore (all the links on the left are crossed out).

    Please save the comments about not using private APIs... I know the biggest risks: the app will get rejected by Apple, and the app will break in future updates to the OS. I'm talking about legitimate uses, such as private application use (e.g. for unit testing, or messing around to see what's possible).

    Read the article

  • WCF MessageHeaders in OperationContext.Current

    - by Nate Bross
    If I use code like this [just below] to add message headers to my OperationContext, will all future outgoing messages contain that data on any new client proxy created during the same run of my application? The objective is to pass a parameter or two to each OperationContract without messing with the signature of the OperationContract, since the parameters being passed will be consistent for all requests in a given run of my client application.

        public void DoSomeStuff()
        {
            var proxy = new MyServiceClient();
            Guid myToken = Guid.NewGuid();
            MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
            MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
            OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
            proxy.DoOperation(...);
        }

        public void DoSomeOTHERStuff()
        {
            var proxy = new MyServiceClient();
            Guid myToken = Guid.NewGuid();
            MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
            MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
            OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
            proxy.DoOtherOperation(...);
        }

    In other words, is it safe to refactor the above code like this?

        bool isSetup = false;

        public void SetupMessageHeader()
        {
            if (isSetup) { return; }
            Guid myToken = Guid.NewGuid();
            MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
            MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
            OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
            isSetup = true;
        }

        public void DoSomeStuff()
        {
            var proxy = new MyServiceClient();
            SetupMessageHeader();
            proxy.DoOperation(...);
        }

        public void DoSomeOTHERStuff()
        {
            var proxy = new MyServiceClient();
            SetupMessageHeader();
            proxy.DoOtherOperation(...);
        }

    Since I don't really understand what's happening there, I don't want to cargo-cult it and just change it and let it fly if it works. I'd like to hear your thoughts on whether it is OK or not.
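    The crux of the question is that OperationContext.Current is ambient state scoped to the current operation context, not a global that survives across contexts, so a plain boolean guard can go stale. A language-neutral sketch of that hazard using Python's contextvars (the names are hypothetical; this only illustrates ambient, per-scope state, not WCF itself):

        import contextvars

        # Ambient "outgoing headers", one set per context
        # (playing the role of OperationContext.Current).
        headers = contextvars.ContextVar("headers")
        is_setup = False  # global guard -- the questionable part of the refactor

        def setup_header():
            global is_setup
            if is_setup:
                return            # skips setup even if THIS context has no header
            headers.set({"token": "abc"})
            is_setup = True

        def call_in_fresh_context():
            ctx = contextvars.Context()     # a brand-new context, like a new scope
            def op():
                setup_header()
                return headers.get(None)    # None if this context was never set up
            return ctx.run(op)

        print(call_in_fresh_context())  # {'token': 'abc'} -- first context is fine
        print(call_in_fresh_context())  # None -- guard says "done", header missing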

    Read the article

  • Long running stats process - thoughts on language choice?

    - by Josh
    I am on a LAMP stack for a website I am managing. There is a need to roll up usage statistics (a variety of things related to our desktop product), and I initially tackled the problem with PHP (since I already had a bunch of classes for working with the data). All worked well on my dev box, which was running PHP 5.3.

    Long story short, memory management in 5.1 seems to be a lot worse, and I've had to do a lot of fiddling to get the long-term roll-up scripts to run in a fixed memory space. Our server guys are unwilling to upgrade PHP at this time. I've since moved my dev server back to 5.1 so I don't run into this problem again.

    For mining MySQL databases to roll up statistics for different periods and resolutions, potentially with a process that runs continuously in the future (as opposed to on a cron schedule), what language would you recommend? I was looking at Python (I know it more or less), Java (I don't know it that well), or sticking it out with PHP (which I know quite well). Thanks for any suggestions. Josh
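    Whatever the language, the fixed-memory part usually comes down to streaming rows instead of materializing them, folding each row into a small dictionary of period buckets. A sketch in Python (sqlite3 stands in for MySQL so the example is self-contained; the table and column names are made up for illustration):

        import sqlite3
        from collections import Counter

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE usage_events (ts TEXT, feature TEXT);
            INSERT INTO usage_events VALUES
                ('2010-04-14 09:12', 'export'), ('2010-04-14 09:40', 'export'),
                ('2010-04-14 10:05', 'print'),  ('2010-04-15 08:30', 'export');
        """)

        rollup = Counter()  # (day, feature) -> count; stays small however many rows
        cur = conn.execute("SELECT ts, feature FROM usage_events")
        for ts, feature in cur:        # iterate the cursor: one row at a time
            day = ts[:10]              # resolution = day; change slice for hour/month
            rollup[(day, feature)] += 1

        for (day, feature), n in sorted(rollup.items()):
            print(day, feature, n)
        # 2010-04-14 export 2 / 2010-04-14 print 1 / 2010-04-15 export 1

    With MySQL the same loop needs an unbuffered (server-side) cursor so the client library doesn't pull the whole result set into memory first.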

    Read the article

  • Finding the last focused element with jQuery.

    - by Joshua Cody
    I'm looking to determine which element had focus most recently in a series of inputs that are added dynamically by the user. This code can only see the inputs that are available on page load:

        $('input.item').focus(function(){
            $(this).siblings('ul').slideDown();
        });

    And this code sees all elements that have ever had focus:

        $('input.item').live('focus', function(){
            $(this).siblings('ul').slideDown();
        });

    The HTML structure is this:

        <ul>
            <li><input class="item" name="goals[]">
                <ul>
                    <li>long list here</li>
                    <li>long list here</li>
                    <li>long list here</li>
                </ul>
            </li>
        </ul>
        <a href="#" id="add">Add another</a>

    On page load, a single input loads. Then with each "Add another", a new copy of the top unordered list's contents is made and appended. When each gets focus, I'd like to show the list beneath it. But I don't seem to be able to watch for the most recently focused element, whether it exists now or is added in the future. Do I have some sort of fundamental assumption wrong?

    Read the article

  • How could installations/configurations be easier in Linux?

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking of config files and reading a lot of manuals/tutorials before you have it running the way you want. I know that it gets much easier with time, and the apt-get installations in Ubuntu/Debian are heading the right way. But how could Linux be more user-friendly for us in the future?

    I thought that more could be automated, like in an IDE environment: e.g. typing svn would show all the subcommands, with a description of each as you move between them with the keyboard. That would be great, but it's just one example. Another is navigation between folders in the terminal; right now you have to type a lot just to jump between directories, and some more automation would be welcome there too. I know these extra features would slow the server down a little, but it's 2010 now: features like these are not that heavy for the CPU, and they make a server friendlier to maintain rather than frightening people off.

    What do you think about this? Should/could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it's still repetitive and difficult to do easy tasks today. It should be easier, I think.

    Read the article

  • NSIS takes ownership of IIS system files

    - by Lucas
    I recently encountered an issue with NSIS that I believe is related to an interaction with UAC, but I am at a loss to explain it and I do not know how to prevent it in the future.

    I have an installer that creates and removes IIS virtual directories using the NsisIIS plugin. The installer appeared to work correctly on my Windows 7 workstation. When the installer was run on a Windows 2008 R2 server it installed properly, but the uninstaller removed all of the virtual directories and put IIS in an unusable state, to the point that I had to remove the Default Web Site and re-add it.

    What I eventually found was that all of the IIS configuration files under C:\Windows\System32\inetsrv\config had a lock icon on them. Some investigation seems to indicate that this means a user account has taken ownership of the file; however, all the files listed SYSTEM as the file owner. I did check a different server that I have not run the installer on, and it does not have the lock icon on the IIS files.

    I have also seen the same lock icon appear on other files that the NSIS installer creates. For instance, I have a Web.Config.tpl file that is processed using the NSIS ReplaceInFile macro, and it also ends up with the lock icon after the installer finishes. After I explicitly grant another user account access to the file, the lock icon goes away. I run the installer under the local Administrator account on the 2008 R2 server, so I do not get the UAC prompt.

    Here is the relevant code from the install.nsi file:

        RequestExecutionLevel admin

        Section "Application" APP_SECTION
            SectionIn RO
            Call InstallApp
        SectionEnd

        Section "un.Uninstaller Section"
            Delete "$PROGRAMFILES\${PROGRAMFILESDIR}\Uninstall.exe"
            Call un.InstallApp
        SectionEnd

        Function InstallApp
            File /oname=Web.Config Web.Config.tpl
            !insertmacro ReplaceInFile Web.Config %CONNECTION_STRING% $CONNECTION_STRING
        FunctionEnd

        Function un.InstallApp
            ReadRegStr $0 HKLM "Software\${REGKEY}" "VirtualDir"
            NsisIIS::DeleteVDir "$0"
            Pop $0
        FunctionEnd

    I have three questions stemming from this incident: How did this happen? How can I fix my installer to prevent it from happening again? And how can I repair the permissions on the IIS config files?

    Read the article

  • PHP DOM vs SimpleXML for Atom GData feed parsing

    - by Geoff Adams
    I'm building a library to access the Google Analytics Data Export API. All the data the library accesses is in Atom format and uses numerous different namespaces throughout. My experiments with the API have used SimpleXML for parsing so far, especially as all I have been doing is accessing the data held within the feed.

    Now that I'm coming to write a library, I am wondering whether forging ahead with SimpleXML will be adequate or whether the enhanced functionality of PHP's DOM module would be of benefit in the future. I haven't written much code for this part of the library yet, so the choice is still open. I have read that the DOM module can be the better choice if you need to build an XML DOM on the fly or modify an existing one, but I'm not entirely sure I would need that functionality anyway, given the nature of the API (no pushing data to the server, for instance). SimpleXML is certainly easier to use, and I have seen people say that for read-only situations it is all you need.

    Essentially the question is: what would you use? Compatibility will not be an issue, as the server configuration will match the application's requirements. Is it worth building the library with PHP DOM in mind, or should I stick with SimpleXML for now?

    Update: here are two examples of the kind of feeds I will be dealing with: the Account feed and the Data feed.

    Read the article

  • VS2010 / Target Framework = 3.5 / Building on Continuous Integration Server

    - by granadaCoder
    I'm checking into upgrading to VS2010. Our production servers only have the 3.5 Framework, and it will be 6-9 months before they are updated. We also have a continuous integration server running CruiseControl.NET (CC.NET), which has the 3.5 Framework on it as well. Our implementation of CC.NET mainly calls msbuild.exe MySolution.msbuild. (We encapsulate most of the build logic into .msbuild files, FYI.) Inside the .msbuild file, the following is the "Build" target:

        <Target Name="Build" DependsOnTargets="Checkout">
            <MSBuild Projects="$(WorkingCheckout)\MySolution.sln"
                     Targets="Build"
                     Properties="Configuration=$(Configuration)">
                <Output TaskParameter="TargetOutputs" ItemName="TargetOutputsItemName" />
            </MSBuild>
        </Target>

    I know that VS2010 can target the 3.5 Framework. My question is what happens when I have a VS2010 dev machine and I check the VS2010 .sln and .csproj files into source control (svn, btw)... will the CC.NET machine, which only has the 3.5 Framework installed, be able to build the .sln? I guess I could test it, but the catch-22 is that I don't have VS2010 (yet), so I'm asking before I try the trial or a real install. Any ideas what will happen? I guess the crux question is what this will do:

        c:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe "MyVS2010SolutionFile.sln"

    My hopeful goal is to let the developers have VS2010 now, and have it still be OK for the CC.NET machine and the production servers, which will only have the 3.5 Framework on them for the foreseeable future. Just to be clear, developers NEVER create deployable builds; only the CC.NET machine produces builds that are pushed as production builds. Any help?

    Read the article

  • Getting started with character and text processing (encoding, regular expressions)

    - by TK
    I'd like to learn the foundations of encodings, characters, and text. Understanding these is important for dealing with large sets of text, whether those are log files or source text for building collective-intelligence algorithms. My current knowledge is pretty basic: something like "As long as I use UTF-8, I'm okay."

    I don't say I need to learn about advanced topics right away. But I need to know:

      - Bit- and byte-level knowledge of encodings.
      - Characters and alphabets not used in English.
      - Multi-byte encodings. (I understand some Chinese and Japanese, and parsing them is important.)
      - Regular expressions.
      - Algorithms for text processing.
      - Parsing natural languages.

    I also need an understanding of mathematics and corpus linguistics. The current and future web (semantic, intelligent, real-time) needs processing, parsing, and analysis of large text. I'm looking for some resources (maybe books?) that get me started with some of the bullets. (I find many helpful discussions on regular expressions here on Stack Overflow, so you don't need to suggest resources on that topic.)
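    As a taste of the byte-level side, the sketch below (Python) shows how the same text becomes different byte sequences under different encodings, and why multi-byte text breaks if you slice bytes instead of characters:

        text = "abc \u3042"              # 'あ' (HIRAGANA LETTER A), outside ASCII

        utf8 = text.encode("utf-8")
        print(list(utf8))                # [97, 98, 99, 32, 227, 129, 130]
                                         # ASCII chars are 1 byte; 'あ' is 3 bytes
        print(text.encode("utf-16-le")) # same characters, different byte layout

        # Slicing raw bytes can split a multi-byte character in half:
        broken = utf8[:6]                # cuts 'あ' after 2 of its 3 bytes
        try:
            broken.decode("utf-8")
        except UnicodeDecodeError as e:
            print("decode failed:", e)

        # Slicing *characters* is safe:
        print(text[:5])                  # 'abc あ'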

    Read the article

  • Where should I declare my CDI resources?

    - by Laird Nelson
    JSR-299 (CDI) introduces the (unfortunately named) concept of a resource: http://docs.jboss.org/weld/reference/1.0.0/en-US/html/resources.html#d0e4373

    You can think of a resource in this nomenclature as a bridge between the Java EE 6 brand of dependency injection (@EJB, @Resource, @PersistenceContext and the like) and CDI's brand of dependency injection. The general gist seems to be that somewhere (and this is the root of my question) you declare what amounts to a bridge class: it contains fields annotated both with Java EE's @EJB, @PersistenceContext or @Resource annotations and with CDI's @Produces annotation. The net effect is that Java EE 6 injects, say, a persistence context where it's called for, and CDI recognizes that injected PersistenceContext as a source for future injections down the line (handled by @Inject).

    My question is: what is the community's consensus, if there is one, on:

      - what this bridge class should be named
      - where this bridge class should live
      - whether it's best to localize all this stuff into one class or make several of them

    Left to my own devices, I was thinking of declaring a single class called CDIResources and using that as the One True Place to link Java EE's DI with CDI's DI. Many examples do something similar, but I'm not clear on whether they're "just" examples or whether that's a good way to do it. Thanks.

    Read the article

  • LaTeX: bibliography per chapter.

    - by YuppieNetworking
    Hello all, I am helping a colleague with his PhD thesis and we need to present the bibliography at the end of each chapter. The question is: does anyone have a minimal working example for this case using latex+bibtex?

    The current document structure that we use is the following:

        main.tex
        chap1.tex
        chap2.tex
        ...
        chapn.tex
        biblio.bib

    main.tex contains the packages, document declarations, macros, and an \include for each chapter. biblio.bib is the only BibTeX file (I think it is easier to have all citations in one place).

    We have searched and tried different LaTeX packages, reading and following their documentation; specifically, bibitems and chapterbib. bibitems successfully generates the bu*.aux files, but when running bibtex on each of them an error occurs, since there is no \bibdata element in the .aux file. chapterbib also generates a .aux file, but bibtex finishes with an error caused by using multiple \bibliography{file} commands in the .tex files (one per chapter).

    Some coworkers suggested using a separate BibTeX file for each chapter, but that could become a maintenance problem in the future when citing the same publications in different chapters. We would like to keep the current document structure, if possible. So, if anyone could shed some light on this problem, we would appreciate it. Thanks.

    Read the article

  • .NET Reference "Copy Local" True / False Being Set Based on Contents of Global Assembly Cache

    - by D-Sect
    Hello all. First question for me here. We had a very interesting problem with a WinForms project. It's been resolved, and we know what happened, but we want to understand why it happened. This may help other people out in the future who have a similar problem.

    The WinForms project failed on 2 of our client's PCs with an obscure kernel.dll error, while it ran fine on 3 other PCs. We found that a DLL (log4net.dll, a very popular open-source logging library) was missing from our release folder. It had been present in previous releases. Why was it missing in this latest release?

    It was missing because I must have installed a program on my dev box that used log4net.dll and added it to the Global Assembly Cache (GAC). When I checked the solution's reference to log4net.dll, it had changed to Copy Local = False. It must have changed automagically because log4net.dll was present in my GAC.

    Here's where my question starts: why did my reference to log4net.dll get changed from Copy Local = True to False? I suspect it's because the assembly was added to my GAC by another program. How can we prevent this from happening again? As it stands now, if I install a piece of software that uses a common library and it adds that library to my GAC, then my solutions that reference the DLL will change from Copy Local = True to False.

    Thank you so much. I hope this helps people who have a piece of software that runs in some places but not in others, when it used to run fine in ALL places.

    Read the article

  • Mark duplicates in MySQL with PHP (without deleting)

    - by Adam
    So, I'm having some problems with a MySQL query (see my other question), and decided to try a different approach. I have a database table with some duplicate rows, which I might actually need for future reference, so I don't want to remove them. What I'm looking for is a way to display the data without those duplicates, but without removing them. I can't use a simple SELECT query (as described in the other question). So what I need to do is write code that does the following:

      1. Go through my table.
      2. Spot duplicates in the "ip" column.
      3. Mark the first instance of each duplicate with 0 (in a column named "duplicate") and the rest with 1.

    This way I can later SELECT only the rows WHERE duplicate=0.

    NOTE: if your solution is related to the SELECT query, please read the other question first; there's a reason I'm not just using GROUP BY / DISTINCT. Thanks in advance.
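    One way to do the marking in a single statement is to flag every row whose id is not the smallest id for its ip. A sketch (Python with sqlite3 so it runs as-is; MySQL's UPDATE syntax differs slightly, and the table name here is invented, with id/ip/duplicate columns taken from the description above):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE visits (id INTEGER PRIMARY KEY, ip TEXT, duplicate INTEGER);
            INSERT INTO visits (ip) VALUES ('1.1.1.1'), ('2.2.2.2'), ('1.1.1.1');
        """)

        # First occurrence of each ip keeps duplicate=0; later ones get duplicate=1.
        conn.execute("""
            UPDATE visits SET duplicate =
                CASE WHEN id = (SELECT MIN(v2.id) FROM visits v2
                                WHERE v2.ip = visits.ip)
                     THEN 0 ELSE 1 END
        """)

        print(conn.execute("SELECT * FROM visits").fetchall())
        # [(1, '1.1.1.1', 0), (2, '2.2.2.2', 0), (3, '1.1.1.1', 1)]
        print(conn.execute("SELECT * FROM visits WHERE duplicate = 0").fetchall())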

    Read the article

  • Is there something like a "long running offline transaction" for NHibernate or any other ORM?

    - by Vilx-
    In essence this is a follow-up to this question. I'm beginning to feel that I should give up the whole idea, but I'll give it one more shot.

    What I want is pretty much like a DB transaction. It should track my changes to the DB and then, in the end, allow me either to commit them or to roll them back. If I insert an object, I should get it back in my next (appropriate) SELECT query. If I delete it, future SELECT queries should not return it. Etc.

    But there is one catch: this transaction would be very long-running. It would start when the user opened a form (I'm talking about Windows Forms here), and the commit/rollback would happen when the user closed it (with OK/Cancel). So it could take anywhere between seconds and days. This requirement rules out a standard DB transaction, because that would lock the tables/rows it touched and other users wouldn't be able to use the system. Also, the transaction should not commit ANY changes to the DB until it is really committed. So if one user makes some changes, others don't see them until the OK button is hit. This prevents errors in case the computer crashes or is disconnected from the network.

    I'm quite OK with the solution putting constraints on my model (I'm using MSSQL 2008, btw); I can design the DB/code any way I like. I'm also fine with the idea that a commit could fail because someone else already modified one of the objects my transaction touched.

    Is there anything like this? I looked at NHibernate.Burrow, but I'm not sure that's the thing I want.

    Added: it's the very beginning of the project, so I'm not tied to NHibernate. I started out with it, but I can still change easily.
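    What's being described is essentially a unit of work with optimistic concurrency: buffer inserts/updates/deletes in memory, overlay them on reads, and at commit time verify row versions before writing anything. A rough sketch of the shape of it (Python for illustration; the db object's fetch/write/transaction interface and the version column are invented placeholders, not any real ORM's API):

        class ConflictError(Exception):
            pass

        class OfflineUnitOfWork:
            """Buffers changes per row id; touches the DB only at commit."""

            def __init__(self, db):
                self.db = db
                self.pending = {}    # row_id -> (new_values, version_seen)
                self.deleted = set()

            def update(self, row_id, values, version_seen):
                self.pending[row_id] = (values, version_seen)

            def read(self, row_id):
                # Overlay local changes so this form sees its own edits.
                if row_id in self.deleted:
                    return None
                if row_id in self.pending:
                    return self.pending[row_id][0]
                return self.db.fetch(row_id)

            def commit(self):
                # Short real transaction at the very end; fails if versions moved.
                with self.db.transaction():
                    for row_id, (values, seen) in self.pending.items():
                        if self.db.fetch_version(row_id) != seen:
                            raise ConflictError(row_id)  # someone else changed it
                        self.db.write(row_id, values, version=seen + 1)
                    for row_id in self.deleted:
                        self.db.delete(row_id)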

    Read the article

  • Flushing a queue using AMQP, Rabbit, and Ruby

    - by jeffshantz
    I'm developing a system in Ruby that is going to make use of RabbitMQ to send messages to a queue as it does some work. I am using:

        Ruby 1.9.1 stable
        RabbitMQ 1.7.2
        AMQP gem v0.6.7 (http://github.com/tmm1/amqp)

    Most of the examples I have seen for this gem put their publish calls in an EM.add_periodic_timer block. This doesn't work for what I suspect is a vast majority of use cases, and certainly not for mine: I need to publish a message as I complete some work, so putting a publish statement in an add_periodic_timer block doesn't suffice. So I'm trying to figure out how to publish a few messages to a queue and then "flush" it, so that any messages I've published are delivered to my subscribers. To give you an idea of what I mean, consider the following publisher code:

        #!/usr/bin/ruby
        require 'rubygems'
        require 'mq'

        MESSAGES = ["hello", "goodbye", "test"]

        AMQP.start do
          queue = MQ.queue('testq')
          messages_published = 0
          while (messages_published < 50)
            if (rand() < 0.4)
              message = MESSAGES[rand(MESSAGES.size)]
              puts "#{Time.now.to_s}: Publishing: #{message}"
              queue.publish(message)
              messages_published += 1
            end
            sleep(0.1)
          end
          AMQP.stop do
            EM.stop
          end
        end

    This code simply loops, publishing a message with 40% probability on each iteration of the loop, then sleeping for 0.1 seconds. It does this until 50 messages have been published, and then stops AMQP. Of course, this is just a proof of concept. Now, my subscriber code:

        #!/usr/bin/ruby
        require 'rubygems'
        require 'mq'

        AMQP.start do
          queue = MQ.queue('testq')
          queue.subscribe do |header, msg|
            puts "#{Time.now.to_s}: Received #{msg}"
          end
        end

    So, we just subscribe to the queue, and for each message received, we print it out. Great, except that the subscriber only receives all 50 messages when the publisher calls AMQP.stop. Here's the output from my publisher, truncated in the middle for brevity:

        $ ruby publisher.rb
        2010-04-14 21:45:42 -0400: Publishing: test
        2010-04-14 21:45:42 -0400: Publishing: hello
        2010-04-14 21:45:42 -0400: Publishing: test
        2010-04-14 21:45:43 -0400: Publishing: test
        2010-04-14 21:45:44 -0400: Publishing: test
        2010-04-14 21:45:44 -0400: Publishing: goodbye
        2010-04-14 21:45:45 -0400: Publishing: goodbye
        2010-04-14 21:45:45 -0400: Publishing: test
        2010-04-14 21:45:45 -0400: Publishing: test
        . . .
        2010-04-14 21:45:55 -0400: Publishing: test
        2010-04-14 21:45:55 -0400: Publishing: test
        2010-04-14 21:45:55 -0400: Publishing: test
        2010-04-14 21:45:55 -0400: Publishing: goodbye

    Next, the output from my subscriber:

        $ ruby consumer.rb
        2010-04-14 21:45:56 -0400: Received test
        2010-04-14 21:45:56 -0400: Received hello
        2010-04-14 21:45:56 -0400: Received test
        2010-04-14 21:45:56 -0400: Received test
        2010-04-14 21:45:56 -0400: Received test
        2010-04-14 21:45:56 -0400: Received goodbye
        2010-04-14 21:45:56 -0400: Received goodbye
        2010-04-14 21:45:56 -0400: Received test
        2010-04-14 21:45:56 -0400: Received test
        . . .
        2010-04-14 21:45:56 -0400: Received test
        2010-04-14 21:45:56 -0400: Received test
        2010-04-14 21:45:56 -0400: Received test
        2010-04-14 21:45:56 -0400: Received goodbye

    If you note the timestamps, the subscriber only receives the messages once the publisher has stopped AMQP and exited. So, being an AMQP newb, how can I get my messages to deliver immediately?

    I tried putting AMQP.start and AMQP.stop in the body of the publisher's while loop, but then only the first message gets delivered. Strangely, if I turn on logging there are no error messages reported by the server, and the messages do get sent to the queue, but they are never received by the subscriber. Suggestions would be much appreciated. Thanks for reading.
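    For what it's worth, the buffering here is characteristic of the event-driven client: publishes are queued until the EventMachine reactor gets control, and the sleep inside the loop blocks the reactor. As a point of comparison, a blocking (non-reactor) client writes each message to the wire as it is published. A sketch of the same publisher using Python's pika library, assuming a broker on localhost (the queue name matches the example above):

        import time
        import pika

        # A blocking connection: there is no event loop to starve, so each
        # publish goes out as soon as it is made.
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = conn.channel()
        channel.queue_declare(queue="testq")

        for i in range(50):
            body = "message %d" % i
            channel.basic_publish(exchange="", routing_key="testq", body=body)
            print(time.strftime("%H:%M:%S"), "Published:", body)
            time.sleep(0.1)   # blocking here is fine; nothing else needs the thread

        conn.close()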

    Read the article

  • Java RMI InitialContext: Equivalent of LocateRegistry.createRegistry(int) ?

    - by bguiz
    I am trying to do some pretty basic RMI:

        // Context namingContext = new InitialContext();
        Registry reg = LocateRegistry.createRegistry(9999);
        for (int i = 0; i < objs.length; i++) {
            int id = objs[i].getID();
            // namingContext.bind("rmi:CustomObj" + id, objs[i]);
            reg.bind("CustomObj" + id, objs[i]);
        }

    That works without a hitch, but for future purposes I need to use InitialContext:

        Context namingContext = new InitialContext();
        for (int i = 0; i < objs.length; i++) {
            int id = objs[i].getID();
            namingContext.bind("rmi:CustomObj" + id, objs[i]);
        }

    But I cannot get this to work. I have started rmiregistry from the command line. Is there an equivalent of LocateRegistry.createRegistry(int), or some other way to start the RMI registry / the registry used by InitialContext from inside my class (instead of from the command line)?

    Stack trace:

        javax.naming.CommunicationException [Root exception is java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
            java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
            java.lang.ClassNotFoundException: bguiz.scratch.network.eg.Student]
            at com.sun.jndi.rmi.registry.RegistryContext.bind(RegistryContext.java:126)
            at com.sun.jndi.toolkit.url.GenericURLContext.bind(GenericURLContext.java:208)
            at javax.naming.InitialContext.bind(InitialContext.java:400)
            at bguiz.scratch.RMITest.main(RMITest.java:29)
        Caused by: java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
            java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
            java.lang.ClassNotFoundException: bguiz.scratch.CustomObj
            at sun.rmi.server.UnicastServerRef.oldDispatch(UnicastServerRef.java:396)
            at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:250)
            ...(truncated)

    EDIT: I will delete my own question in a couple of days, as there seems to be no answer to this (I haven't been able to figure it out myself). Last call for any biters!

    Read the article

  • Modeling a Generic Relationship in a Database

    - by StevenH
    This is most likely one for all you sexy DBAs out there: how would I efficiently model a relational database where a field in an "Event" table defines a "SportType"? This "SportType" field can link to different sport tables, e.g. "FootballEvent", "RugbyEvent", "CricketEvent" and "F1Event". Each of these sport tables has different fields specific to that sport.

    My goal is to be able to add sport types generically in the future as required, yet hold sport-specific event data (fields) as part of my Event entity. Is it possible to use an ORM such as NHibernate / Entity Framework to reflect such a relationship?

    I have thrown together a quick C# example to express my intent at a higher level:

        public class Event<T> where T : new()
        {
            public T Fields { get; set; }
            public Event() { Fields = new T(); }
        }

        public class FootballEvent
        {
            public Team CompetitorA { get; set; }
            public Team CompetitorB { get; set; }
        }

        public class TennisEvent
        {
            public Player CompetitorA { get; set; }
            public Player CompetitorB { get; set; }
        }

        public class F1RacingEvent
        {
            public List<Player> Drivers { get; set; }
            public List<Team> Teams { get; set; }
        }

        public class Team
        {
            public IEnumerable<Player> Squad { get; set; }
        }

        public class Player
        {
            public string Name { get; set; }
            public DateTime DOB { get; set; }
        }
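    The relational shape usually suggested for this is class-table inheritance: a base Event table holding the shared columns plus a SportType discriminator, and one table per sport holding the sport-specific columns, joined on the event's primary key. A sketch of the schema (Python with sqlite3 purely to make it runnable; the names mirror the example above, and the shared columns are assumed):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Event (
                EventID   INTEGER PRIMARY KEY,
                SportType TEXT NOT NULL,     -- discriminator: 'football', 'f1', ...
                StartTime TEXT               -- shared fields live here
            );
            -- One extension table per sport; its PK is also an FK to Event.
            CREATE TABLE FootballEvent (
                EventID     INTEGER PRIMARY KEY REFERENCES Event(EventID),
                CompetitorA TEXT,
                CompetitorB TEXT
            );
        """)
        conn.execute("INSERT INTO Event VALUES (1, 'football', '2010-05-01 15:00')")
        conn.execute("INSERT INTO FootballEvent VALUES (1, 'Lions', 'Tigers')")

        # Loading an event: read the base row, then join the table that the
        # discriminator points at.
        row = conn.execute("""
            SELECT e.EventID, e.StartTime, f.CompetitorA, f.CompetitorB
            FROM Event e JOIN FootballEvent f ON f.EventID = e.EventID
            WHERE e.SportType = 'football'
        """).fetchone()
        print(row)  # (1, '2010-05-01 15:00', 'Lions', 'Tigers')

    NHibernate's joined-subclass mapping and Entity Framework's table-per-type inheritance both map class hierarchies onto this shape, so adding a sport means adding one table and one subclass.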

    Read the article

  • How important is PhD research topic to getting a job?

    - by thornate
    EDIT: This has been closed, and I realise that I may not have been specific enough with the original title. I ask two questions here: the general one (does a PhD help get a job?), which has been asked elsewhere, and the specific one (is it possible to get work outside of the specific research field?). Assume I've already decided to do the PhD; I'm just stressing about the research topic.

    Well, I'm one year out of university (a Mechatronics Engineering and Software Engineering double bachelor), worked for a few months, then got retrenched (yay economy!). It's looking less and less likely that I'll get a job worth having with the job market as it is, so I'm thinking about going back to uni to do a PhD. I figure that by the time I'm done the job market will have improved, and hopefully I'll have something on my resume that is more attractive than three years of doing customer support for accounting software.

    So, my question is to people who've done PhDs. Would you say that they were worth the effort? How important is the research topic to future job-seeking success? The idea I have is a computer-sciencey/neural-networks/data-mining thing, which I think is very interesting but not a field I want to be in forever. My potential supervisor claims that employers don't care so much about the topic of the research, but rather about the peripheral skills developed through a PhD: time management, self-restraint, planning and whatnot. How does this mesh with people's real-world experience? I'd appreciate any advice before signing my life away for the next three years.

    See also: Should developers go to grad school? Best reason not to hire a PhD? How to find an entry-level job after you already have a graduate degree?

    Read the article

  • Google App Engine - Secure Cookies

    - by tponthieux
    I'd been searching for a way to do cookie-based authentication/sessions in Google App Engine, because I don't like the idea of memcache-based sessions, and I also don't like the idea of forcing users to create Google accounts just to use a website. I stumbled across someone's posting that mentioned some signed-cookie functions from the Tornado framework, and it looks like what I need. What I have in mind is storing a user's id in a tamper-proof cookie, and maybe using a decorator for the request handlers to test the authentication status of the user; as a side benefit, the user id will be available to the request handler for datastore work and such. The concept would be similar to forms authentication in ASP.NET.

    This code comes from the web.py module of the Tornado framework. According to the docstrings, it "Signs and timestamps a cookie so it cannot be forged" and "Returns the given signed cookie if it validates, or None." I've tried to use it in an App Engine project, but I don't understand the nuances of trying to get these methods to work in the context of the request handler. Can someone show me the right way to do this without losing the functionality that the FriendFeed developers put into it? The set_secure_cookie and get_secure_cookie portions are the most important, but it would be nice to be able to use the other methods as well.

        #!/usr/bin/env python
        import Cookie
        import base64
        import time
        import hashlib
        import hmac
        import datetime
        import re
        import calendar
        import email.utils
        import logging

        def _utf8(s):
            if isinstance(s, unicode):
                return s.encode("utf-8")
            assert isinstance(s, str)
            return s

        def _unicode(s):
            if isinstance(s, str):
                try:
                    return s.decode("utf-8")
                except UnicodeDecodeError:
                    raise HTTPError(400, "Non-utf8 argument")
            assert isinstance(s, unicode)
            return s

        def _time_independent_equals(a, b):
            if len(a) != len(b):
                return False
            result = 0
            for x, y in zip(a, b):
                result |= ord(x) ^ ord(y)
            return result == 0

        def cookies(self):
            """A dictionary of Cookie.Morsel objects."""
            if not hasattr(self, "_cookies"):
                self._cookies = Cookie.BaseCookie()
                if "Cookie" in self.request.headers:
                    try:
                        self._cookies.load(self.request.headers["Cookie"])
                    except:
                        self.clear_all_cookies()
            return self._cookies

        def _cookie_signature(self, *parts):
            self.require_setting("cookie_secret", "secure cookies")
            hash = hmac.new(self.application.settings["cookie_secret"],
                            digestmod=hashlib.sha1)
            for part in parts:
                hash.update(part)
            return hash.hexdigest()

        def get_cookie(self, name, default=None):
            """Gets the value of the cookie with the given name, else default."""
            if name in self.cookies:
                return self.cookies[name].value
            return default

        def set_cookie(self, name, value, domain=None, expires=None, path="/",
                       expires_days=None):
            """Sets the given cookie name/value with the given options."""
            name = _utf8(name)
            value = _utf8(value)
            if re.search(r"[\x00-\x20]", name + value):
                # Don't let us accidentally inject bad stuff
                raise ValueError("Invalid cookie %r: %r" % (name, value))
            if not hasattr(self, "_new_cookies"):
                self._new_cookies = []
            new_cookie = Cookie.BaseCookie()
            self._new_cookies.append(new_cookie)
            new_cookie[name] = value
            if domain:
                new_cookie[name]["domain"] = domain
            if expires_days is not None and not expires:
                expires = datetime.datetime.utcnow() + datetime.timedelta(
                    days=expires_days)
            if expires:
                timestamp = calendar.timegm(expires.utctimetuple())
                new_cookie[name]["expires"] = email.utils.formatdate(
                    timestamp, localtime=False, usegmt=True)
            if path:
                new_cookie[name]["path"] = path

        def clear_cookie(self, name, path="/", domain=None):
            """Deletes the cookie with the given name."""
            expires = datetime.datetime.utcnow() - datetime.timedelta(days=365)
            self.set_cookie(name, value="", path=path, expires=expires,
                            domain=domain)

        def clear_all_cookies(self):
            """Deletes all the cookies the user sent with this request."""
            for name in self.cookies.iterkeys():
                self.clear_cookie(name)

        def set_secure_cookie(self, name, value, expires_days=30, **kwargs):
            """Signs and timestamps a cookie so it cannot be forged"""
            timestamp = str(int(time.time()))
            value = base64.b64encode(value)
            signature = self._cookie_signature(name, value, timestamp)
            value = "|".join([value, timestamp, signature])
            self.set_cookie(name, value, expires_days=expires_days, **kwargs)

        def get_secure_cookie(self, name, include_name=True, value=None):
            """Returns the given signed cookie if it validates, or None"""
            if value is None:
                value = self.get_cookie(name)
            if not value:
                return None
            parts = value.split("|")
            if len(parts) != 3:
                return None
            if include_name:
                signature = self._cookie_signature(name, parts[0], parts[1])
            else:
                signature = self._cookie_signature(parts[0], parts[1])
            if not _time_independent_equals(parts[2], signature):
                logging.warning("Invalid cookie signature %r", value)
                return None
            timestamp = int(parts[1])
            if timestamp < time.time() - 31 * 86400:
                logging.warning("Expired cookie %r", value)
                return None
            try:
                return base64.b64decode(parts[0])
            except:
                return None

    An example cookie produced by this code:

        uid=1234|1234567890|d32b9e9c67274fa062e2599fd659cc14

    Its parts: 1. uid is the name of the key; 2. 1234 is your value in the clear; 3. 1234567890 is the timestamp; 4. d32b9e9c67274fa062e2599fd659cc14 is the signature made from the value and the timestamp.
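    To see how the two key methods fit together outside of a handler, here is a stripped-down, self-contained version of the sign/verify round trip the code above performs (Python 3 syntax; the secret is a placeholder for the cookie_secret setting, and the 31-day expiry mirrors get_secure_cookie above):

        import base64, hashlib, hmac, time

        SECRET = b"my-cookie-secret"  # placeholder for the cookie_secret setting

        def signature(*parts):
            h = hmac.new(SECRET, digestmod=hashlib.sha1)
            for part in parts:
                h.update(part)
            return h.hexdigest().encode()

        def make_secure_value(name, value):
            """Build 'base64(value)|timestamp|hexsig', as set_secure_cookie does."""
            ts = str(int(time.time())).encode()
            b64 = base64.b64encode(value)
            return b"|".join([b64, ts, signature(name, b64, ts)])

        def read_secure_value(name, blob, max_age=31 * 86400):
            """Return the original value if signature and age check out, else None."""
            parts = blob.split(b"|")
            if len(parts) != 3:
                return None
            if not hmac.compare_digest(parts[2], signature(name, parts[0], parts[1])):
                return None                      # forged or edited
            if int(parts[1]) < time.time() - max_age:
                return None                      # too old
            return base64.b64decode(parts[0])

        blob = make_secure_value(b"uid", b"1234")
        print(blob)                              # b'MTIzNA==|<timestamp>|<sig>'
        print(read_secure_value(b"uid", blob))   # b'1234'
        # Tampering with the payload invalidates the signature:
        print(read_secure_value(b"uid", blob.replace(b"MTIzNA==", b"OTk5OQ==")))  # None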

    Read the article

  • Separation of notification confirmation+storage and notification handling in ASP.NET MVC

    - by bastijn
    After a payment from my web application to a third party, the third party sends, alongside the direct confirmation message, a notification message. This notification message is stored in my database for future use, and I have to send a notification-confirmed response back. For this purpose I currently use:

        return Content("received");

    which is the standard protocol for the service. Currently, I process the incoming notification by first storing it, then handling it (updating account credits etc. in my application), and finally sending the response. This all works well.

    But I want to separate handling the notification from storing it and responding to the web service. The problem is that the return Content() ends my controller method, so I cannot simply send the confirmation message back to the web service first and then call my handle_Notification() method. One solution would be to replace the return Content() part with something equivalent that doesn't involve a return, but since I do not know the complete calling URL I cannot easily create a simple HTTP POST web request myself (I tried; I might have made an error, but it did not work).

    Another solution would be some kind of timer or listener that either periodically checks the database table for new notifications that have to be handled, or listens for new notifications in the DB. What is the standard procedure for this, if there is one?
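    The timer/listener idea described above is essentially a database-backed work queue: the controller only stores the notification and acknowledges, while a background worker polls for unhandled rows and processes them. A language-neutral sketch of the polling side (Python; the table layout and the handle() body are placeholders for the real credit-update logic):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE notifications
                        (id INTEGER PRIMARY KEY, payload TEXT,
                         handled INTEGER DEFAULT 0)""")
        conn.execute("INSERT INTO notifications (payload) VALUES ('payment:123')")

        def handle(payload):
            print("updating account credits for", payload)  # placeholder work

        def poll_once():
            # Mark each row done only after handling succeeds, so a crash
            # mid-way just means the row is retried on the next poll.
            rows = conn.execute(
                "SELECT id, payload FROM notifications WHERE handled = 0").fetchall()
            for row_id, payload in rows:
                handle(payload)
                conn.execute("UPDATE notifications SET handled = 1 WHERE id = ?",
                             (row_id,))
            conn.commit()

        poll_once()  # in production: run on a timer or in a background service loop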

    Read the article

  • To (monkey)patch or not to (monkey)patch, that is the question

    - by gsakkis
    I was talking to a colleague about a rather unexpected/undesired behavior in a package we use. Although there is an easy fix (or at least a workaround) on our end without any apparent side effects, he strongly suggested extending the relevant code by hard-patching it and posting the patch upstream, hopefully to be accepted at some point in the future. In fact, we maintain patches against specific versions of several packages that are applied automatically on each new build. His main argument is that this is the right thing to do, as opposed to an "ugly" workaround or a fragile monkey patch.

    On the other hand, I favor practicality over purity, and my general rule of thumb is "no patch" > "monkey patch" > "hard patch", at least for anything other than a (critical) bug fix. So I'm wondering whether there is a consensus on when it's better to hard-patch, monkey-patch, or just try to work around a third-party package that doesn't do exactly what one would like. Does it mainly depend on the reason for the patch (e.g. fixing a bug, modifying behavior, adding a missing feature), on the given package (size, complexity, maturity, developer responsiveness), on something else, or are there no general rules, so one should decide on a case-by-case basis?
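    For concreteness, the middle option usually looks like the sketch below: keep a reference to the original attribute, replace it with a wrapper that delegates, and retain the ability to undo the change. Here json.dumps stands in for the third-party function being patched (the "always sort keys" tweak is an invented example of a behavior change):

        import json

        # Keep a reference to the original so the patch delegates and can be undone.
        _original_dumps = json.dumps

        def _patched_dumps(obj, **kwargs):
            """Pretend upstream fix: always sort keys for stable output."""
            kwargs.setdefault("sort_keys", True)
            return _original_dumps(obj, **kwargs)

        json.dumps = _patched_dumps          # the monkey patch itself
        print(json.dumps({"b": 1, "a": 2}))  # {"a": 2, "b": 1} -- patched behavior
        json.dumps = _original_dumps         # undo: one thing a hard patch can't do

    The fragility the colleague objects to is real, though: the patch silently depends on the upstream signature and call sites staying as they are, which is why such patches are usually pinned to specific package versions, as described above.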

    Read the article

  • Spell checker software

    - by Naren
    Hello guys, I have been assigned the task of finding a decent spell checker (UK English), preferably a free one, for a project we are doing. I have looked at the Google AJAX API for this. The project contains data about young people (kids less than 18 years old) which must not be exposed or stored outside the application boundaries. Google logs the data for research purposes, which means Google owns whatever data we send over the wire through the Google API. Is this right? I fired off an email to Google regarding the privacy of the data and its storage, but they haven't come back to me. If you have some knowledge regarding this, please share it with me.

    At this point our servers might not have access to external entities, which means we might not be able to use a web API over the wire, but that may change in the future. So I have to find spell-checker alternatives that can sit inside our environment and do the job, or external APIs. Would you mind sharing your findings and knowledge in this regard? I would prefer free services, but if you have a cracking spell checker for a few quid, then I don't mind recommending it to the project board.

    Technology used: ASP.NET 3.5/4.0, MVC, jQuery, SQL Server 2008, etc. Cheers, Naren

    Read the article
