Search Results

Search found 19299 results on 772 pages for 'off topic'.


  • Optimized Publish/Subscribe JMS Broker Cluster and Conflicting Posts on Stack Overflow for the Answer

    - by Gene
    Hi, I am looking to build a publish/subscribe distributed messaging framework that can manage huge volumes of message traffic with some intelligence at the broker level. I don't know if there's a topology that describes this, but this is the model I'm going after:

    EXAMPLE MODEL A

    A) There are two running message brokers (ideally all on localhost if possible, for easier demo-ing): Broker-A and Broker-B.

    B) Each broker will have 2 listeners and 1 publisher. Example figure:

        [subscriber A1, subscriber A2, publisher A1] <-- BrokerA <-- BrokerB <-- [publisher B1, subscriber B1, subscriber B2]

    IF a message-X is published to Broker-A and there are no subscribers for it among the listeners on Broker-B (via criteria in message selectors or broker routing rules), then that message-X will never be published to Broker-B. ELSE, Broker-A will publish the message to Broker-B, where one of the Broker-B listeners/subscribers/services is expecting that message based on the subscription criteria.

    Is clustering the correct approach? At first, I concluded that the "broker clustering" concept is what I needed to support this. However, as I have come to understand it, the typical use of clustering entails either message redundancy across all brokers, or the Competing Consumers pattern - and neither of these satisfies the requirement in EXAMPLE MODEL A.

    What is the correct approach? My question is: does anyone know of a JMS implementation that supports the model I described? I scanned through all the Stack Overflow post titles for the search "JMS and Cluster" and found these two informative, but seemingly conflicting, posts:

    Says EXAMPLE MODEL A is/should be implicitly supported: http://stackoverflow.com/questions/2255816/jms-consumer-with-activemq-network-of-brokers - "this means you pick a broker, connect to it, and let the broker network sort it out amongst themselves. In theory."

    Says EXAMPLE MODEL A is NOT supported: http://stackoverflow.com/questions/2017520/how-does-a-jms-topic-subscriber-in-a-clustered-application-server-recieve-message - "All the instances of PropertiesSubscriber running on different app servers WILL get that message."

    Any suggestions would be greatly appreciated. Thanks very much for reading my post, Gene
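
    (A hedged illustration, not a confirmed answer: the first linked post describes ActiveMQ's network of brokers, where forwarding is demand-based, so a subscriber with a selector on Broker-B is what would create the "demand" for matching messages. Whether selectors are honored across the network bridge is exactly what the two posts disagree on. The broker URL, topic name, and 'region' property below are hypothetical:)

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.JMSException;
        import javax.jms.MessageConsumer;
        import javax.jms.Session;
        import javax.jms.Topic;
        import org.apache.activemq.ActiveMQConnectionFactory;

        public class SelectiveSubscriber {
            public static void main(String[] args) throws JMSException {
                // Connect only to Broker-B; a broker network is responsible for
                // forwarding matching messages from Broker-A when demand exists here.
                ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61617");
                Connection connection = factory.createConnection();
                connection.start();

                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Topic topic = session.createTopic("EXAMPLE.EVENTS");

                // The selector limits what this consumer receives - and, if the
                // network honors selectors, what crosses the bridge at all.
                MessageConsumer consumer = session.createConsumer(topic, "region = 'B'");
                consumer.setMessageListener(message -> System.out.println("Received: " + message));
                // ...keep the connection open while listening.
            }
        }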

    Read the article

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case.

    We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

        Widgets
        |
        +--- Anvils (1:1)
        |    |
        |    +--- AnvilTestData (1:N)
        |
        +--- WidgetHistory (1:N)
             |
             +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age.

    So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes.

    Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

        DELETE FROM Widgets WHERE WidgetID = @WidgetID

    Pretty simple, innocuous looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

        DECLARE @AnvilID int
        SELECT @AnvilID = AnvilID FROM Anvils WHERE WidgetID = @WidgetID
        DELETE FROM AnvilTestData WHERE AnvilID = @AnvilID
        DELETE FROM WidgetHistory WHERE HistoryID IN
            (SELECT HistoryID FROM WidgetHistory WHERE WidgetID = @WidgetID)
        DELETE FROM Widgets WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.

    I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds.

    Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are:

    1. Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.)
    2. If not, is there another explanation for the behaviour I'm seeing?
    3. If it is a known issue, why is it an issue, and are there better workarounds I could be using?

    Read the article

  • Vimeo Desktop App OAuth

    - by Barry
    Hi guys, I'm currently having massive trouble with Vimeo's OAuth implementation and my desktop app. My program does the following correctly:

    1. Requests an Unauthorized Request Token with my key and secret, and gets back a Token and a Token Secret.

    2. Generates a URL for the user to go to using the token, which then shows our application's name and allows the user to authorize us to use his/her account. It then shows a verifier, which the user returns and puts into our app.

    The problem is the third step: actually exchanging the tokens for the access tokens. Basically, every time we try to get them we get an "Invalid / expired token - The oauth_token passed was either not valid or has expired" error. I looked at the documentation, and there's supposed to be a callback to a server when deployed like that, which gives the user an "authorized token" - but as I'm developing a desktop app we can't do this. So I assume the token retrieved in step 1 is valid for this step (actually it seems it is: http://vimeo.com/forums/topic:22605).

    So I'm wondering now: am I missing something here on my actual Vimeo application account? Is it treating it as a web hosted app with callbacks? All the elements are there for this to work, and I've used this same component to create a Twitter OAuth login in exactly the same way and it was fine. Thanks in advance, Barry

    Read the article

  • Processing a log to fix a malformed IP address ?.?.?.x

    - by skymook
    I would like to replace the first character 'x' with the number '7' on every line of a log file using a shell script. Example of the log file:

        216.129.119.x [01/Mar/2010:00:25:20 +0100] "GET /etc/....
        74.131.77.x [01/Mar/2010:00:25:37 +0100] "GET /etc/....
        222.168.17.x [01/Mar/2010:00:27:10 +0100] "GET /etc/....

    My humble beginnings...

        #!/bin/bash
        echo Starting script...
        cd /Users/me/logs/
        gzip -d /Users/me/logs/access.log.gz
        echo Files unzipped...
        echo I'm totally lost here to process the log file and save it back to hd...
        exit 0

    Why is the log file IP malformed like this? My web provider (1and1) has decided not to store IP addresses, so they have replaced the last number with the character 'x'. They told me it was a new requirement by 'law'. I personally think that is BS, but that would take us off topic.

    I want to process these log files with AWStats, so I need an IP address that is not malformed. I want to replace the x with a 7, like so:

        216.129.119.7 [01/Mar/2010:00:25:20 +0100] "GET /etc/....
        74.131.77.7 [01/Mar/2010:00:25:37 +0100] "GET /etc/....
        222.168.17.7 [01/Mar/2010:00:27:10 +0100] "GET /etc/....

    Not perfect, I know, but at least I can process the files, and I can still gain a lot of useful information like country, number of visitors, etc. The log files are 200MB each, so I thought that a shell script is the way to go because I can run it rapidly on my MacBook Pro locally. Unfortunately, I know very little about shell scripting, and my JavaScript skills are not going to cut it this time. I appreciate your help.
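
    (A hedged sketch of the transformation itself - shown in Java rather than shell, since the poster mentions knowing little shell scripting; the classic one-liner tool for this job would be sed. The paths are taken from the post, and the regex anchors on the leading IP so an 'x' elsewhere in the line is left untouched:)

        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.file.*;
        import java.util.stream.Stream;

        public class FixLogIps {
            public static void main(String[] args) throws IOException {
                Path in = Paths.get("/Users/me/logs/access.log");        // after gzip -d
                Path out = Paths.get("/Users/me/logs/access.fixed.log");
                try (Stream<String> lines = Files.lines(in);
                     BufferedWriter w = Files.newBufferedWriter(out)) {
                    for (String line : (Iterable<String>) lines::iterator) {
                        // Rewrite "216.129.119.x ..." as "216.129.119.7 ..."
                        w.write(line.replaceFirst("^(?<ip>\\d+\\.\\d+\\.\\d+\\.)x", "${ip}7"));
                        w.newLine();
                    }
                }
            }
        }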

    Read the article

  • Program Structure Design Tools? (Top Down Design)

    - by Lee Olayvar
    I have been looking to expand my methodologies to better involve unit testing, and I stumbled upon Behavior Driven Development (namely Cucumber, and a few others). I am quite intrigued by the concept, as I have never been able to properly design top-down, only because keeping track of the design gets lost without a decent way to record it.

    So on that note, in a mostly language-agnostic way, are there any useful tools out there I am (probably) unaware of? E.g., I have often been tempted to try building flow charts for my programs, but I am not sure how much that will help, and it seems a bit confusing how I could make a flow chart complex enough to handle the logic of a full program and all its features. I.e., it just seems like flow charts would be limiting in the design scheme, or would grow to an unmaintainable scale.

    BDD methods are nice, but with a system that is so tied to structure, tying into the language and unit testing seems like a must (for it to be worth it), and it seems hard to find something that works well with both Python and Java (my two main languages).

    So anyway, on that note, any comments are much appreciated. I have searched around on here, and it seems like top-down design is a well discussed topic, but I haven't seen much reference to the tools themselves, e.g., flow chart programs, etc. I am on Linux, if it matters (in the case of programs).

    Read the article

  • How would you describe the Observer pattern in beginner language?

    - by Sheldon
    Currently, my level of understanding is below all the coding examples on the web about the Observer pattern. I understand it simply as being almost a subscription: delegates register, and each registered one is updated when a change is made. However, I'm very unstable in my true comprehension of the benefits and uses. I've done some googling, but most of what I found is above my level of understanding.

    I'm trying to implement this pattern in my current homework assignment, and to truly make sense of my project I need a better understanding of the pattern itself, and perhaps an example to see what its use is. I don't want to force this pattern into something just to submit it; I need to understand its purpose and develop my methods accordingly so that it actually serves a good purpose. My text doesn't really go into it, just mentions it in one sentence. MSDN was hard for me to understand, as I'm a beginner on this, and it seems more of an advanced topic.

    How would you describe the Observer pattern and its uses in C# to a beginner? For an example, please keep the code very simple so I can understand the purpose more than complex code snippets. I'm trying to use it effectively with some simple textbox string manipulations and delegates for my assignment, so a pointer would help!
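
    (A minimal, hypothetical sketch of the pattern: one subject keeps a list of observers and notifies each of them when its state changes. It is sketched in Java for brevity; the idiomatic C# version replaces the Observer interface with an event and delegates, but the shape is the same:)

        import java.util.ArrayList;
        import java.util.List;

        interface Observer {
            void update(String newText);              // called whenever the subject changes
        }

        class TextBoxSubject {
            private final List<Observer> observers = new ArrayList<>();
            private String text = "";

            void subscribe(Observer o) { observers.add(o); }

            void setText(String t) {
                text = t;
                for (Observer o : observers) o.update(text);  // notify everyone
            }
        }

        public class ObserverDemo {
            public static void main(String[] args) {
                TextBoxSubject subject = new TextBoxSubject();
                subject.subscribe(s -> System.out.println("Upper: " + s.toUpperCase()));
                subject.subscribe(s -> System.out.println("Length: " + s.length()));
                subject.setText("hello");   // both observers react to one change
            }
        }

    The payoff is decoupling: the subject never knows what its observers do with the change, and new reactions can be added without touching the subject - which is why the pattern fits a textbox-manipulation assignment.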

    Read the article

  • Trace PRISM / CAL events (best practice?)

    - by Christian
    OK, this question is for people with either a deep knowledge of PRISM or some magic skills I just lack (yet). The background is simple: PRISM allows the declaration of events to which the user can subscribe or publish. In code this looks like this:

        _eventAggregator.GetEvent<LayoutChangedEvent>().Subscribe(UpdateUi, true);
        _eventAggregator.GetEvent<LayoutChangedEvent>().Publish("Some argument");

    Now this is nice, especially because these events are strongly typed, and the declaration is a piece of cake:

        public class LayoutChangedEvent : CompositePresentationEvent<string> { }

    But now comes the hard part: I want to trace events in some way. I had the idea to subscribe using a lambda expression calling a simple log message. This worked perfectly in WPF, but in Silverlight there is some method access error (it took me some time to figure out the reason). If you want to see for yourself, try this in Silverlight:

        eA.GetEvent<VideoStartedEvent>().Subscribe(obj => TraceEvent(obj, "vSe", log));

    If this were possible, I would be happy, because I could easily trace all events using a single line to subscribe. But it is not...

    The alternative approach is writing a different function for each event and assigning it to that event. Why different functions? Well, I need to know WHICH event was published. If I use the same function for two different events, I only get the payload as an argument and have no way to figure out which event caused the tracing message. I tried:

    - using reflection to get the causing event (not working)
    - using a constructor in the event to enable each event to trace itself (not allowed)

    Any other ideas? Chris

    PS: Writing this text took me most likely longer than writing 20 functions for my 20 events, but I refuse to give up :-) I just had the idea to use PostSharp; that would most likely work (although I am not sure - perhaps I'd end up having only information about the base class). Tricky and such an unimportant topic...

    Read the article

  • JScrollPane Scrolls Down with Long Text in JEditorPane

    - by Jim
    Hello, I want to have a JEditorPane inside a JScrollPane. When the user clicks a button, the click listener will create a textEditor, call jscrollpane.setViewportView(textEditor), call textEditor.setText(String) to fill it with editable text, and call jscrollpane.getVerticalScrollBar().setValue(0). In case you're wondering: yes, the setText() must come after the setViewportView() for reasons that aren't on topic.

    Here is the problem: after the user clicks the button, the JScrollPane's view scrolls all the way to the bottom. I want the scrollbar to be at the top, as per the last line in my click listener. I popped open a debugger and, to my horror, discovered that the scroll pane's viewport is being forced down to the bottom after the conclusion of the click listener (when pumping filters). It appears that Swing is delaying the population of the editor/scroll pane until after the conclusion of the click listener, but is calling the scrollbar command first. Thus, the undesired behavior.

    Anyway, I'm wondering if there is a clean solution. It seems that wanting a scroll pane to be scrolled to the top after modification would be a reasonably common requirement, so I'm assuming this is a well-solved problem. Thanks!
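
    (A hedged sketch of the usual remedy, not verified against the poster's exact setup: since the editor is populated on a later pass through the event queue, queue the scroll reset behind that work with SwingUtilities.invokeLater - here by moving the caret to the document start, which Swing scrolls into view. Component names are hypothetical:)

        import javax.swing.*;

        public class ScrollToTopDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JEditorPane editor = new JEditorPane();
                    JScrollPane scroll = new JScrollPane(editor);
                    JButton fill = new JButton("Fill");
                    fill.addActionListener(e -> {
                        editor.setText(longText());
                        // Defer the scroll reset: setText() triggers layout and
                        // scrolling that happen after this listener returns.
                        SwingUtilities.invokeLater(() -> editor.setCaretPosition(0));
                    });
                    JFrame f = new JFrame("Scroll demo");
                    f.add(scroll);
                    f.add(fill, java.awt.BorderLayout.SOUTH);
                    f.setSize(400, 300);
                    f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    f.setVisible(true);
                });
            }

            private static String longText() {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 200; i++) sb.append("line ").append(i).append('\n');
                return sb.toString();
            }
        }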

    Read the article

  • Is it possible to store controls (Panel) as an object, serialize it and store it as a file?

    - by ikky
    The topic says it all. I am using the Compact Framework with C#. I'm tiling (order/sequence is important) some images that I download from a URL into a Panel (each image is a PictureBox). This can be a huge process and may take some time. Therefore, I only want the user to download the images and tile them once. So the next time the user runs the tile application, the Panel that was created the first time is already stored in a file and is loaded from that file.

    So what I want is a method to store a Panel as a file. Is this possible, or do you think I should do it another way? I've tried something like this:

        BinaryWriter panelStorage = new BinaryWriter(new FileStream("imagePanel.panel", FileMode.OpenOrCreate, FileAccess.Write, FileShare.None));
        Byte[] bImageObject = new Byte[20000];
        bImageObject = (byte[])(object)this.imagePanel;
        panelStorage.Write(bImageObject);
        panelStorage.Close();

    But the casting was not very legal :P ("InvalidCastException")

    Can anyone help me with this problem? Thank you in advance!

    Read the article

  • Bison input analyzer - basic question on optional grammar and input interpretation

    - by kumar_m_kiran
    Hi all, I am very new to Flex/Bison, so this may be a naive question; pardon me if so. It may look like a homework question, but I need to implement a project based on the concept below. My question has two parts.

    Question 1: In a Bison parser, how do I provide rules for optional input? For example, I need to parse the statement:

        -country='USA' -state='INDIANA' -population='100' -ratio='0.5' -comment='Census study for Indiana'

    Here the ratio token can be optional. Similarly, if many tokens are optional, how do I express that in the grammar? My code looks like:

        %start program
        program : TK_COUNTRY TK_IDENTIFIER TK_STATE TK_IDENTIFIER TK_POPULATION TK_IDENTIFIER ...

    where all the tokens are defined in the lexer. Since there are many optional tokens, if I use "|" then there will be many different combinations of input to enumerate.

    Question 2: There is a good chance that the comment contains quotes as part of the input, so I have added a -tag token which the user can provide to disambiguate:

        -country='USA' -state='INDIANA' -population='100' -ratio='0.5' -comment='Census study for Indiana$'s population' -tag=$

    Now, I need to reinterpret Indiana$'s as Indiana's, since -tag=$. Please provide any input or related material to understand these topics. Thanks for your input in advance.

    Read the article

  • Investigating the root cause behind SharePoint's "request not found in the TrackedRequests"

    - by Muhimbi
    We have a long-standing issue in our bug tracking system about the dreaded "ERROR: request not found in the TrackedRequests. We might be creating and closing webs on different threads." message in SharePoint's trace log. As we develop workflow software for the SharePoint market, we look into this issue from time to time to make sure it is not caused by our products. I have personally come to the conclusion that this is a problem in SharePoint, but perhaps someone else can prove me wrong. Here is what I know:

    - According to the hundreds of search results returned by Google on this topic, this issue appears to be mainly related to SharePoint workflows, both SharePoint Designer and Visual Studio based workflows.
    - Assuming ULS logging is set to Monitorable, the easiest way to reproduce this problem is to create a new SharePoint Designer workflow, attach it to a document library, set it to auto start on add/update, don't add any actions, save the workflow, and upload a file to the document library.
    - The error is only visible in the SharePoint trace log; it does not appear to impact the execution of the workflow at hand.
    - I have verified that the problem occurs on 32 bit as well as 64 bit systems, Win2K3 and 2K8, WSS and MOSS, with SharePoint versions up to the December 2009 Cumulative Update (6524).
    - The problem does not occur when a workflow is started manually.
    - There are dozens of related posts on MSDN Forums, hundreds on Google, one on StackOverflow and none on SharePoint Overflow. There appears to be no answer.

    Does anyone have any idea what is going on, what is causing this, and whether we should worry or file this under 'Red Herrings'?

    Read the article

  • OpenCV to JNI: how to make it work?

    - by user293252
    I am trying to use OpenCV and Java for face detection, and in that pursuit I found this "JNI2OPENCV" file... but I am confused about how to make it work. Can anyone help me? http://img519.imageshack.us/img519/4803/askaj.jpg

    The following is the FaceDetection.java:

        class JNIOpenCV {
            static {
                System.loadLibrary("JNI2OpenCV");
            }
            public native int[] detectFace(int minFaceWidth, int minFaceHeight, String cascade, String filename);
        }

        public class FaceDetection {
            private JNIOpenCV myJNIOpenCV;
            private FaceDetection myFaceDetection;

            public FaceDetection() {
                myJNIOpenCV = new JNIOpenCV();
                String filename = "lena.jpg";
                String cascade = "haarcascade_frontalface_alt.xml";
                int[] detectedFaces = myJNIOpenCV.detectFace(40, 40, cascade, filename);
                int numFaces = detectedFaces.length / 4;
                System.out.println("numFaces = " + numFaces);
                for (int i = 0; i < numFaces; i++) {
                    System.out.println("Face " + i + ": " + detectedFaces[4 * i + 0] + " " + detectedFaces[4 * i + 1] + " " + detectedFaces[4 * i + 2] + " " + detectedFaces[4 * i + 3]);
                }
            }

            public static void main(String args[]) {
                FaceDetection myFaceDetection = new FaceDetection();
            }
        }

    Can anyone tell me how I can make this work in NetBeans? I tried Google, but help on this particular topic is very meager. I have added the whole folder as a library in the NetBeans project, and when I try to run the file I get the following errors:

        Exception in thread "main" java.lang.UnsatisfiedLinkError: FaceDetection.JNIOpenCV.detectFace(IILjava/lang/String;Ljava/lang/String;)[I
                at FaceDetection.JNIOpenCV.detectFace(Native Method)
                at FaceDetection.FaceDetection.<init>(FaceDetection.java:19)
                at FaceDetection.FaceDetection.main(FaceDetection.java:29)
        Java Result: 1
        BUILD SUCCESSFUL (total time: 2 seconds)

    Can anyone tell me the exact way to work with this, like what all I have to do?
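
    (The UnsatisfiedLinkError means the JVM found the Java class but not the native implementation - typically the DLL was not on java.library.path, or it does not export the symbol the JVM expects. A small hedged check, with the library name taken from the post:)

        public class LoadCheck {
            public static void main(String[] args) {
                System.out.println("java.library.path = " + System.getProperty("java.library.path"));
                try {
                    // Must match the native library name (JNI2OpenCV.dll on Windows)
                    System.loadLibrary("JNI2OpenCV");
                    System.out.println("Native library loaded OK");
                } catch (UnsatisfiedLinkError e) {
                    System.out.println("Load failed: " + e.getMessage());
                }
            }
        }

    If the library loads but detectFace still fails, a likely suspect is the exported symbol name: JNI encodes the fully qualified class name, so with the classes in a FaceDetection package the DLL must export Java_FaceDetection_JNIOpenCV_detectFace, and a binary built against the default package (Java_JNIOpenCV_detectFace) would throw exactly this error.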

    Read the article

  • How to implement Master-Detail with Multi-Selection in WPF?

    - by gehho
    Hi, I plan to create a typical master-detail scenario: a collection of items displayed in a ListView via data binding to an ICollectionView, and details about the selected item in a separate group of controls (TextBoxes, NumUpDowns, ...). No problem so far; actually, I have already implemented a pretty similar scenario in an older project.

    However, it should be possible to select multiple items in the ListView and get the appropriate shared values displayed in the detail view. This means: if all selected items have the same value for a property, that value should be displayed in the detail view. If they do not share the same value, the corresponding control should provide some visual clue indicating this, and no value should be displayed (or an "undefined" state in a CheckBox, for example). Now, if the user edits the value, the change should be applied to all selected items.

    Further requirements are:

    - MVVM compatibility (i.e. not too much code-behind)
    - Extendability (new properties/types can be added later on)

    Does anyone have experience with such a scenario? Actually, I think this should be a very common scenario; however, I could not find any details on the topic anywhere. Thanks! gehho.

    PS: In the older project mentioned above, I had a solution using a subclass of the ViewModel which handled the special case of multi-selection. It checked all selected items for equality and returned the appropriate values. However, this approach had some drawbacks and somehow seemed like a hack, because (besides other smelly things) it was necessary to break the synchronization between the ListView and the detail view and handle it manually.

    Read the article

  • PHP, MySQL: Receive email, auto search in DB & send email based on the results

    - by Devner
    Hi all, visitors can contact staff by means of a contact form (the visitor needs to submit an email address as well). This will be stored in the DB. Now, when staff responds to this message, the reply from the staff would be sent to the visitor's email directly. If the visitor wants to follow up on the message sent by the staff, I would like the visitor to just hit the reply button in his email service and send me his questions on the same topic, retaining the ID in the subject line.

    So when the visitor sends this email, I would like to receive it and, at the same time, check whether the ID present in the email subject actually exists in my DB. If yes, the message would be routed back to the same staff member who handled the response previously, or it would be assigned to a new staff member.

    That being said, I was thinking of how to do this. The part where I am really stuck is: when the staff receives the actual email from the visitor's email, how can I check the DB? Say I am/staff is receiving emails at [email protected]. When the visitor sends a reply email, it would be sent to [email protected]. How can I check whether the ID in the subject line of the email that I received at [email protected] actually exists in the DB of my website? This is where I am really stuck. Looking forward to your replies. Thank you.
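
    (One hedged way to frame the missing piece: pipe incoming mail for that address to a script, pull the ID out of the subject with a regular expression, and only then query the database. The poster's stack is PHP/MySQL; this minimal sketch of the subject-line step is in Java, and the "[#12345]" subject convention is an assumption:)

        import java.util.Optional;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class SubjectParser {
            // Matches subjects like "Re: [#12345] Shipping question"
            private static final Pattern TICKET_ID = Pattern.compile("\\[#(\\d+)\\]");

            static Optional<Long> extractTicketId(String subject) {
                Matcher m = TICKET_ID.matcher(subject);
                return m.find() ? Optional.of(Long.parseLong(m.group(1))) : Optional.empty();
            }

            public static void main(String[] args) {
                // If an ID is found, look it up (e.g. SELECT staff_id FROM tickets WHERE id = ?)
                // and route to the original staff member; otherwise treat it as a new enquiry.
                System.out.println(extractTicketId("Re: [#12345] Shipping question")); // Optional[12345]
                System.out.println(extractTicketId("Hello there"));                    // Optional.empty
            }
        }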

    Read the article

  • Help me create a Firefox extension (Javascript XPCOM Component)

    - by Johnny Grass
    I've been looking at different tutorials and I know I'm close, but I'm getting lost in implementation details because some of them are a little bit dated and a few things have changed since Firefox 3. I have already written the JavaScript for the Firefox extension; now I need to make it into an XPCOM component. The functionality I need is simple: my JavaScript file has two functions, startServer() and stopServer(). I need to run startServer() when the browser starts and stopServer() when Firefox quits.

    Edit: I've updated my code with a working solution (thanks to Neil). The following is in MyExtension/components/myextension.js:

        Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");

        const CI = Components.interfaces, CC = Components.classes, CR = Components.results;

        // class declaration
        function MyExtension() {}

        MyExtension.prototype = {
            classDescription: "My Firefox Extension",
            classID: Components.ID("{xxxx-xxxx-xxx-xxxxx}"),
            contractID: "@example.com/MyExtension;1",
            QueryInterface: XPCOMUtils.generateQI([CI.nsIObserver]),

            // add to category manager
            _xpcom_categories: [{ category: "profile-after-change" }],

            // start socket server
            startServer: function () { /* socket initialization code */ },

            // stop socket server
            stopServer: function () { /* stop server */ },

            observe: function (aSubject, aTopic, aData) {
                var obs = CC["@mozilla.org/observer-service;1"].getService(CI.nsIObserverService);
                switch (aTopic) {
                    case "quit-application":
                        this.stopServer();
                        obs.removeObserver(this, "quit-application");
                        break;
                    case "profile-after-change":
                        this.startServer();
                        obs.addObserver(this, "quit-application", false);
                        break;
                    default:
                        throw Components.Exception("Unknown topic: " + aTopic);
                }
            }
        };

        var components = [MyExtension];
        function NSGetModule(compMgr, fileSpec) {
            return XPCOMUtils.generateModule(components);
        }

    Read the article

  • Using SDL Tridion 2011 Core Service to create Components programmatically

    - by user1428019
    I have seen some questions/answers related to this topic here; however, I am still not getting the suggestions I want, so I am posting my question again. I would be thankful for your valuable time and answers.

    I would like to create Components, Pages, SGs (Structure Groups), Publications, and Folders programmatically in SDL Tridion Content Manager. Later on, I would like to add the programmatically created components to a page, attach a CT and PT for that page, and finally publish the page programmatically. I have done all of these activities in SDL Tridion 2009 using the TOM API (interop DLLs), and I tried them in SDL Tridion 2011 using the TOM.NET API. It was not working, and later on I came to know that the TOM.NET API does not support these kinds of operations: it is specifically for templates and the event system. Finally, I learned I have to use Core Services for this kind of work.

    My questions:

    1. When I create a console application to create a component programmatically using the Core Service, which DLLs do I have to add as references?
    2. Earlier, I created an exe and ran it on the TCM server; the exe created all the items. Can I use the same approach with the Core Service? Will it work?
    3. Is the BC (Business Connector) still available, or has the Core Service replaced it?
    4. Can anyone send a code snippet for creating a Component/Page (a complete class file would help me understand better)?

    Thanks, Jai

    Read the article

  • What is the correct way to implement a massive hierarchical, geographical search for news?

    - by Philip Brocoum
    The company I work for is in the business of sending press releases. We want to make it possible for interested parties to search for press releases based on a number of criteria, the most important being location. For example, someone might search for all news sent to New York City, Massachusetts, or ZIP code 89134, sent from a governmental institution, under the topic of "traffic". Or whatever.

    The problem is, we've sent, literally, hundreds of thousands of press releases. Searching is slow and complex. For example, a press release sent to Queens, NY should show up in the search I mentioned above even though it wasn't specifically sent to New York City, because Queens is a subset of New York City. We may also want to add "and", "or", and negation, plus text search, to create complex queries. These searches also have to be fast enough to function as dynamic RSS feeds.

    I really don't know anything about search theory or how it's properly done. The way we are getting by right now is using a data mart that stores the locations the releases were sent to in a single table. However, because of the subset issue mentioned above, the data mart is gigantic, with millions of rows. And we haven't even implemented cities yet; there are about 50,000 cities in the United States, which will increase the size of the data mart so much that I'm afraid it just won't work anymore.

    Anyway, I realize this is not a simple question and there won't be a "do this" answer. However, I'm hoping one of you can point me in the right direction where I can learn about how massive searches are done, because I really know nothing about it, and such a search engine is turning out to be incredibly difficult to make. Thanks! I know there must be a way, because if Google can search the entire internet we must be able to search our own database :-)
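
    (A hedged sketch of one standard trick, not a full search-engine design: instead of materializing every subset location per release, index each release under its target location plus that location's short, bounded chain of ancestors - ZIP, city, county, state, country. The index then grows linearly with the number of releases, and a search for any enclosing region is a single lookup. All names below are hypothetical:)

        import java.util.*;

        // A Location knows its parent (Queens -> New York City -> New York -> USA).
        class Location {
            final String name;
            final Location parent;   // null at the root
            Location(String name, Location parent) { this.name = name; this.parent = parent; }
        }

        class ReleaseIndex {
            private final Map<String, Set<Long>> releasesByLocation = new HashMap<>();

            void index(long releaseId, Location target) {
                // Index under the target and every ancestor; depth is small and fixed.
                for (Location l = target; l != null; l = l.parent) {
                    releasesByLocation.computeIfAbsent(l.name, k -> new HashSet<>()).add(releaseId);
                }
            }

            // A search for "New York City" now matches a release sent only to Queens.
            Set<Long> sentTo(String locationName) {
                return releasesByLocation.getOrDefault(locationName, Collections.emptySet());
            }
        }

        public class GeoSearchDemo {
            public static void main(String[] args) {
                Location usa = new Location("USA", null);
                Location ny = new Location("New York", usa);
                Location nyc = new Location("New York City", ny);
                Location queens = new Location("Queens", nyc);

                ReleaseIndex idx = new ReleaseIndex();
                idx.index(42L, queens);
                System.out.println(idx.sentTo("New York City"));  // [42]
            }
        }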

    Read the article

  • Looking for merge tool with very good in-line-comparison support

    - by peter p
    I have seen this topic discussed several times, but the emphasis here is on "very good in-line comparison", which was not really covered by those threads. E.g., I would like the tool to recognize and highlight that the resource "colorpicker_newstring" has been added when comparing the following two blocks. WinMerge and KDiff both fail... Does anyone know a tool that does not? I am using Windows. Ah, and I'd prefer free/OS software, of course. Thanks a lot in advance, Peter

    File A:

        <resource name="colorpicker_title">Color picker</resource>
        <resource name="colorpicker_apply">Apply</resource>
        <resource name="colorpicker_transparent">Transparent</resource>
        <resource name="colorpicker_htmlcolor">HTML color</resource>
        <resource name="colorpicker_red">Red</resource>
        <resource name="colorpicker_newstring">New String</resource>
        <resource name="colorpicker_green">Green</resource>
        <resource name="colorpicker_blue">Blue</resource>
        <resource name="colorpicker_alpha">Alpha</resource>
        <resource name="colorpicker_recentcolors">Recently used colors</resource>

    File B:

        <resource name="colorpicker_title">Sélecteur de couleur</resource>
        <resource name="colorpicker_apply">Appliquer</resource>
        <resource name="colorpicker_transparent">Transparent</resource>
        <resource name="colorpicker_htmlcolor">Couleur HTML</resource>
        <resource name="colorpicker_red">Rouge</resource>
        <resource name="colorpicker_green">Vert</resource>
        <resource name="colorpicker_blue">Bleu</resource>
        <resource name="colorpicker_alpha">Alpha</resource>
        <resource name="colorpicker_recentcolors">Couleurs récentes</resource>

    Read the article

  • How to sanely configure security policy in Tomcat 6

    - by Chas Emerick
    I'm using Tomcat 6.0.24, as packaged for Ubuntu Karmic. The default security policy of Ubuntu's Tomcat package is pretty stringent, but appears straightforward. In /var/lib/tomcat6/conf/policy.d, there are a variety of files that establish the default policy. Worth noting at the start: I've not changed the stock Tomcat install at all -- no new jars in its common lib directory(ies), no server.xml changes, etc. Putting the .war file in the webapps directory is the only deployment action.

    - The web application I'm deploying fails with thousands of access denials under this default policy (as reported to the log thanks to the -Djava.security.debug="access,stack,failure" system property).
    - Turning off the security manager entirely results in no errors whatsoever and proper app functionality.

    What I'd like to do is add an application-specific security policy file to the policy.d directory, which seems to be the recommended practice. I added this to policy.d/100myapp.policy (as a starting point -- I would like to eventually trim back the granted permissions to only what the app actually needs):

        grant codeBase "file:${catalina.base}/webapps/ROOT.war" {
            permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/-" {
            permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/-" {
            permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/lib/-" {
            permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/classes/-" {
            permission java.security.AllPermission;
        };

    Note the thrashing around attempting to find the right codeBase declaration. I think that's likely my fundamental problem. Anyway, the above (really only the first two grants appear to have any effect) almost works: the thousands of access denials are gone, and I'm left with just one. Relevant stack trace:

        java.security.AccessControlException: access denied (java.io.FilePermission /var/lib/tomcat6/webapps/ROOT/WEB-INF/classes/com/foo/some-file-here.txt read)
            java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)
            java.security.AccessController.checkPermission(AccessController.java:546)
            java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
            java.lang.SecurityManager.checkRead(SecurityManager.java:871)
            java.io.File.exists(File.java:731)
            org.apache.naming.resources.FileDirContext.file(FileDirContext.java:785)
            org.apache.naming.resources.FileDirContext.lookup(FileDirContext.java:206)
            org.apache.naming.resources.ProxyDirContext.lookup(ProxyDirContext.java:299)
            org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:1937)
            org.apache.catalina.loader.WebappClassLoader.findResource(WebappClassLoader.java:973)
            org.apache.catalina.loader.WebappClassLoader.getResource(WebappClassLoader.java:1108)
            java.lang.ClassLoader.getResource(ClassLoader.java:973)

    I'm pretty convinced that the actual file triggering the denial is irrelevant -- it's just some properties file that we check for optional configuration parameters. What's interesting is that:

    - it doesn't exist in this context
    - the fact that the file doesn't exist ends up throwing a security exception, rather than java.io.File.exists() simply returning false (although I suppose that's just a matter of the semantics of the read permission)

    Another workaround (besides just disabling the security manager in Tomcat) is to add an open-ended permission to my policy file:

        grant {
            permission java.security.AllPermission;
        };

    I presume this is functionally equivalent to turning off the security manager. I suppose I must be getting the codeBase declaration in my grants subtly wrong, but I'm not seeing it at the moment.

    Read the article

  • Documentation for java.util.concurrent.locks.ReentrantReadWriteLock

    - by Andrei Taptunov
    Disclaimer: I'm not very good at Java; I'm just comparing read/write locks between C# and Java to better understand this topic and the decisions behind both implementations.

    There is JavaDoc about ReentrantReadWriteLock. It states the following about upgrade/downgrade for locks: "Lock downgrading ... However, upgrading from a read lock to the write lock is not possible."

    It also has the following example that shows a manual upgrade from a read lock to a write lock:

        // Here is a code sketch showing how to exploit reentrancy
        // to perform lock downgrading after updating a cache
        void processCachedData() {
            rwl.readLock().lock();
            if (!cacheValid) {
                // upgrade lock manually
                rwl.readLock().unlock();   // #1: must unlock first to obtain writelock
                rwl.writeLock().lock();    // #2
                if (!cacheValid) { // recheck
                    ...
                }
                ...
            }
            use(data);
            rwl.readLock().unlock();
        }

    Does this mean the sample above may not behave correctly in some cases? I mean, there is no lock held between lines #1 and #2, and the underlying structure is exposed to changes from other threads there. So can it really be considered a correct way to upgrade the lock, or am I missing something here?
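
    (For context, a hedged expansion of the idiom into a self-contained class, close to the example in the class's own JavaDoc: the gap between #1 and #2 is real - another writer can get in - and it is precisely the second !cacheValid check after acquiring the write lock that keeps the pattern correct, because whoever won the race has already done the update:)

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        class CachedData {
            private Object data;
            private volatile boolean cacheValid;
            private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

            void processCachedData() {
                rwl.readLock().lock();
                if (!cacheValid) {
                    rwl.readLock().unlock();        // give up the read lock...
                    rwl.writeLock().lock();         // ...then compete for the write lock
                    try {
                        if (!cacheValid) {          // recheck: another thread may have updated already
                            data = loadData();
                            cacheValid = true;
                        }
                        rwl.readLock().lock();      // downgrade: take read lock before releasing write
                    } finally {
                        rwl.writeLock().unlock();
                    }
                }
                try {
                    use(data);
                } finally {
                    rwl.readLock().unlock();
                }
            }

            private Object loadData() { return new Object(); }   // stand-in
            private void use(Object d) { System.out.println(d); } // stand-in
        }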

    Read the article

  • Why can't I create a database in an empty ASP MVC 2 project using Project->Add->New Item->SQL Server

    - by Dr Dork
    I'm diving head first into ASP MVC and am playing around with creating and manipulating a database. I did a search and found this tutorial for creating a database; however, when I follow it, I get this error when trying to add a new database to my fresh, empty ASP MVC 2 project:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    The only requirement the tutorial mentioned was SQL Server Express, but when I went to download it, it said it was already installed. I'm assuming it was part of the VS 2010 RC I installed and am running, so I don't know what else I need, if I am missing something. This is all new to me, so I'm sure I'm missing something obvious; after I'm done posting this question, I plan to do some more research into the topic of databases and how they work with ASP MVC. In the meantime, I was hoping you could help me answer a couple of high-level questions:

    1. What am I missing/forgetting to do that is causing this error?
    2. Any suggestions for good resources/tutorials that focus on using databases with ASP MVC?

    I've done a lot of database programming in the past, so I'm familiar with the concepts of relational databases and the SQL language. I wish I could find a good resource for learning how to work with them in an ASP dev environment, as well as a good breakdown of all the related technologies used for working with them (i.e. LINQ to SQL). Thanks so much in advance for all your help! I'm going to start researching these questions right now.

    Read the article

  • Tokyo Cabinet Tuning Parameters

    - by user235478
    Hello, I have been trying to find a better Tokyo Cabinet (or Tokyo Tyrant) configuration for my application, but I don't know exactly how. I know what some parameters mean, but I want fine-grained tuning control, so I need to know the impact of each one. The Tokyo documentation is really good, but not on this point. Can anyone help me? Thanks.

    TCHDB:

        bool tchdbtune(TCHDB *hdb, int64_t bnum, int8_t apow, int8_t fpow, uint8_t opts);

    How do I use bnum, apow and fpow?

    TCBDB:

        bool tcbdbtune(TCBDB *bdb, int32_t lmemb, int32_t nmemb, int64_t bnum, int8_t apow, int8_t fpow, uint8_t opts);

    How do I use lmemb, nmemb, bnum, apow and fpow?

    TCFDB:

        bool tcfdbtune(TCFDB *fdb, int32_t width, int64_t limsiz);

    How do I use width and limsiz? (Note: I am only including this one so that all the database types are covered in the topic; it is really simple.)

    TCTDB:

        bool tctdbtune(TCTDB *tdb, int64_t bnum, int8_t apow, int8_t fpow, uint8_t opts);

    How do I use bnum, apow and fpow?

    Read the article

  • How to protect copyright on large web applications?

    - by Saif Bechan
    Recently I read about Myows, described as "the universal copyright management and protection app for smart creatives". It is used to protect copyright and more. Currently I am working on a large web application which is in a late testing phase. Because of the complexity of the app, there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copyable. I was glad to see that there is a company out there providing such services, and naturally I wanted to know whether people are using it. I did not know that this type of concept was so new.

    Is protecting copyright a good idea for a large web application? If so, do you think Myows will be worth using, or are there better ways to achieve that?

    Update: Wow, there is no better person to have answered this question than the creator himself. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject and start uploading my code to see how it works out. I will leave this question up because I do want some more suggestions on this topic.

    Read the article

  • Using git pull to track a remote branch without merging

    - by J Barlow
    I am using git to track content which is changed by some people and shared "read-only" with others. The "readers" may from time to time need to make a change, but mostly they will not. I want to allow the git "writers" to rebase pushed branches** if need be, and ensure that the "readers" never accidentally get a merge. That's normally easy enough:

        git pull origin +master

    There's one case that seems to cause problems: if a reader makes a local change, the command above will merge. I want pull to be fully automatic if the reader has not made local changes, while if they have made local changes, it should stop and ask for input. I want to track any upstream changes while being careful about merging downstream changes. In a way, I don't really want to pull; I want to track the master branch exactly.

    ** (I know this is not a best practice, but it seems necessary in our case: we have one main branch that contains most of the work and some topic branches for specific customers with minor changes that need to be isolated. It seems easiest to frequently rebase to keep the topics up to date.)

    Read the article

  • Is jQuery always the answer?

    - by Kibbee
    I've come across a couple of questions, such as this one, and I really have to wonder why "use jQuery" seems to be the answer whenever somebody asks how to do something in JavaScript. I understand that jQuery can save you a lot of time and can help you out a lot, especially when you are doing a lot of fancy JavaScript on your site. However, in instances like this, and in many others, it seems like it's just jumping around the problem instead of answering the question.

    I also feel like this builds too much dependency on libraries. I've seen way too many developers who rely so heavily on libraries that, if they encountered a situation without the library, they would be completely unable to function. I feel like there are already enough developers who don't know JavaScript, without telling everybody to skip learning JavaScript and just use jQuery.

    So, just to reiterate the question: do you think there's too much of a tendency to use jQuery for small pieces of JavaScript where most of jQuery's functionality isn't being used? Should developers be fluent in bare JavaScript so they don't become too dependent on libraries?

    [Additional related conversation topic] Does the existence of jQuery give too much slack to the browser developers who write the JavaScript engines? If we just have workarounds to cover all the inconsistencies in JavaScript, what pressure is there on browser makers to ensure that their JavaScript engines work as they should? I feel like this extrapolates the same problem discussed in SO Podcast #36 of "be conservative in what you send, liberal in what you accept": by being so liberal with bad JavaScript engines, and using a common library to work around the flaws, we are promoting their use and extending the problem.

    Read the article
