Search Results

Search found 8897 results on 356 pages for 'hit detect'.

Page 301/356 | < Previous Page | 297 298 299 300 301 302 303 304 305 306 307 308  | Next Page >

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself. I only have one file that seems to be out of sync right now, but I really need to know how this happened to prevent it from happening in the future.

    Scenario:
    1) Created a Mercurial repository on the server using an existing project directory. The directory contained the file 'mypage.aspx'.
    2) On my workstation, I cloned the central repository.
    3) I made an edit to mypage.aspx.
    4) hg commit, then hg push from my workstation to the central server.
    5) Now if I look at mypage.aspx in the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows that the current version on the server's disk is the original version, not the edited version!

    I have not experimented with branching at all yet, so I'm sure this isn't a branching problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository to a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit.

    So, to recap:
    - Central repository = original file, but shows change in revision history (bad)
    - Local repository 'A' = updated file, shows change in revision history (good)
    - Local repository 'B' = original file, but shows change in revision history (bad)

    Help please! Thanks, David
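
    A likely explanation (my guess, not part of the original question): pushing only adds changesets to the server's repository; it never touches the server's working directory, so the files on disk stay at whatever revision was last checked out there. Something like the following, run on the server inside the repository, would confirm and fix it:

        hg summary    # working directory parent is probably still the old revision
        hg update     # check the working directory out at the new tip
        hg diff       # should now show no difference from the pushed revision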

    Read the article

  • How to access hidden template in unnamed namespace?

    - by Johannes Schaub - litb
    Here is a tricky situation, and I wonder what ways there are to solve it:

        namespace {
            template <class T> struct Template { /* ... */ };
        }

        typedef Template<int> Template;

    Sadly, the Template typedef interferes with the Template template in the unnamed namespace. When you try to use Template<float> in the global scope, the compiler raises an ambiguity error between the template name and the typedef name. You don't have control over either the template name or the typedef name. Now I want to know whether it is possible to:

    - Create an object of the typedefed type Template (i.e. Template<int>) in the global namespace.
    - Create an object of the type Template<float> in the global namespace.

    You are not allowed to add anything to the unnamed namespace. Everything should be done in the global namespace. This is out of curiosity, because I was wondering what tricks there are for solving such an ambiguity. It's not a practical problem I hit during daily programming.

    Read the article

  • Set scaleX property on a Sprite without altering the child inside

    - by grammar
    Is this possible? My site is set up with next and prev buttons on the right and left sides respectively, and as you roll over either of the hit areas around the buttons a Sprite fades in which contains a TextField that describes the next page. Said Sprite calls the startDrag() method, so it follows the mouse within the bounds, which is all fine and dandy on the left side of the page.

    Adobe, however, seems to have forgotten to provide a way to dynamically alter the registration point of a Sprite, MC, or whatever else, so when you roll over the right side of the page, the sprite is displayed from the top left and is mostly off the stage. Trying to hack around this I have tried numerous things (classes written by others, other hacks) and the best that I have found is to set the scaleX property on the Sprite to -1. This, of course, makes the Sprite appear reflected from its normal point, which means all its children show up backwards.

    Is there any way I can use this hack without it altering the text? OR does anyone know a different way to go about displaying a Sprite from another corner? Any way to make a Sprite fade in and follow the mouse on the LEFT HAND side of the mouse pointer? Thank you very much in advance. Here is a snippet to give an idea of what's happening:

        nextPage.labelBG.scaleX = -1;
        nextPage.labelBG.startDrag( true, nextHitRect );
        nextPage.labelBG.x = nextPage.labelBG.parent.mouseX;
        nextPage.labelBG.y = nextPage.labelBG.parent.mouseY;

    Cheers
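
    One alternative worth sketching (my own suggestion, not from the post): skip startDrag() and the scaleX flip entirely, and follow the mouse manually each frame with whatever offset you want, so the sprite's right edge tracks the cursor and the TextField is never mirrored. The names nextPage/labelBG are taken from the snippet above; clamping to the nextHitRect bounds is left out for brevity.

        import flash.display.Sprite;
        import flash.events.Event;

        function followMouse(e:Event):void {
            var bg:Sprite = nextPage.labelBG as Sprite;
            bg.x = bg.parent.mouseX - bg.width;   // right edge anchored to the cursor
            bg.y = bg.parent.mouseY;
        }
        nextPage.addEventListener(Event.ENTER_FRAME, followMouse);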

    Read the article

  • gcc, strict-aliasing, and horror stories

    - by Joseph Quinsey
    In http://stackoverflow.com/questions/2906365/gcc-strict-aliasing-and-casting-through-a-union I asked whether anyone had encountered problems with union punning through pointers. So far, the answer seems to be No.

    This question is broader: do you have any horror stories about gcc and strict-aliasing?

    Background: Quoting from AndreyT's answer in http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041: "Strict aliasing rules are rooted in parts of the standard that were present in C and C++ since the beginning of [standardized] times. The clause that prohibits accessing object of one type through a lvalue of another type is present in C89/90 (6.3) as well as in C++98 (3.10/15). ... It is just that not all compilers wanted (or dared) to enforce it or rely on it." Well, gcc is now daring to do so, with its -fstrict-aliasing switch. And this has caused some problems. See, for example, the excellent article http://davmac.wordpress.com/2009/10/ about a MySQL bug, and the equally excellent discussion in http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html.

    Some other less-relevant links:
    - http://stackoverflow.com/questions/1225741/performance-impact-of-fno-strict-aliasing
    - http://stackoverflow.com/questions/754929/strict-aliasing
    - http://stackoverflow.com/questions/262379/when-is-char-safe-for-strict-pointer-aliasing
    - http://stackoverflow.com/questions/725138/how-to-detect-strict-aliasing-at-compile-time

    So to repeat: do you have a horror story of your own? Problems not indicated by -Wstrict-aliasing would, of course, be preferred. And other C compilers are also welcome.
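
    For readers who haven't hit it yet, a minimal illustration (mine, not from the question) of the kind of code that -fstrict-aliasing licenses gcc to break: the store through the uint32_t pointer is not required to be visible through the float lvalue, so under -O2 -fstrict-aliasing the program may print either 1.000000 or 0.000000.

        #include <stdio.h>
        #include <stdint.h>

        static float f = 1.0f;

        int main(void)
        {
            uint32_t *p = (uint32_t *)&f;  /* violates the aliasing rules */
            *p = 0;                        /* gcc may assume this cannot modify f */
            printf("%f\n", f);             /* may still print 1.000000 */
            return 0;
        }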

    Read the article

  • Select all points in a matrix within 30m of another point

    - by pinnacler
    So if you look at my other posts, it's no surprise I'm building a robot that can collect data in a forest and stick it on a map. We have algorithms that can detect tree centers and trunk diameters and can stick them on a Cartesian XY plane. We're planning to use certain 'key' trees as natural landmarks for localizing the robot, using triangulation and trilateration among other methods, but programming this and keeping the data straight and efficient is getting difficult using just Matlab.

    Is there a technique for sub-setting an array or matrix of points? Say I have 1000 trees stored over 1 km (1000 m); is there a way to select only the points within a 30 m radius of my current location and work only with those? I would just use a GIS, but I'm doing this in Matlab and I'm unaware of any GIS plugins for Matlab.

    I forgot to mention, this code is going online, meaning it's going on a robot for real-time execution. I don't know if, as the map grows to several miles, using a different data structure will help, or if calculating every distance to a random point is what a spatial database is going to do anyway. I'm thinking of mirroring two arrays, one sorted by X and the other by Y, then bubble sorting to determine the 30 m range in that. I'd do the same for both arrays, X and Y, and then have a third cross-link table that will select the individual values. But I don't know what that's called or how to program it, and I'm sure someone already has, so I don't want to reinvent the wheel.
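
    A minimal Matlab sketch of the brute-force version (my own illustration; the variable names are made up): with a thousand or so trees, a vectorized distance test per query is usually fast enough before reaching for k-d trees or a spatial database.

        % trees is an N-by-2 matrix of [x y] tree centers, pos is the robot's [x y]
        d2 = (trees(:,1) - pos(1)).^2 + (trees(:,2) - pos(2)).^2;  % squared distances
        nearby = trees(d2 <= 30^2, :);                             % every tree within 30 m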

    Read the article

  • Webcam video stream processing.

    - by vikramtheone
    Hi Guys, I'm working on an image processing project. My final goal is to detect features in a real-time video and then track those features. I will be working with an embedded processor platform, Freescale's i.MX515; it is a 32-bit media processor running Ubuntu 9.04.

    Right now I'm working on the algorithms to locate the features, so I'm using still images. When I'm satisfied with the results I will have to start using a video stream, and I don't want to use a video file as the source stream, because then I will have to worry about video decoders. Instead I would like to plug a USB webcam into the embedded platform (it has USB ports on it), take the frames directly as they are captured, and send them to my application. I will take care to buy a webcam that is supported in Linux (device driver).

    But my question is: will I be able to capture the incoming video stream from the webcam and send it to my application? Will I be able to configure the webcam and DMA to write the incoming frames to a particular memory location whose pointer I can simply pass to my application? (Confused!!!) I hope I could convey my doubts; can anyone guide me through the steps I have to take to achieve all of this easily? Do you foresee any impossibility here? Help!!!

    Regards
    Vikram
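
    A minimal sketch of the capture side (an assumption on my part, not from the post), using the old OpenCV C API, which sits on top of V4L2 on Linux; the uvcvideo driver exposes most USB webcams as /dev/video0. Whether OpenCV is available for the i.MX515 board is itself an assumption.

        /* grab frames from a USB webcam and hand the pixel buffer to the
           feature-detection code */
        #include <opencv/highgui.h>

        int main(void)
        {
            CvCapture *cap = cvCaptureFromCAM(0);      /* opens /dev/video0 */
            if (!cap) return 1;
            for (;;) {
                IplImage *frame = cvQueryFrame(cap);   /* blocks until a frame arrives */
                if (!frame) break;
                /* frame->imageData points at the raw pixels; pass it to the
                   feature detector here */
            }
            cvReleaseCapture(&cap);
            return 0;
        }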

    Read the article

  • Background color remaining when switching views

    - by Guy
    Hi Guys. I have a UIViewController named equationVC whose user interface is being programmatically created from another NSObject class called equationCon. Upon loading equationVC, a method called chooseInterface is called from the equationCon class. I have a global variable (globalVar) that points to a user-defined string. chooseInterface finds a method in the equationCon class that matches the string globalVar points to. In this case, let's say that globalVar points to a string called "methodThatMatches."

    In methodThatMatches, another view controller needs to show the results of what methodThatMatches did. methodThatMatches creates a new equationVC that calls upon methodThatMatches2. As a test, each method changes the color of the background. When the application starts up, I get a purple background, but as soon as I hit backwards I get another purple screen, which should be yellow. I do not think that I am releasing the view properly. Can anyone help?

        -(void)chooseInterface {
            NSString* equationTemp = [globalVar stringByReplacingOccurrencesOfString:@" " withString:@""];
            equationTemp = [equationTemp stringByReplacingOccurrencesOfString:@"'" withString:@""];
            SEL equationName = NSSelectorFromString(equationTemp);
            NSLog(@"selector! %@", NSStringFromSelector(equationName));
            if ([self respondsToSelector:equationName]) {
                [self performSelector:equationName];
            }
        }

        -(void)methodThatMatches {
            self.equationVC.view.backgroundColor = [UIColor yellowColor];
            [setGlobalVar:@"methodThatMatches2"];
            EquationVC* temp = [[EquationVC alloc] init];
            [[self.equationVC navigationController] pushViewController:temp animated:YES];
            [temp release];
        }

        -(void)methodThatmatches2 {
            self.equationVC.view.backgroundColor = [UIColor purpleColor];
        }

    Read the article

  • how can I save/keep-in-sync an in-memory graph of objects with the database?

    - by Greg
    Question - What is a good best-practice approach for saving/keeping-in-sync an in-memory graph of objects with the database?

    Background: Say I have the classes Node and Relationship, and the application is building up a graph of related objects using these classes. There might be 1000 nodes with various relationships between them. The application needs to query the structure, hence an in-memory approach is good for performance no doubt (e.g. traverse the graph from Node X to find the root parents). The graph does, however, need to be persisted into a database with tables NODES and RELATIONSHIPS.

    Therefore, what is a good best-practice approach for saving/keeping-in-sync an in-memory graph of objects with the database? Ideal requirements would include:
    - build up changes in-memory and then 'save' afterwards (mandatory)
    - when saving, apply updates to the database in the correct order to avoid hitting any database constraints (mandatory)
    - keep the persistence mechanism separate from the model, for ease of changing the persistence layer if needed, e.g. don't just wrap an ADO.NET DataRow in the Node and Relationship classes (desirable)
    - mechanism for doing optimistic locking (desirable)

    Or is the overhead of all this for a smallish application just not worth it, and should I just hit the database each time for everything? (assuming the response times were acceptable) [I would still like to avoid it if it's not too much extra overhead, to remain somewhat scalable performance-wise]

    Read the article

  • "Order By" in LINQ-to-SQL Causes performance issues

    - by panamack
    I've set out to write a method in my C# application which can return an ordered subset of names from a table containing about 2000 names, starting at the 100th name and returning the next 20 names. I'm doing this so I can populate a WPF DataGrid in my UI and do some custom paging. I've been using LINQ to SQL but hit a snag with this long-executing query, so I'm examining the SQL the LINQ query is using (Query B below).

    Query A runs well:

        SELECT TOP (20) [t0].[subject_id] AS [Subject_id], [t0].[session_id] AS [Session_id], [t0].[name] AS [Name]
        FROM [Subjects] AS [t0]
        WHERE (NOT (EXISTS(
            SELECT NULL AS [EMPTY]
            FROM (
                SELECT TOP (100) [t1].[subject_id]
                FROM [Subjects] AS [t1]
                WHERE [t1].[session_id] = 1
                ORDER BY [t1].[name]
            ) AS [t2]
            WHERE [t0].[subject_id] = [t2].[subject_id]
        ))) AND ([t0].[session_id] = 1)

    Query B takes 40 seconds:

        SELECT TOP (20) [t0].[subject_id] AS [Subject_id], [t0].[session_id] AS [Session_id], [t0].[name] AS [Name]
        FROM [Subjects] AS [t0]
        WHERE (NOT (EXISTS(
            SELECT NULL AS [EMPTY]
            FROM (
                SELECT TOP (100) [t1].[subject_id]
                FROM [Subjects] AS [t1]
                WHERE [t1].[session_id] = 1
                ORDER BY [t1].[name]
            ) AS [t2]
            WHERE [t0].[subject_id] = [t2].[subject_id]
        ))) AND ([t0].[session_id] = 1)
        ORDER BY [t0].[name]

    When I add the ORDER BY [t0].[name] to the outer query it slows down the query. How can I improve the second query?

    This was my LINQ stuff, Nick:

        int sessionId = 1;
        int start = 100;
        int count = 20;

        // Query subjects with the shoot's session id
        var subjects = cldb.Subjects.Where<Subject>(s => s.Session_id == sessionId);

        // Filter as per params
        var orderedSubjects = subjects
            .OrderBy<Subject, string>( s => s.Col_zero );
        var filteredSubjects = orderedSubjects
            .Skip<Subject>(start)
            .Take<Subject>(count);
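
    A hedged alternative worth trying (my suggestion, not from the question): on SQL Server 2005 and later, ROW_NUMBER() paging avoids the nested TOP/NOT EXISTS shape that LINQ to SQL generated here, and a covering index on (session_id, name) INCLUDE (subject_id) lets the outer ORDER BY read pre-sorted rows instead of sorting 2000 names on every request.

        SELECT Subject_id, Session_id, Name
        FROM (
            SELECT [subject_id] AS Subject_id,
                   [session_id] AS Session_id,
                   [name]       AS Name,
                   ROW_NUMBER() OVER (ORDER BY [name]) AS rn
            FROM [Subjects]
            WHERE [session_id] = 1
        ) AS paged
        WHERE rn BETWEEN 101 AND 120
        ORDER BY Name;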

    Read the article

  • Is there a way to programmatically tell if particular block of memory was not freed by FastMM?

    - by Wodzu
    I am trying to detect if a block of memory was not freed. Of course, the manager tells me that via a dialog box or log file, but what if I would like to store the results in a database? For example, I would like to have in a database table the names of the routines which allocated the given blocks.

    After reading the documentation of FastMM I know that since version 4.98 we have the possibility to be notified by the manager about memory allocations, frees and reallocations as they occur. For example, the OnDebugFreeMemFinish event passes us a PFullDebugBlockHeader which contains useful information. There is one thing that PFullDebugBlockHeader is missing - the information whether the given block was freed by the application. Unless OnDebugFreeMemFinish is called only for blocks that were not freed? This is what I do not know and would like to find out.

    The problem is that even hooking into the OnDebugFreeMemFinish event I was unable to find out if the block was freed or not. Here is an example:

        program MemLeakTest;

        {$APPTYPE CONSOLE}

        uses
          FastMM4, ExceptionLog, SysUtils;

        procedure MemFreeEvent(APHeaderFreedBlock: PFullDebugBlockHeader; AResult: Integer);
        begin
          //This is executed at the end, but how should I know that this block should be freed
          //by application? Unless this is executed ONLY for not freed blocks.
        end;

        procedure Leak;
        var
          MyObject: TObject;
        begin
          MyObject := TObject.Create;
        end;

        begin
          OnDebugFreeMemFinish := MemFreeEvent;
          Leak;
        end.

    What I am missing is a callback like:

        procedure OnMemoryLeak(APointer: PFullDebugBlockHeader);

    After browsing the source of FastMM I saw that there is a procedure:

        procedure LogMemoryLeakOrAllocatedBlock(APointer: PFullDebugBlockHeader; IsALeak: Boolean);

    which could be overridden, but maybe there is an easier way?

    Read the article

  • Playing video and audio in iPhone not working...

    - by Scott
    So we have buttons linked up to display images/videos/audio on click, depending on a check we do earlier. That part works fine: it knows which one to play. However, when we click the buttons for video and audio, nothing happens. The image one works fine. The video and audio are being taken from a URL online; they are not local, but everywhere I read said this was still possible. Here is a little snippet of the code where we play the two files:

        if ( [fName hasSuffix:@".png"]) {
            NSLog(@"PICTURE");
            NSURL *url = [NSURL URLWithString: fName];
            UIImage *image = [UIImage imageWithData: [NSData dataWithContentsOfURL:url]];
            self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
            // self.view.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"MainBG.jpg"]];
            [self.view addSubview:[[UIImageView alloc] initWithImage:image]];
        }
        if ( [fName hasSuffix:@".mp4"]) {
            NSLog(@"VIDEO");
            //NSString *path = [[NSBundle mainBundle] pathForResource:fName ofType:@"mp4"];
            //NSLog(path);
            NSURL *url = [NSURL fileURLWithPath:fName];
            MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:url];
            [player play];
        }
        if ( [fName hasSuffix:@".mp3"]) {
            NSLog(@"AUDIO");
            NSURL *url = [NSURL fileURLWithPath:fName];
            NSData *soundData = [NSData dataWithContentsOfURL:url];
            AVAudioPlayer *avPlayer = [[AVAudioPlayer alloc] initWithData:soundData error: nil];
            [avPlayer play];
        }

    See anything wrong? By the way, it compiles and runs, but nothing happens when we hit the button that executes that code.
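
    One thing that stands out (my guess, not confirmed in the thread): the working image branch builds its URL with URLWithString:, while the failing video and audio branches use fileURLWithPath:, which produces a file:// URL and cannot point at a remote resource. A hedged sketch of the video branch; on iPhone OS 3.2 and later the movie player also needs its view added to the hierarchy before playback starts:

        NSURL *url = [NSURL URLWithString:fName];   // remote URL, not a file path
        MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:url];
        player.view.frame = self.view.bounds;       // 3.2+: the player draws into its own view
        [self.view addSubview:player.view];
        [player play];
        // keep a reference to player (e.g. in an ivar) so it isn't deallocated mid-playback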

    Read the article

  • Eclipse Java Content Assist not working

    - by Damo
    The Content Assist in Eclipse 3.4 and 3.5 has stopped working for me. When I type in the first few characters of a class and hit Ctrl+Space, then after a delay I get an error message (shown in a screenshot in the original post). It does not matter which proposals I enable/disable; I will get this (or a similar) message. I have tried:
    - Changing the Xms/Xmx values
    - Starting Eclipse with -clean
    - Creating a new workspace and importing my projects

    However, none of these have worked. I have seen some posts suggesting that other apps may be taking over Ctrl+Space or otherwise interfering, however I have nothing aside from a fresh Eclipse running and the problem persists. My problem is very similar to the one covered in this post, albeit on a later version and on OS X 10.5.7. Does anyone have any suggestions for how this might be resolved? Thanks.

    UPDATE: To anyone interested, I've had the best results by using Eclipse 3.5 Classic (i.e. it doesn't include Mylyn). I've also used the settings specified in the bug reports linked to by VonC below. Interestingly, Classic doesn't come with some views, e.g. Snippets, but these are easy to drop in from another distro.

    UPDATE 2: This problem actually persisted even with the latest versions of Eclipse (3.6 M1). It is caused by a large JAR file generated by Altova MapForce to handle EDIFACT transformations in our application. It is reproducible by adding this JAR to the build path, and no changes to Content Assist settings help. The bug (and JAR) can be seen at https://bugs.eclipse.org/bugs/show_bug.cgi?id=289057

    Read the article

  • How to display only one letter in Flex Text Layout Framework ContainerController?

    - by rattkin
    I'm trying to implement a dropped-initials feature in my Flex application. Since the Text Layout Framework does not support floats, the only known solution is to create additional containers that are linked together and display the same TextFlow. The width and positioning of these containers has to be set in such a way that it pretends to be a float. I'm using the same solution for dropped initials: I create three containers, one for the initial letter (the first letter in the TextFlow), another for the text floating around it, and a third to display the text below these two. All these containers share one TextFlow.

    I have big issues with forcing the controller to display only one letter from the TextFlow and to size it accordingly, so that it won't take any unnecessary additional space and won't get any more text into it. Using ContainerController.getContentBounds() returns the size of the whole sprite of the letter (with the ascent/descent empty parts), not the height/width of the actual rendered letter. I'm using textFlow.flowComposer.getLineAt(0).getTextLine().getAtomBounds(0), but I think it's still not right. Also, even if I set the container to these dimensions, it sometimes displays additional text in it, especially for bigger fonts. See screenshot (not reproduced here).

    Also, if I set the width to just 1px less than contentBounds, things go crazy: containers are moved around, positioned with big margins, etc. How should I solve this? Is it a bug in TLF / the Player? Can I fix it somehow? Can I detect the size of the letter, or make the ContainerController autosize to fit just one letter?

    Read the article

  • How to play extracted wave file byte array in C#?

    - by user261924
    At the moment I have managed to separate the left and right channels of a WAVE file and have included the header in a byte[] array. My next step is to be able to play both channels. How can this be done? Here is a code snippet:

        byte[] song_left = new byte[fa.Length];
        byte[] song_right = new byte[fa.Length];

        int p = 0;
        for (int c = 0; c < 43; c++)
        {
            song_left[p] = header[c];
            p++;
        }

        int q = 0;
        for (s = startByte; s < length; s = s + 3)
        {
            song_left[s] = sLeft[q];
            q++;
            s++;
            song_left[s] = sLeft[q];
            q++;
        }

        p = 0;
        for (int c = 0; c < 43; c++)
        {
            song_right[p] = header[c];
            p++;
        }

    This part reads the header and data from both the right and left channels and saves them to the arrays sLeft[] and sRight[]. This part is working perfectly. Once I obtained the byte arrays, I did the following:

        System.IO.File.WriteAllBytes("c:\\left.wav", song_left);
        System.IO.File.WriteAllBytes("c:\\right.wav", song_right);

    Added a button to play the saved wave file:

        private void button2_Click(object sender, EventArgs e)
        {
            spWave = new SoundPlayer("c:\\left.wav");
            spWave.Play();
        }

    Once I hit the play button, this error appears:

        An unhandled exception of type 'System.InvalidOperationException' occurred in System.dll
        Additional information: The wave header is corrupt.

    Any ideas?
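
    One detail worth checking (my observation, not from the thread): a canonical PCM WAV header is 44 bytes, so a copy loop that stops at index 42 (c < 43) drops the last byte of the data-chunk size field, which alone could make SoundPlayer report a corrupt header. The RIFF ChunkSize (offset 4) and the data Subchunk2Size (offset 40) also have to be rewritten to describe the new data length. A hedged sketch:

        // copy the full 44-byte canonical header, then patch the two size fields
        Array.Copy(header, song_left, 44);
        int dataLen = song_left.Length - 44;                       // new data size in bytes
        BitConverter.GetBytes(36 + dataLen).CopyTo(song_left, 4);  // RIFF ChunkSize
        BitConverter.GetBytes(dataLen).CopyTo(song_left, 40);      // data Subchunk2Size
        // if the new file is mono, NumChannels (offset 22), ByteRate (28) and
        // BlockAlign (32) need updating as well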

    Read the article

  • NullPointerException with static variables

    - by tomekK
    I just hit some very strange (to me) behaviour of Java. I have the following classes:

        public abstract class Unit {
            public static final Unit KM = KMUnit.INSTANCE;
            public static final Unit METERS = MeterUnit.INSTANCE;

            protected Unit() {
            }

            public abstract double getValueInUnit(double value, Unit unit);
            protected abstract double getValueInMeters(double value);
        }

    And:

        public class KMUnit extends Unit {
            public static final Unit INSTANCE = new KMUnit();

            private KMUnit() {
            }

            //here are abstract methods overriden
        }

        public class MeterUnit extends Unit {
            public static final Unit INSTANCE = new MeterUnit();

            private MeterUnit() {
            }

            ///abstract methods overriden
        }

    And my test case:

        public class TestMetricUnits extends TestCase {
            @Test
            public void testConversion() {
                System.out.println("Unit.METERS: " + Unit.METERS);
                System.out.println("Unit.KM: " + Unit.KM);
                double meters = Unit.KM.getValueInUnit(102.11, Unit.METERS);
                assertEquals(0.10211, meters, 0.00001);
            }
        }

    1) KMUnit and MeterUnit are both singletons initialized statically, so during class loading. The constructors are private, so they can't be instantiated anywhere else.
    2) The Unit class contains static final references to KMUnit.INSTANCE and MeterUnit.INSTANCE.

    I would expect that:
    - the KMUnit class is loaded and its instance is created,
    - the MeterUnit class is loaded and its instance is created,
    - the Unit class is loaded and both the KM and METERS variables are initialized; they are final so they can't be changed.

    But when I run my test case in the console with Maven, my result is:

        T E S T S
        Running de.audi.echargingstations.tests.TestMetricUnits
        Unit.METERS: m
        Unit.KM: null
        Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.089 sec <<< FAILURE! - in de.audi.echargingstations.tests.TestMetricUnits
        testConversion(de.audi.echargingstations.tests.TestMetricUnits)  Time elapsed: 0.011 sec  <<< ERROR!
        java.lang.NullPointerException: null
            at de.audi.echargingstations.tests.TestMetricUnits.testConversion(TestMetricUnits.java:29)

        Results :

        Tests in error:
          TestMetricUnits.testConversion:29 NullPointer

    And the funny part is that when I run this test from Eclipse via the JUnit runner, everything is fine: I have no NullPointerException, and in the console I have:

        Unit.METERS: m
        Unit.KM: km

    So the question is: what can be the reason that the KM variable in Unit is null (and at the same time METERS is not null)?
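
    A hedged explanation with a tiny demo (mine, not from the thread): if KMUnit happens to be the first of the three classes to start initializing (class loading order can differ between the Maven Surefire runner and Eclipse's JUnit runner), its superclass Unit is initialized as part of that step. At that moment KMUnit.INSTANCE is still null, so Unit's static initializer captures null into KM and it stays null forever; METERS is unaffected because MeterUnit had not started initializing yet. Assuming the classes above are on the classpath, this reproduces the observed output:

        public class InitOrderDemo {
            public static void main(String[] args) throws Exception {
                // Force KMUnit to begin initialization before Unit is touched directly.
                Class.forName("de.audi.echargingstations.units.KMUnit"); // package name assumed
                System.out.println("Unit.KM: " + Unit.KM);          // prints null
                System.out.println("Unit.METERS: " + Unit.METERS);  // prints the MeterUnit instance
            }
        }

    The usual fix is to break the cycle, e.g. by having Unit look the instances up lazily (or by turning the units into an enum) instead of capturing the subclass singletons in Unit's own static initializer.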

    Read the article

  • Why wouldn't the default Control Adapter mappings work on Chrome or Safari?

    - by Deane
    I have confirmed that my Control Adapters are not triggering in Chrome and Safari. I've debugged, and the breakpoints inside the adapters just don't get hit in Chrome/Safari, when they work perfectly fine in Firefox/IE. So, for Chrome/Safari, IIS is just ignoring the mapping. My AdapterMappings.browser file looks like this:

        <browsers>
            <browser refID="Default">
                <controlAdapters>
                    [...adapters here....]
                </controlAdapters>
            </browser>
        </browsers>

    This should provide mappings for all browsers, correct? I used the Charles proxy to check which user agents were being sent. They are:

        Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1064 Safari/532.5
        Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 Safari/531.22.7

    Any idea why this would be? Everything I've read tells me that my browser mappings are correct. And, as I said, this works for IE/Firefox, so I know my configuration is technically correct.

    Read the article

  • Poor performance using RMI-proxies with Swing components

    - by Patrick
    I'm having huge performance issues when I add RMI proxy references to a Java Swing JList component. I'm retrieving a list of user Profiles with RMI from a server. The retrieval itself takes just a second or so, which is acceptable under the circumstances. However, when I try to add these proxies to a JList, with the help of a custom ListModel and a CellRenderer, it takes between 30 and 60 seconds to add about 180 objects.

    Since it is a list of users' names, it's preferable to present them alphabetically. The biggest performance hit comes when I sort the elements as they get added to the ListModel. Since the list will always be sorted, I opted to use the built-in Collections.binarySearch() to find the correct position for each element to be added, and the comparator uses two methods that are defined by the Profile interface, namely getFirstName() and getLastName().

    Is there any way to speed this process up, or am I simply implementing it the wrong way? Or is this a "feature" of RMI? I'd really love to be able to cache some of the data of the remote objects locally, to minimize the remote method calls.
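
    A hedged sketch of the caching idea from the last sentence (the class name and the RemoteException assumption are mine): snapshot each proxy's name fields once into a plain local value object, then sort and render from the snapshots so that neither binarySearch() nor the CellRenderer ever triggers a remote call.

        // imports assumed: java.util.*, java.rmi.RemoteException
        final class ProfileRow {
            final String firstName;
            final String lastName;

            ProfileRow(Profile p) throws RemoteException {  // two remote calls, done exactly once
                this.firstName = p.getFirstName();
                this.lastName = p.getLastName();
            }

            @Override
            public String toString() {                      // what the JList renderer displays
                return lastName + ", " + firstName;
            }
        }

        // build the rows once, sort locally, then hand the list to the ListModel
        List<ProfileRow> rows = new ArrayList<ProfileRow>();
        for (Profile p : profiles) rows.add(new ProfileRow(p));
        Collections.sort(rows, new Comparator<ProfileRow>() {
            public int compare(ProfileRow a, ProfileRow b) {
                int c = a.lastName.compareToIgnoreCase(b.lastName);
                return c != 0 ? c : a.firstName.compareToIgnoreCase(b.firstName);
            }
        });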

    Read the article

  • ASP.Net MVC, JS injection and System.ArgumentException - Illegal Characters in path

    - by Mose
    Hi, in my ASP.NET MVC application I use custom error handling. I want to perform custom actions for each error case I meet in my application. So I override Application_Error, get the Server.GetLastError(), and do my business depending on the exception, the current user, the current URL (the application runs on many domains), the user IP, and many other things.

    Obviously, the application is often the target of hackers. In almost all cases it's not a problem to detect and manage this, but for some JS URL attacks my error handling does not do what I want it to do. Example (from logs):

        http://localhost:1809/Scripts/]||!o.support.htmlSerialize&&[1

    When I get such a URL, an exception is raised when accessing the ConnectionStrings section in the web.config, and I can't even redirect to another URL. It leads to a "System.ArgumentException - Illegal Characters in path", etc. The screenshot below shows the problem: http://screencast.com/t/Y2I1YWU4

    An obvious solution is to write an HTTP module to filter the URLs before they reach my application, but I'd like to avoid that because:
    - I like having all the security managed in one place (in the Application_Error() method)
    - In the module I cannot access all the data I have in the application itself (application-specific data I don't want to debate here)

    Questions: Did you meet this problem? How did you manage it? Thanks for your suggestions, Mose

    Read the article

  • How to receive Email in JEE application

    - by Hank
    Obviously it's not so difficult to send out emails from a JEE application via JavaMail. What I am interested in is the best pattern to receive emails (notification bounces, mostly). I am not interested in IMAP/POP3-based approaches (polling the inbox) - my application shall react to inbound emails.

    One approach I could think of would be:
    1. Keep the existing MTA (postfix on Linux in my case) - the ops team already knows how to configure / operate it.
    2. For every mail that arrives, spawn a Java app that receives the data and sends it off via JMS. I could do this via an entry in /etc/aliases like myuser: "|/path/to/javahelper", with javahelper calling the Java app, passing STDIN along (a sketch of this helper follows below).
    3. An MDB (part of the JEE application) receives the JMS message, parses it, detects the bounce message and acts accordingly.

    Another approach could be:
    1. Open a listening network socket on port 25 on the JEE application container.
    2. Associate a SessionBean with the socket. The bean is part of the JEE application and can parse/detect bounces/handle the messages directly.
    3. Keep the existing MTA as an inbound relay, do all its security/spam filtering, but forward the emails to myuser (that pass the filter) to the JEE application container, port 25.

    The first approach I have done before (albeit in a different language/setup). From a performance and (perceived) cleanliness point of view, I think the second approach is better, but it would require me to provide a proper SMTP transport implementation. Also, I don't know if it's at all possible to connect a network socket with a bean...

    What is your recommendation? Do you have details about the second approach?
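
    A hedged sketch of the javahelper from the first approach (my own illustration; the JNDI names and queue name are made up): it slurps the message that postfix pipes to it on STDIN and forwards it as a JMS TextMessage for an MDB to pick up.

        import java.io.ByteArrayOutputStream;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.jms.TextMessage;
        import javax.naming.InitialContext;

        public class MailToJms {
            public static void main(String[] args) throws Exception {
                // read the complete raw message from stdin
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                for (int n; (n = System.in.read(chunk)) != -1; ) {
                    buf.write(chunk, 0, n);
                }

                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // assumed JNDI name
                Queue queue = (Queue) ctx.lookup("jms/InboundMail");                            // assumed JNDI name
                Connection con = cf.createConnection();
                try {
                    Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    TextMessage msg = session.createTextMessage(buf.toString("UTF-8"));
                    session.createProducer(queue).send(msg);
                } finally {
                    con.close();
                }
            }
        }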

    Read the article

  • Building a ctypes-"based" C library with distutils

    - by Robie Basak
    Following this recommendation, I have written a native C extension library to optimise part of a Python module via ctypes. I chose ctypes over writing a CPython-native library because it was quicker and easier (just a few functions with all tight loops inside).

    I've now hit a snag. If I want my work to be easily installable using distutils via python setup.py install, then distutils needs to be able to build my shared library and install it (presumably into /usr/lib/myproject). However, this is not a Python extension module, and so as far as I can tell, distutils cannot do this. I've found a few references to other people with this problem:
    - Someone on numpy-discussion with a hack back in 2006.
    - Somebody asking on distutils-sig and not getting an answer.
    - Somebody asking on the main python list and being pointed to the innards of an existing project.

    I am aware that I can do something native and not use distutils for the shared library, or indeed use my distribution's packaging system. My concern is that this will limit usability, as not everyone will be able to install it easily.

    So my question is: what is the current best way of distributing a shared library with distutils that will be used by ctypes but otherwise is OS-native and not a Python extension module? Feel free to answer with one of the hacks linked to above if you can expand on it and justify why that is the best way. If there is nothing better, at least all the information will be in one place.
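
    For context, a minimal sketch of one common workaround (my reading of the hacks mentioned above, offered as an assumption; the project and file names are invented): declare the C sources as an Extension anyway, let distutils compile and install the resulting shared object inside the package, and load it with ctypes at runtime rather than importing it.

        # setup.py
        from distutils.core import setup, Extension

        setup(
            name="myproject",
            version="0.1",
            packages=["myproject"],
            ext_modules=[Extension("myproject._native", sources=["src/native.c"])],
        )

        # myproject/__init__.py
        import ctypes, glob, os.path

        # the built file sits next to the package; its exact suffix varies by platform
        _candidates = glob.glob(os.path.join(os.path.dirname(__file__), "_native*"))
        _lib = ctypes.CDLL(_candidates[0])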

    Read the article

  • Why MSMQ won't send a space character?

    - by cyclotis04
    I'm exploring MSMQ services, and I wrote a simple console client-server application that sends each of the client's keystrokes to the server. Whenever hit a control character (DEL, ESC, INS, etc) the server understandably throws an error. However, whenever I type a space character, the server receives the packet but doesn't throw an error and doesn't display the space. Server: namespace QIM { class Program { const string QUEUE = @".\Private$\qim"; static MessageQueue _mq; static readonly object _mqLock = new object(); static XmlSerializer xs; static void Main(string[] args) { lock (_mqLock) { if (!MessageQueue.Exists(QUEUE)) _mq = MessageQueue.Create(QUEUE); else _mq = new MessageQueue(QUEUE); } xs = new XmlSerializer(typeof(string)); _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive); while (Console.ReadKey().Key != ConsoleKey.Escape) { } } static void OnReceive(IAsyncResult result) { Message msg; lock (_mqLock) { try { msg = _mq.EndReceive(result); Console.Write("."); Console.Write(xs.Deserialize(msg.BodyStream)); } catch (Exception ex) { Console.Write(ex); } } _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive); } } } Client: namespace QIM_Client { class Program { const string QUEUE = @".\Private$\qim"; static MessageQueue _mq; static void Main(string[] args) { if (!MessageQueue.Exists(QUEUE)) _mq = MessageQueue.Create(QUEUE); else _mq = new MessageQueue(QUEUE); ConsoleKeyInfo key = new ConsoleKeyInfo(); while (key.Key != ConsoleKey.Escape) { key = Console.ReadKey(); _mq.Send(key.KeyChar.ToString()); } } } } Client Input: Testing, Testing... Server Output: .T.e.s.t.i.n.g.,..T.e.s.t.i.n.g...... You'll notice that the space character sends a message, but the character isn't displayed.

    Read the article

  • Help with jQuery Ajax calling ASMX which returns string.

    - by Jason Evans
    Hi there. I have the following jQuery ajax call set up:

        function Testing() {
            var result = '';
            $.ajax({
                url: 'BookingUtils.asmx/GetNonBookableSlots',
                dataType: 'text',
                error: function(error) {
                    alert('GetNonBookableSlots Error');
                },
                success: function(data) {
                    alert('GetNonBookableSlots');
                    result = data;
                }
            });
            return result;
        }

    Here is the web service I'm trying to call:

        [WebMethod]
        public string GetNonBookableSlots()
        {
            return "fhsdfuhsiufhsd";
        }

    When I run the jQuery code, there is no error or success event fired (none of the alerts are called). In fact, nothing happens at all; the JavaScript code just moves on to the return statement at the end. I put a breakpoint in the web service code and it does get hit, but when I leave that method I end up on the return statement anyway.

    Can someone give me some tips on how I should configure the ajax call correctly, as I feel that I'm doing this wrong? The web service just needs to return a string, no XML or JSON involved. Cheers. Jas.
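
    One structural point worth flagging (my observation, not from the thread): $.ajax is asynchronous, so return result; executes before either callback fires and Testing() always returns ''. A hedged rewrite that passes the value to a callback instead; it assumes the ASMX class is marked with [System.Web.Script.Services.ScriptService], which is what makes the JSON-style call (and the .d wrapper) work:

        function Testing(onDone) {
            $.ajax({
                url: 'BookingUtils.asmx/GetNonBookableSlots',
                type: 'POST',
                contentType: 'application/json; charset=utf-8',
                dataType: 'json',
                data: '{}',
                success: function (data) { onDone(data.d); },  // ASP.NET wraps the result in .d
                error: function () { alert('GetNonBookableSlots Error'); }
            });
        }

        // usage
        Testing(function (slots) { alert(slots); });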

    Read the article

  • Stale connection with Pheanstalk

    - by token47
    I'm using beanstalkd to offload some work to other machines. The setup is a bit unusual: the server is on the internet (public IP) but the consumers are behind ADSL lines in some people's homes. So there is a Linux server acting as a client, going out through a dynamic IP and connecting to the server to get a job. It's all PHP and I'm using the pheanstalk library.

    Everything runs smoothly for some time, but then the ADSL changes the IP (every 24 hours the provider forces a disconnect-reconnect) and the client just hangs, never coming out of "reserve". I thought that putting a timeout on the reserve would help, but it didn't. As it seems, the client issues a command and blocks; it never checks the timeout. It just issues a reserve-with-timeout (instead of a simple reserve), and it is the server's responsibility to return a TIME_OUT as the timeout occurs. The problem is, the connection is broken (but TCP/IP doesn't know about that yet, until either side tries to talk to the other), and if the client blocked on the read, it will never return.

    The library seems to have support for some kind of timeouts locally (for example when trying to connect to the server), but it does not seem to contemplate this scenario. How could I detect the stale connection and force a reconnect? Is there some kind of keepalive in the protocol (and in pheanstalk itself)? Thanks!

    Read the article

  • JqueryUI dialog box causes button to lose styling

    - by superexsl
    Hey everyone, I'm using jQuery UI dialog boxes and, while it works, it seems to remove any CSS styling on the button that opens the dialog. I have to manually refresh the page to get the styling back. An example button which launches the dialog (button_sub is my CSS theme class and launch_popup is used to detect the button click in jQuery):

        <asp:Button ID="btnLaunch" CssClass="button_sub launch_popup" runat="server"
            Text="Dialog..." CausesValidation="false" OnClientClick="return false;" />

    My jQuery:

        $('.launch_popup').click(function() {
            $("#dialog-form").dialog("open");
        });

    My dialog-form:

        $("#dialog-form").dialog({
            autoOpen: false,
            height: 300,
            width: 300,
            modal: true,
            buttons: {
                'Close': function() {
                    $(this).dialog('close');
                }
            }
        });

    This happens to all the buttons that open dialogs. Is there a way to 're-style' the button without refreshing the page? (It's in an UpdatePanel if that makes a difference, although this bit's client-side so not sure if that should affect it.) Thanks for any help

    Read the article

  • How to set the VirtualDocumentRoot based on the files within

    - by Chuck Vose
    I'm trying to set up Apache to use the VirtualDocumentRoot directive, but my sites aren't all exactly the same. Most of the sites have a drupal folder which should be the root, but there are a few really old drupal sites, a few Rails sites, some Django sites, etc. that want the document root to be / or some other folder.

    Is there a way to set up VirtualDocumentRoot based on a conditional, or is there a way to use RewriteRule/RewriteCond to detect that / is the incorrect folder if there is a drupal folder or a public folder? Here's what I have so far:

        <VirtualHost *:80>
            # Wildcard ServerAlias, this is the default vhost if no specific vhost matches first.
            ServerAlias *.unicorn.devserver.com

            # Automatic ServerName, based on the HTTP_HOST header.
            UseCanonicalName Off

            # Automatic DocumentRoot. This uses the 4th level domain name as the document root,
            # for example http://bar.foo.baz.com/ would respond with /Users/vosechu/Sites/bar/drupal.
            VirtualDocumentRoot /Users/vosechu/Sites/%-4/drupal
        </VirtualHost>

    Thanks in advance! -Chuck

    Read the article
