Search Results

Search found 6342 results on 254 pages for 'behavior'.


  • Asp.Net MVC and ajax async callback execution order

    - by lrb
    I have been sorting through this issue all day and hope someone can help pinpoint my problem. I have created an "asynchronous progress callback" type of functionality in my app using ajax. When I strip the functionality out into a test application I get the desired results. [Image: desired functionality] When I tie the functionality into my single-page application using the same code, I get a sort of blocking issue where all requests are responded to only after the last task has completed. In the test app above, all requests are responded to in order. The server reports a "pending" state for all requests until the controller method has completed. Can anyone give me a hint as to what could cause the change in behavior? [Images: not desired vs. desired]

    Desired Fiddler request/response:

        GET http://localhost:12028/task/status?_=1383333945335 HTTP/1.1
        X-ProgressBar-TaskId: 892183768
        Accept: */*
        X-Requested-With: XMLHttpRequest
        Referer: http://localhost:12028/
        Accept-Language: en-US
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
        Connection: Keep-Alive
        DNT: 1
        Host: localhost:12028

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Vary: Accept-Encoding
        Server: Microsoft-IIS/8.0
        X-AspNetMvc-Version: 3.0
        X-AspNet-Version: 4.0.30319
        X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcVEVNUFxQcm9ncmVzc0Jhclx0YXNrXHN0YXR1cw==?=
        X-Powered-By: ASP.NET
        Date: Fri, 01 Nov 2013 21:39:08 GMT
        Content-Length: 25

        Iteration completed...

    Not desired Fiddler request/response:

        GET http://localhost:60171/_Test/status?_=1383341766884 HTTP/1.1
        X-ProgressBar-TaskId: 838217998
        Accept: */*
        X-Requested-With: XMLHttpRequest
        Referer: http://localhost:60171/Report/Index
        Accept-Language: en-US
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
        Connection: Keep-Alive
        DNT: 1
        Host: localhost:60171
        Pragma: no-cache
        Cookie: ASP.NET_SessionId=rjli2jb0wyjrgxjqjsicdhdi; AspxAutoDetectCookieSupport=1; TTREPORTS_1_0=CC2A501EF499F9F...; __RequestVerificationToken=6klOoK6lSXR51zCVaDNhuaF6Blual0l8_JH1QTW9W6L-3LroNbyi6WvN6qiqv-PjqpCy7oEmNnAd9s0UONASmBQhUu8aechFYq7EXKzu7WSybObivq46djrE1lvkm6hNXgeLNLYmV0ORmGJeLWDyvA2

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Vary: Accept-Encoding
        Server: Microsoft-IIS/8.0
        X-AspNetMvc-Version: 4.0
        X-AspNet-Version: 4.0.30319
        X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcSUxlYXJuLlJlcG9ydHMuV2ViXHRydW5rXElMZWFybi5SZXBvcnRzLldlYlxfVGVzdFxzdGF0dXM=?=
        X-Powered-By: ASP.NET
        Date: Fri, 01 Nov 2013 21:37:48 GMT
        Content-Length: 25

        Iteration completed...

    The only difference in the two request headers, besides the auth tokens, is "Pragma: no-cache" in the request and the ASP.NET MVC version in the response.
    Thanks. Update - Code posted (I probably need to indicate this code originated from an article by Dino Esposito):

        var ilProgressWorker = function () {
            var that = {};
            that._xhr = null;
            that._taskId = 0;
            that._timerId = 0;
            that._progressUrl = "";
            that._abortUrl = "";
            that._interval = 500;
            that._userDefinedProgressCallback = null;
            that._taskCompletedCallback = null;
            that._taskAbortedCallback = null;

            that.createTaskId = function () {
                var _minNumber = 100, _maxNumber = 1000000000;
                return _minNumber + Math.floor(Math.random() * _maxNumber);
            };

            // Set progress callback
            that.callback = function (userCallback, completedCallback, abortedCallback) {
                that._userDefinedProgressCallback = userCallback;
                that._taskCompletedCallback = completedCallback;
                that._taskAbortedCallback = abortedCallback;
                return this;
            };

            // Set frequency of refresh
            that.setInterval = function (interval) {
                that._interval = interval;
                return this;
            };

            // Abort the operation
            that.abort = function () {
                // if (_xhr !== null)
                //     _xhr.abort();
                if (that._abortUrl != null && that._abortUrl != "") {
                    $.ajax({
                        url: that._abortUrl,
                        cache: false,
                        headers: { 'X-ProgressBar-TaskId': that._taskId }
                    });
                }
            };

            // INTERNAL FUNCTION
            that._internalProgressCallback = function () {
                that._timerId = window.setTimeout(that._internalProgressCallback, that._interval);
                $.ajax({
                    url: that._progressUrl,
                    cache: false,
                    headers: { 'X-ProgressBar-TaskId': that._taskId },
                    success: function (status) {
                        if (that._userDefinedProgressCallback != null)
                            that._userDefinedProgressCallback(status);
                    },
                    complete: function (data) { var i = 0; },
                });
            };

            // Invoke the URL and monitor its progress
            that.start = function (url, progressUrl, abortUrl) {
                that._taskId = that.createTaskId();
                that._progressUrl = progressUrl;
                that._abortUrl = abortUrl;

                // Place the Ajax call
                _xhr = $.ajax({
                    url: url,
                    cache: false,
                    headers: { 'X-ProgressBar-TaskId': that._taskId },
                    complete: function () {
                        if (_xhr.status != 0)
                            return;
                        if (that._taskAbortedCallback != null)
                            that._taskAbortedCallback();
                        that.end();
                    },
                    success: function (data) {
                        if (that._taskCompletedCallback != null)
                            that._taskCompletedCallback(data);
                        that.end();
                    }
                });

                // Start the progress callback (if any)
                if (that._userDefinedProgressCallback == null || that._progressUrl === "")
                    return this;
                that._timerId = window.setTimeout(that._internalProgressCallback, that._interval);
            };

            // Finalize the task
            that.end = function () {
                that._taskId = 0;
                window.clearTimeout(that._timerId);
            }

            return that;
        };
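    One difference between the traces that stands out beyond the headers noted above: the failing request carries an ASP.NET_SessionId cookie. ASP.NET serializes concurrent requests that share read/write session state, which would produce exactly this pattern of every poll completing only after the long-running action finishes. A hedged sketch (controller and action names are placeholders, not your code) of opting the status endpoint out of the session lock:

        using System.Web.Mvc;
        using System.Web.SessionState;

        // Read-only session access lets the polling requests run in parallel
        // with the long-running task instead of queuing behind its session lock.
        [SessionState(SessionStateBehavior.ReadOnly)]
        public class TaskStatusController : Controller
        {
            [HttpGet]
            public ActionResult Status()
            {
                // Look up progress for Request.Headers["X-ProgressBar-TaskId"] here.
                return Content("Iteration completed...");
            }
        }

    If the long-running action itself writes to Session, it must keep full access; only the status endpoint needs the relaxed behavior.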


  • understand SimpleTimeZone and DST Test

    - by Cygnusx1
    I Have an issue with the use of SimpleTimeZone class in Java. First, the JavaDoc is nice but not quite easy to understand in regards of the start and end Rules. But with the help of some example found on the web, i managed to get it right (i still don't understand why 8 represents the second week of a month in day_of_month!!! but whatever) Now i have written a simple Junit test to validate what i understand: package test; import static org.junit.Assert.assertEquals; import java.sql.Timestamp; import java.util.Calendar; import java.util.GregorianCalendar; import java.util.SimpleTimeZone; import org.apache.log4j.Logger; import org.junit.Test; public class SimpleTimeZoneTest { Logger log = Logger.getLogger(SimpleTimeZoneTest.class); @Test public void testTimeZoneWithDST() throws Exception { Calendar testDateEndOut = new GregorianCalendar(2012, Calendar.NOVEMBER, 4, 01, 59, 59); Calendar testDateEndIn = new GregorianCalendar(2012, Calendar.NOVEMBER, 4, 02, 00, 00); Calendar testDateStartOut = new GregorianCalendar(2012, Calendar.MARCH, 11, 01, 59, 59); Calendar testDateStartIn = new GregorianCalendar(2012, Calendar.MARCH, 11, 02, 00, 00); SimpleTimeZone est = new SimpleTimeZone(-5 * 60 * 60 * 1000, "EST"); est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000); est.setEndRule(Calendar.NOVEMBER, 1, Calendar.SUNDAY, 2 * 60 * 60 * 1000); Calendar theCal = new GregorianCalendar(est); theCal.setTimeInMillis(testDateEndOut.getTimeInMillis()); log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName()); log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime()); log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime())); log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis())); log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings()); assertEquals("End date Should be In DST", true, theCal.getTimeZone().inDaylightTime(theCal.getTime())); theCal.setTimeInMillis(testDateEndIn.getTimeInMillis()); log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName()); log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime()); log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime())); log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis())); log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings()); assertEquals("End date Should be Out DST", false, theCal.getTimeZone().inDaylightTime(theCal.getTime())); theCal.setTimeInMillis(testDateStartIn.getTimeInMillis()); log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName()); log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime()); log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime())); log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis())); log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings()); assertEquals("Start date Should be in DST", true, theCal.getTimeZone().inDaylightTime(theCal.getTime())); theCal.setTimeInMillis(testDateStartOut.getTimeInMillis()); log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName()); log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime()); log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime())); log.info("offset = " + 
theCal.getTimeZone().getOffset(theCal.getTimeInMillis())); log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings()); assertEquals("Start date Should be Out DST", false, theCal.getTimeZone().inDaylightTime(theCal.getTime())); } }

    OK, I want to test the date limits to see whether inDaylightTime returns the right thing. My rules are: DST starts the second Sunday of March at 2 AM; DST ends the first Sunday of November at 2 AM. In 2012 (now) this gives us March 11 at 2 AM and November 4 at 2 AM, and you can see my test dates are set accordingly. Here is the output of my test run:

        2012-11-01 18:22:44,344 INFO [test.SimpleTimeZoneTest] - < Cal date = 2012-11-04 01:59:59.0 : Eastern Standard Time>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - < Cal use DST = true>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - < Cal In DST = false>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - <offset = -18000000>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - <DTS offset= 3600000>

    My first assert just fails, telling me that 2012-11-04 01:59:59 is not in DST. If I put 2012-11-04 00:59:59, the test passes. This one-hour gap puzzles me; can anyone explain this behavior? Also, could anyone elaborate on est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000)? Why does 8 mean the second week of March, and what does -SUNDAY do? I can't map this onto a real calendar example. Thanks
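    Two observations that may explain the results above. First, the rule encoding: in the four-argument setStartRule/setEndRule, a positive dayOfMonth combined with a negative dayOfWeek means "the first dayOfWeek on or after dayOfMonth". The earliest a second Sunday can fall is the 8th, so (8, -Calendar.SUNDAY) encodes "second Sunday of the month". Second, the failing assert: 01:59:59 on November 4 occurs twice on the wall clock, and when GregorianCalendar resolves that ambiguous wall time it takes the standard-time reading, which is already outside DST -- consistent with the -18000000 offset in your log, and with 00:59:59 (unambiguous) passing. A sketch that sidesteps the ambiguity by expressing the boundary instants in UTC (the end transition is 06:00 UTC, i.e. 2:00 AM EDT):

        import java.util.Calendar;
        import java.util.GregorianCalendar;
        import java.util.SimpleTimeZone;

        public class DstBoundaryTest {
            public static void main(String[] args) {
                SimpleTimeZone est = new SimpleTimeZone(-5 * 60 * 60 * 1000, "EST");
                est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000);
                est.setEndRule(Calendar.NOVEMBER, 1, Calendar.SUNDAY, 2 * 60 * 60 * 1000);

                // Build boundary instants in UTC so no wall-clock ambiguity applies.
                Calendar utc = new GregorianCalendar(new SimpleTimeZone(0, "UTC"));
                utc.clear();
                utc.set(2012, Calendar.NOVEMBER, 4, 5, 59, 59);        // 01:59:59 EDT
                System.out.println(est.inDaylightTime(utc.getTime())); // expected: true
                utc.set(2012, Calendar.NOVEMBER, 4, 6, 0, 0);          // 01:00:00 EST
                System.out.println(est.inDaylightTime(utc.getTime())); // expected: false
            }
        }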


  • [ebp + 6] instead of +8 in a JIT compiler

    - by David Titarenco
    I'm implementing a simplistic JIT compiler in a VM I'm writing for fun (mostly to learn more about language design) and I'm getting some weird behavior, maybe someone can tell me why. First I define a JIT "prototype" both for C and C++: #ifdef __cplusplus typedef void* (*_JIT_METHOD) (...); #else typedef (*_JIT_METHOD) (); #endif I have a compile() function that will compile stuff into ASM and stick it somewhere in memory: void* compile (void* something) { // grab some memory unsigned char* buffer = (unsigned char*) malloc (1024); // xor eax, eax // inc eax // inc eax // inc eax // ret -> eax should be 3 /* WORKS! buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0; buffer[3] = 0x67; buffer[4] = 0x40; buffer[5] = 0x67; buffer[6] = 0x40; buffer[7] = 0x67; buffer[8] = 0x40; buffer[9] = 0xC3; */ // xor eax, eax // mov eax, 9 // ret 4 -> eax should be 9 /* WORKS! buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0; buffer[3] = 0x67; buffer[4] = 0xB8; buffer[5] = 0x09; buffer[6] = 0x00; buffer[7] = 0x00; buffer[8] = 0x00; buffer[9] = 0xC3; */ // push ebp // mov ebp, esp // mov eax, [ebp + 6] ; wtf? shouldn't this be [ebp + 8]!? // mov esp, ebp // pop ebp // ret -> eax should be the first value sent to the function /* WORKS! */ buffer[0] = 0x66; buffer[1] = 0x55; buffer[2] = 0x66; buffer[3] = 0x89; buffer[4] = 0xE5; buffer[5] = 0x66; buffer[6] = 0x66; buffer[7] = 0x8B; buffer[8] = 0x45; buffer[9] = 0x06; buffer[10] = 0x66; buffer[11] = 0x89; buffer[12] = 0xEC; buffer[13] = 0x66; buffer[14] = 0x5D; buffer[15] = 0xC3; // mov eax, 5 // add eax, ecx // ret -> eax should be 50 /* WORKS! buffer[0] = 0x67; buffer[1] = 0xB8; buffer[2] = 0x05; buffer[3] = 0x00; buffer[4] = 0x00; buffer[5] = 0x00; buffer[6] = 0x66; buffer[7] = 0x01; buffer[8] = 0xC8; buffer[9] = 0xC3; */ return buffer; } And finally I have the main chunk of the program: void main (int argc, char **args) { DWORD oldProtect = (DWORD) NULL; int i = 667, j = 1, k = 5, l = 0; // generate some arbitrary function _JIT_METHOD someFunc = (_JIT_METHOD) compile(NULL); // windows only #if defined _WIN64 || defined _WIN32 // set memory permissions and flush CPU code cache VirtualProtect(someFunc,1024,PAGE_EXECUTE_READWRITE, &oldProtect); FlushInstructionCache(GetCurrentProcess(), someFunc, 1024); #endif // this asm just for some debugging/testing purposes __asm mov ecx, i // run compiled function (from wherever *someFunc is pointing to) l = (int)someFunc(i, k); // did it work? printf("result: %d", l); free (someFunc); _getch(); } As you can see, the compile() function has a couple of tests I ran to make sure I get expected results, and pretty much everything works but I have a question... On most tutorials or documentation resources, to get the first value of a function passed (in the case of ints) you do [ebp+8], the second [ebp+12] and so forth. For some reason, I have to do [ebp+6] then [ebp+10] and so forth. Could anyone tell me why?
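    The 0x66 bytes in the working stub explain the offset. 0x66 is the operand-size override prefix, so in 32-bit mode 0x66 0x55 encodes push bp, not push ebp: only two bytes of the frame pointer get pushed, and everything above it on the stack sits two bytes closer than usual -- the return address lands at [ebp+2] instead of [ebp+4], and the first argument moves from [ebp+8] to [ebp+6]. A sketch of the same stub with no prefixes, using plain 32-bit operations so the conventional offsets hold:

        /* push ebp; mov ebp, esp; mov eax, [ebp+8]; mov esp, ebp; pop ebp; ret */
        buffer[0] = 0x55;                                     /* push ebp         */
        buffer[1] = 0x89; buffer[2] = 0xE5;                   /* mov ebp, esp     */
        buffer[3] = 0x8B; buffer[4] = 0x45; buffer[5] = 0x08; /* mov eax, [ebp+8] */
        buffer[6] = 0x89; buffer[7] = 0xEC;                   /* mov esp, ebp     */
        buffer[8] = 0x5D;                                     /* pop ebp          */
        buffer[9] = 0xC3;                                     /* ret              */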


  • JavaFX FXML communication between Application and Controller classes

    - by likethesky
    I am trying to get and destroy an external process I've created via ProcessBuilder in my FXML application close, but it's not working. This is based on the helpful advice Sergey Grinev gave me here. I have tried running with/without the "// myController.setApp(this);" and with "// super.stop();" at top of subclass and at bottom (see commented out/in for that line in MyApp), but no combination works. This probably isn't related to FXML or JavaFX, though I imagine this is a common pattern for developing apps on JavaFX. I suppose I'm asking for a Java best practice for closing dependent processes in a UI-based app like this one (in this case: FXML / JavaFX based), where there is a controller class and an application class. Can you explain what I'm doing wrong? Or better: advise what I should be doing instead? Thanks. In my Application I do this: public class MyApp extends Application { @Override public void start(Stage primaryStage) throws Exception { FXMLLoader fxmlLoader = new FXMLLoader(); Scene scene = (Scene)FXMLLoader.load(getClass().getResource("MyApp.fxml")); MyAppController myController = (MyAppController)fxmlLoader.getController(); primaryStage.setScene(scene); primaryStage.show(); // myController.setApp(this); } @Override public void stop() throws Exception { // super.stop(); // this is called on fx app close, you may call it in an action handler too if (MyAppController.getScriptProcess() != null) { MyAppController.getScriptProcess().destroy(); } super.stop(); } public static void main(String[] args) { launch(args); } } In my Controller I do this: public class MyAppController implements Initializable { private Application app; private static Process scriptProcess; public void setApp(Application a) { app = a; } public static Process getScriptProcess() { return scriptProcess; } } The result when I run with the "commented-out setApp()" not commented out (that is, left in the start method), is the following, immediately upon launch (the main Scene flashes, then disappears, then this dialog appears: "JavaFX Launcher Error: Exception while running Application" And it gives an, "Exception in Application start method" in the console as well. 
The result when I leave out the "commented-out code" in my MyApp above (that is, remove the "setApp()" from the start method), is that my app does indeed close, but gives this error when it closes: Exception in thread "JavaFX Application Thread" java.lang.RuntimeException: java.lang.reflect.InvocationTargetException at javafx.fxml.FXMLLoader$ControllerMethodEventHandler.handle(FXMLLoader.java:1440) at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:69) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:217) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:170) at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:38) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:37) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:53) at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:28) at javafx.event.Event.fireEvent(Event.java:171) at javafx.scene.Node.fireEvent(Node.java:6863) at javafx.scene.control.Button.fire(Button.java:179) at com.sun.javafx.scene.control.behavior.ButtonBehavior.mouseReleased(ButtonBehavior.java:193) at com.sun.javafx.scene.control.skin.SkinBase$4.handle(SkinBase.java:336) at com.sun.javafx.scene.control.skin.SkinBase$4.handle(SkinBase.java:329) at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:64) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:217) at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:170) at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:38) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:37) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:35) at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:92) at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:53) at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:33) at javafx.event.Event.fireEvent(Event.java:171) at javafx.scene.Scene$MouseHandler.process(Scene.java:3324) at javafx.scene.Scene$MouseHandler.process(Scene.java:3164) at javafx.scene.Scene$MouseHandler.access$1900(Scene.java:3119) at javafx.scene.Scene.impl_processMouseEvent(Scene.java:1559) at javafx.scene.Scene$ScenePeerListener.mouseEvent(Scene.java:2261) at 
com.sun.javafx.tk.quantum.GlassViewEventHandler.handleMouseEvent(GlassViewEventHandler.java:228) at com.sun.glass.ui.View.handleMouseEvent(View.java:528) at com.sun.glass.ui.View.notifyMouse(View.java:922) at com.sun.glass.ui.gtk.GtkApplication._runLoop(Native Method) at com.sun.glass.ui.gtk.GtkApplication$3$1.run(GtkApplication.java:82) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at javafx.fxml.FXMLLoader$ControllerMethodEventHandler.handle(FXMLLoader.java:1435) ... 44 more Caused by: java.lang.NullPointerException at mypackage.MyController.handleCancel(MyController.java:300) ... 49 more Clean up...
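    Looking at the start method: FXMLLoader.load(URL) is a static call, so the fxmlLoader instance created on the line above never loads anything, and its getController() returns null. That makes myController.setApp(this) throw, which matches the "Exception in Application start method" seen when that line is uncommented. A sketch using the instance API (assuming the FXML root element is a Parent rather than a Scene):

        FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("MyApp.fxml"));
        Parent root = fxmlLoader.load();                           // instance load, not static
        MyAppController myController = fxmlLoader.getController(); // now non-null
        myController.setApp(this);
        primaryStage.setScene(new Scene(root));
        primaryStage.show();

    The NullPointerException in handleCancel on close is worth checking the same way: any controller field that is only populated when setApp succeeds will be null on that code path.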


  • Users being forced to re-login randomly, before session and auth ticket timeout values are reached

    - by Don
    I'm having reports and complaints from my user that they will be using a screen and get kicked back to the login screen immediately on their next request. It doesn't happen all the time but randomly. After looking at the Web server the error that shows up in the application event log is: Event code: 4005 Event message: Forms authentication failed for the request. Reason: The ticket supplied has expired. Everything that I read starts out with people asking about web gardens or load balancing. We are not using either of those. We're a single Windows 2003 (32-bit OS, 64-bit hardware) Server with IIS6. This is the only website on this server too. This behavior does not generate any application exceptions or visible issues to the user. They just get booted back to the login screen and are forced to login. As you can imagine this is extremely annoying and counter-productive for our users. Here's what I have set in my web.config for the application in the root: <authentication mode="Forms"> <forms name=".TcaNet" protection="All" timeout="40" loginUrl="~/Login.aspx" defaultUrl="~/MyHome.aspx" path="/" slidingExpiration="true" requireSSL="false" /> </authentication> I have also read that if you have some locations setup that no longer exist or are bogus you could have issues. My path attributes are all valid directories so that shouldn't be the problem: <location path="js"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="images"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="anon"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="App_Themes"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="NonSSL"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> The only thing I'm not clear on is if my timeout value in the forms property for the auth ticket has to be the same as my session timeout value (defined in the app's configuration in IIS). I've read some things that say you should have the authentication timeout shorter (40) than the session timeout (45) to avoid possible complications. Either way we have users that get kicked to the login screen a minute or two after their last action. So the session definitely should not be expiring. Update 2/23/09: I've since set the session timeout and authentication ticket timeout values to both be 45 and the problem still seems to be happening. The only other web.config in the application is in 1 virtual directory that hosts Community Server. That web.config's authentication settings are as follows: <authentication mode="Forms"> <forms name=".TcaNet" protection="All" timeout="40" loginUrl="~/Login.aspx" defaultUrl="~/MyHome.aspx" path="/" slidingExpiration="true" requireSSL="true" /> </authentication> And while I don't believe it applies unless you're in a web garden, I have both of the machine key values set in both web.config files to be the same (removed for convenience): <machineKey validationKey="<MYVALIDATIONKEYHERE>" decryptionKey="<MYDECRYPTIONKEYHERE>" validation="SHA1" /> <machineKey validationKey="<MYVALIDATIONKEYHERE>" decryptionKey="<MYDECRYPTIONKEYHERE>" validation="SHA1"/> Any help with this would be greatly appreciated. This seems to be one of those problems that yields a ton of Google results, none of which seem to be fitting into my situation so far.
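    Before tuning timeouts further, it may help to log what the rejected tickets actually contain, to see whether they are genuinely old or freshly issued with a bad expiry. A diagnostic sketch for Global.asax.cs (an assumption, not your existing code; Trace is a stand-in for real logging):

        using System;
        using System.Diagnostics;
        using System.Web;
        using System.Web.Security;

        protected void Application_AuthenticateRequest(object sender, EventArgs e)
        {
            HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
            if (cookie == null) return;
            try
            {
                FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
                Trace.WriteLine(string.Format("user={0} issued={1} expires={2} expired={3}",
                    ticket.Name, ticket.IssueDate, ticket.Expiration, ticket.Expired));
            }
            catch (Exception ex)
            {
                Trace.WriteLine("Ticket decrypt failed: " + ex.Message);
            }
        }

    If tickets decrypt fine but show stale issue dates, the server clock or the sliding-expiration renewal is suspect; if decryption fails intermittently, the machineKey path deserves another look despite being pinned.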


  • Cascading updates with business key equality: Hibernate best practices?

    - by Traphicone
    I'm new to Hibernate, and while there are literally tons of examples to look at, there seems to be so much flexibility here that it's sometimes very hard to narrow all the options down the best way of doing things. I've been working on a project for a little while now, and despite reading through a lot of books, articles, and forums, I'm still left with a bit of a head scratcher. Any veteran advice would be very appreciated. So, I have a model involving two classes with a one-to-many relationship from parent to child. Each class has a surrogate primary key and a uniquely constrained composite business key. <class name="Container"> <id name="id" type="java.lang.Long"> <generator class="identity"/> </id> <properties name="containerBusinessKey" unique="true" update="false"> <property name="name" not-null="true"/> <property name="owner" not-null="true"/> </properties> <set name="items" inverse="true" cascade="all-delete-orphan"> <key column="container" not-null="true"/> <one-to-many class="Item"/> </set> </class> <class name="Item"> <id name="id" type="java.lang.Long"> <generator class="identity"/> </id> <properties name="itemBusinessKey" unique="true" update="false"> <property name="type" not-null="true"/> <property name="color" not-null="true"/> </properties> <many-to-one name="container" not-null="true" update="false" class="Container"/> </class> The beans behind these mappings are as boring as you can possibly imagine--nothing fancy going on. With that in mind, consider the following code: Container c = new Container("Things", "Me"); c.addItem(new Item("String", "Blue")); c.addItem(new Item("Wax", "Red")); Transaction t = session.beginTransaction(); session.saveOrUpdate(c); t.commit(); Everything works fine the first time, and both the Container and its Items are persisted. If the above code block is executed again, however, Hibernate throws a ConstraintViolationException--duplicate values for the "name" and "owner" columns. Because the new Container instance has a null identifier, Hibernate assumes it is an unsaved transient instance. This is expected but not desired. Since the persistent and transient Container objects have the same business key values, what we really want is to issue an update. It is easy enough to convince Hibernate that our new Container instance is the same as our old one. With a quick query we can get the identifier of the Container we'd like to update, and set our transient object's identifier to match. Container c = new Container("Things", "Me"); c.addItem(new Item("String", "Blue")); c.addItem(new Item("Wax", "Red")); Query query = session.createSQLQuery("SELECT id FROM Container" + "WHERE name = ? AND owner = ?"); query.setString(0, c.getName()); query.setString(1, c.getOwner()); BigInteger id = (BigInteger)query.uniqueResult(); if (id != null) { c.setId(id.longValue()); } Transaction t = session.beginTransaction(); session.saveOrUpdate(c); t.commit(); This almost satisfies Hibernate, but because the one-to-many relationship from Container to Item cascades, the same ConstraintViolationException is also thrown for the child Item objects. My question is: what is the best practice in this situation? It is highly recommended to use surrogate primary keys, and it is also recommended to use business key equality. When you put these two recommendations in to practice together, however, two of the greatest conveniences of Hibernate--saveOrUpdate and cascading operations--seem to be rendered almost completely useless. 
As I see it, I have only two options: Manually fetch and set the identifier for each object in the mapping. This clearly works, but for even a moderately sized schema this is a lot of extra work which it seems Hibernate could easily be doing. Write a custom interceptor to fetch and set object identifiers on each operation. This looks cleaner than the first option but is rather heavy-handed, and it seems wrong to me that you should be expected to write a plug-in which overrides Hibernate's default behavior for a mapping which follows the recommended design. Is there a better way? Am I making completely the wrong assumptions? I'm hoping that I'm just missing something. Thanks.
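    A third option that avoids spreading id-copying through calling code: centralize the business-key lookup in one helper and let merge() do the attach. A sketch (query and property names follow the mapping above; whether the cascade then updates or re-inserts child rows depends on flush ordering, so treat this as a starting point rather than a drop-in fix):

        public Container saveByBusinessKey(Session session, Container detached) {
            Container existing = (Container) session
                .createQuery("from Container c where c.name = :name and c.owner = :owner")
                .setParameter("name", detached.getName())
                .setParameter("owner", detached.getOwner())
                .uniqueResult();
            if (existing != null) {
                detached.setId(existing.getId());  // adopt the surrogate key
            }
            // merge() copies state onto the managed instance and cascades to items
            return (Container) session.merge(detached);
        }

    The child Items have the same problem, so a fully general version would resolve each mapped business key reflectively -- which is essentially what your interceptor option formalizes.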


  • Neural Network Always Produces Same/Similar Outputs for Any Input

    - by l33tnerd
    I have a problem where I am trying to create a neural network for Tic-Tac-Toe. However, for some reason, training the neural network causes it to produce nearly the same output for any given input. I did take a look at Artificial neural networks benchmark, but my network implementation is built for neurons with the same activation function for each neuron, i.e. no constant neurons. To make sure the problem wasn't just due to my choice of training set (1218 board states and moves generated by a genetic algorithm), I tried to train the network to reproduce XOR. The logistic activation function was used. Instead of using the derivative, I multiplied the error by output*(1-output) as some sources suggested that this was equivalent to using the derivative. I can put the Haskell source on HPaste, but it's a little embarrassing to look at. The network has 3 layers: the first layer has 2 inputs and 4 outputs, the second has 4 inputs and 1 output, and the third has 1 output. Increasing to 4 neurons in the second layer didn't help, and neither did increasing to 8 outputs in the first layer. I then calculated errors, network output, bias updates, and the weight updates by hand based on http://hebb.mit.edu/courses/9.641/2002/lectures/lecture04.pdf to make sure there wasn't an error in those parts of the code (there wasn't, but I will probably do it again just to make sure). Because I am using batch training, I did not multiply by x in equation (4) there. I am adding the weight change, though http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-2.html suggests to subtract it instead. The problem persisted, even in this simplified network. For example, these are the results after 500 epochs of batch training and of incremental training. Input |Target|Output (Batch) |Output(Incremental) [1.0,1.0]|[0.0] |[0.5003781562785173]|[0.5009731800870864] [1.0,0.0]|[1.0] |[0.5003740346965251]|[0.5006347214672715] [0.0,1.0]|[1.0] |[0.5003734471544522]|[0.500589332376345] [0.0,0.0]|[0.0] |[0.5003674110937019]|[0.500095157458231] Subtracting instead of adding produces the same problem, except everything is 0.99 something instead of 0.50 something. 5000 epochs produces the same result, except the batch-trained network returns exactly 0.5 for each case. (Heck, even 10,000 epochs didn't work for batch training.) Is there anything in general that could produce this behavior? Also, I looked at the intermediate errors for incremental training, and the although the inputs of the hidden/input layers varied, the error for the output neuron was always +/-0.12. For batch training, the errors were increasing, but extremely slowly and the errors were all extremely small (x10^-7). Different initial random weights and biases made no difference, either. Note that this is a school project, so hints/guides would be more helpful. Although reinventing the wheel and making my own network (in a language I don't know well!) was a horrible idea, I felt it would be more appropriate for a school project (so I know what's going on...in theory, at least. There doesn't seem to be a computer science teacher at my school). EDIT: Two layers, an input layer of 2 inputs to 8 outputs, and an output layer of 8 inputs to 1 output, produces much the same results: 0.5+/-0.2 (or so) for each training case. I'm also playing around with pyBrain, seeing if any network structure there will work. Edit 2: I am using a learning rate of 0.1. Sorry for forgetting about that. 
Edit 3: Pybrain's "trainUntilConvergence" doesn't get me a fully trained network, either, but 20000 epochs does, with 16 neurons in the hidden layer. 10000 epochs and 4 neurons, not so much, but close. So, in Haskell, with the input layer having 2 inputs & 2 outputs, hidden layer with 2 inputs and 8 outputs, and output layer with 8 inputs and 1 output...I get the same problem with 10000 epochs. And with 20000 epochs. Edit 4: I ran the network by hand again based on the MIT PDF above, and the values match, so the code should be correct unless I am misunderstanding those equations. Some of my source code is at http://hpaste.org/42453/neural_network__not_working; I'm working on cleaning my code somewhat and putting it in a Github (rather than a private Bitbucket) repository. All of the relevant source code is now at https://github.com/l33tnerd/hsann.
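    One point can be settled exactly: for the logistic activation, output*(1-output) is not an approximation of the derivative, it is the derivative. In the usual notation:

        \sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr)

    and the standard output-layer delta and weight update are

        \delta_{\mathrm{out}} = (t - y)\, y\,(1 - y), \qquad \Delta w = \eta\, \delta_{\mathrm{out}}\, x \quad \text{(added to } w\text{)}

    Subtracting the update is only correct if the error is instead defined as (y - t); mixing the two sign conventions turns gradient descent into ascent, which is one plausible reading of the 0.50-ish versus 0.99-ish plateaus reported above.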


  • Sorting/Paginating/Filtering Complex Multi-AR Object Tables in Rails

    - by Matt Rogish
    I have a complex table pulled from a multi-ActiveRecord object array. This listing is a combined display of all of a particular user's "favorite" items (songs, messages, blog postings, whatever). Each of these items is a full-fledged AR object. My goal is to present the user with a simplified search, sort, and pagination interface. The user need not know that the Song has a singer, and that the Message has an author -- to the end user both entries in the table will be displayed as "User". Thus, the search box will simply be a dropdown list asking them which to search on (User name, created at, etc.). Internally, I would need to convert that to the appropriate object search, combine the results, and display. I can, separately, do pagination (mislav will_paginate), sorting, and filtering, but together I'm having some problems combining them. For example, if I paginate the combined list of items, the pagination plugin handles it just fine. It is not efficient since the pagination is happening in the app vs. the DB, but let's assume the intended use-case would indicate the vast majority of the users will have less than 30 favorited items and all other behavior, server capabilities, etc. indicates this will not be a bottleneck. However, if I wish to sort the list I cannot sort it via the pagination plugin because it relies on the assumption that the result set is derived from a single SQL query, and also that the field name is consistent throughout. Thus, I must sort the merged array via ruby, e.g. @items.sort_by{ |i| i.whatever } But, since the items do not share common names, I must first interrogate the object and then call the correct sort by. For example, if the user wishes to sort by user name, if the sorted object is a message, I sort by author but if the object is a song, I sort by singer. This is all very gross and feels quite un-ruby-like. This same problem comes into play with the filter. If the user filters on the "parent item" (the message's thread, the song's album), I must translate that to the appropriate collection object method. Also gross. This is not the exact set-up but is close enough. Note that this is a legacy app so changing it is quite difficult, although not impossible. Also, yes there is some DRY that can be done, but don't focus on the style or elegance of the following code. Style/elegance of the SOLUTION is important, however! :D models: class User < ActiveRecord::Base ... has_and_belongs_to_many :favorite_messages, :class_name => "Message" has_and_belongs_to_many :favorite_songs, :class_name => "Song" has_many :authored_messages, :class_name => "Message" has_many :sung_songs, :class_name => "Song" end class Message < ActiveRecord::Base has_and_belongs_to_many :favorite_messages belongs_to :author, :class_name => "User" belongs_to :thread end class Song < ActiveRecord::Base has_and_belongs_to_many :favorite_songs belongs_to :singer, :class_name => "User" belongs_to :album end controller: def show u = User.find 123 @items = Array.new @items << u.favorite_messages @items << u.favorite_songs # etc. etc. @items.flatten! @items = @items.sort_by{ |i| i.created_at } @items = @items.paginate :page => params[:page], :per_page => 20 end def search # Assume user is searching for username like 'Bob' u = User.find 123 @items = Array.new @items << u.favorite_messages.find( :all, :conditions => "LOWER( author ) LIKE LOWER('%bob%')" ) @items << u.favorite_songs.find( :all, :conditions => "LOWER( singer ) LIKE ... " ) # etc. etc. @items.flatten! 
@items = @items.sort_by{ |i| determine appropriate sorting based on user selection } @items = @items.paginate :page => params[:page], :per_page => 20 end view: #index.html.erb ... <table> <tr> <th>Title (sort ASC/DESC links)</th> <th>Created By (sort ASC/DESC links))</th> <th>Collection Title (sort ASC/DESC links)</th> <th>Created At (sort ASC/DESC links)</th> </tr> <% @items.each |item| do %> <%= render { :partial => "message", :locals => item } if item.is_a? Message %> <%= render { :partial => "song", :locals => item } if item.is_a? Song %> <%end%> ... </table> #message.html.erb # shorthand, not real ruby print out message title, author name, thread title, message created at #song.html.erb # shorthand print out song title, singer name, album title, song created at
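    One way to tame the per-type branching is to normalize every favorite into a small uniform hash up front, then sort, filter, and paginate on hash keys only. A sketch (the title and name attributes are assumed from the view partials above, not from the mappings):

        def favorite_rows(user)
          rows = user.favorite_messages.map do |m|
            { :title => m.title, :user => m.author.name, :parent => m.thread.title,
              :created_at => m.created_at, :record => m }
          end
          rows += user.favorite_songs.map do |s|
            { :title => s.title, :user => s.singer.name, :parent => s.album.title,
              :created_at => s.created_at, :record => s }
          end
          rows
        end

        # in the controller: one shape to sort, filter, and paginate
        @items = favorite_rows(u)
        @items = @items.select { |r| r[:user].downcase.include?(params[:q].downcase) } if params[:q]
        @items = @items.sort_by { |r| r[(params[:sort] || "created_at").to_sym] }
        @items = @items.paginate :page => params[:page], :per_page => 20

    The view then only needs r[:record] (and the is_a? dispatch) where type-specific display is genuinely required.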


  • How to associate Wi-Fi beacon info with a virtual "location"?

    - by leander
    We have a piece of embedded hardware that will sense 802.11 beacons, and we're using this to make a map of currently visible bssid -> signalStrength. Given this map, we would like to make a determination: Is this likely to be a location I have been to before? If so, what is its ID? If not, I should remember this location: generate a new ID. Now what should I store (and how should I store it) to make future determinations easier? This is for an augmented-reality app/game. We will be using it to associate particular characters and events with "locations". The device does not have internet or cellular access, so using a geolocation service is out of consideration for the time being. (We don't really need to know where we are in reality, just be able to determine if we return there.) It isn't crucial that it be extremely accurate, but it would be nice if it was tolerant to signal strength changes or the occasional missing beacon. It should be usable in relatively low numbers of access points (e.g. rural house with one wireless router) or many (wandering around a dense metropolis). In the case of a city, it should change location every few minutes of walking (continuously-overlapping signals make this a bit more tricky in naive code). A reasonable number of false positives (match a location when we aren't actually there) is acceptable. The wrong character/event showing up just adds a bit of variety. False negatives (no location match) are a bit more troublesome: this will tend to add a better-matching new location to the saved locations, masking the old one. While we will have additional logic to ensure locations that the device hasn't seen in a while will "orphan" any associated characters or events (if e.g. you move to a different country), we'd prefer not to mask and eventually orphan locations you do visit regularly. Some technical complications: signalStrength is returned as 1-4; presumably it's related to dB, but we are not sure exactly how; in my experiments it tends to stick to either 1 or 4, but occasionally we see numbers in between. (Tech docs on the hardware are sparse.) The device completes a scan of one-quarter of the channel space every second; so it takes about 4-5 seconds to get a complete picture of what's around. The list isn't always complete. (We are making strides to fix this using some slight sampling period randomization, as recommended by the library docs. We're also investigating ways to increase the number of scans without killing our performance; the hardware/libs are poorly behaved when it comes to saturating the bus.) We have only kilobytes to store our history. We have a "working" impl now, but it is relatively naive, and flaky in the face of real-world Wi-Fi behavior. Rough pseudocode: // recordLocation() -- only store strength 4 locations m_savedLocations[g_nextId++] = filterForStrengthGE( m_currentAPs, 4 ); // determineLocation() bestPoints = -inf; foreach ( oldLoc in m_savedLocations ) { points = 0.0; foreach ( ap in m_currentAPs ) { if ( oldLoc.has( ap ) ) { switch ( ap.signalStrength ) { case 3: points += 1.0; break; case 4: points += 2.0; break; } } } points /= oldLoc.numAPs; if ( points > bestPoints ) { bestLoc = oldLoc; bestPoints = points; } } if ( bestLoc && bestPoints > 1.0 ) { if ( bestPoints >= (2.0 - epsilon) ) { // near-perfect match. 
// update location with any new high-strength APs that have appeared bestLoc.addAPs( filterForStrengthGE( m_currentAPs, 4 ) ); } return bestLoc; } else { return NO_MATCH; } We record a location currently only when we have NO_MATCH and the app determines it's time for a new event. (The "near-perfect match" code above would appear to make it harder to match in the future... It's mostly to keep new powerful APs from being associated with other locations, but you'd think we'd need something to counter this if e.g. an AP doesn't show up in the next 10 times I match a location.) I have a feeling that we're missing some things from set theory or graph theory that would assist in grouping/classification of this data, and perhaps providing a better "confidence level" on matches, and better robustness against missed beacons, signal strength changes, and the like. Also it would be useful to have a good method for mutating locations over time. Any useful resources out there for this sort of thing? Simple and/or robust approaches we're missing?
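    The scoring above is asymmetric: it counts how much of the saved location is visible now, but not how much of the current scan the saved location fails to explain, which makes dense areas match too easily. A symmetric set-overlap score such as weighted Jaccard over the union of BSSIDs is one standard alternative; a sketch, assuming fingerprints are maps from BSSID to the 1-4 strength:

        #include <algorithm>
        #include <map>
        #include <string>

        // Weighted Jaccard similarity between two BSSID -> strength maps.
        // 1.0 = identical fingerprints, 0.0 = nothing in common.
        double similarity(const std::map<std::string, int>& a,
                          const std::map<std::string, int>& b)
        {
            double num = 0.0, den = 0.0;
            std::map<std::string, int>::const_iterator ia = a.begin(), ib = b.begin();
            while (ia != a.end() || ib != b.end()) {
                if (ib == b.end() || (ia != a.end() && ia->first < ib->first)) {
                    den += ia->second; ++ia;                  // AP seen only in a
                } else if (ia == a.end() || ib->first < ia->first) {
                    den += ib->second; ++ib;                  // AP seen only in b
                } else {
                    num += std::min(ia->second, ib->second);  // AP seen in both
                    den += std::max(ia->second, ib->second);
                    ++ia; ++ib;
                }
            }
            return den > 0.0 ? num / den : 0.0;
        }

    A match threshold somewhere around 0.3-0.5 with this score also degrades gracefully when a beacon drops out of one scan, since a missing AP dilutes the match rather than vetoing it.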


  • Cascading S3 Sink Tap not being deleted with SinkMode.REPLACE

    - by Eric Charles
    We are running Cascading with a Sink Tap being configured to store in Amazon S3 and were facing some FileAlreadyExistsException (see [1]). This was only from time to time (1 time on around 100) and was not reproducable. Digging into the Cascading codem, we discovered the Hfs.deleteResource() is called (among others) by the BaseFlow.deleteSinksIfNotUpdate(). Btw, we were quite intrigued with the silent NPE (with comment "hack to get around npe thrown when fs reaches root directory"). From there, we extended the Hfs tap with our own Tap to add more action in the deleteResource() method (see [2]) with a retry mechanism calling directly the getFileSystem(conf).delete. The retry mechanism seemed to bring improvement, but we are still sometimes facing failures (see example in [3]): it sounds like HDFS returns isDeleted=true, but asking directly after if the folder exists, we receive exists=true, which should not happen. Logs also shows randomly isDeleted true or false when the flow succeeds, which sounds like the returned value is irrelevant or not to be trusted. Can anybody bring his own S3 experience with such a behavior: "folder should be deleted, but it is not"? We suspect a S3 issue, but could it also be in Cascading or HDFS? We run on Hadoop Cloudera-cdh3u5 and Cascading 2.0.1-wip-dev. [1] org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory s3n://... already exists at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132) at com.twitter.elephantbird.mapred.output.DeprecatedOutputFormatWrapper.checkOutputSpecs(DeprecatedOutputFormatWrapper.java:75) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:923) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:882) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:882) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:856) at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:104) at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:174) at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:137) at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:122) at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:42) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.j [2] @Override public boolean deleteResource(JobConf conf) throws IOException { LOGGER.info("Deleting resource {}", getIdentifier()); boolean isDeleted = super.deleteResource(conf); LOGGER.info("Hfs Sink Tap isDeleted is {} for {}", isDeleted, getIdentifier()); Path path = new Path(getIdentifier()); int retryCount = 0; int cumulativeSleepTime = 0; int sleepTime = 1000; while (getFileSystem(conf).exists(path)) { LOGGER .info( "Resource {} still exists, it should not... 
- I will continue to wait patiently...", getIdentifier()); try { LOGGER.info("Now I will sleep " + sleepTime / 1000 + " seconds while trying to delete {} - attempt: {}", getIdentifier(), retryCount + 1); Thread.sleep(sleepTime); cumulativeSleepTime += sleepTime; sleepTime *= 2; } catch (InterruptedException e) { e.printStackTrace(); LOGGER .error( "Interrupted while sleeping trying to delete {} with message {}...", getIdentifier(), e.getMessage()); throw new RuntimeException(e); } if (retryCount == 0) { getFileSystem(conf).delete(getPath(), true); } retryCount++; if (cumulativeSleepTime > MAXIMUM_TIME_TO_WAIT_TO_DELETE_MS) { break; } } if (getFileSystem(conf).exists(path)) { LOGGER .error( "We didn't succeed to delete the resource {}. Throwing now a runtime exception.", getIdentifier()); throw new RuntimeException( "Although we waited to delete the resource for " + getIdentifier() + ' ' + retryCount + " iterations, it still exists - This must be an issue in the underlying storage system."); } return isDeleted; } [3] INFO [pool-2-thread-15] (BaseFlow.java:1287) - [...] at least one sink is marked for delete INFO [pool-2-thread-15] (BaseFlow.java:1287) - [...] sink oldest modified date: Wed Dec 31 23:59:59 UTC 1969 INFO [pool-2-thread-15] (HiveSinkTap.java:148) - Now I will sleep 1 seconds while trying to delete s3n://... - attempt: 1 INFO [pool-2-thread-15] (HiveSinkTap.java:130) - Deleting resource s3n://... INFO [pool-2-thread-15] (HiveSinkTap.java:133) - Hfs Sink Tap isDeleted is true for s3n://... ERROR [pool-2-thread-15] (HiveSinkTap.java:175) - We didn't succeed to delete the resource s3n://... Throwing now a runtime exception. WARN [pool-2-thread-15] (Cascade.java:706) - [...] flow failed: ... java.lang.RuntimeException: Although we waited to delete the resource for s3n://... 0 iterations, it still exists - This must be an issue in the underlying storage system. at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:179) at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:40) at cascading.flow.BaseFlow.deleteSinksIfNotUpdate(BaseFlow.java:971) at cascading.flow.BaseFlow.prepare(BaseFlow.java:733) at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:761) at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:710) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619)
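    One detail in the retry loop above stands out: the inner getFileSystem(conf).delete(...) only runs when retryCount == 0, so every later iteration just sleeps and re-checks. Given S3's eventual consistency (exists() can keep returning a stale true for a while after a successful delete), re-issuing the delete on every attempt is the safer shape. A sketch of the loop (MAXIMUM_TIME_TO_WAIT_TO_DELETE_MS and getFileSystem as in your tap):

        Path path = new Path(getIdentifier());
        long sleepMs = 1000, waitedMs = 0;
        while (getFileSystem(conf).exists(path)
                && waitedMs < MAXIMUM_TIME_TO_WAIT_TO_DELETE_MS) {
            // Re-issue the delete each pass: with an eventually consistent
            // store, both the earlier delete and this exists() may be stale.
            getFileSystem(conf).delete(path, true);
            try {
                Thread.sleep(sleepMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while deleting " + getIdentifier());
            }
            waitedMs += sleepMs;
            sleepMs *= 2;
        }
        if (getFileSystem(conf).exists(path)) {
            throw new IOException("Still exists after " + waitedMs + " ms: " + getIdentifier());
        }

    This would not fix a consistency lag by itself, but it removes the case where the one real delete was lost and every subsequent pass only observes.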


  • Splash screen moves up before closing

    - by rturney
    In C# I am having a problem with the splash screen. When it is time to close and the main Form1 appears, it moves over to the upper right corner of Form1. It then disappears. I have never had this occur before and have just about run out of ideas to fix it. I want the splash screen to disappear in the center screen and not move over to the upper corner of the opening Form1. Here is the code: public Form1() { Splash mySplash = new Splash(); mySplash.TotalValue = 7; //or however many steps you must complete mySplash.Show(); mySplash.Update(); InitializeComponent(); //--<begin>-------------- this.Hide(); this.WindowState = FormWindowState.Normal; mySplash.Progress++; printDoc.PrintPage += new PrintPageEventHandler(printDoc_PrintPage); printBOM.PrintPage += new PrintPageEventHandler(printBOM_PrintPage); printList.PrintPage += new PrintPageEventHandler(printList_PrintPage); mySplash.Progress++; // using old Kodak Imaging OCX ! axImgEdit1.Image = "\\\\Netstore\\eng_share\\EView\\BOB-eView9.tif"; axImgEdit1.DisplayScaleAlgorithm = ImgeditLibCtl.DisplayScaleConstants.wiScaleOptimize; axImgEdit1.FitTo(0); axImgEdit1.Display(); mySplash.Progress++; //~~~~~~~~~~~~~~~~~~~~Getting printer info~~~~~~~~~~~~~~~~~~~~~~~~~ List<Win32_Printer> printerList = Win32_Printer.GetList(); int i = 0; foreach (Win32_Printer printer in printerList) { prnName = printer.Name; prnPort = printer.PortName; prnDriver = printer.DriverName; if (i == 0) { prnNameString = prnName; prnDriverString = prnDriver; prnPortString = prnPort; } else { prnNameString += "," + prnName; prnDriverString += "," + prnDriver; prnPortString += "," + prnPort; } i++; } mySplash.Progress++; EViewMethods.defaultPrn[0] = Settings.Default.DefaultPrinter; //defaultPrn[] is string array holding the default printer name, driver and port EViewMethods.defaultPrn[1] = Settings.Default.DefaultPrinterDriver; EViewMethods.defaultPrn[2] = Settings.Default.DefaultPrinterPort; //making this printer the system default printer object printerName = Settings.Default.DefaultPrinter; ManagementObjectSearcher searcher = new ManagementObjectSearcher("SELECT * FROM Win32_Printer"); ManagementObjectCollection collection = searcher.Get(); foreach (ManagementObject currentObject in collection) { if (currentObject["name"].ToString() == printerName.ToString()) { currentObject.InvokeMethod("SetDefaultPrinter", new object[] { printerName }); } } mySplash.Progress++; EViewMethods.reCenterEVafterDwgClose = Settings.Default.ReCenterEVafterDwgClose; if (Settings.Default.ReCenterEVafterDwgClose == true) recenterEViewAfterDrawingViewerClosesToolStripMenuItem.Checked = true; else recenterEViewAfterDrawingViewerClosesToolStripMenuItem.Checked = false; //------------------------------------------------------- EViewMethods.screenBehavior = Settings.Default.ViewStyle; normalToolStripMenuItem.Checked = false; clearViewToolStripMenuItem.Checked = false; clearviewDULevLRToolStripMenuItem.Checked = false; clearviewdULevLLToolStripMenuItem.Checked = false; clearviewdURevULToolStripMenuItem.Checked = false; clearviewdURevLLToolStripMenuItem.Checked = false; clearviewdURevLRToolStripMenuItem.Checked = false; smallScreenToolStripMenuItem.Checked = false; //Form1.ActiveForm.SetDesktopLocation(588, 312); //all screen behavior mode will begin centered on the screen EViewMethods.eviewUserPrefLocation = Settings.Default.FormEviewLocation; //------------------------------------------------------- EViewMethods.syncListToDwgNum = Settings.Default.SyncListDwgNum; if (EViewMethods.syncListToDwgNum == 
true) synchronizeListToActiveDwgToolStripMenuItem.Checked = true; else synchronizeListToActiveDwgToolStripMenuItem.Checked = false; toolStripStatusLabel1.Text = ""; toolStripStatusLabel2.Text = Settings.Default.ViewStyle; toolStripStatusLabel3.Text = Settings.Default.DefaultPrinter; //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Assembly asm = Assembly.GetExecutingAssembly(); AssemblyName asmName = asm.GetName(); EViewMethods.eviewVersion = asmName.Version.ToString(); radioPartInfo.Checked = true; disableAllSearch(); EViewMethods.userName = Environment.UserName; EViewMethods.openConnection(); mySplash.Progress++; EViewMethods.loadFavorites(listFavorites); mySplash.Close(); mySplash.Dispose(); this.Show(); this.ActiveControl = comboEntry; }
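    It is hard to say definitively from the code why the splash jumps, but a pattern that avoids the whole class of problem is to run the splash on its own UI thread, so it never interacts with Form1's handle creation or z-order. A minimal sketch (assuming Splash sets StartPosition = CenterScreen and has no owner; progress updates would likewise go through Invoke):

        using System.Threading;
        using System.Windows.Forms;

        static class SplashRunner
        {
            private static Splash _splash;

            public static void Show()
            {
                Thread t = new Thread(() => Application.Run(_splash = new Splash()));
                t.IsBackground = true;
                t.SetApartmentState(ApartmentState.STA);
                t.Start();
            }

            public static void Close()
            {
                // Marshal the close onto the splash's own thread; a production
                // version would also wait until the handle exists.
                if (_splash != null && _splash.IsHandleCreated)
                    _splash.Invoke(new MethodInvoker(() => _splash.Close()));
            }
        }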


  • PHP: Strange behaviour while calling custom php functions

    - by baltusaj
    I am facing a strange behavior while coding in PHP with Flex. Let me explain the situation: I have two funcions lets say: populateTable() //puts some data in a table made with flex createXML() //creates an xml file which is used by Fusion Charts to create a chart Now, if i call populateTable() alone, the table gets populated with data but if i call it with createXML(), the table doesn't get populated but createXML() does it's work i.e. creates an xml file. Even if i run following code, only xml file gets generated but table remains empty whereas i called populateTable() before createXML(). Any idea what may be going wrong? MXML Part <mx:HTTPService id="userRequest" url="request.php" method="POST" resultFormat="e4x"> <mx:request xmlns=""> <getResult>send</getResult> </mx:request> and <mx:DataGrid id="dgUserRequest" dataProvider="{userRequest.lastResult.user}" x="28.5" y="36" width="525" height="250" > <mx:columns> <mx:DataGridColumn headerText="No." dataField="no" /> <mx:DataGridColumn headerText="Name" dataField="name"/> <mx:DataGridColumn headerText="Age" dataField="age"/> </mx:columns> PHP Part <?php //-------------------------------------------------------------------------- function initialize($username,$password,$database) //-------------------------------------------------------------------------- { # Connect to the database $link = mysql_connect("localhost", $username,$password); if (!$link) { die('Could not connected to the database : ' . mysql_error()); } # Select the database $db_selected = mysql_select_db($database, $link); if (!$db_selected) { die ('Could not select the DB : ' . mysql_error()); } // populateTable(); createXML(); # Close database connection } //-------------------------------------------------------------------------- populateTable() //-------------------------------------------------------------------------- { if($_POST['getResult'] == 'send') { $Result = mysql_query("SELECT * FROM session" ); $Return = "<Users>"; $no = 1; while ( $row = mysql_fetch_object( $Result ) ) { $Return .= "<user><no>".$no."</no><name>".$row->name."</name><age>".$row->age."</age><salary>". $row->salary."</salary></session>"; $no=$no+1; $Return .= "</Users>"; mysql_free_result( $Result ); print ($Return); } //-------------------------------------------------------------------------- createXML() //-------------------------------------------------------------------------- { $users=array ( "0"=>array("",0), "1"=>array("Obama",0), "2"=>array("Zardari",0), "3"=>array("Imran Khan",0), "4"=>array("Ahmadenijad",0) ); $selectedUsers=array(1,4); //this means only obama and ahmadenijad are selected and the xml file will contain info related to them only //Extracting salaries of selected users $size=count($users); for($i = 0; $i<$size; $i++) { //initialize temp which will calculate total throughput for each protocol separately $salary = 0; $result = mysql_query("SELECT salary FROM userInfo where name='$users[$selectedUsers[$i]][0]'"); $row = mysql_fetch_array($result)) $salary = $row['salary']; } $users[$selectedUsers[$i]][1]=$salary; } //creating XML string $chartContent = "<chart caption=\"Users Vs Salaries\" formatNumberScale=\"0\" pieSliceDepth=\"30\" startingAngle=\"125\">"; for($i=0;$i<$size;$i++) { $chartContent .= "<set label=\"".$users[$selectedUsers[$i]][0]."\" value=\"".$users[$selectedUsers[$i]][1]."\"/>"; } $chartContent .= "<styles>" . "<definition>" . "<style type=\"font\" name=\"CaptionFont\" size=\"16\" color=\"666666\"/>" . 
"<style type=\"font\" name=\"SubCaptionFont\" bold=\"0\"/>" . "</definition>" . "<application>" . "<apply toObject=\"caption\" styles=\"CaptionFont\"/>" . "<apply toObject=\"SubCaption\" styles=\"SubCaptionFont\"/>" . "</application>" . "</styles>" . "</chart>"; $file_handle = fopen('ChartData.xml','w'); fwrite($file_handle,$chartContent); fclose($file_handle); } initialize("root","","hiddenpeak"); ?>


  • Silverlight/Web Service Serializing Interface for use Client Side

    - by Steve Brouillard
I have a Silverlight solution that references a third-party web service. This web service generates XML, which is then processed into objects for use in Silverlight binding. At one point the processing of XML to objects was done client-side, but we ran into performance issues and decided to move this processing to the proxies in the hosting web project to improve performance (which it did). This is obviously a gross over-simplification, but it should work. My basic project structure looks like this:

Solution
Solution.Web - Holds the web page that hosts Silverlight, as well as proxies that access web services and process as required (and obviously the references to those web services).
Solution.Infrastructure - Holds references to the proxy web services in the .Web project, all generated code from serialized objects from those proxies, and code around those objects that needs to be client-side.
Solution.Book - The particular project that uses the objects in question after they are processed down into Infrastructure.

I've defined the following interface and class in the Web project. They represent the type of objects that the XML from the original third party gets transformed into, and since this is the only project in the Silverlight app that is actually server-side, that was the place to define and use them.

//Doesn't get much simpler than this.
public interface INavigable
{
    string Description { get; set; }
}

//Very simple class too
public class IndexEntry : INavigable
{
    public List<IndexCM> CMItems { get; set; }
    public string CPTCode { get; set; }
    public string DefinitionOfAbbreviations { get; set; }
    public string Description { get; set; }
    public string EtiologyCode { get; set; }
    public bool HighScore { get; set; }
    public IndexToTabularCommandArguments IndexToTabularCommandArgument { get; set; }
    public bool IsExpanded { get; set; }
    public string ManifestationCode { get; set; }
    public string MorphologyCode { get; set; }
    public List<TextItem> NonEssentialModifiersAndQualifyingText { get; set; }
    public string OtherItalics { get; set; }
    public IndexEntry Parent { get; set; }
    public int Score { get; set; }
    public string SeeAlsoReference { get; set; }
    public string SeeReference { get; set; }
    public List<IndexEntry> SubEntries { get; set; }
    public int Words { get; set; }
}

Again, both of these items are defined in the Web project. Notice that IndexEntry implements INavigable. When the code for IndexEntry is auto-generated in the Infrastructure project, the definition of the class does not include the implementation of INavigable. After discovering this, I thought "no problem, I'll create another partial class file reiterating the implementation". Unfortunately (I'm guessing because it isn't being serialized), that interface isn't recognized in the Infrastructure project, so I can't simply do that. Here's where it gets really weird: the Book project CAN see the INavigable interface. In fact I use it in Book, though Book has no reference to the web service in the Web project where the thing is defined, though Infrastructure does. Just as a test, I linked to the INavigable source file from inside the Infrastructure project. That allowed me to reference it in that project and compile, but it causes havoc in the Book project, because now there's a conflict between the one defined in Infrastructure and the one defined in the Web project's web service. That is behavior I would expect. So, to try and sum up a bit:
The Web project has a web service that processes data from a third-party service, and has a class and interface defined in it. The class implements the interface. The Infrastructure project references the web service in the Web project, and the Book project references the Infrastructure project. The implementation of the interface in the class does NOT serialize down, so the auto-generated code in Infrastructure does not show this relationship, breaking code further downstream. The Book project, which is further downstream, CAN see the interface as defined in the Web project, even though its only reference is through the Infrastructure project, which CAN'T see it. Am I simply missing something easy here? Can I apply an attribute to either the interface definition or to its implementation in the class to ensure its visibility downstream? Anything else I can do here? I know this is a bit convoluted; to anyone still with me here, thanks for your patience and any advice you might have. Cheers, Steve
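A note on what is probably happening here: interface implementations are not part of a data contract, so they never survive proxy generation - only the serializable members travel. The usual workaround is to keep a client-side copy of the interface and re-attach it to the generated type through a partial class in the same namespace as the proxy. A minimal sketch, assuming the generated IndexEntry is partial (service-reference code normally is); INavigableClient is a hypothetical client-side duplicate of the interface, defined once in Infrastructure so Book and Infrastructure share a single type:

// Hypothetical client-side copy of the interface; defining it once in
// Solution.Infrastructure avoids the duplicate-type conflict seen when
// the Web project's INavigable leaks through to Book.
namespace Solution.Infrastructure
{
    public interface INavigableClient
    {
        string Description { get; set; }
    }
}

// Must sit in the exact namespace of the generated proxy type for the
// partial declarations to merge (adjust to the generated namespace).
namespace Solution.Infrastructure.WebProxy
{
    public partial class IndexEntry : Solution.Infrastructure.INavigableClient
    {
        // Description is already generated from the serialized contract,
        // so no members are needed here.
    }
}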

    Read the article

  • PHP, MySQL, jQuery, AJAX: json data returns correct response but frontend returns error

    - by Devner
Hi all, I have a user registration form. I am doing server-side validation on the fly via AJAX. The quick summary of my problem: upon validating two fields, I get an error for the second field's validation. If I comment out the first field's validation, then the second field does not show any error. It is weird behavior. More details below. The HTML, JS and PHP code are as follows:

HTML FORM:

<form id="SignupForm" action="">
<fieldset>
<legend>Free Signup</legend>
<label for="username">Username</label>
<input name="username" type="text" id="username" /><span id="status_username"></span><br />
<label for="email">Email</label>
<input name="email" type="text" id="email" /><span id="status_email"></span><br />
<label for="confirm_email">Confirm Email</label>
<input name="confirm_email" type="text" id="confirm_email" /><span id="status_confirm_email"></span><br />
</fieldset>
<p>
<input id="sbt" type="button" value="Submit form" />
</p>
</form>

JS:

<script type="text/javascript">
$(document).ready(function() {
    $("#email").blur(function() {
        var email = $("#email").val();
        var msgbox2 = $("#status_email");
        if(email.length > 3) {
            $.ajax({
                type: 'POST',
                url: 'check_ajax2.php',
                data: "email=" + email,
                dataType: 'json',
                cache: false,
                success: function(data) {
                    if(data.success == 'y') {
                        alert('Available');
                    } else {
                        alert('Not Available');
                    }
                }
            });
        }
        return false;
    });
    $("#confirm_email").blur(function() {
        var confirm_email = $("#confirm_email").val();
        var email = $("#email").val();
        var msgbox3 = $("#status_confirm_email");
        if(confirm_email.length > 3) {
            $.ajax({
                type: 'POST',
                url: 'check_ajax2.php',
                data: 'confirm_email=' + confirm_email + '&email=' + email,
                dataType: 'json',
                cache: false,
                success: function(data) {
                    if(data.success == 'y') {
                        alert('Available');
                    } else {
                        alert('Not Available');
                    }
                },
                error: function (data) {
                    alert('Some error');
                }
            });
        }
        return false;
    });
});
</script>

PHP code:

<?php
//check_ajax2.php
if(isset($_POST['email'])) {
    $email = $_POST['email'];
    $res = mysql_query("SELECT uid FROM members WHERE email = '$email' ");
    $i_exists = mysql_num_rows($res);
    if( 0 == $i_exists ) {
        $success = 'y';
        $msg_email = 'Email available';
    } else {
        $success = 'n';
        $msg_email = 'Email is already in use.</font>';
    }
    print json_encode(array('success' => $success, 'msg_email' => $msg_email));
}
if(isset($_POST['confirm_email'])) {
    $confirm_email = $_POST['confirm_email'];
    $email = ( isset($_POST['email']) && trim($_POST['email']) != '' ? $_POST['email'] : '' );
    $res = mysql_query("SELECT uid FROM members WHERE email = '$confirm_email' ");
    $i_exists = mysql_num_rows($res);
    if( 0 == $i_exists ) {
        if( isset($email) && isset($confirm_email) && $email == $confirm_email ) {
            $success = 'y';
            $msg_confirm_email = 'Email available and match';
        } else {
            $success = 'n';
            $msg_confirm_email = 'Email and Confirm Email do NOT match.';
        }
    } else {
        $success = 'n';
        $msg_confirm_email = 'Email already exists.';
    }
    print json_encode(array('success' => $success, 'msg_confirm_email' => $msg_confirm_email));
}
?>

THE PROBLEM: As long as I am validating $_POST['email'] as well as $_POST['confirm_email'] in the check_ajax2.php file, the validation for the confirm_email field always returns an error.
With my limited knowledge of Firebug, however, I did find out that the following were the responses when I entered email and confirm_email in the fields:

RESPONSE 1: {"success":"y","msg_email":"Email available"}
RESPONSE 2: {"success":"y","msg_email":"Email available"}{"success":"n","msg_confirm_email":"Email and Confirm Email do NOT match."}

Although RESPONSE 2 shows that we are receiving the correct message via msg_confirm_email, on the front end the alert 'Some error' is popping up (I have enabled the alert for debugging). I have spent 48 hours trying to change every part of the code wherever possible, but with little success. What is weird about this is that if I comment out the validation for the $_POST['email'] field completely, then the validation for the $_POST['confirm_email'] field displays correctly without any errors. If I enable it back, it validates the email field correctly, but when it reaches the point of validating the confirm_email field, it again shows me the error. I have also tried renaming the success variable in the check_ajax2.php page to different names for both $_POST['email'] and $_POST['confirm_email'], but with no success. I will be adding more fields to the form and validating them within the check_ajax2.php page, so I am not planning on using a different AJAX page for each field (and I don't think it would be smart to do it that way). I am not a jQuery or AJAX guru, so all help in resolving this issue is highly appreciated. Thank you in advance.
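For what it's worth, RESPONSE 2 above already shows the cause: two json_encode() outputs printed back to back are not a single valid JSON document, so jQuery's dataType: 'json' parse fails and the error callback fires even though the HTTP request succeeded. A sketch of one way to restructure check_ajax2.php so every request emits exactly one JSON object (the database lookups are elided; $emailTaken stands in for the existing mysql_num_rows() test):

<?php
// check_ajax2.php - collect all messages, emit ONE JSON document.
$response = array('success' => 'y');

if (isset($_POST['email'])) {
    $emailTaken = false; // replace with the existing mysql_num_rows() check
    if ($emailTaken) {
        $response['success'] = 'n';
        $response['msg_email'] = 'Email is already in use.';
    } else {
        $response['msg_email'] = 'Email available';
    }
}

if (isset($_POST['confirm_email'])) {
    if (isset($_POST['email']) && $_POST['email'] === $_POST['confirm_email']) {
        $response['msg_confirm_email'] = 'Email available and match';
    } else {
        $response['success'] = 'n';
        $response['msg_confirm_email'] = 'Email and Confirm Email do NOT match.';
    }
}

// Exactly one json_encode per request keeps the body parseable.
print json_encode($response);
?>

The front end can then branch on data.msg_email / data.msg_confirm_email being set, which also scales to the extra fields planned for this page.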

    Read the article

  • Core Plot: only works ok with three plots

    - by Luis
    I am adding a scatter plot to my app (iGear) so when the user selects one, two or three chainrings combined with a cogset on a bike, lines will show the gears meters. The problem is that Core Plot only shows the plots when three chainrings are selected. I need your help, this is my first try at Core Plot and I'm lost. My code is the following: iGearMainViewController.m - (IBAction)showScatterIpad:(id)sender { cogsetToPass = [NSMutableArray new]; arrayForChainringOne = [NSMutableArray new]; arrayForChainringTwo = [NSMutableArray new]; arrayForChainringThree = [NSMutableArray new]; //behavior according to number of chainrings switch (self.segmentedControl.selectedSegmentIndex) { case 0: // one chainring selected for (int i = 1; i<= [cassette.numCogs intValue]; i++) { if (i <10) { corona = [NSString stringWithFormat:@"cog0%d",i]; }else { corona = [NSString stringWithFormat:@"cog%d",i]; } float one = (wheelSize*[_oneChainring.text floatValue]/[[cassette valueForKey:corona]floatValue])/1000; float teeth = [[cassette valueForKey:corona] floatValue]; [cogsetToPass addObject:[NSNumber numberWithFloat:teeth]]; [arrayForChainringOne addObject:[NSNumber numberWithFloat:one]]; } break; case 1: // two chainrings selected for (int i = 1; i<= [cassette.numCogs intValue]; i++) { if (i <10) { corona = [NSString stringWithFormat:@"cog0%d",i]; }else { corona = [NSString stringWithFormat:@"cog%d",i]; } float one = (wheelSize*[_oneChainring.text floatValue]/[[cassette valueForKey:corona]floatValue])/1000; //NSLog(@" gearsForOneChainring = %@",[NSNumber numberWithFloat:one]); float two = (wheelSize*[_twoChainring.text floatValue]/[[cassette valueForKey:corona]floatValue])/1000; [cogsetToPass addObject:[NSNumber numberWithFloat:[[cassette valueForKey:corona]floatValue]]]; [arrayForChainringOne addObject:[NSNumber numberWithFloat:one]]; [arrayForChainringTwo addObject:[NSNumber numberWithFloat:two]]; } break; case 2: // three chainrings selected for (int i = 1; i<= [cassette.numCogs intValue]; i++) { if (i <10) { corona = [NSString stringWithFormat:@"cog0%d",i]; }else { corona = [NSString stringWithFormat:@"cog%d",i]; } float one = (wheelSize*[_oneChainring.text floatValue]/[[cassette valueForKey:corona]floatValue])/1000; float two = (wheelSize*[_twoChainring.text floatValue]/[[cassette valueForKey:corona]floatValue])/1000; float three = (wheelSize*[_threeChainring.text floatValue]/[[cassette valueForKey:corona]floatValue])/1000; [cogsetToPass addObject:[cassette valueForKey:corona]]; [arrayForChainringOne addObject:[NSNumber numberWithFloat:one]]; [arrayForChainringTwo addObject:[NSNumber numberWithFloat:two]]; [arrayForChainringThree addObject:[NSNumber numberWithFloat:three]]; } default: break; } ScatterIpadViewController *sivc = [[ScatterIpadViewController alloc]initWithNibName: @"ScatterIpadViewController" bundle:nil]; [sivc setModalTransitionStyle:UIModalTransitionStyleFlipHorizontal]; sivc.records = [cassetteNumCogs integerValue]; sivc.cogsetSelected = self.cogsetToPass; sivc.chainringOne = self.arrayForChainringOne; sivc.chainringThree = self.arrayForChainringThree; sivc.chainringTwo = self.arrayForChainringTwo; [self presentViewController:sivc animated:YES completion:nil]; } And the child view with the code to draw the plots: ScatterIpadViewController.m #pragma mark - CPTPlotDataSource methods - (NSUInteger)numberOfRecordsForPlot: (CPTPlot *)plot { return records; } - (NSNumber *)numberForPlot: (CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index{ switch (fieldEnum) { case 
CPTScatterPlotFieldX: return [NSNumber numberWithInt:index]; break; case CPTScatterPlotFieldY:{ if ([plot.identifier isEqual:@"one"]==YES) { //NSLog(@"chainringOne objectAtIndex:index = %@", [chainringOne objectAtIndex:index]); return [chainringOne objectAtIndex:index]; }else if ([plot.identifier isEqual:@"two"] == YES ){ //NSLog(@"chainringTwo objectAtIndex:index = %@", [chainringTwo objectAtIndex:index]); return [chainringTwo objectAtIndex:index]; }else if ([plot.identifier isEqual:@"three"] == YES){ //NSLog(@"chainringThree objectAtIndex:index = %@", [chainringThree objectAtIndex:index]); return [chainringThree objectAtIndex:index]; } default: break; } } return nil; } The error returned is an exception on trying to access an empty array. 2012-11-15 11:02:42.962 iGearScatter[3283:11603] Terminating app due to uncaught exception 'NSRangeException', reason: ' -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array' First throw call stack: (0x1989012 0x1696e7e 0x192b0b4 0x166cd 0x183f4 0x1bd39 0x179c0 0x194fb 0x199e1 0x43250 0x14b66 0x13ef0 0x13e89 0x3b5753 0x3b5b2f 0x3b5d54 0x3c35c9 0x5c0814 0x392594 0x39221c 0x394563 0x3103b6 0x310554 0x1e87d8 0x27b3014 0x27a37d5 0x192faf5 0x192ef44 0x192ee1b 0x29ea7e3 0x29ea668 0x2d265c 0x22dd 0x2205 0x1)* libc++abi.dylib: terminate called throwing an exception Thank you!
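The stack trace fits a datasource mismatch rather than a Core Plot bug: numberOfRecordsForPlot: returns the same records count for every plot, but with one or two chainrings selected, chainringTwo and chainringThree remain empty, so numberForPlot: indexes into an empty array. A minimal sketch of a per-plot record count, assuming the plot identifiers @"one", @"two" and @"three" already used in numberForPlot: (the missing break at the end of case 2 in the switch is harmless here, but worth adding too):

// Return the count of the array that actually backs each plot, so a plot
// whose chainring was not selected reports zero records instead of
// triggering an out-of-bounds lookup.
- (NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot
{
    if ([plot.identifier isEqual:@"one"]) {
        return [chainringOne count];
    } else if ([plot.identifier isEqual:@"two"]) {
        return [chainringTwo count];
    } else if ([plot.identifier isEqual:@"three"]) {
        return [chainringThree count];
    }
    return 0;
}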

    Read the article

  • Creating a new plugin for mpld3

    - by sjp14051
    Toward learning how to create a new mpld3 plugin, I took an existing example, LinkedDataPlugin (http://mpld3.github.io/examples/heart_path.html), and modified it slightly by deleting references to lines object. That is, I created the following: class DragPlugin(plugins.PluginBase): JAVASCRIPT = r""" mpld3.register_plugin("drag", DragPlugin); DragPlugin.prototype = Object.create(mpld3.Plugin.prototype); DragPlugin.prototype.constructor = DragPlugin; DragPlugin.prototype.requiredProps = ["idpts", "idpatch"]; DragPlugin.prototype.defaultProps = {} function DragPlugin(fig, props){ mpld3.Plugin.call(this, fig, props); }; DragPlugin.prototype.draw = function(){ var patchobj = mpld3.get_element(this.props.idpatch, this.fig); var ptsobj = mpld3.get_element(this.props.idpts, this.fig); var drag = d3.behavior.drag() .origin(function(d) { return {x:ptsobj.ax.x(d[0]), y:ptsobj.ax.y(d[1])}; }) .on("dragstart", dragstarted) .on("drag", dragged) .on("dragend", dragended); patchobj.path.attr("d", patchobj.datafunc(ptsobj.offsets, patchobj.pathcodes)); patchobj.data = ptsobj.offsets; ptsobj.elements() .data(ptsobj.offsets) .style("cursor", "default") .call(drag); function dragstarted(d) { d3.event.sourceEvent.stopPropagation(); d3.select(this).classed("dragging", true); } function dragged(d, i) { d[0] = ptsobj.ax.x.invert(d3.event.x); d[1] = ptsobj.ax.y.invert(d3.event.y); d3.select(this) .attr("transform", "translate(" + [d3.event.x,d3.event.y] + ")"); patchobj.path.attr("d", patchobj.datafunc(ptsobj.offsets, patchobj.pathcodes)); } function dragended(d, i) { d3.select(this).classed("dragging", false); } } mpld3.register_plugin("drag", DragPlugin); """ def __init__(self, points, patch): print "Points ID : ", utils.get_id(points) self.dict_ = {"type": "drag", "idpts": utils.get_id(points), "idpatch": utils.get_id(patch)} However, when I try to link the plugin to a figure, as in plugins.connect(fig, DragPlugin(points[0], patch)) I get an error, 'module' is not callable, pointing to this line. What does this mean and why doesn't it work? Thanks. I'm adding additional code to show that linking more than one Plugin might be problematic. But this may be entirely due to some silly mistake on my part, or there is a way around it. The following code based on LinkedViewPlugin generates three panels, in which the top and the bottom panel are supposed to be identical. Mouseover in the middle panel was expected to control the display in the top and bottom panels, but updates occur in the bottom panel only. It would be nice to be able to figure out how to reflect the changes in multiple panels. Thanks. 
import matplotlib import matplotlib.pyplot as plt import numpy as np import mpld3 from mpld3 import plugins, utils class LinkedView(plugins.PluginBase): """A simple plugin showing how multiple axes can be linked""" JAVASCRIPT = """ mpld3.register_plugin("linkedview", LinkedViewPlugin); LinkedViewPlugin.prototype = Object.create(mpld3.Plugin.prototype); LinkedViewPlugin.prototype.constructor = LinkedViewPlugin; LinkedViewPlugin.prototype.requiredProps = ["idpts", "idline", "data"]; LinkedViewPlugin.prototype.defaultProps = {} function LinkedViewPlugin(fig, props){ mpld3.Plugin.call(this, fig, props); }; LinkedViewPlugin.prototype.draw = function(){ var pts = mpld3.get_element(this.props.idpts); var line = mpld3.get_element(this.props.idline); var data = this.props.data; function mouseover(d, i){ line.data = data[i]; line.elements().transition() .attr("d", line.datafunc(line.data)) .style("stroke", this.style.fill); } pts.elements().on("mouseover", mouseover); }; """ def __init__(self, points, line, linedata): if isinstance(points, matplotlib.lines.Line2D): suffix = "pts" else: suffix = None self.dict_ = {"type": "linkedview", "idpts": utils.get_id(points, suffix), "idline": utils.get_id(line), "data": linedata} class LinkedView2(plugins.PluginBase): """A simple plugin showing how multiple axes can be linked""" JAVASCRIPT = """ mpld3.register_plugin("linkedview", LinkedViewPlugin2); LinkedViewPlugin2.prototype = Object.create(mpld3.Plugin.prototype); LinkedViewPlugin2.prototype.constructor = LinkedViewPlugin2; LinkedViewPlugin2.prototype.requiredProps = ["idpts", "idline", "data"]; LinkedViewPlugin2.prototype.defaultProps = {} function LinkedViewPlugin2(fig, props){ mpld3.Plugin.call(this, fig, props); }; LinkedViewPlugin2.prototype.draw = function(){ var pts = mpld3.get_element(this.props.idpts); var line = mpld3.get_element(this.props.idline); var data = this.props.data; function mouseover(d, i){ line.data = data[i]; line.elements().transition() .attr("d", line.datafunc(line.data)) .style("stroke", this.style.fill); } pts.elements().on("mouseover", mouseover); }; """ def __init__(self, points, line, linedata): if isinstance(points, matplotlib.lines.Line2D): suffix = "pts" else: suffix = None self.dict_ = {"type": "linkedview", "idpts": utils.get_id(points, suffix), "idline": utils.get_id(line), "data": linedata} fig, ax = plt.subplots(3) # scatter periods and amplitudes np.random.seed(0) P = 0.2 + np.random.random(size=20) A = np.random.random(size=20) x = np.linspace(0, 10, 100) data = np.array([[x, Ai * np.sin(x / Pi)] for (Ai, Pi) in zip(A, P)]) points = ax[1].scatter(P, A, c=P + A, s=200, alpha=0.5) ax[1].set_xlabel('Period') ax[1].set_ylabel('Amplitude') # create the line object lines = ax[0].plot(x, 0 * x, '-w', lw=3, alpha=0.5) ax[0].set_ylim(-1, 1) ax[0].set_title("Hover over points to see lines") linedata = data.transpose(0, 2, 1).tolist() plugins.connect(fig, LinkedView(points, lines[0], linedata)) # second set of lines exactly the same but in a different panel lines2 = ax[2].plot(x, 0 * x, '-w', lw=3, alpha=0.5) ax[2].set_ylim(-1, 1) ax[2].set_title("Hover over points to see lines #2") plugins.connect(fig, LinkedView2(points, lines2[0], linedata)) mpld3.show()
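Two details in this pair of plugins can explain why only the bottom panel updates. First, both classes register themselves as "linkedview", so the later registration shadows the earlier one in mpld3's plugin map. Second, d3's selection.on("mouseover", fn) replaces any handler already bound under that exact event name, so even two distinct plugins overwrite each other's listener on the same points. A sketch of both fixes on the JavaScript side (the .v1/.v2 suffixes are arbitrary d3 event namespaces):

// Give the second plugin its own registry name; two register_plugin
// calls with the same name leave only the last one visible.
mpld3.register_plugin("linkedview2", LinkedViewPlugin2);

// In LinkedViewPlugin.prototype.draw:
pts.elements().on("mouseover.v1", mouseover);

// In LinkedViewPlugin2.prototype.draw:
pts.elements().on("mouseover.v2", mouseover);

The Python side of LinkedView2 would then need "type": "linkedview2" in its self.dict_. As for the 'module' is not callable error on plugins.connect: that message means something being called on that line is actually a module object, typically because a name like DragPlugin or plugins has been shadowed by an import; printing type(plugins) and type(DragPlugin) just before the call is a quick way to find the culprit.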

    Read the article

  • Application crashing when talking to oracle unless executable path contains spaces

    - by Lasse V. Karlsen
We have an X-Files problem with our .NET application. Or, rather, our hybrid Win32 and .NET application. When it attempts to communicate with Oracle, it just dies. Vanishes. Goes to the big black void in the sky. No event log message, no exception, no nothing. If we simply ask the application to talk to a MS SQL Server instead, which has the effect of replacing the usage of OracleConnection and related classes with SqlConnection and related classes, it works as expected. Today we had a breakthrough. For some reason, a customer had figured out that by placing all the application files in a directory on his desktop, it worked as expected with Oracle as well. Moving the directory down to the root of the drive, or into C:\Temp, or, well, around a bit, made the crash reappear. Basically it was 100% reproducible that the application worked if run from a directory on the desktop, and failed if run from a directory in the root. Today we figured out that the difference that counted was whether there was a space in the directory name or not. So, these directories would work:

C:\Program Files\AppDir\Executable.exe
C:\Temp Lemp\AppDir\Executable.exe
C:\Documents and Settings\someuser\Desktop\AppDir\Executable.exe

whereas these would not:

C:\CompanyName\AppDir\Executable.exe
C:\Programfiler\AppDir\Executable.exe <-- Program Files in Norwegian
C:\Temp\AppDir\Executable.exe

I'm hoping someone reading this has seen similar behavior and has an "aha, you need to twiddle the frob on the oracle glitz driver configuration" or similar. Anyone?

Followup #1: Ok, I've processed the procmon output now, both files from when I hit the button that attempts to open the window that triggers the cascade failure, and I've noticed that they keep in step mostly; there are some smallish differences near the top of both files, and then they keep in step a long way down. However, when one run fails, the other keeps going, and the next few lines of the log output are these:

ReadFile C:\oracle\product\10.2.0\db_1\BIN\orageneric10.dll SUCCESS Offset: 274 432, Length: 32 768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O
ReadFile C:\oracle\product\10.2.0\db_1\BIN\orageneric10.dll SUCCESS Offset: 233 472, Length: 32 768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O

After this, the working run continues to execute, while the other touches the mscorwks.dll files a few times before threads close down and the app closes. Thus, the failed run does not touch the above files.

Followup #2: Figured I'd try to upgrade the Oracle client drivers, but 10.2.0.1 is apparently the highest version available for Windows 2003 Server and XP clients.

Followup #3: Well, we've ended up with a black-box solution. Basically we found that the problem is somewhere related to XPO and Oracle. XPO has a system table it manages, called XPObjectType, with three columns: Oid, TypeName and AssemblyName. Due to how Oracle is configured in the databases we talk to, the column names were OID, TYPENAME and ASSEMBLYNAME. This would ordinarily not be a problem, except that XPO talks to the schema information directly and checks whether the table is there with the right column names, and XPO doesn't handle case differences, so it sees an XPObjectType table with three unknown columns and none of those it expects. Exactly what XPO does then I don't really know, but if I dropped this table and recreated it with the right case, using double quotes around all the column names to get the case right, the problem doesn't crop up.
Exactly where the space in the folder name comes into this, I still have no idea, but this problem had two tiers:

1. Stop the application from crashing at our customers (short-term solution)
2. Fix the bug (long-term solution)

Right now tier 1 is solved; tier 2 will be put back into the queue for now and prioritized. We're facing some bigger changes to our data tier anyway, so this might not be a problem we need to solve, at least if all our Oracle customers verify that the table fix actually gets rid of the problem. I'll accept the answer by Dave Markle since, though Process Monitor (the big brother of File Monitor) didn't actually pinpoint the problem, I was able to use it to determine that after my breakpoint in user code, where XPO had built up the query for this table, no I/O happened until all the entries for the application closing down were logged, which led me to believe it was this table that was the culprit, or at least influenced the problem somehow. If I manage to get to the real cause of this, I'll update the post.
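For anyone who lands on the same XPO-on-Oracle case problem: Oracle folds unquoted identifiers to upper case, while double-quoted identifiers keep their exact case, which is why recreating the table required quoting every name. A sketch of the recreated table (the column types here are guesses at what XPO typically generates, not a verified schema):

-- Double quotes preserve mixed case; unquoted, these names would come
-- back as XPOBJECTTYPE / OID / TYPENAME / ASSEMBLYNAME again.
CREATE TABLE "XPObjectType" (
    "Oid"          NUMBER(10)      NOT NULL PRIMARY KEY,
    "TypeName"     NVARCHAR2(254),
    "AssemblyName" NVARCHAR2(254)
);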

    Read the article

  • directX texture appears incorrectly

    - by numerical25
    I finally managed to get a texture onto a cube sadly, but it is appearing incorrectly. as the below picture identifies. Anyways, I am not sure what it could be. My first guess is it could be my uv mapping or my vertex positioning is off. If someone could check and make sure thats good. The first element is the vertex position, second is the color, and third is the uv texture. //Create vectors and put in vertices // Create vertex buffer VertexPos vertices[] = { // BACK SIDES { D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(1.0,1.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(1.0,1.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(1.0,1.0)}, // 2 FRONT SIDE { D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f) , D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(1.0,1.0)}, // 3 { D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,2.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, // 4 { D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, // 5 { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,2.0)}, // 6 {D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(1.0,0.0)}, {D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, {D3DXVECTOR3(-5.0f, 
5.0f, 5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, {D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(1.0,0.0)}, {D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, };

My second guess is an error that I am receiving as I run the program, but I don't know where to begin with that. The following is the description of the error:

D3D10: WARNING: ID3D10Device::Draw: Vertex Buffer at the input vertex slot 0 is not big enough for what the Draw*() call expects to traverse. This is OK, as reading off the end of the Buffer is defined to return 0. However the developer probably did not intend to make use of this behavior. [ EXECUTION WARNING #356: DEVICE_DRAW_VERTEX_BUFFER_TOO_SMALL ]

Not sure what it could be, but here is my vertex layout description:

//Create layout
D3D10_INPUT_ELEMENT_DESC layout[] =
{
    {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0},
    {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0},
    {"NORMAL",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 28, D3D10_INPUT_PER_VERTEX_DATA, 0},
    {"TEXCOORD",0, DXGI_FORMAT_R32G32_FLOAT, 0 , 44, D3D10_INPUT_PER_VERTEX_DATA, 0}
};
UINT numElements = (sizeof(layout)/sizeof(layout[0]));
modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos);
for(int i = 0; i < modelObject.numVertices; i += 3)
{
    D3DXVECTOR3 out;
    D3DXVECTOR3 v1 = vertices[0 + i].pos;
    D3DXVECTOR3 v2 = vertices[1 + i].pos;
    D3DXVECTOR3 v3 = vertices[2 + i].pos;
    D3DXVECTOR3 u = v2 - v1;
    D3DXVECTOR3 v = v3 - v1;
    D3DXVec3Cross(&out, &u, &v);
    D3DXVec3Normalize(&out, &out);
    vertices[0 + i].normal = out;
    vertices[1 + i].normal = out;
    vertices[2 + i].normal = out;
}
//Create buffer desc
D3D10_BUFFER_DESC bufferDesc;
bufferDesc.Usage = D3D10_USAGE_DEFAULT;
bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices;
bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = 0;
D3D10_SUBRESOURCE_DATA initData;
initData.pSysMem = vertices;
//Create the buffer
HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer);
if(FAILED(hr))
    return false;
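Both symptoms are consistent with the input layout describing different bytes than VertexPos actually contains: the layout declares NORMAL as a four-float value at offset 28 and TEXCOORD at offset 44 (a 52-byte stride), while the initializers supply a 12-byte position, a 16-byte color and an 8-byte UV, and the normal member is a D3DXVECTOR3. A sketch of a struct/layout pair that agree, assuming this member order (with the struct reordered like this, the aggregate initializers would need a normal placeholder before the UV, e.g. { pos, color, D3DXVECTOR3(0,0,0), uv }, since the normals are overwritten by the loop anyway):

// Hypothetical layout; the member order and the offsets in the
// D3D10_INPUT_ELEMENT_DESC array must describe exactly the same bytes.
struct VertexPos
{
    D3DXVECTOR3 pos;     // offset 0,  12 bytes
    D3DXVECTOR4 color;   // offset 12, 16 bytes
    D3DXVECTOR3 normal;  // offset 28, 12 bytes
    D3DXVECTOR2 uv;      // offset 40, 8 bytes -> 48-byte stride
};

D3D10_INPUT_ELEMENT_DESC layout[] =
{
    {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0},
    {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0},
    {"NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 28, D3D10_INPUT_PER_VERTEX_DATA, 0},
    {"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 40, D3D10_INPUT_PER_VERTEX_DATA, 0},
};

A layout whose declared stride is wider than the real struct also explains the DEVICE_DRAW_VERTEX_BUFFER_TOO_SMALL warning: the device expects more bytes per vertex than the buffer holds.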

    Read the article

  • Many-to-one relation exception due to closed session after loading

    - by Nick Thissen
Hi, I am using NHibernate (version 1.2.1) for the first time, so I wrote a simple test application (an ASP.NET project) that uses it. In my database I have two tables: Persons and Categories. Each person gets one category; seems easy enough.

| Persons      |    | Categories   |
|--------------|    |--------------|
| Id (PK)      |    | Id (PK)      |
| Firstname    |    | CategoryName |
| Lastname     |    | CreatedTime  |
| CategoryId   |    | UpdatedTime  |
| CreatedTime  |    | Deleted      |
| UpdatedTime  |
| Deleted      |

The Id, CreatedTime, UpdatedTime and Deleted attributes are a convention I use in all my tables, so I have tried to bring this fact into an additional abstraction layer. I have a project DatabaseFramework which has three important classes:

Entity: an abstract class that defines these four properties. All 'entity objects' (in this case Person and Category) must inherit Entity.
IEntityManager: a generic interface (type parameter as Entity) that defines methods like Load, Insert, Update, etc.
NHibernateEntityManager: an implementation of this interface using NHibernate to do the loading, saving, etc.

Now, the Person and Category classes are straightforward; they just define the attributes of the tables of course (keeping in mind that four of them are in the base Entity class). Since the Persons table is related to the Categories table via the CategoryId attribute, the Person class has a Category property that holds the related category. However, in my webpage I will also need the name of this category (CategoryName), for databinding purposes for example. So I created an additional property CategoryName that returns the CategoryName property of the current Category property, or an empty string if the Category is null:

Namespace Database
    Public Class Person
        Inherits DatabaseFramework.Entity

        Public Overridable Property Firstname As String
        Public Overridable Property Lastname As String
        Public Overridable Property Category As Category

        Public Overridable ReadOnly Property CategoryName As String
            Get
                Return If(Me.Category Is Nothing, _
                          String.Empty, _
                          Me.Category.CategoryName)
            End Get
        End Property
    End Class
End Namespace

I am mapping the Person class using this mapping file. The many-to-one relation was suggested by Yads in another thread:

<id name="Id" column="Id" type="int" unsaved-value="0">
    <generator class="identity" />
</id>
<property name="CreatedTime" type="DateTime" not-null="true" />
<property name="UpdatedTime" type="DateTime" not-null="true" />
<property name="Deleted" type="Boolean" not-null="true" />
<property name="Firstname" type="String" />
<property name="Lastname" type="String" />
<many-to-one name="Category" column="CategoryId" class="NHibernateWebTest.Database.Category, NHibernateWebTest" />

(I can't get it to show the root node, this forum hides it, I don't know how to escape the html-like tags...) The final important detail is the Load method of the NHibernateEntityManager implementation. (This is in C# as it's in a different project, sorry about that.) I simply open a new ISession (ISessionFactory.OpenSession) in the GetSession method and then use that to fill an EntityCollection(Of TEntity), which is just a collection inheriting System.Collections.ObjectModel.Collection(Of T).
public virtual EntityCollection<TEntity> Load()
{
    using (ISession session = this.GetSession())
    {
        var entities = session
            .CreateCriteria(typeof(TEntity))
            .Add(Expression.Eq("Deleted", false))
            .List<TEntity>();
        return new EntityCollection<TEntity>(entities);
    }
}

Now, the idea of this Load method is that I get a fully functional collection of Persons, all their properties set to the correct values (including the Category property, and thus the CategoryName property should return the correct name). However, it seems that is not the case. When I try to data-bind the result of this Load method to a GridView in ASP.NET, it tells me this:

Property accessor 'CategoryName' on object 'NHibernateWebTest.Database.Person' threw the following exception: 'Could not initialize proxy - the owning Session was closed.'

The exception occurs on the DataBind method call here:

public virtual void LoadGrid()
{
    if (this.Grid == null) return;
    this.Grid.DataSource = this.Manager.Load();
    this.Grid.DataBind();
}

Well, of course the session is closed; I closed it via the using block. Isn't that the correct approach? Should I keep the session open? And for how long? Can I close it after the DataBind method has been run? In each case, I'd really like my Load method to just return a functional collection of items. It seems to me that it is now only getting the Category when it is required (e.g., when the GridView wants to read the CategoryName, which wants to read the Category property), but at that time the session is closed. Is that reasoning correct? How do I stop this behavior? Or shouldn't I? And what should I do otherwise? Thanks!
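The reasoning at the end is right: the many-to-one is mapped lazily, so Category is a proxy, and by the time the GridView touches CategoryName the session that created the proxy is gone. One common fix in NHibernate 1.2 is to fetch the association eagerly inside Load, so entities come back fully populated before the session closes. A sketch, hard-coding the "Category" association path for illustration (a generic manager would need to take such paths as configuration):

public virtual EntityCollection<TEntity> Load()
{
    using (ISession session = this.GetSession())
    {
        var entities = session
            .CreateCriteria(typeof(TEntity))
            .Add(Expression.Eq("Deleted", false))
            // Join-fetch the association so no lazy proxy outlives
            // the session that produced it.
            .SetFetchMode("Category", FetchMode.Eager)
            .List<TEntity>();
        return new EntityCollection<TEntity>(entities);
    }
}

Alternatively, the mapping itself can request the join (fetch="join" on the many-to-one), or the Category class can be mapped lazy="false", at the cost of eager loading everywhere.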

    Read the article

  • Windows Phone 7: Making ListBox items change dynamically

    - by Chad La Guardia
I am working on creating a Windows Phone app that will play a series of sound clips selected from a list. I am using the MVVM (Model View View-Model) design pattern and have designed a model for my data, along with a view model for my page. Here is what the XAML for the ListBox looks like:

<ListBox x:Name="MediaListBox" Margin="0,0,-12,0" ItemsSource="{Binding Media}" SelectionChanged="MediaListBox_SelectionChanged" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <StackPanel Margin="0,0,0,17" Width="432" Orientation="Horizontal">
                <Image Source="../Media/Images/play.png" />
                <StackPanel>
                    <TextBlock Text="{Binding Title}" TextWrapping="Wrap" Style="{StaticResource PhoneTextExtraLargeStyle}"/>
                    <TextBlock Text="{Binding ShortDescription}" TextWrapping="Wrap" Margin="12,-6,12,0" Visibility="{Binding ShortDescriptionVisibility}" Style="{StaticResource PhoneTextSubtleStyle}"/>
                    <TextBlock Text="{Binding LongDescription}" TextWrapping="Wrap" Visibility="{Binding LongDescriptionVisibility}" />
                    <StackPanel>
                        <Slider HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" Visibility="{Binding LongDescriptionVisibility}" ValueChanged="Slider_ValueChanged" LargeChange="0.25" SmallChange="0.05" />
                    </StackPanel>
                </StackPanel>
            </StackPanel>
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>

My question is this: I want to be able to expand and collapse part of the items in the ListBox. As you can see, I have a binding for the visibility. That binding is coming from the MediaModel. However, when I change this property in the ObservableCollection, the page is not updated to reflect this. The ViewModel for this page looks like this:

public class ListenPageViewModel : INotifyPropertyChanged
{
    public ListenPageViewModel()
    {
        this.Media = new ObservableCollection<MediaModel>();
    }

    /// <summary>
    /// A collection for MediaModel objects.
    /// </summary>
    public ObservableCollection<MediaModel> Media { get; private set; }

    public bool IsDataLoaded { get; private set; }

    /// <summary>
    /// Creates and adds the media to their respective collections.
    /// </summary>
    public void LoadData()
    {
        this.Media.Clear();
        this.Media.Add(new MediaModel()
        {
            Title = "Media 1",
            ShortDescription = "Short here.",
            LongDescription = "Long here.",
            MediaSource = "/Media/test.mp3",
            LongDescriptionVisibility = Visibility.Collapsed,
            ShortDescriptionVisibility = Visibility.Visible
        });
        this.Media.Add(new MediaModel()
        {
            Title = "Media 2",
            ShortDescription = "Short here.",
            LongDescription = "Long here.",
            MediaSource = "/Media/test2.mp3",
            LongDescriptionVisibility = Visibility.Collapsed,
            ShortDescriptionVisibility = Visibility.Visible
        });
        this.IsDataLoaded = true;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void NotifyPropertyChanged(String propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (null != handler)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

The bindings work correctly and I am seeing the data displayed; however, when I change the properties, the list does not update. I believe this may be because when I change things inside the observable collection, the property changed event is not firing. What can I do to remedy this? I have poked around for some info on this, but many of the tutorials don't cover this kind of behavior. Any help would be greatly appreciated!
Thanks.

Edit: As requested, I have added the MediaModel code:

public class MediaModel : INotifyPropertyChanged
{
    public string Title { get; set; }
    public string ShortDescription { get; set; }
    public string LongDescription { get; set; }
    public string MediaSource { get; set; }
    public Visibility LongDescriptionVisibility { get; set; }
    public Visibility ShortDescriptionVisibility { get; set; }

    public MediaModel() { }

    public MediaModel(string Title, string ShortDescription, string LongDescription,
                      string MediaSource, Visibility LongDescriptionVisibility,
                      Visibility ShortDescriptionVisibility)
    {
        this.Title = Title;
        this.ShortDescription = ShortDescription;
        this.LongDescription = LongDescription;
        this.MediaSource = MediaSource;
        this.LongDescriptionVisibility = LongDescriptionVisibility;
        this.ShortDescriptionVisibility = ShortDescriptionVisibility;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void NotifyPropertyChanged(String propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (null != handler)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

Originally, I did not have this class implement INotifyPropertyChanged; I did that to see if it would solve the problem. I was hoping this could just be a plain data object.
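ObservableCollection only raises change notifications when items are added, removed or replaced; a change inside an item is the item's own job to announce. MediaModel implements INotifyPropertyChanged now, but its auto-properties never call NotifyPropertyChanged, so the bindings are never told to re-read. A sketch of the pattern for one property (the same expansion applies to each property the XAML binds to):

private Visibility _longDescriptionVisibility;
public Visibility LongDescriptionVisibility
{
    get { return _longDescriptionVisibility; }
    set
    {
        if (_longDescriptionVisibility != value)
        {
            _longDescriptionVisibility = value;
            // This call is what makes the bound TextBlock/Slider update.
            NotifyPropertyChanged("LongDescriptionVisibility");
        }
    }
}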

    Read the article

  • high tweet status IDs causing failed to open stream errors?

    - by escarp
    Erg. Starting in the past few days high tweet IDs (at least, it appears it's ID related, but I suppose it could be some recent change in the api returns) are breaking my code. At first I tried passing the ID as a string instead of an integer to this function, and I thought this worked, but in reality it was just the process of uploading the file from my end. In short, a php script generates these function calls, and when it does so, they fail. If I download the php file the call is generated into, delete the server copy and re-upload the exact same file without changing it, it works fine. Does anyone know what could be causing this behavior? Below is what I suspect to be the most important part of the individual files that are pulling the errors. Each of the files is named for a status ID (e.g. the below file is named 12058543656.php) <?php require "singlePost.php"; SinglePost(12058543656) ?> Here's the code that writes the above files: $postFileName = $single_post_id.".php"; if(!file_exists($postFileName)){ $created_at_full = date("l, F jS, Y", strtotime($postRow[postdate])-(18000)); $postFileHandle = fopen($postFileName, 'w+'); fwrite($postFileHandle, '<html> <head> <title><?php $thisTITLE = "escarp | A brief poem or short story by '.$authorname.' on '.$created_at_full.'"; echo $thisTITLE;?></title><META NAME="Description" CONTENT="This brief poem or short story, by '.$authorname.', was published on '.$created_at_full.'"> <?php include("head.php");?> To receive other poems or short stories like this one from <a href=http://twitter.com/escarp>escarp</a> on your cellphone, <a href=http://twitter.com/signup>create</a> and/or <a href=http://twitter.com/devices>associate</a> a Twitter account with your cellphone</a>, follow <a href=http://twitter.com/escarp>us</a>, and turn device updates on. <pre><?php require "singlePost.php"; SinglePost("'.$single_post_id.'") ?> </div></div></pre><?php include("foot.php");?> </body> </html>'); fclose($postFileHandle);} $postcounter++; } I can post more if you don't see anything here, but there are several files involved and I'm trying to avoid dumping tons of irrelevant code. Error: Warning: include(head.php) [function.include]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 4 Warning: include(head.php) [function.include]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 4 Warning: include() [function.include]: Failed opening 'head.php' for inclusion (include_path='.:/nfsn/apps/php5/lib/php/:/nfsn/apps/php/lib/php/') in /f2/escarp/public/12177797583.php on line 4 To receive other poems or short stories like this one from escarp on your cellphone, create and/or associate a Twitter account with your cellphone, follow us, and turn device updates on. 
Warning: require(singlePost.php) [function.require]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 7 Warning: require(singlePost.php) [function.require]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 7 Fatal error: require() [function.require]: Failed opening required 'singlePost.php' (include_path='.:/nfsn/apps/php5/lib/php/:/nfsn/apps/php/lib/php/') in /f2/escarp/public/12177797583.php on line 7 <?php function SinglePost($statusID) { require "nicetime.php"; $db = sqlite_open("db.escarp"); $updates = sqlite_query($db, "SELECT * FROM posts WHERE postID = '$statusID'"); $row = sqlite_fetch_array($updates, SQLITE_ASSOC); $id = $row[authorID]; $result = sqlite_query($db, "SELECT * FROM authors WHERE authorID = '$id'"); $row5 = sqlite_fetch_array($result, SQLITE_ASSOC); $created_at_full = date("l, F jS, Y", strtotime($row[postdate])-(18000)); $created_at = nicetime($row[postdate]); if($row5[url]==""){ $authorurl = ''; } else{ /*I'm omitting a few pages of output code and associated regex*/ return; } ?>
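One likely ingredient here: on a 32-bit PHP build, any integer literal above 2 147 483 647 silently overflows to a float, so a status ID like 12058543656 cannot survive as an int - which matches the observation that passing the ID as a string behaved differently. Keeping tweet IDs as strings end to end sidesteps this. A sketch under that assumption:

<?php
// 12058543656.php - generated file; the ID stays a quoted string so a
// 32-bit integer can never truncate it.
require "singlePost.php";
SinglePost("12058543656");

// ...and inside SinglePost(), compare the ID as text, never casting to int:
// $updates = sqlite_query($db,
//     "SELECT * FROM posts WHERE postID = '"
//     . sqlite_escape_string($statusID) . "'");
?>

The failed include("head.php") / require("singlePost.php") lines in the error log look like a separate issue: those names are resolved against the include_path and working directory, so they fail whenever a generated file runs from somewhere that cannot see its sibling files.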

    Read the article

  • Apache2 mod_proxy to remote Tomcat7 - slow response

    - by 12N
Been stuck with this one for a few days. Will try to provide as much information as possible, but please feel free to ask for extra detail. I have 2 VMs behind a NAT, 192.168.0.100 and 192.168.0.102, both running Ubuntu 11.04 x64. The first one is mapped to the exterior and is our webserver; it has one Apache/2.2.17 install with several vhosts to serve static content, and there's also mod_jk for load balancing. The second one has a Tomcat 7 install with several J2EE REST webservices but no Apache - requests are expected to be passed directly from the .100 Apache to the .102 Tomcat. It is my intention to prepare a Tomcat clustered environment. My problem: requests reach 192.168.0.100 with no trouble whatsoever, but then take about... 100 seconds for the data to actually arrive at .102 - by that time Apache has already timed out, but Tomcat receives and processes the request pretty normally. This happens when using mod_jk, mod_proxy, or mod_proxy_ajp. No idea why, since there are no firewalls on either of the machines and both are pingable - more than that, there are NFS shares active and working like a charm - and a mod_proxy experiment showed that requests originating directly from .100 are processed normally. Also, to add insult to injury, a similar environment is set up on our office network. Everything works perfectly. -_- The only difference? We have no IP translation at the office and do everything by internal addresses - dunno if that's relevant in any way. Some configs:

Apache vhost:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/
    ServerName www.example.com
    ProxyRequests Off
    <Proxy *>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride all
        Order allow,deny
        allow from all
    </Proxy>
    ProxyPass /bork http://192.168.0.102:8080/bork
    ProxyPassReverse /bork http://192.168.0.102:8080/bork
    LogLevel debug
    CustomLog ${APACHE_LOG_DIR}/api_access.log combined
    ErrorLog ${APACHE_LOG_DIR}/api_error.log
</VirtualHost>

Tomcat connectors:

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
<Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />

And a debug log from Apache, from a test using mod_proxy_ajp. The behavior is pretty much the same in mod_proxy, at least regarding the delay.
Please note that tomcat eventually receives and processes the request, more or less when the log starts being updated again: [Sun May 06 14:40:33 2012] [debug] proxy_util.c(1506): [client 188.81.234.2] proxy: ajp: found worker ajp://192.168.0.102:8008/bork for ajp://192.168.0.102:8008/bork/SSOIdentityProviderSoap [Sun May 06 14:40:33 2012] [debug] mod_proxy.c(1015): Running scheme ajp handler (attempt 0) [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(661): proxy: AJP: serving URL ajp://192.168.0.102:8008/bork/SSOIdentityProviderSoap [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2011): proxy: AJP: has acquired connection for (192.168.0.102) [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2067): proxy: connecting ajp://192.168.0.102:8008/bork/SSOIdentityProviderSoap to 192.168.0.102:8008 [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2193): proxy: connected /bork/SSOIdentityProviderSoap to 192.168.0.102:8008 [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2444): proxy: AJP: fam 2 socket created to connect to 192.168.0.102 [Sun May 06 14:40:33 2012] [debug] ajp_header.c(224): Into ajp_marshal_into_msgb [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[0] [Accept-Encoding] = [gzip,deflate] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[1] [Content-Type] = [text/xml;charset=UTF-8] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[2] [SOAPAction] = [""] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[3] [User-Agent] = [Jakarta Commons-HttpClient/3.1] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[4] [Host] = [www.example.com] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[5] [Content-Length] = [520] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(450): ajp_marshal_into_msgb: Done [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(267): proxy: APR_BUCKET_IS_EOS [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(272): proxy: data to read (max 8186 at 4) [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(287): proxy: got 520 bytes of data [Sun May 06 14:40:33 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 06 [Sun May 06 14:40:33 2012] [debug] ajp_header.c(697): ajp_parse_type: got 06 [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 2 in child 5916 for worker ajp://192.168.0.100:8008/coding [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.100:8008/coding already initialized [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 2 in child 5916 for (192.168.0.100) [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 3 in child 5916 for worker http://192.168.0.102:8080 [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1837): proxy: worker http://192.168.0.102:8080 already initialized [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 3 in child 5916 for (192.168.0.102) [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 4 in child 5916 for worker ajp://192.168.0.102:8008/bork [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.102:8008/bork already initialized [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 4 in child 5916 for (192.168.0.102) [Sun May 06 
14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 2 in child 5918 for (192.168.0.100) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 3 in child 5918 for worker http://192.168.0.102:8080 [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker http://192.168.0.102:8080 already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 3 in child 5918 for (192.168.0.102) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 4 in child 5918 for worker ajp://192.168.0.102:8008/bork [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.102:8008/bork already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 4 in child 5918 for (192.168.0.102) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 2 in child 5917 for worker ajp://192.168.0.100:8008/coding [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.100:8008/coding already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 2 in child 5917 for (192.168.0.100) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 3 in child 5917 for worker http://192.168.0.102:8080 [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker http://192.168.0.102:8080 already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 3 in child 5917 for (192.168.0.102) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 4 in child 5917 for worker ajp://192.168.0.102:8008/bork [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.102:8008/bork already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 4 in child 5917 for (192.168.0.102) [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 04 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 04 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(516): ajp_unmarshal_response: status = 200 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(537): ajp_unmarshal_response: Number of headers is = 1 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(599): ajp_unmarshal_response: Header[0] [Content-Type] = [text/xml;charset=utf-8] [Sun May 06 14:42:09 2012] [debug] ajp_header.c(609): ajp_unmarshal_response: ap_set_content_type done [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 05 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 05 [Sun May 06 14:42:09 2012] [debug] mod_deflate.c(615): [client 188.81.234.2] Zlib: Compressed 447 to 255 : URL /bork/SSOIdentityProviderSoap [Sun May 06 14:42:09 2012] [debug] mod_proxy_ajp.c(570): proxy: got response from (null) (192.168.0.102) [Sun May 06 14:42:09 2012] [debug] proxy_util.c(2029): proxy: AJP: has released connection for 
(192.168.0.102) [Sun May 06 14:42:09 2012] [info] [client 188.81.234.2] Request body read timeout

Was wondering if anyone could provide some advice, perhaps even point out some hideous, horrible configuration error? Thanks in advance!
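Since the AJP log shows the stall sitting between sending the request and reading the response header, it may be worth first ruling Apache in or out by timing a direct request from the .100 box to Tomcat, bypassing the proxy modules entirely; a sketch (the URL and payload file are placeholders):

# Run on 192.168.0.100. If this also takes ~100 s, the delay lives in the
# network path between the VMs rather than in mod_jk/mod_proxy; MTU and
# fragmentation problems behind NAT are classic causes of exactly this
# "small requests fine, larger POST bodies stall" pattern.
time curl -s -o /dev/null -w '%{time_total}\n' \
     -H 'Content-Type: text/xml;charset=UTF-8' \
     --data @request.xml \
     http://192.168.0.102:8080/bork/SSOIdentityProviderSoap

A quick MTU probe under the same suspicion: ping -M do -s 1472 192.168.0.102 from the .100 host; if that reports "message too long" while smaller sizes pass, fragmentation is the lead to chase.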

    Read the article

  • How to get Passive FTP Working Through an Iptables Firewall?

    - by user1133248
I have an iptables firewall running on a Fedora Linux server that is basically being used as a firewall router and OpenVPN server. That's it. We have been using the same iptables firewall code for YEARS. I did make some changes on 21 December to re-route a MySQL port, but given what has happened I've completely backed those changes out. Sometime after those changes were made and backed out, passive FTP, served from a vsftpd process, stopped working. We use a passive FTP client to FLING (that's the name of the FTP client running under Windows! :-) ) images from our remote telescopes to our server. I believe it is something in the firewall code, because I can drop the firewall and the FTP file transfer (and connecting to the FTP site with Internet Explorer to see the file list) works. When I raise the iptables firewall, it stops working. Again, this is code that we'd been using for years. However, I felt that maybe there was something I missed, so we had a .bak file from 2009 that I used. Same behavior: passive FTP does not work. So, I went and rebuilt the firewall code line by line to see what line was causing the problem. Everything worked until I put the line -A FORWARD -j DROP in very near the end. Of course, if I am correct, this is the line that basically "turns on" the firewall, saying drop everything except for the exceptions I've made above. However, this line has been in the iptables code probably since 2003. So, I'm at the end of my rope, and I still can't figure out why this has stopped working. I guess I need an expert on iptables configuration. Here is the iptables code (from iptables-save) with comments.

# Generated by iptables-save v1.3.8 on Thu Jan 5 18:36:25 2012
*nat
# One of the things that I remain ignorant about is what these following three lines
# do in both the nat tables (which we're not using on this machine) and the following
# filter table. I don't know what the numbers are, but I'm ASSUMING they're port
# ranges.
:PREROUTING ACCEPT [7435:551429]
:POSTROUTING ACCEPT [6097:354458]
:OUTPUT ACCEPT [5:451]
COMMIT
# Completed on Thu Jan 5 18:36:25 2012
# Generated by iptables-save v1.3.8 on Thu Jan 5 18:36:25 2012
*filter
:INPUT ACCEPT [10423:1046501]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [15184:16948770]
# The following line is for my OpenVPN configuration.
-A INPUT -i tun+ -j ACCEPT
# In researching this on the Internet I found some iptables code that was supposed to
# open the needed ports up. I never needed this before this week, but since passive FTP
# was no longer working, I decided to put the code in. The next three lines are part of
# that code.
-A INPUT -p tcp -m tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 20 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Another line for the OpenVPN configuration. I don't know why iptables-save mixed
# the lines up.
-A FORWARD -i tun+ -j ACCEPT
# Various forwards for all our services
-A FORWARD -s 65.118.148.197 -p tcp -m tcp --dport 3307 -j ACCEPT
-A FORWARD -d 65.118.148.197 -p tcp -m tcp --dport 3307 -j ACCEPT
-A FORWARD -s 65.118.148.197 -p tcp -m tcp --dport 3306 -j ACCEPT
-A FORWARD -d 65.118.148.197 -p tcp -m tcp --dport 3306 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 21 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 21 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 20 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 20 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 7191 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 7191 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 46000:46999 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 46000:46999 -j ACCEPT
-A FORWARD -s 65.118.148.0/255.255.255.0 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p udp -m udp --dport 53 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p udp -m udp --dport 53 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 53 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 53 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p udp -m udp --dport 25 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p udp -m udp --dport 25 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 42 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 42 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 25 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 25 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -d 65.118.148.204 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -s 65.118.148.204 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 6667 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 6667 -j ACCEPT
-A FORWARD -s 65.96.214.242 -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -s 192.68.148.66 -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# "The line" that causes passive ftp to stop working. Insofar as I can tell, everything
# else seems to work - ssh, telnet, mysql, httpd.
-A FORWARD -j DROP
-A FORWARD -p icmp -j ACCEPT
# The following code is again part of my attempt to put in code that would cause passive
# ftp to work. I don't know why iptables-save scattered it about like this.
-A OUTPUT -p tcp -m tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 20 --dport 1024:65535 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Jan 5 18:36:25 2012

So, with all that prelude, my basic question is: how can I get passive FTP to work behind an iptables firewall? As you can see, I've tried to get it working (again) and tried to do some research on the issue, but have come up...short. Any answers would be appreciated by both me and various variable star astronomers around the world! THANKS! -Richard "Doc" Kinne, American Assoc. of Variable Star Observers, [email protected]
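One thing this ruleset depends on but the post never mentions: the RELATED state only matches the dynamically negotiated passive-FTP data connections if the kernel's FTP connection-tracking helper is loaded, and a kernel or distro update can quietly change the module name or stop loading it at boot - which would make a years-old ruleset stop passing passive FTP without any rule changing. A sketch of the check and fix (the module name depends on kernel vintage):

# Older 2.6 kernels (ip_conntrack era):
modprobe ip_conntrack_ftp

# Newer kernels (nf_conntrack era):
modprobe nf_conntrack_ftp

# Confirm the helper is actually present:
lsmod | grep conntrack_ftp

# With the helper loaded, this existing rule is what admits the passive
# data connections before the final -A FORWARD -j DROP:
#   -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT

On Fedora the module can be made persistent via IPTABLES_MODULES in /etc/sysconfig/iptables-config, assuming the stock iptables init script is in use.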

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server, also running Windows Server 2003, that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). Five of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes.

    Observations:
    - When the server crashes, it does not blue screen; it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen.
    - There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown event the next morning, when I have to hard-reset the server to get it to boot.
    - 90% of the time the server does not boot cleanly; it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs, all I can do is hard-reset it and try again.
    - Even after a successful boot and a chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't boot up again cleanly.

    The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up, and switched all my users to the temporary server. Then I reloaded the ailing file server.

    I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server.

    I tested the reloaded file server extensively. I rebooted the server once an hour, every day, for 3 weeks, trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure-to-boot-cleanly behavior.

    This weekend I decided to monitor the file server through the entire backup job. I RDPed into the file server and also into the server running Backup Exec. On the file server I opened Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60 GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was still getting real-time updates about CPU and memory usage - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: the screen was refreshing and I was getting real-time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. In truth, I think the server had already locked up; the video card just hadn't figured it out yet.

    I went back into my bag of tricks: driving to the office and hard-resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD, and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again I will be back at square one.

    So let me sum things up. Here is what I've done so far to troubleshoot this server:
    - Deleted and recreated the RAID 5 sets. Initialized the drives.
    - Reloaded the server with a fresh Server 2003 install.
    - Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers.
    - Uninstalled / reinstalled the Backup Exec Remote Agent.
    - Uninstalled the Trend Micro A/V client.
    - Configured the server not to reboot itself after a blue screen so I can see any stop error (see the sketch just after this question). I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
    - Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.

    Help confirm or deny the following assumptions:
    - There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    - This is ultimately a software problem. The server works fine, and can be rebooted cleanly all day long, until the first lockup following a fresh OS load or even a repair installation.
    - This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

    Any help is appreciated. The irony is almost too much to bear: backing up my data is what is jeopardizing it.
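
    For reference, the "don't reboot after a blue screen" setting mentioned in the list above lives under System Properties > Advanced > Startup and Recovery on Server 2003, and can also be set from the command line. A short sketch using the standard wmic and reg tools (treat the exact invocation as an assumption to verify rather than gospel):

rem keep the STOP code on screen instead of restarting after a system failure
wmic recoveros set AutoReboot = False
rem the equivalent registry value
reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v AutoReboot /t REG_DWORD /d 0 /f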

    Read the article

  • solved: puppet master REST API returns 403 when running under Passenger; works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided with the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

### Authenticated paths - these apply only when the client
### has a valid certificate and is thus authenticated

# allow nodes to retrieve their own catalog
path ~ ^/catalog/([^/]+)$
method find
allow $1

# allow nodes to retrieve their own node definition
path ~ ^/node/([^/]+)$
method find
allow $1

# allow all nodes to access the certificates services
path ~ ^/certificate_revocation_list/ca
method find
allow *

# allow all nodes to store their reports
path /report
method save
allow *

# unconditionally allow access to all file services
# which means in practice that fileserver.conf will
# still be used
path /file
allow *

### Unauthenticated ACL, for clients for which the current master doesn't
### have a valid certificate; we allow authenticated users, too, because
### there isn't a great harm in letting that request through.

# allow access to the master CA
path /certificate/ca
auth any
method find
allow *

path /certificate/
auth any
method find
allow *

path /certificate_request
auth any
method find, save
allow *

path /facts
auth any
method find, search
allow *

# this one is not strictly necessary, but it has the merit
# of showing the default policy, which is deny everything else
path /
auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

[amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
[sudo] password for amisr1:
Starting Puppet client version 3.0.1
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
Info: Retrieving plugin
Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
Using cached catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server and then allow the file server to serve all):

[files]
path /apps/puppet/files
allow *

[private]
path /apps/puppet/private/%H
allow *

[modules]
allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

nginx version: nginx/1.3.9
built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
TLS SNI support enabled
configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard Nginx puppet master conf:

server {
    ssl on;
    listen 8140 ssl;
    server_name _;
    passenger_enabled on;
    passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
    passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
    passenger_min_instances 5;
    access_log logs/puppet_access.log;
    error_log logs/puppet_error.log;
    root /apps/nginx/html/rack/public;
    ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
    ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
    ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
    ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
    ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
    ssl_prefer_server_ciphers on;
    ssl_verify_client optional;
    ssl_verify_depth 1;
    ssl_session_cache shared:SSL:128m;
    ssl_session_timeout 5m;
}

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

[amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
async_storeconfigs = false
authconfig = /etc/puppet/namespaceauth.conf
autosign = /etc/puppet/autosign.conf
catalog_cache_terminus = store_configs
confdir = /etc/puppet
config = /etc/puppet/puppet.conf
config_file_name = puppet.conf
config_version = ""
configprint = all
configtimeout = 120
dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
deviceconfig = /etc/puppet/device.conf
fileserverconfig = /etc/puppet/fileserver.conf
genconfig = false
hiera_config = /etc/puppet/hiera.yaml
localconfig = /var/lib/puppet/state/localconfig
name = config
rest_authconfig = /etc/puppet/auth.conf
storeconfigs = true
storeconfigs_backend = puppetdb
tagmap = /etc/puppet/tagmail.conf
thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Update: I added verbose logging to the puppet master and restarted Nginx; here's the additional info I see in the logs:

Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

[amisr1@blramisr195602 ~]$ sudo facter fqdn
blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, regenerated the certificates, and signed them on the master using the option --allow-dns-alt-names:

[amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
[amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
Signed certificate request for blramisr195602.XXXXXX.com
Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. Not sure why the logs show access rules being compared by IP and not hostname. Is there any Nginx configuration to change this behavior?
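
    One observation worth adding, as a guess rather than a confirmed fix: the verbose log above ("defaulting to no access for 10.209.47.31") shows the master identifying the agent by IP address rather than by the certname from its certificate, so stanzas like "allow $1" can never match. If memory serves, in Puppet 3.x auth.conf matches IP addresses only via the allow_ip directive (plain allow stopped matching IPs in 3.0), so extending the final catch-all stanza like this can at least confirm the diagnosis:

# diagnostic only - permit this agent by IP in the existing catch-all stanza
# at the end of auth.conf; remove once the real cause is fixed
path /
auth any
allow_ip 10.209.47.31

    If that makes the 403s disappear, the real problem is that the client certificate identity is not reaching the master. The usual suspects under Nginx + Passenger are ssl_verify_client not actually verifying the client (check what $ssl_client_verify logs) and the HTTP_X_CLIENT_DN / HTTP_X_CLIENT_VERIFY params shown in the server block above not arriving at the rack application.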

    Read the article
