Search Results

Search found 6826 results on 274 pages for 'dedicated hosting'.

  • Core Plot never stops asking for data, hangs device

    - by Ben Collins
    I'm trying to set up a Core Plot graph that looks somewhat like the AAPLot example on the core-plot wiki. I have set up my plot like this:

        - (void)initGraph:(CPXYGraph*)graph forDays:(NSUInteger)numDays {
            self.cplhView.hostedLayer = graph;
            graph.paddingLeft = 30.0;
            graph.paddingTop = 20.0;
            graph.paddingRight = 30.0;
            graph.paddingBottom = 20.0;

            CPXYPlotSpace *plotSpace = (CPXYPlotSpace*)graph.defaultPlotSpace;
            plotSpace.xRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(numDays)];
            plotSpace.yRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(1)];

            CPLineStyle *lineStyle = [CPLineStyle lineStyle];
            lineStyle.lineColor = [CPColor blackColor];
            lineStyle.lineWidth = 2.0f;

            // Axes
            NSLog(@"Setting up axes");
            CPXYAxisSet *xyAxisSet = (id)graph.axisSet;
            CPXYAxis *xAxis = xyAxisSet.xAxis;
            // xAxis.majorIntervalLength = CPDecimalFromFloat(7);
            // xAxis.minorTicksPerInterval = 7;
            CPXYAxis *yAxis = xyAxisSet.yAxis;
            // yAxis.majorIntervalLength = CPDecimalFromFloat(0.1);

            // Line plot with gradient fill
            NSLog(@"Setting up line plot");
            CPScatterPlot *dataSourceLinePlot = [[[CPScatterPlot alloc] initWithFrame:graph.bounds] autorelease];
            dataSourceLinePlot.identifier = @"Data Source Plot";
            dataSourceLinePlot.dataLineStyle = nil;
            dataSourceLinePlot.dataSource = self;
            [graph addPlot:dataSourceLinePlot];

            CPColor *areaColor = [CPColor colorWithComponentRed:1.0 green:1.0 blue:1.0 alpha:0.6];
            CPGradient *areaGradient = [CPGradient gradientWithBeginningColor:areaColor endingColor:[CPColor clearColor]];
            areaGradient.angle = -90.0f;
            CPFill *areaGradientFill = [CPFill fillWithGradient:areaGradient];
            dataSourceLinePlot.areaFill = areaGradientFill;
            dataSourceLinePlot.areaBaseValue = CPDecimalFromString(@"320.0");

            // OHLC plot
            NSLog(@"OHLC Plot");
            CPLineStyle *whiteLineStyle = [CPLineStyle lineStyle];
            whiteLineStyle.lineColor = [CPColor whiteColor];
            whiteLineStyle.lineWidth = 1.0f;
            CPTradingRangePlot *ohlcPlot = [[[CPTradingRangePlot alloc] initWithFrame:graph.bounds] autorelease];
            ohlcPlot.identifier = @"OHLC";
            ohlcPlot.lineStyle = whiteLineStyle;
            ohlcPlot.stickLength = 2.0f;
            ohlcPlot.plotStyle = CPTradingRangePlotStyleOHLC;
            ohlcPlot.dataSource = self;
            NSLog(@"Data source set, adding plot");
            [graph addPlot:ohlcPlot];
        }

    And my data source methods like this:

        #pragma mark -
        #pragma mark CPPlotDataSource Methods

        - (NSUInteger)numberOfRecordsForPlot:(CPPlot *)plot {
            NSUInteger maxCount = 0;
            NSLog(@"Getting number of records.");
            if (self.data1 && [self.data1 count] > maxCount) { maxCount = [self.data1 count]; }
            if (self.data2 && [self.data2 count] > maxCount) { maxCount = [self.data2 count]; }
            if (self.data3 && [self.data3 count] > maxCount) { maxCount = [self.data3 count]; }
            NSLog(@"%u records", maxCount);
            return maxCount;
        }

        - (NSNumber *)numberForPlot:(CPPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index {
            NSLog(@"Getting record @ idx %u", index);
            return [NSNumber numberWithInt:index];
        }

    All the code above is in the view controller for the view hosting the plot, and when initGraph is called, numDays is 30. I realize, of course, that this plot, even if it worked, would look nothing like the AAPLot example. The problem I'm having is that the view is never shown. It does finish loading: viewDidLoad is the method that calls initGraph above, and the NSLog statements indicate that initGraph finishes. What's strange is that I return a value of 54 from numberOfRecordsForPlot, yet the plot asks for more than 54 data points. In fact, it never stops asking: the NSLog statement in numberForPlot:field:recordIndex: prints forever, going from 0 to 54 and then looping back around and continuing. What's going on? Why won't the plot stop asking for data and draw itself?

  • Clickonce installation fails after addition of WCF service project

    - by Ant
    So I have a WinForms solution, deployed via ClickOnce. Everything worked fine until I added a WCF project (see the error in parsing the manifest file at the end of this post). Now I notice that MSBuild compiles the service into a _PublishedWebsites directory. I don't know what the need for this is, but I suspect it is the cause of the problem. This WCF project references some other projects within the solution. I am actually hosting the WCF service within the application, so I don't really need MSBuild to do all this for me. Any ideas?

        =====================================================================================
        PLATFORM VERSION INFO
            Windows                 : 5.1.2600.131072 (Win32NT)
            Common Language Runtime : 2.0.50727.3603
            System.Deployment.dll   : 2.0.50727.3053 (netfxsp.050727-3000)
            mscorwks.dll            : 2.0.50727.3603 (GDR.050727-3600)
            dfdll.dll               : 2.0.50727.3053 (netfxsp.050727-3000)
            dfshim.dll              : 2.0.50727.3053 (netfxsp.050727-3000)

        SOURCES
            Deployment url : file:///C:/applications/abc/dev/abc.Application.application

        IDENTITIES
            Deployment Identity : Flow Management System.app, Version=1.4.0.0, Culture=neutral, PublicKeyToken=8453086392175e0f, processorArchitecture=msil

        APPLICATION SUMMARY
            * Installable application.
            * Trust url parameter is set.

        ERROR SUMMARY
            Below is a summary of the errors, details of these errors are listed later in the log.
            * Activation of C:\applications\abc\dev\abc.Application.application resulted in exception. Following failure messages were detected:
                + Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened.
                + Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed:
                    -HRESULT: 0x80070c81
                     Start line: 0
                     Start column: 0
                     Host file:
                + Exception from HRESULT: 0x80070C81

        COMPONENT STORE TRANSACTION FAILURE SUMMARY
            No transaction error was detected.

        WARNINGS
            There were no warnings during this operation.

        OPERATION PROGRESS STATUS
            * [12/03/2010 6:33:53 PM] : Activation of C:\applications\abc\dev\abc.Application.application has started.
            * [12/03/2010 6:33:53 PM] : Processing of deployment manifest has successfully completed.
            * [12/03/2010 6:33:53 PM] : Installation of the application has started.

        ERROR DETAILS
            Following errors were detected during this operation.
            * [12/03/2010 6:33:53 PM] System.Deployment.Application.InvalidDeploymentException (ManifestParse)
                - Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened.
                - Source: System.Deployment
                - Stack trace:
                    at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri)
                    at System.Deployment.Application.DownloadManager.DownloadManifest(Uri& sourceUri, String targetPath, IDownloadNotification notification, DownloadOptions options, ManifestType manifestType, ServerInformation& serverInformation)
                    at System.Deployment.Application.DownloadManager.DownloadApplicationManifest(AssemblyManifest deploymentManifest, String targetDir, Uri deploymentUri, IDownloadNotification notification, DownloadOptions options, Uri& appSourceUri, String& appManifestPath)
                    at System.Deployment.Application.ApplicationActivator.DownloadApplication(SubscriptionState subState, ActivationDescription actDesc, Int64 transactionId, TempDirectory& downloadTemp)
                    at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc)
                    at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
                    at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)
                --- Inner Exception ---
                System.Deployment.Application.InvalidDeploymentException (ManifestParse)
                - Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed:
                    -HRESULT: 0x80070c81
                     Start line: 0
                     Start column: 0
                     Host file:
                - Source: System.Deployment
                - Stack trace:
                    at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream)
                    at System.Deployment.Application.Manifest.AssemblyManifest..ctor(FileStream fileStream)
                    at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri)
                --- Inner Exception ---
                System.Runtime.InteropServices.COMException
                - Exception from HRESULT: 0x80070C81
                - Source: System.Deployment
                - Stack trace:
                    at System.Deployment.Internal.Isolation.IsolationInterop.CreateCMSFromXml(Byte[] buffer, UInt32 bufferSize, IManifestParseErrorCallback Callback, Guid& riid)
                    at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream)

        COMPONENT STORE TRANSACTION DETAILS
            No transaction information is available.

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: We're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong that we can look for? Thanks!

    We have an EF4 model which is exposed over WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display / edit, including a number of referenced child objects. For one particular entity we have to .Include() 31 tables / sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that.

    Is there anything we can do to improve this? We can CompiledQuery.Compile this - that doesn't do any work until first use, and so helps the second and subsequent executions, but our customer is nervous that the first usage shouldn't be slow either. Similarly, if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also, I can't see a way to precompile this ahead of time and / or to serialise out the EF compiled query cache (short of reflection tricks); the CompiledQuery object only contains a GUID reference into the cache, so it's the cache we really care about. (Writing this out, it occurs to me that I can kick off something in the background from app_startup to execute all queries to get them compiled - is that safe? A sketch of that idea follows below.)

    However, even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on. I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer, so I don't think we can pre-compile our search queries. This is less serious because the search data results use fewer tables, so it's only a 3-4 second compile rather than 12-15, but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow. Any ideas?

    Profiling points to ELinqQueryState.GetExecutionPlan as the place to start, and I have attempted to step into that, but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them.

    The project was upgraded from .NET 3.5, so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it, but that didn't help. I have tried the EFProf utility advertised here, but it doesn't look like it would help with this; my large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads, rather than all 32 tables at once, likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables, so I'm optimistic there's some scope for improvement!

    Thanks for any suggestions! We're running against SQL Server 2008, in case that matters, and XP / 7 / Server 2008 R2 using RTM VS2010.
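
    For illustration, here is a minimal sketch of that app_startup warm-up idea. The context and entity names (MyEntities, Order, Orders) are hypothetical stand-ins for the real model, and whether the resulting plan survives depends on the per-AppDomain EF query cache:

        using System;
        using System.Data.Objects; // EF4: ObjectContext, CompiledQuery
        using System.Linq;
        using System.Threading;

        public static class QueryWarmup
        {
            // CompiledQuery.Compile does no work until first invocation, so the
            // warm-up below invokes each compiled query once on a background
            // thread at startup, paying the plan-compilation cost early.
            public static readonly Func<MyEntities, int, IQueryable<Order>> OrderById =
                CompiledQuery.Compile((MyEntities ctx, int id) =>
                    ctx.Orders.Where(o => o.Id == id));

            public static void WarmUp()
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try
                    {
                        using (var ctx = new MyEntities())
                        {
                            // Enumerating forces compilation (plus one cheap SQL round trip).
                            OrderById(ctx, -1).ToList();
                        }
                    }
                    catch
                    {
                        // Warm-up is best-effort; a failure here just means the
                        // first real request pays the compilation cost instead.
                    }
                });
            }
        }

    Calling QueryWarmup.WarmUp() from Application_Start is generally safe in the sense asked about above, since the compiled-query cache is designed for concurrent access, though the warm-up thread does hold a database connection while it runs.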

  • Strange Recurrent Excessive I/O Wait

    - by Chris
    I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present, with massive spikes of up to 20k ms await and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda, for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare:

        iostat (Oct 6, 2013)

        tps     rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await     svctm    %util
        0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
        0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
        16.00   0.00      156.00    9.75      21.89     288.12    36.00    57.60
        5.50    0.00      44.00     8.00      48.79     2194.18   181.82   100.00
        2.00    0.00      16.00     8.00      46.49     3397.00   500.00   100.00
        4.50    0.00      40.00     8.89      43.73     5581.78   222.22   100.00
        14.50   0.00      148.00    10.21     13.76     5909.24   68.97    100.00
        1.50    0.00      12.00     8.00      8.57      7150.67   666.67   100.00
        0.50    0.00      4.00      8.00      6.31      10168.00  2000.00  100.00
        2.00    0.00      16.00     8.00      5.27      11001.00  500.00   100.00
        0.50    0.00      4.00      8.00      2.96      17080.00  2000.00  100.00
        34.00   0.00      1324.00   9.88      1.32      137.84    4.45     59.60
        0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
        22.00   44.00     204.00    11.27     0.01      0.27      0.27     0.60

    Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as OS, where uname -a reports the following:

        Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux

    The machine is a dedicated server that hosts an online game, without any databases or I/O-heavy applications running. The core application consumes about 0.8 of the 8 GBytes RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitively to I/O latency, and thus our players experience massive in-game lag, which we would like to address as soon as possible.

        iostat:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   1.77    0.01    1.05    1.59    0.00   95.58

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read    Blk_wrtn
        sdb              13.16        25.42       135.12   504701011  2682640656
        sda               1.52         0.74        20.63    14644533   409684488

    Uptime is:

        19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32

    Harddisk controller:

        01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)

    Harddisks:

        Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS
        Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS

    Partition information from df:

        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/sdb1            480191156  30715200 425083668   7% /home
        /dev/sda2              7692908    437436   6864692   6% /
        /dev/sda5             15377820   1398916  13197748  10% /usr
        /dev/sda6             39159724  19158340  18012140  52% /var

    Some more data samples, generated with iostat -dx sdb 1 (Oct 11, 2013):

        Device: rrqm/s wrqm/s   r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await   svctm  %util
        sdb       0.00  15.00  0.00  70.00    0.00  656.00     9.37     4.50    1.83    4.80  33.60
        sdb       0.00   0.00  0.00   2.00    0.00   16.00     8.00    12.00  836.00  500.00 100.00
        sdb       0.00   0.00  0.00   3.00    0.00   32.00    10.67     9.96 1990.67  333.33 100.00
        sdb       0.00   0.00  0.00   4.00    0.00   40.00    10.00     6.96 3075.00  250.00 100.00
        sdb       0.00   0.00  0.00   0.00    0.00    0.00     0.00     4.00    0.00    0.00 100.00
        sdb       0.00   0.00  0.00   2.00    0.00   16.00     8.00     2.62 4648.00  500.00 100.00
        sdb       0.00   0.00  0.00   0.00    0.00    0.00     0.00     2.00    0.00    0.00 100.00
        sdb       0.00   0.00  0.00   1.00    0.00   16.00    16.00     1.69 7024.00 1000.00 100.00
        sdb       0.00  74.00  0.00 124.00    0.00 1584.00    12.77     1.09   67.94    6.94  86.00

    Characteristic charts generated with rrdtool can be found here:

        iostat plot 1, 24 min interval: http://imageshack.us/photo/my-images/600/yqm3.png/
        iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/

    As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test whether the I/O wait spikes would perhaps be caused by cache miss events. Therefore, we did a sync and then this to flush the cache and buffers:

        echo 3 > /proc/sys/vm/drop_caches

    Directly afterwards the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before - small to medium lags at short, unpredictable intervals.

    Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the RAID controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris.

    Edited to add: we do see one or two processes go to 'D' state in top, one of which seems to be kjournald rather frequently. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it - correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem?

    @Andy Shinn requested smartctl data, here it is. smartctl -a -d megaraid,2 /dev/sdb yields:

        smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Device: SEAGATE ST3500620SS Version: MS05
        Serial number:
        Device type: disk
        Transport protocol: SAS
        Local Time is: Mon Oct 14 20:37:13 2013 CEST
        Device supports SMART and is Enabled
        Temperature Warning Disabled or Not Supported
        SMART Health Status: OK

        Current Drive Temperature: 20 C
        Drive Trip Temperature: 68 C
        Elements in grown defect list: 0

        Vendor (Seagate) cache information
          Blocks sent to initiator = 1236631092
          Blocks received from initiator = 1097862364
          Blocks read from cache and sent to initiator = 1383620256
          Number of read and write commands whose size <= segment size = 531295338
          Number of read and write commands whose size > segment size = 51986460

        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 36556.93
          number of minutes until next internal SMART test = 32

        Error counter log:
                     Errors Corrected by      Total   Correction   Gigabytes     Total
                        ECC       rereads/   errors    algorithm   processed     uncorrected
                fast | delayed    rewrites  corrected  invocations [10^9 bytes]  errors
        read:    509271032   47         0  509271079   509271079    20981.423    0
        write:           0    0         0          0           0     5022.039    0
        verify: 1870931090  196         0 1870931286  1870931286   100558.708    0

        Non-medium error count: 0

        SMART Self-test log
        Num  Test Description   Status     segment  LifeTime  LBA_first_err [SK ASC ASQ]
                                           number   (hours)
        # 1  Background short   Completed      16     36538   - [-   -    -]
        # 2  Background short   Completed      16     36514   - [-   -    -]
        # 3  Background short   Completed      16     36490   - [-   -    -]
        # 4  Background short   Completed      16     36466   - [-   -    -]
        # 5  Background short   Completed      16     36442   - [-   -    -]
        # 6  Background long    Completed      16     36420   - [-   -    -]
        # 7  Background short   Completed      16     36394   - [-   -    -]
        # 8  Background short   Completed      16     36370   - [-   -    -]
        # 9  Background long    Completed      16     36364   - [-   -    -]
        #10  Background short   Completed      16     36361   - [-   -    -]
        #11  Background long    Completed      16         2   - [-   -    -]
        #12  Background short   Completed      16         0   - [-   -    -]

        Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

    smartctl -a -d megaraid,3 /dev/sdb yields:

        smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Device: SEAGATE ST3500620SS Version: MS05
        Serial number:
        Device type: disk
        Transport protocol: SAS
        Local Time is: Mon Oct 14 20:37:26 2013 CEST
        Device supports SMART and is Enabled
        Temperature Warning Disabled or Not Supported
        SMART Health Status: OK

        Current Drive Temperature: 19 C
        Drive Trip Temperature: 68 C
        Elements in grown defect list: 0

        Vendor (Seagate) cache information
          Blocks sent to initiator = 288745640
          Blocks received from initiator = 1097848399
          Blocks read from cache and sent to initiator = 1304149705
          Number of read and write commands whose size <= segment size = 527414694
          Number of read and write commands whose size > segment size = 51986460

        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 36596.83
          number of minutes until next internal SMART test = 28

        Error counter log:
                     Errors Corrected by      Total   Correction   Gigabytes     Total
                        ECC       rereads/   errors    algorithm   processed     uncorrected
                fast | delayed    rewrites  corrected  invocations [10^9 bytes]  errors
        read:    610862490   44         0  610862534   610862534    20470.133    0
        write:           0    0         0          0           0     5022.480    0
        verify: 2861227413  203         0 2861227616  2861227616   100872.443    0

        Non-medium error count: 1

        SMART Self-test log
        Num  Test Description   Status     segment  LifeTime  LBA_first_err [SK ASC ASQ]
                                           number   (hours)
        # 1  Background short   Completed      16     36580   - [-   -    -]
        # 2  Background short   Completed      16     36556   - [-   -    -]
        # 3  Background short   Completed      16     36532   - [-   -    -]
        # 4  Background short   Completed      16     36508   - [-   -    -]
        # 5  Background short   Completed      16     36484   - [-   -    -]
        # 6  Background long    Completed      16     36462   - [-   -    -]
        # 7  Background short   Completed      16     36436   - [-   -    -]
        # 8  Background short   Completed      16     36412   - [-   -    -]
        # 9  Background long    Completed      16     36404   - [-   -    -]
        #10  Background short   Completed      16     36401   - [-   -    -]
        #11  Background long    Completed      16         2   - [-   -    -]
        #12  Background short   Completed      16         0   - [-   -    -]

        Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

  • Problem deploying servlets in JBoss 5.1 in Eclipse

    - by Jeremy Goodell
    Yesterday I created a simple image servlet and attempted to deploy it. I am getting an error on JBoss startup, and then further errors when trying to invoke the servlet. I spent about 8 hours yesterday searching the web for answers and trying different scenarios. I ended up making my JBoss problems worse and then fixing them, but I never did get the servlet to work. The servlet is com.controller.MyImageServlet, and looks like this:

        package com.controller;

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        /**
         * Servlet implementation class MyImageServlet
         */
        public class MyImageServlet extends HttpServlet {
            private static final long serialVersionUID = 1L;

            public MyImageServlet() {
                super();
            }

            // Process the HTTP Get request
            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                /* ... */
            }
        }

    The tags I've added to web.xml look like this:

        <servlet>
            <servlet-name>ImageServlet</servlet-name>
            <servlet-class>com.controller.MyImageServlet</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>ImageServlet</servlet-name>
            <url-pattern>/CERTIMAGE/*</url-pattern>
        </servlet-mapping>

    On server startup, this is the only indication in the log that something is amiss:

        01:59:25,328 WARN [JAXWSDeployerHookPreJSE] Cannot load servlet class: com.controller.MyImageServlet

    When I try the URL pattern (http://localhost:9980/CERTIMAGE/1), I get the following stack trace in the log (and in the browser):

        01:59:39,640 INFO [[/]] Marking servlet ImageServlet as unavailable
        01:59:39,640 ERROR [[ImageServlet]] Allocate exception for servlet ImageServlet
        java.lang.ClassNotFoundException: com.controller.MyImageServlet
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
            at org.jboss.web.tomcat.service.TomcatInjectionContainer.newInstance(TomcatInjectionContainer.java:262)
            at org.jboss.web.tomcat.service.TomcatInjectionContainer.newInstance(TomcatInjectionContainer.java:256)
            at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1006)
            at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:777)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:129)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190)
            at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
            at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
            at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Thread.java:619)

    If I try the URL pattern again, I get the following message in the log:

        09:33:32,390 INFO [[ImageServlet]] Servlet ImageServlet is currently unavailable

    I have verified that MyImageServlet.class is in the WAR at WEB-INF/classes/com/controller/images. As a matter of fact, I even added some code to one of my JSPs to attempt to instantiate the servlet and call the doGet method. This actually works, and outputs the correct debug sequences to the log, indicating that the servlet constructor and doGet methods were called. I also tried following some instructions for creating/deploying a very simple HelloWorld servlet, and that has exactly the same problem. Note that web.xml already contained a servlet put there by JBoss: org.jboss.web.tomcat.service.StatusServlet. That servlet does not give any errors in the log. As an experiment, I removed ".web" from that path and ended up getting exactly the same error as I'm getting on my servlet. So it would appear that JBoss is not able to locate my servlet given the specified path. Just for kicks, I've tried all sorts of other paths, like just plain MyImageServlet, controller.MyImageServlet, and more. Also, the servlet was originally named ImageServlet, but I attempted the name change thinking maybe there was some conflict with an existing ImageServlet. In all cases, the behavior is the same.

    After all of my research yesterday, I would say that this appears to be a problem with the JBoss servlet container, and I also learned that JBoss 5.1.0.GA should come bundled with a Tomcat servlet container. I installed JBoss on my PC (Windows XP) myself less than 2 months ago (from jboss.org) and used it pretty much as is. Note that I am running on JDK 1.6, so I did use the jboss-jdk6 installation version. I am running on Windows, but I also deploy to a Linux virtual dedicated server. I deployed the current version of my program, including the servlet, to the Linux box, but I get the exact same errors. I'm reluctant to just try reinstalling JBoss, since it's hard to place the blame on the JBoss installation when I get the same errors on two completely different installations. I am a bit suspicious of the bundled Tomcat servlet container, because using Eclipse I haven't been able to locate any indication that there is a Tomcat bundled into JBoss. I did locate servlet-api.jar in the JBoss common/lib directory; this is on the Eclipse build path. One possibly useful note: I had previously used a standalone Tomcat server for other projects using the same Eclipse, so maybe it's some sort of Eclipse issue? But, as I said, I do get the same errors when I deploy to the Linux server, and that deployment process just involves ftping files to the server, putting them into the deployed WAR package, and restarting JBoss. Thanks for any help you can provide.

  • WCF via SSL connectivity problems

    - by Brett Widmeier
    Hello, I am hosting a WCF service from inside a Windows service using WAS. When I set the service to listen on 127.0.0.1, I have connectivity from my local machine as well as from my network. However, when I set it to listen on my outbound interface on port 443, I can no longer even see the WSDL by connecting with a browser. Strangely, I can connect to the service by using telnet. The cert I am using was generated for my interface by a CA, and I have successfully used this exact cert with this service before. When checking the application log, I see that the service starts without error and is listening on the correct interface. From this information, it seems to me that the config file is in a valid state, but somehow misconfigured for what I want. I have, however, previously deployed this same setup on other sites using this config file. In case it is helpful, below is my config file. Any thoughts?

        <!--<system.diagnostics>
            <sources>
                <source name="System.ServiceModel" switchValue="Warning, ActivityTracing" propagateActivity="true">
                    <listeners>
                        <add type="System.Diagnostics.DefaultTraceListener" name="Default">
                            <filter type="" />
                        </add>
                        <add name="ServiceModelTraceListener">
                            <filter type="" />
                        </add>
                    </listeners>
                </source>
            </sources>
            <sharedListeners>
                <add initializeData="app_tracelog.svclog"
                     type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
                     name="ServiceModelTraceListener" traceOutputOptions="Timestamp">
                    <filter type="" />
                </add>
            </sharedListeners>
        </system.diagnostics>-->
        <appSettings/>
        <connectionStrings/>
        <system.serviceModel>
            <!--<diagnostics>
                <messageLogging logEntireMessage="true" logMalformedMessages="true"
                                logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true"
                                maxMessagesToLog="1000" maxSizeOfMessageToLog="524288"/>
            </diagnostics>-->
            <bindings>
                <basicHttpBinding>
                    <binding name="basicHttps">
                        <security mode="Transport">
                            <transport clientCredentialType="None"/>
                            <message />
                        </security>
                    </binding>
                </basicHttpBinding>
            </bindings>
            <services>
                <service behaviorConfiguration="ServiceBehavior" name="<fully qualified name of service>">
                    <endpoint address="" binding="basicHttpBinding" name="OrdersSoap"
                              contract="<fully qualified name of contract>"
                              bindingNamespace="http://emr.orders.com/WebServices"
                              bindingConfiguration="basicHttps" />
                    <endpoint binding="mexHttpsBinding" address="mex" contract="IMetadataExchange" />
                    <host>
                        <baseAddresses>
                            <add baseAddress="https://<external IP>/<name of service>/" />
                        </baseAddresses>
                    </host>
                </service>
            </services>
            <behaviors>
                <serviceBehaviors>
                    <behavior name="ServiceBehavior">
                        <serviceMetadata httpsGetEnabled="False"/>
                        <serviceDebug includeExceptionDetailInFaults="True" />
                        <dataContractSerializer maxItemsInObjectGraph="2147483646"/>
                    </behavior>
                </serviceBehaviors>
            </behaviors>
        </system.serviceModel>
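
    One way to separate a config-file problem from an HTTPS binding problem is to stand the same binding up in a tiny self-host and see whether the endpoint behaves differently. The following is only a sketch; OrdersService and IOrders are hypothetical placeholders for the real service and contract types:

        using System;
        using System.ServiceModel;

        class SelfHostCheck
        {
            static void Main()
            {
                // Same security stance as the "basicHttps" binding in the config above.
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;

                using (var host = new ServiceHost(typeof(OrdersService),
                           new Uri("https://localhost/orders/")))
                {
                    host.AddServiceEndpoint(typeof(IOrders), binding, "");
                    host.Open();
                    Console.WriteLine("Listening; press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }

    Note that HTTPS outside of IIS also requires the certificate to be bound to the listening IP and port at the HTTP.SYS level (httpcfg on XP/2003, netsh http add sslcert on Vista and later), so it is worth re-checking that the cert is bound to the external interface's address on port 443 and not only to 127.0.0.1.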

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary

    The basic question is: if you have a very limited-bandwidth WiFi connection to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth? For folks who don't have the time to read the details below, I am NOT looking for any of these answers:

        * Secure the router and only let a few trusted people use it
        * Tell everyone to turn off unused services & generally police themselves
        * Monitor the traffic with a sniffer and add filters as needed

    I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: There is already a question concerning providing adequate WiFi at large (500 attendees) conferences here. This question concerns SMALL meetings of less than 200 people, typically with less than half that using the WiFi - something that can be handled with a single home or small-office router.

    Background

    I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp or Skepticamp or user group meeting. These meetings sometimes have technical attendees there, but not exclusively. Usually less than half to a third of the attendees will actually use the WiFi. The maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000, but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G Internet connection and fan it out to multiple users using WiFi. One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint, which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best-case scenario; often it is considerably slower. The goal in most of these meeting situations is to allow folks access to services like email, web, social media, chat services and so on, so they can live-blog or live-tweet the proceedings, or simply chat online or otherwise stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those services that meet those needs.

    Problems

    In particular I have noticed a couple of scenarios where particular users end up abusing most of the bandwidth on the router, to the detriment of everyone. These boil down to two areas:

        * Intentional use. Folks looking at YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting. At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something that was taking place right in front of them.

        * Unintentional use. There are a variety of software utilities that will make extensive use of bandwidth in the background, which folks often have installed on their laptops and smartphones, perhaps without realizing. Examples:
            - Peer-to-peer downloading programs such as Bittorrent that run in the background
            - Automatic software update services. These are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background.
            - Security software that downloads new signatures, such as anti-virus, anti-malware, etc.
            - Backup software and other software that "syncs" in the background to cloud services.

    For some numbers on how much network bandwidth gets sucked up by these non-web, non-email type services, check out this recent Wired article. Apparently web, email and chat all together are now less than one quarter of Internet traffic. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold.

    Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting), but that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because the 4G coverage at the meeting location happens to be particularly excellent; in a recent test I got 10 megabits down at the meeting site. The "tell people to police themselves" solution mentioned at top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically only last a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we got the rules tweaked completely the meeting would be over.

    What I've Got

    First thing, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a 2-day meeting. It's enough.) I figured I would start with these selections in OpenDNS's UI (screenshot omitted). I figure I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much; because these meetings don't last very long, it's probably not going to be worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video, if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach.

    What I Need

    Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legit things in the background. Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services:

        * Microsoft automatic updates
        * Apple automatic updates
        * Adobe automatic updates
        * Google automatic updates
        * Other major software update services
        * Major virus/malware/security signature updates
        * Major background backup services
        * Other services that run in the background and can eat lots of bandwidth

    I also would like any other suggestions you might have that would be applicable. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS thing.

  • WinMo > ASMX WebException - how to get details?

    - by eidylon
    Okay, we've got an application which consists of a website hosting several ASMX web services, and a handheld application running on WinMo 6.1 which calls the web services. We've been developing in the office, and everything works perfectly. Now we've gone to install it at the client's, and we got all the servers set up and the handhelds installed. However, the handhelds are now no longer able to connect to the web service. I added extra code in my error handler to specifically trap WebException exceptions and handle them differently in the logging, to put out extra information (.Status and .Response). I am getting the status, which is returning a [7], or ProtocolError. However, when I try to read the ResponseStream (using WebException.Response.GetResponseStream), it is returning a stream with CanRead set to False, and I thus am unable to get any further details of what is going wrong.

    So I guess there are two things I am asking for help with: a) any help with trying to get more information out of the WebException? b) What could be causing a ProtocolError exception?

    Things get extra complicated by the fact that the client has a full-blown log-in-enabled proxy setup going on-site. This was stopping all access to the website initially, even from a browser, so we entered the login details in the network connection for HTTP on the WinMo device. Now it can get to websites fine. In fact, I can even pull up the web service fine and call the methods from the browser (Pocket IE). So I know the device is able to see the web services okay via HTTP. But when trying to call them from the .NET app, it throws ProtocolError [7]. Here is my code which is logging the exception and failing to read out the Response from the WebException:

        Public Sub LogEx(ByVal ex As Exception)
            Try
                Dim fn As String = Path.Combine(ini.CorePath, "error.log")
                Dim t = File.AppendText(fn)
                t.AutoFlush = True
                t.WriteLine(<s>===== <%= Format(GetDateTime(), "MM/dd/yyyy HH:mm:ss") %> =====<%= vbCrLf %><%= ex.Message %></s>.Value)
                t.WriteLine()
                t.WriteLine(ex.ToString)
                t.WriteLine()
                If TypeOf ex Is WebException Then
                    With CType(ex, WebException)
                        t.WriteLine("STATUS: " & .Status.ToString & " (" & Val(.Status) & ")")
                        t.WriteLine("RESPONSE:" & vbCrLf & StreamToString(.Response.GetResponseStream()))
                    End With
                End If
                t.WriteLine("=".Repeat(50))
                t.WriteLine()
                t.Close()
            Catch ix As Exception : Alert(ix) : End Try
        End Sub

        Private Function StreamToString(ByVal s As IO.Stream) As String
            If s Is Nothing Then Return "No response found."
            ' THIS IS THE CASE BEING EXECUTED:
            If Not s.CanRead Then Return "Unreadable response found."
            Dim rv As String = String.Empty, bytes As Long, buffer(4096) As Byte
            Using mem As New MemoryStream()
                Do While True
                    bytes = s.Read(buffer, 0, buffer.Length)
                    mem.Write(buffer, 0, bytes)
                    If bytes = 0 Then Exit Do
                Loop
                mem.Position = 0
                ReDim buffer(mem.Length)
                mem.Read(buffer, 0, mem.Length)
                mem.Seek(0, SeekOrigin.Begin)
                rv = New StreamReader(mem).ReadToEnd()
                mem.Close()
            End Using
            Return rv.NullOf("Empty response found.")
        End Function

    Thanks in advance!
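
    For reference, here is the same read-the-response idea as a small C# sketch for the desktop framework. The key point is to cast Response to HttpWebResponse and read its stream before the response is closed; member availability on the Compact Framework may differ slightly, so treat this as an assumption to verify:

        using System;
        using System.IO;
        using System.Net;

        static class WebErrorHelper
        {
            // Sketch: extract the HTTP status and body text from a WebException.
            // If Response is null or not an HttpWebResponse (e.g. a DNS or
            // connection failure), only the WebExceptionStatus is available.
            public static string Describe(WebException ex)
            {
                var response = ex.Response as HttpWebResponse;
                if (response == null)
                    return "Status: " + ex.Status;

                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return string.Format("Status: {0}, HTTP {1} {2}\r\n{3}",
                        ex.Status, (int)response.StatusCode,
                        response.StatusDescription, reader.ReadToEnd());
                }
            }
        }

    As for the second question: a ProtocolError status generally means a complete HTTP response was received, but with an error status code (4xx/5xx). With a login-enabled proxy in the path, a 407 Proxy Authentication Required from the proxy is one plausible candidate to check for.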

  • Receiving POST data in ASP.NET

    - by grast
    Hi, I want to use ASP for code generation in a C# desktop application. To achieve this, I set up a simple host (derived from System.MarshalByRefObject) that processes a System.Web.Hosting.SimpleWorkerRequest via HttpRuntime.ProcessRequest. This processes the ASPX script specified by the incoming request (using System.Net.HttpListener to wait for requests). The client part is represented by a System.ComponentModel.BackgroundWorker that builds the System.Net.HttpWebRequest and receives the response from the server. A simplified version of my client-side code looks like this:

        private void SendRequest(object sender, DoWorkEventArgs e)
        {
            // create request with GET parameter
            var uri = "http://localhost:9876/test.aspx?getTest=321";
            var request = (HttpWebRequest)WebRequest.Create(uri);

            // append POST parameter
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            var postData = Encoding.Default.GetBytes("postTest=654");
            var postDataStream = request.GetRequestStream();
            postDataStream.Write(postData, 0, postData.Length);

            // send request, wait for response and store/print content
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
                {
                    _processsedContent = reader.ReadToEnd();
                    Debug.Print(_processsedContent);
                }
            }
        }

    My server-side code looks like this (without exception handling etc.):

        public void ProcessRequests()
        {
            // HttpListener at http://localhost:9876/
            var listener = SetupListener();

            // SimpleHost created by ApplicationHost.CreateApplicationHost
            var host = SetupHost();

            while (_running)
            {
                var context = listener.GetContext();
                using (var writer = new StreamWriter(context.Response.OutputStream))
                {
                    // process ASP script and send response back to client
                    host.ProcessRequest(GetPage(context), GetQuery(context), writer);
                }
                context.Response.Close();
            }
        }

    So far all this works fine as long as I just use GET parameters. But when it comes to receiving POST data in my ASPX script, I run into trouble. For testing I use the following script:

        // GET parameters are working:
        var getTest = Request.QueryString["getTest"];
        Response.Write("getTest: " + getTest);      // prints "getTest: 321"

        // don't know how to access POST parameters:
        var postTest1 = Request.Form["postTest"];   // Request.Form is empty?!
        Response.Write("postTest1: " + postTest1);  // so this prints "postTest1: "

        var postTest2 = Request.Params["postTest"]; // Request.Params is empty?!
        Response.Write("postTest2: " + postTest2);  // so this prints "postTest2: "

    It seems that the System.Web.HttpRequest object I'm dealing with in ASP does not contain any information about my POST parameter "postTest". I inspected it in debug mode, and none of the members contained either the parameter name "postTest" or the parameter value "654". I also tried the BinaryRead method of Request, but unfortunately it is empty. This corresponds to Request.InputStream == null and Request.ContentLength == 0. And to make things really confusing, the Request.HttpMethod member is set to "GET"?!

    To isolate the problem, I tested the code by using a PHP script instead of the ASPX script. This is very simple:

        print_r($_GET);  // prints all GET variables
        print_r($_POST); // prints all POST variables

    And the result is:

        Array ( [getTest] => 321 )
        Array ( [postTest] => 654 )

    So with the PHP script it works; I can access the POST data. Why doesn't the ASPX script? What am I doing wrong? Is there a special accessor or method in the Request object? Can anyone give a hint or even know how to solve this? A sketch of one idea I've come across is below. Thanks in advance.
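
    One possible explanation (an assumption, worth verifying against your setup): the default SimpleWorkerRequest reports the verb as GET and exposes no entity body, which would match the Request.HttpMethod == "GET" and ContentLength == 0 symptoms. A derived worker request can feed the POST body through; the class and parameter names here are made up for illustration:

        using System.IO;
        using System.Text;
        using System.Web;
        using System.Web.Hosting;

        // Hypothetical subclass that supplies the POST body the default
        // SimpleWorkerRequest never exposes to HttpRuntime.
        public class PostAwareWorkerRequest : SimpleWorkerRequest
        {
            private readonly byte[] body;

            public PostAwareWorkerRequest(string page, string query, TextWriter output, string postData)
                : base(page, query, output)
            {
                body = Encoding.UTF8.GetBytes(postData ?? "");
            }

            // Report POST when a body is present, so Request.HttpMethod is correct.
            public override string GetHttpVerbName()
            {
                return body.Length > 0 ? "POST" : "GET";
            }

            // Content-Type and Content-Length headers let Request.Form parse the body.
            public override string GetKnownRequestHeader(int index)
            {
                if (index == HeaderContentLength) return body.Length.ToString();
                if (index == HeaderContentType) return "application/x-www-form-urlencoded";
                return base.GetKnownRequestHeader(index);
            }

            public override byte[] GetPreloadedEntityBody() { return body; }

            public override bool IsEntireEntityBodyIsPreloaded() { return true; }
        }

    The host loop would then construct this in place of the plain SimpleWorkerRequest, passing along the body read from the HttpListener request stream.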

  • Trac FastCGI + Python on dreamhost leaves zombie python running

    - by Katsuke
    Hello there,

    Recently I installed Trac in one of my Dreamhost domains. I followed the instructions at http://trac.mlalonde.net/wiki/CreamyTrac and everything worked perfectly. At least I thought that was the case. After a few days, I started to notice that I was getting random 500 pages. I quickly checked the error log and found a bunch of:

        [Fri Apr 16 14:35:34 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi
        [Fri Apr 16 14:35:54 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi, referer: http://www.trac.****.com/login
        [Fri Apr 16 16:05:58 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi, referer: http://www.trac.****.com/timeline

    The Trac instance has very low traffic, so it is possible that this might have been triggered faster if there was more use of it. So next I went to look at top, and was amazed by what I saw (this is NOT exactly what I saw; I just reproduced this from memory):

        23730 rl_inst  20  0  44516  15m 4012 S  0.0  0.4  0:00.17 python2.4
        23731 rl_inst  20  0  44616  15m 4012 S  0.0  0.4  0:03.17 python2.4
        23732 rl_inst  20  0  44116  15m 4012 S  0.0  0.4  0:01.17 python2.4
        23733 rl_inst  20  0  44826  15m 4012 S  0.0  0.4  0:04.17 python2.4
        23734 rl_inst  20  0  44216  15m 4012 S  0.0  0.4  0:02.17 python2.4
        23735 rl_inst  20  0  44416  15m 4012 S  0.0  0.4  0:01.17 python2.4

    I opened the Trac site again and that changed to this:

        23730 rl_inst  20  0  44516  15m 4012 S  0.0  0.4  0:00.17 python2.4
        23731 rl_inst  20  0  44616  15m 4012 S  0.0  0.4  0:03.17 python2.4
        23732 rl_inst  20  0  44116  15m 4012 S  0.0  0.4  0:01.17 python2.4
        23733 rl_inst  20  0  44826  15m 4012 S  0.0  0.4  0:04.17 python2.4
        23734 rl_inst  20  0  44216  15m 4012 S  0.0  0.4  0:02.17 python2.4
        23735 rl_inst  20  0  44416  15m 4012 S  0.0  0.4  0:01.17 python2.4
        28378 rl_inst  20  0   2608 1208 1008 S  0.0  0.0  0:00.00 dispatch.fcgi
        28382 rl_inst  20  0  44248  15m 4012 S  0.0  0.4  0:02.19 python2.4

    So dispatch.fcgi was being called correctly and was correctly spawning another python2.4 process. After the idle time passed, this was the result:

        23730 rl_inst  20  0  44516  15m 4012 S  0.0  0.4  0:00.17 python2.4
        23731 rl_inst  20  0  44616  15m 4012 S  0.0  0.4  0:03.17 python2.4
        23732 rl_inst  20  0  44116  15m 4012 S  0.0  0.4  0:01.17 python2.4
        23733 rl_inst  20  0  44826  15m 4012 S  0.0  0.4  0:04.17 python2.4
        23734 rl_inst  20  0  44216  15m 4012 S  0.0  0.4  0:02.17 python2.4
        23735 rl_inst  20  0  44416  15m 4012 S  0.0  0.4  0:01.17 python2.4
        28382 rl_inst  20  0  44248  15m 4012 S  0.0  0.4  0:02.19 python2.4

    dispatch.fcgi was gone, but the corresponding python2.4 was still there o_O. I had started with 5 sleeping python processes, and after the idle time I ended up with 6. I repeated this and found that it kept spawning more and more python2.4 processes; this was indeed what was causing my 500s, indirectly. Let me explain my findings. I am on shared hosting, so my processes get killed if there are way too many. So every time I opened Trac, 2 new processes spawned and one remained, to the point that Dreamhost was killing a random process when I reached my limit - sometimes killing the python2.4 that was actually rendering the current page. Hence the premature end of headers: python is gone, the .fcgi doesn't know what to do, and it throws a 500.

    I found a dirty solution to this. I changed my dispatch.fcgi to contain a line that kills any currently running python2.4 process and then spawns a new one. Since then I don't get any rogues whatsoever. But I don't think this is the best solution; calling killall in every fcgi just seems wrong. Has anyone run into this issue and found a cleaner solution? Is there anything that I have overlooked?

  • ASP.NET- using System.IO.File.Delete() to delete file(s) from directory inside wwwroot?

    - by Jim S
    Hello, I have an ASP.NET SOAP web service whose web method creates a PDF file, writes it to the "Download" directory of the application, and returns the URL to the user. Code:

        //Create the map images (MapPrinter) and insert them on the PDF (PagePrinter).
        MemoryStream mstream = null;
        FileStream fs = null;
        try
        {
            //Create the memorystream storing the pdf created.
            mstream = pgPrinter.GenerateMapImage();
            //Convert the memorystream to an array of bytes.
            byte[] byteArray = mstream.ToArray();
            //return byteArray;
            //Save PDF file to site's Download folder with a unique name.
            System.Text.StringBuilder sb = new System.Text.StringBuilder(Global.PhysicalDownloadPath);
            sb.Append("\\");
            string fileName = Guid.NewGuid().ToString() + ".pdf";
            sb.Append(fileName);
            string filePath = sb.ToString();
            fs = new FileStream(filePath, FileMode.CreateNew);
            fs.Write(byteArray, 0, byteArray.Length);
            string requestURI = this.Context.Request.Url.AbsoluteUri;
            string virtPath = requestURI.Remove(requestURI.IndexOf("Service.asmx")) + "Download/" + fileName;
            return virtPath;
        }
        catch (Exception ex)
        {
            throw new Exception("An error has occurred creating the map pdf.", ex);
        }
        finally
        {
            if (mstream != null) mstream.Close();
            if (fs != null) fs.Close();
            //Clean up resources
            if (pgPrinter != null) pgPrinter.Dispose();
        }

    Then in the Global.asax file of the web service, I set up a Timer in the Application_Start event listener. In the Timer's ElapsedEvent listener I look for any files in the Download directory that are older than the Timer interval (for testing = 1 min., for deployment ~20 min.) and delete them. Code:

        //Interval to check for old files (milliseconds), also set to delete files older than now minus this interval.
        private static double deleteTimeInterval;
        private static System.Timers.Timer timer;
        //Physical path to Download folder. Everything in this folder will be checked for deletion.
        public static string PhysicalDownloadPath;

        void Application_Start(object sender, EventArgs e)
        {
            // Code that runs on application startup
            deleteTimeInterval = Convert.ToDouble(System.Configuration.ConfigurationManager.AppSettings["FileDeleteInterval"]);
            //Create timer with interval (milliseconds) whose elapse event will trigger the delete of old files
            //in the Download directory.
            timer = new System.Timers.Timer(deleteTimeInterval);
            timer.Enabled = true;
            timer.AutoReset = true;
            timer.Elapsed += new System.Timers.ElapsedEventHandler(OnTimedEvent);
            PhysicalDownloadPath = System.Web.Hosting.HostingEnvironment.ApplicationPhysicalPath + "Download";
        }

        private static void OnTimedEvent(object source, System.Timers.ElapsedEventArgs e)
        {
            //Delete the files older than the time interval in the Download folder.
            var folder = new System.IO.DirectoryInfo(PhysicalDownloadPath);
            System.IO.FileInfo[] files = folder.GetFiles();
            foreach (var file in files)
            {
                if (file.CreationTime < DateTime.Now.AddMilliseconds(-deleteTimeInterval))
                {
                    string path = PhysicalDownloadPath + "\\" + file.Name;
                    System.IO.File.Delete(path);
                }
            }
        }

    This works perfectly, with one exception. When I publish the web service application to inetpub\wwwroot (Windows 7, IIS7) it does not delete the old files in the Download directory. The app works perfectly when I publish to IIS from a physical directory not in wwwroot. Obviously, it seems IIS places some sort of lock on files in the web root. I have tested impersonating an admin user to run the app and it still does not work. Any tips on how to circumvent the lock programmatically when in wwwroot? One idea I've considered is below. The client will probably want the app published to the root directory. Thank you very much.
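
    For completeness, here is a more defensive variant of the cleanup loop (a sketch, not a fix for the underlying lock): it deletes each file inside its own try/catch and logs the exception, so a single locked or permission-denied file doesn't abort the whole sweep, and the actual error (UnauthorizedAccessException for ACL problems vs. IOException for a lock) becomes visible in the trace output:

        using System;
        using System.IO;

        public static class DownloadSweeper
        {
            public static void DeleteOldFiles(string downloadPath, double maxAgeMs)
            {
                var cutoff = DateTime.Now.AddMilliseconds(-maxAgeMs);
                foreach (var file in new DirectoryInfo(downloadPath).GetFiles("*.pdf"))
                {
                    if (file.CreationTime >= cutoff) continue;
                    try
                    {
                        file.Attributes = FileAttributes.Normal; // clear read-only, just in case
                        file.Delete();
                    }
                    catch (Exception ex) // UnauthorizedAccessException, IOException (lock), ...
                    {
                        System.Diagnostics.Trace.WriteLine(
                            "Could not delete " + file.FullName + ": " + ex.Message);
                    }
                }
            }
        }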

  • How to restart RoR services after server has been rebooted

    - by Alan DeLonga
    Update: I have been searching around to see what services would possibly need to be restarted in my project after reboot. One of them was Thinking Sphinx, which I finally got to the point where it logs:

        [Fri Nov 16 19:34:29.820 2012] [29623] accepting connections

    But I still can't run searchd or searchd --stop, because there was no generated sphinx.conf file in /etc/sphinxsearch (for more info refer to this open thread on thinking_sphinx after reboot). I then turned to looking into restarting Unicorn or Thin, based on some insight I got. The issue is, when I check my gems I see one for Thin AND Unicorn. But when I try to start either one of them, they have no file residing in /etc/init.d/, where the nginx and sphinxsearch files reside... Would rebooting totally erase the files for an app server like Thin or Unicorn? We are hosted on Rackspace running:

        ruby 1.9.2p290
        rails (3.2.8, 3.2.7, 3.2.0)
        nginx/1.1.19

    Notice that there are gems for Unicorn and Thin, but there is no unicorn.rb or thin.rb in my config folder for my app... I am still super lost; if anyone can give me some insight on some steps to take to figure this out I would really appreciate it. Anything would help, thanks for reading.

        thin 1.4.1
        unicorn 4.3.1

    When I run unicorn I get the same issue as referenced here:

        > /usr/local/bin/unicorn start
        /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:610:in `parse_rackup_file': rackup file (start) not readable (ArgumentError)
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:76:in `reload'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:67:in `initialize'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `new'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `initialize'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `new'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>'
            from /usr/local/bin/unicorn:19:in `load'
            from /usr/local/bin/unicorn:19:in `<main>'

    When I run thin it just opens a command-line prompt:

        /usr/local/bin/thin start
        >> Using rack adapter

    Other gems:

        *** LOCAL GEMS ***

        actionmailer (3.2.8, 3.2.7, 3.2.0)
        actionpack (3.2.8, 3.2.7, 3.2.0)
        activemodel (3.2.8, 3.2.7, 3.2.0)
        activerecord (3.2.8, 3.2.7, 3.2.0)
        activeresource (3.2.8, 3.2.7, 3.2.0)
        activesupport (3.2.8, 3.2.7, 3.2.0)
        arel (3.0.2)
        builder (3.0.0)
        bundler (1.1.5)
        carmen (1.0.0.beta2)
        carmen-rails (1.0.0.beta3)
        cocaine (0.2.1)
        coffee-rails (3.2.2)
        coffee-script (2.2.0)
        coffee-script-source (1.3.3)
        daemons (1.1.9)
        erubis (2.7.0)
        eventmachine (0.12.10)
        execjs (1.4.0)
        faraday (0.8.4)
        faraday_middleware (0.8.8)
        foursquare2 (1.8.2)
        geokit (1.6.5)
        hashie (1.2.0)
        hike (1.2.1)
        httparty (0.8.3)
        httpauth (0.1)
        i18n (0.6.0)
        journey (1.0.4)
        jquery-rails (2.0.2)
        json (1.7.4, 1.7.3)
        jwt (0.1.5)
        kgio (2.7.4)
        lastfm (1.8.0)
        libv8 (3.3.10.4 x86_64-linux)
        mail (2.4.4)
        mime-types (1.19, 1.18)
        minitest (1.6.0)
        multi_json (1.3.6)
        multi_xml (0.5.1)
        multipart-post (1.1.5)
        mysql2 (0.3.11)
        oauth2 (0.8.0)
        paperclip (3.1.1)
        polyglot (0.3.3)
        rack (1.4.1)
        rack-cache (1.2)
        rack-ssl (1.3.2)
        rack-test (0.6.1)
        rails (3.2.8, 3.2.7, 3.2.0)
        railties (3.2.8, 3.2.7, 3.2.0)
        raindrops (0.10.0, 0.9.0)
        rake (0.9.2.2, 0.8.7)
        rdoc (3.12, 2.5.8)
        riddle (1.5.3)
        sass (3.2.0, 3.1.19)
        sass-rails (3.2.5)
        sprockets (2.1.3)
        sqlite3 (1.3.6)
        sqlite3-ruby (1.3.3)
        therubyracer (0.10.2, 0.10.1)
        thin (1.4.1)
        thinking-sphinx (2.0.10)
        thor (0.16.0, 0.15.4, 0.14.6)
        tilt (1.3.3)
        treetop (1.4.10)
        tzinfo (0.3.33)
        uglifier (1.2.7, 1.2.4)
        unicorn (4.3.1)
        xml-simple (1.1.1)

    I am working on a project that was built by another group. I made some modifications to a constants file in the config folder (changing some values for arrays that populated some drop-down fields), but the app had to be rebooted before those changes would be recognized. The hosting is through Rackspace; we rebooted through the option on their site. I contacted them and checked the status of our server: the port is open and operational. The problem is, the app is not running when you go to the address for the site. Then when I put in the IP address of the server, it just says "Welcome to Nginx". But in the log files I see:

        [Thu Nov 15 02:34:37.945 2012] [15916] caught SIGTERM, shutting down
        [Thu Nov 15 02:34:37.996 2012] [15916] shutdown complete

    I am not very versed in server-side setup. I have also never worked on a Rails project that had to have specific services started before the application would start. Any insight as to how to figure out what services need to be restarted, and how to go about restarting them, would be greatly appreciated. I feel kind of dead in the water at this point...

    Thanks, Alan

    Read the article

  • MySQL port 3306 blocked in csf yet can still telnet to port 3306 from external host

    - by Neek
    We have a CentOS 6 VPS that was recently migrated to a new machine within the same web hosting company. It's running WHM/cPanel and has csf/lfd installed. csf is set up with a mostly vanilla config. I'm no iptables expert, but csf has not let me down before. If a port isn't in the TCP_IN list, it should be blocked on the firewall by iptables. My problem is that I can telnet to port 3306 from an external host, yet I think iptables ought to be blocking 3306 because of csf's rules. We are now failing a security check because of this open port. (This output is obfuscated to protect the innocent: www.ourhost.com is the host with the firewall problem.)

[root@nickfenwick log]# telnet www.ourhost.com 3306
Trying 158.255.45.107...
Connected to www.ourhost.com.
Escape character is '^]'.
HHost 'nickfenwick.com' is not allowed to connect to this MySQL serverConnection closed by foreign host.

So the connection is established, and MySQL refuses the connection due to its configuration. I need the network connection to be refused at the firewall level, before it reaches MySQL. Using WHM's csf web UI I can see 'Firewall Configuration' includes a fairly sensible TCP_IN line:

TCP_IN: 20,21,22,25,53,80,110,143,222,443,465,587,993,995,2077,2078,2082,2083,2086,2087,2095,2096,8080

(Let's ignore that I could trim that a little for now; my concern is that 3306 is not in that list.) When csf is restarted it logs the usual slew of output as it sets up iptables rules, for example what look like rules blocking all traffic and then allowing specific ports, like SSH on 22:

[cut]
DROP all opt -- in * out * 0.0.0.0/0 -> 0.0.0.0/0
[cut]
ACCEPT tcp opt -- in !lo out * 0.0.0.0/0 -> 0.0.0.0/0 state NEW tcp dpt:22
[cut]

I can see that iptables is running; service iptables status returns a long list of firewall rules. Here is my Chain INPUT section from service iptables status, which hopefully is enough to show how the firewall is configured.

Table: filter
Chain INPUT (policy DROP)
num  target      prot  opt  source           destination
1    acctboth    all   --   0.0.0.0/0        0.0.0.0/0
2    ACCEPT      tcp   --   217.112.88.10    0.0.0.0/0    tcp dpt:53
3    ACCEPT      udp   --   217.112.88.10    0.0.0.0/0    udp dpt:53
4    ACCEPT      tcp   --   217.112.88.10    0.0.0.0/0    tcp spt:53
5    ACCEPT      udp   --   217.112.88.10    0.0.0.0/0    udp spt:53
6    ACCEPT      tcp   --   8.8.4.4          0.0.0.0/0    tcp dpt:53
7    ACCEPT      udp   --   8.8.4.4          0.0.0.0/0    udp dpt:53
8    ACCEPT      tcp   --   8.8.4.4          0.0.0.0/0    tcp spt:53
9    ACCEPT      udp   --   8.8.4.4          0.0.0.0/0    udp spt:53
10   ACCEPT      tcp   --   8.8.8.8          0.0.0.0/0    tcp dpt:53
11   ACCEPT      udp   --   8.8.8.8          0.0.0.0/0    udp dpt:53
12   ACCEPT      tcp   --   8.8.8.8          0.0.0.0/0    tcp spt:53
13   ACCEPT      udp   --   8.8.8.8          0.0.0.0/0    udp spt:53
14   LOCALINPUT  all   --   0.0.0.0/0        0.0.0.0/0
15   ACCEPT      all   --   0.0.0.0/0        0.0.0.0/0
16   INVALID     tcp   --   0.0.0.0/0        0.0.0.0/0
17   ACCEPT      all   --   0.0.0.0/0        0.0.0.0/0    state RELATED,ESTABLISHED
18   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:20
19   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:21
20   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:22
21   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:25
22   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:53
23   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:80
24   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:110
25   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:143
26   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:222
27   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:443
28   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:465
29   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:587
30   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:993
31   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:995
32   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:2077
33   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:2078
34   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:2082
35   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:2083
36   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:2086
37   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:2087
38   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:2095
39   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:2096
40   ACCEPT      tcp   --   0.0.0.0/0        0.0.0.0/0    state NEW tcp dpt:8080
41   ACCEPT      udp   --   0.0.0.0/0        0.0.0.0/0    state NEW udp dpt:20
42   ACCEPT      udp   --   0.0.0.0/0        0.0.0.0/0    state NEW udp dpt:21
43   ACCEPT      udp   --   0.0.0.0/0        0.0.0.0/0    state NEW udp dpt:53
44   ACCEPT      udp   --   0.0.0.0/0        0.0.0.0/0    state NEW udp dpt:222
45   ACCEPT      udp   --   0.0.0.0/0        0.0.0.0/0    state NEW udp dpt:8080
46   ACCEPT      icmp  --   0.0.0.0/0        0.0.0.0/0    icmp type 8
47   ACCEPT      icmp  --   0.0.0.0/0        0.0.0.0/0    icmp type 0
48   ACCEPT      icmp  --   0.0.0.0/0        0.0.0.0/0    icmp type 11
49   ACCEPT      icmp  --   0.0.0.0/0        0.0.0.0/0    icmp type 3
50   LOGDROPIN   all   --   0.0.0.0/0        0.0.0.0/0

What's the next thing to check?
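
One detail worth knowing: service iptables status hides the in/out interface column, so a rule like number 15 above (ACCEPT all from anywhere) may or may not be scoped to the loopback interface; the listing cannot tell you. A couple of hedged checks with plain iptables, nothing csf-specific:

iptables -L INPUT -n -v --line-numbers | head -20   # -v adds the in/out interface columns
iptables-save | grep 3306                           # show any rule mentioning 3306 verbatim

If rule 15 really does accept everything on all interfaces, packets to 3306 are accepted before the later port-specific rules are ever consulted, which would explain the open port.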

    Read the article

  • Error When Loading Images on Local Host Test Server

    - by ke4ktz
    I have a peculiar problem for which I just can't seem to find an explanation. I'm working on an AngularJS site for our family and am integrating data from various web services. Currently I am working on the photos section, which will integrate photos from our Flickr account. I have a main page which lists the various photo sets and displays the set's primary photo along with the title. (Note: I'm using the Flickr 'extras' parameter to return the primary photo's URL in the API calls.)

<div data-ng-repeat="p in vm.photoSets">
  <a ng-href="#/photos/{{p.id}}">
    <img ng-src="{{p.primary_photo_extras.url_s}}"></img>
  </a>
  <h4>{{p.title._content}}</h4>
</div>

When clicking on the photo, the routing will display a page with a list of all the photos from that set, showing the image and the title.

<div data-ng-repeat="p in vm.photoSetData.photo">
  <a ng-href="#/photos/{{vm.photoSetId}}/{{p.id}}">
    <img ng-src="{{p.url_s}}"></img>
  </a>
  <h4>{{p.title}}</h4>
</div>

Now, here's where the problem is occurring. When I upload the code to my public website on my hosting provider, everything works just fine. Both pages display their respective photos. However, when I attempt to run the site on my local system, either in MAMP or NodeJS (using http-server), the second page gives me an error for each image:

Error: [$interpolate:interr] Can't interpolate: {{p.url_s}}
Error: [$sce:insecurl] Blocked loading resource from url not allowed by $sceDelegate policy. URL: https://farm1.staticflickr.com/37/82749767_e82ff60ce3_m.jpg
http://errors.angularjs.org/1.2.9/$sce/insecurl?p0=https%3A%2F%2Ffarm1.staticflickr.com%2F37%2F82749767_e82ff60ce3_m.jpg
http://errors.angularjs.org/1.2.9/$interpolate/interr?p0=%7B%7Bp.url_s%7D%7D&p1=Error%3A%20%5B%24sce%3Ainsecurl%5D%20Blocked%20loading%20resource%20from%20url%20not%20allowed%20by%20%24sceDelegate%20policy.%20%20URL%3A%20https%3A%2F%2Ffarm1.staticflickr.com%2F37%2F82749767_e82ff60ce3_m.jpg%0Ahttp%3A%2F%2Ferrors.angularjs.org%2F1.2.9%2F%24sce%2Finsecurl%3Fp0%3Dhttps%253A%252F%252Ffarm1.staticflickr.com%252F37%252F82749767_e82ff60ce3_m.jpg
minErr/<@http://localhost/scripts/angular.js:78
$interpolate/fn@http://localhost/scripts/angular.js:8254
$RootScopeProvider/this.$get</Scope.prototype.$digest@http://localhost/scripts/angular.js:11800
$RootScopeProvider/this.$get</Scope.prototype.$apply@http://localhost/scripts/angular.js:12061
done@http://localhost/scripts/angular.js:7843
completeRequest@http://localhost/scripts/angular.js:8026
createHttpBackend/</jsonpDone<@http://localhost/scripts/angular.js:7942
jsonpReq/doneWrapper@http://localhost/scripts/angular.js:8039
jsonpReq/script.onerror@http://localhost/scripts/angular.js:8053

The API call to Flickr is successful and returns the correct data. In fact, the image title does display! I've tested it with Firefox, Safari and Chrome... all three browsers fail. I cannot find any explanation as to why it would work remotely but fail locally. Also, the images show up on the first page, but not on the second, even though one of the images on the second page has the same image URL as on the first page. Even going directly to the second page, bypassing the first page, still fails. Any ideas on how to fix this? It would be nice to test locally without having to upload to the server each time I make a change. Update: I have shut off the $sce security to see if that was causing the issue. Although it resulted in turning the error off, the files still don't load on the local test server.
I have used the developer tools' network monitor and it doesn't even show an attempt to retrieve the files. AngularJS appears to shut down the retrieval, although the correct path shows up in the DOM.
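
For anyone hitting the same wall: Angular 1.2's $sceDelegate only trusts same-origin resource URLs by default, and the usual remedy is to whitelist the external domain rather than disable $sce altogether. A sketch, with a hypothetical module name (the farm*/** wildcard pattern follows the whitelist syntax in the Angular documentation):

angular.module('familyApp').config(['$sceDelegateProvider', function ($sceDelegateProvider) {
  // Trust the app's own origin plus Flickr's static image farms.
  $sceDelegateProvider.resourceUrlWhitelist([
    'self',
    'https://farm*.staticflickr.com/**'
  ]);
}]);

Whether this accounts for the remote/local difference is hard to say from here, but it is the documented knob for $sce:insecurl errors, and far safer than switching $sce off.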

    Read the article

  • WCF zero application endpoint exception

    - by Lijo
    Hi team, I am just experimenting with various WCF (in .NET 3.0) scenarios. I am using self-hosting. I am getting the exception: "Service 'MyServiceLibrary.NameDecorator' has zero application (non-infrastructure) endpoints. This might be because no configuration file was found for your application, or because no service element matching the service name could be found in the configuration file, or because no endpoints were defined in the service element." I have a config file as follows (which has an endpoint):

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="Lijo.Samples.NameDecorator" behaviorConfiguration="WeatherServiceBehavior">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8010/ServiceModelSamples/FreeServiceWorld"/>
          </baseAddresses>
        </host>
        <endpoint address="" binding="wsHttpBinding" contract="Lijo.Samples.IElementaryService" />
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior name="WeatherServiceBehavior">
          <serviceMetadata httpGetEnabled="true"/>
          <serviceDebug includeExceptionDetailInFaults="False"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

And a host as:

using System.ServiceModel;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.Runtime.Serialization;

namespace MySelfHostConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            System.ServiceModel.ServiceHost myHost = new ServiceHost(typeof(MyServiceLibrary.NameDecorator));
            myHost.Open();
            Console.ReadLine();
        }
    }
}

My service is as follows:

using System.ServiceModel;
using System.Runtime.Serialization;

namespace MyServiceLibrary
{
    [ServiceContract(Namespace = "http://Lijo.Samples")]
    public interface IElementaryService
    {
        [OperationContract]
        CompanyLogo GetLogo();
    }

    public class NameDecorator : IElementaryService
    {
        public CompanyLogo GetLogo()
        {
            CircleType circle = new CircleType();
            CompanyLogo logo = new CompanyLogo(circle);
            return logo;
        }
    }

    [DataContract]
    public abstract class IShape
    {
        public abstract string SelfExplain();
    }

    [DataContract(Name = "Circle")]
    public class CircleType : IShape
    {
        public override string SelfExplain() { return "I am a Circle"; }
    }

    [DataContract(Name = "Triangle")]
    public class TriangleType : IShape
    {
        public override string SelfExplain() { return "I am a Triangle"; }
    }

    [DataContract]
    [KnownType(typeof(CircleType))]
    [KnownType(typeof(TriangleType))]
    public class CompanyLogo
    {
        private IShape m_shapeOfLogo;

        [DataMember]
        public IShape ShapeOfLogo
        {
            get { return m_shapeOfLogo; }
            set { m_shapeOfLogo = value; }
        }

        public CompanyLogo(IShape shape)
        {
            m_shapeOfLogo = shape;
        }
    }
}

Could you please help me to understand what I am missing here? Thanks, Lijo
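
One likely cause, offered as a hedged guess: ServiceHost looks up the <service> element by the CLR type's full name, and the [ServiceContract] Namespace attribute plays no part in that match. Since the class is MyServiceLibrary.NameDecorator, the service element would need to read:

<service name="MyServiceLibrary.NameDecorator" behaviorConfiguration="WeatherServiceBehavior">

With name="Lijo.Samples.NameDecorator" no match is found, the host falls back to an empty service description, and you get exactly the "zero application (non-infrastructure) endpoints" complaint. It is also worth confirming the config file is named after the host executable (e.g. MySelfHostConsoleApp.exe.config) so it gets loaded at all.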

    Read the article

  • Software: Launching League of Legends spectator mode from Command Line (Mac)

    - by Alex Popov
    Background (tl;dr at the end): League of Legends has a spectator mode, in which you can watch someone else's game (essentially a replay) with a 3 minute delay. Popular LoL website OP.GG has figured out a clever way of hosting these spectator games on their own servers, thereby making them replayable, as opposed to only being available while the game is on (as Riot does it). If you request a replay from OP.GG, it sends a batch file which looks for where League is installed, and then the magic happens:

@start "" "League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1"

This works fine on Windows. I'm trying to get it to work on Mac (which has an official client). First I tried running the same command by hand (split for convenience):

/Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends 8393 LoLLauncher \
/Applications/ ... /LolClient spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1

Running this, however, just starts the LoLLauncher, which closes all the active League processes. The exact same thing happens if I just call

/Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends

Next I tried seeing what actually happens when Spectator mode is initiated, so I ran

$ ps -axf | grep -i lol

which showed

UID PID PPID C STIME TTY TIME CMD
503 3085 1 0 Wed02pm ?? 0:00.00 (LolClient)
503 24607 1 0 9:19am ?? 0:00.98 /Applications/League of Legends.app/Contents/LOL/RADS/system/UserKernel.app/Contents/MacOS/UserKernel updateandrun lol_launcher LoLLauncher.app
503 24610 24607 0 9:19am ?? 1:08.76 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_launcher/releases/0.0.0.122/deploy/LoLLauncher.app/Contents/MacOS/LoLLauncher
503 24611 24610 0 9:19am ?? 1:23.02 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient -runtime .\ -nodebug META-INF\AIR\application.xml .\ -- 8393
503 24927 24610 0 9:44am ?? 0:03.37 /Applications/League of Legends.app/Contents/LoL/RADS/solutions/lol_game_client_sln/releases/0.0.0.117/deploy/LeagueOfLegends.app/Contents/MacOS/LeagueofLegends 8394 LoLLauncher /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient spectator 216.133.234.17:8088 Yn1oMX/n3LpXNebibzUa1i3Z+s2HV0ul 1400781241 NA1

Of interest:
- There is (LolClient), which I cannot kill by its PID.
- UserKernel updateandrun lol_launcher LoLLauncher.app is launched first.
- LoLLauncher is launched by the UserKernel (as we can see from the PPID).
- The very long command (PID: 24927) is how Spectator mode is launched, and it is also launched by UserKernel.
- Spectator mode is launched in exactly the same way that the OP.GG .bat wanted to, with the only difference that Spectator mode connects to Riot instead of OP.GG's spectate server.

I tried attaching GDB to the LolClient, but I couldn't get anything meaningful from it since it's an Adobe AIR application (and I've never used GDB with code other than my own). Next I ran

dtruss -a -b 100m -f -p $PID

on everything I could think of: the LolClient, the LolLauncher and the UserKernel, and skimmed the half a million lines produced. I found stuff like the GET request used to get the information of the game to spectate, but I could not see any launch of the equivalent of League of Legends.exe with spectator options.
Finally, I ran lsof | grep -i lol to see if anything else was opened in the process, but didn't find anything that seemed appropriate. Open were UserKernel, LolLauncher, LolClient, Adobe AIR, LeagueofLegends and then Bugsplat, all of which are expected. None of this seemed especially relevant to figuring out how LeagueofLegends was opened into spectator mode. It obviously can be done, since Spectator mode is accessible from within the client. It seems likely that it can be done from the CLI, since Windows can do it and the clients are supposed to be equals. Unless I'm missing something in the difference between how UNIX and Windows handle CLI application launches. My question is whether there are any other things I can try to figure out how to launch Spectator mode myself. tl;dr: Trying to get into spectator mode from the CLI. It's possible on Windows (see first code block) but it just restarts League on Mac. What else can I try to find what call is made, and how to reproduce it? PS: Please let me know how I can improve this question or its formatting; I'd love to use StackOverflow/SuperUser, but as the guys said on the podcast this week (Ep. 59) it's very intimidating. Sorry for posting this on StackOverflow the first time :(
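
One more avenue that might catch the exact launch arguments: OS X ships DTrace tooling, including the execsnoop script, which logs every exec() with its arguments. A hedged sketch (needs root, and on later OS X releases System Integrity Protection can get in the way):

sudo execsnoop
# or the raw DTrace one-liner it is built around:
sudo dtrace -n 'proc:::exec-success { trace(curpsinfo->pr_psargs); }'

Start spectator mode from the client while this runs; if LeagueofLegends really is exec'd with the spectator argument string, this should print the full argv, which a shell script could then replay.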

    Read the article

  • Bridging LXC containers to host eth0 so they can have a public IP

    - by Vianney Stroebel
    UPDATE: I found the solution here: http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge#No_traffic_gets_trough_.28except_ARP_and_STP.29

# cd /proc/sys/net/bridge
# ls
bridge-nf-call-arptables bridge-nf-call-iptables bridge-nf-call-ip6tables bridge-nf-filter-vlan-tagged
# for f in bridge-nf-*; do echo 0 > $f; done

But I'd like to have expert opinions on this: is it safe to disable all bridge-nf-*? What are they here for? END OF UPDATE

I need to bridge LXC containers to the physical interface (eth0) of my host, and I have read numerous tutorials, documents and blog posts on the subject. I need the containers to have their own public IP (which I've previously done with KVM/libvirt). After two days of searching and trying, I still can't make it work with LXC containers. The host runs a freshly installed Ubuntu Server Quantal (12.10) with only libvirt (which I'm not using here) and lxc installed. I created the containers with:

lxc-create -t ubuntu -n mycontainer

So they also run Ubuntu 12.10. The content of /var/lib/lxc/mycontainer/config is:

lxc.utsname = mycontainer
lxc.mount = /var/lib/lxc/test/fstab
lxc.rootfs = /var/lib/lxc/test/rootfs
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.veth.pair = vethmycontainer
lxc.network.ipv4 = 179.43.46.233
lxc.network.hwaddr= 02:00:00:86:5b:11
lxc.devttydir = lxc
lxc.tty = 4
lxc.pts = 1024
lxc.arch = amd64
lxc.cap.drop = sys_module mac_admin mac_override
lxc.pivotdir = lxc_putold
# uncomment the next line to run the container unconfined:
#lxc.aa_profile = unconfined
lxc.cgroup.devices.deny = a
# Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
#lxc.cgroup.devices.allow = c 4:0 rwm
#lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm
#fuse
lxc.cgroup.devices.allow = c 10:229 rwm
#tun
lxc.cgroup.devices.allow = c 10:200 rwm
#full
lxc.cgroup.devices.allow = c 1:7 rwm
#hpet
lxc.cgroup.devices.allow = c 10:228 rwm
#kvm
lxc.cgroup.devices.allow = c 10:232 rwm

Then I changed my host /etc/network/interfaces to:

auto lo
iface lo inet loopback

auto br0
iface br0 inet static
    bridge_ports eth0
    bridge_fd 0
    address 92.281.86.226
    netmask 255.255.255.0
    network 92.281.86.0
    broadcast 92.281.86.255
    gateway 92.281.86.254
    dns-nameservers 213.186.33.99
    dns-search ovh.net

When I try command line configuration ("brctl addif", "ifconfig eth0", etc.) my remote host becomes inaccessible and I have to hard reboot it. I changed the content of /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces to:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 179.43.46.233
    netmask 255.255.255.255
    broadcast 178.33.40.233
    gateway 92.281.86.254

It takes several minutes for mycontainer to start (lxc-start -n mycontainer). I tried replacing gateway 92.281.86.254 with:

post-up route add 92.281.86.254 dev eth0
post-up route add default gw 92.281.86.254
post-down route del 92.281.86.254 dev eth0
post-down route del default gw 92.281.86.254

My container then starts instantly.
But whatever configuration I set in /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces, I cannot ping from mycontainer to any IP (including the host's):

ubuntu@mycontainer:~$ ping 92.281.86.226
PING 92.281.86.226 (92.281.86.226) 56(84) bytes of data.
^C
--- 92.281.86.226 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5031ms

And my host cannot ping the container:

root@host:~# ping 179.43.46.233
PING 179.43.46.233 (179.43.46.233) 56(84) bytes of data.
^C
--- 179.43.46.233 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4000ms

My container's ifconfig:

ubuntu@mycontainer:~$ ifconfig
eth0     Link encap:Ethernet HWaddr 02:00:00:86:5b:11
         inet addr:179.43.46.233 Bcast:255.255.255.255 Mask:0.0.0.0
         inet6 addr: fe80::ff:fe79:5a31/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
         RX packets:64 errors:0 dropped:6 overruns:0 frame:0
         TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:4070 (4.0 KB) TX bytes:4168 (4.1 KB)

lo       Link encap:Local Loopback
         inet addr:127.0.0.1 Mask:255.0.0.0
         inet6 addr: ::1/128 Scope:Host
         UP LOOPBACK RUNNING MTU:16436 Metric:1
         RX packets:32 errors:0 dropped:0 overruns:0 frame:0
         TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:2496 (2.4 KB) TX bytes:2496 (2.4 KB)

My host's ifconfig:

root@host:~# ifconfig
br0      Link encap:Ethernet HWaddr 4c:72:b9:43:65:2b
         inet addr:92.281.86.226 Bcast:91.121.67.255 Mask:255.255.255.0
         inet6 addr: fe80::4e72:b9ff:fe43:652b/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
         RX packets:1453 errors:0 dropped:18 overruns:0 frame:0
         TX packets:1630 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:145125 (145.1 KB) TX bytes:299943 (299.9 KB)

eth0     Link encap:Ethernet HWaddr 4c:72:b9:43:65:2b
         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
         RX packets:3178 errors:0 dropped:0 overruns:0 frame:0
         TX packets:1637 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:298263 (298.2 KB) TX bytes:309167 (309.1 KB)
         Interrupt:20 Memory:fe500000-fe520000

lo       Link encap:Local Loopback
         inet addr:127.0.0.1 Mask:255.0.0.0
         inet6 addr: ::1/128 Scope:Host
         UP LOOPBACK RUNNING MTU:16436 Metric:1
         RX packets:6 errors:0 dropped:0 overruns:0 frame:0
         TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:300 (300.0 B) TX bytes:300 (300.0 B)

vethtest Link encap:Ethernet HWaddr fe:0d:7f:3e:70:88
         inet6 addr: fe80::fc0d:7fff:fe3e:7088/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
         RX packets:54 errors:0 dropped:0 overruns:0 frame:0
         TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:4168 (4.1 KB) TX bytes:4250 (4.2 KB)

virbr0   Link encap:Ethernet HWaddr de:49:c5:66:cf:84
         inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
         UP BROADCAST MULTICAST MTU:1500 Metric:1
         RX packets:0 errors:0 dropped:0 overruns:0 frame:0
         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

I have disabled lxcbr0 (USE_LXC_BRIDGE="false" in /etc/default/lxc).

root@host:~# brctl show
bridge name   bridge id           STP enabled   interfaces
br0           8000.4c72b943652b   no            eth0
                                                vethtest

I have configured the IP 179.43.46.233 to point to 02:00:00:86:5b:11 in my hosting provider (OVH) config panel. (The IPs in this post are not the real ones.) Thanks for reading this long question! :-) Vianney
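
For anyone who lands on the UPDATE above: the bridge-nf-call-* switches make the kernel pass bridged frames through iptables/arptables, and with restrictive rules in place that can silently eat the containers' traffic; turning them off only stops netfilter from inspecting bridged traffic, while routed traffic is unaffected. A sketch for making the workaround survive a reboot (file location follows the Ubuntu convention):

# /etc/sysctl.conf (or a snippet under /etc/sysctl.d/)
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0

Apply immediately with sysctl -p instead of rebooting.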

    Read the article

  • Nginx no static files after update

    - by SomeoneS
    First, I must say that I am not an expert in server administration; my site was set up by the hosting admins (whom I cannot contact anymore). A few days ago, I updated Nginx to the latest version (the admin told me that it was safe to do). But after that, my site serves only HTML content: no CSS, images or JS. If I try to open some image I get the message "Welcome to Nginx" (the same thing happens if I try to open static.mysitedomain.com). More details: the site has a static. subdomain, but the static files are in the same directory as they were before the static subdomain was set up. I was googling for some solutions and tried to change something in /etc/nginx/, but no luck. I feel that this is some minor configuration problem. Any ideas? EDIT: Here is the /etc/nginx/nginx.conf file content:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Here is the /etc/nginx/sites-enabled/default file content:

server {
    #listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /usr/share/nginx/www;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }

    # Only for nginx-naxsi : process denied requests
    #location /RequestDenied {
    #    # For example, return an error code
    #    #return 418;
    #}

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #    root /usr/share/nginx/www;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    #    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    #    # With php5-cgi alone:
    #    fastcgi_pass 127.0.0.1:9000;
    #    # With php5-fpm:
    #    fastcgi_pass unix:/var/run/php5-fpm.sock;
    #    fastcgi_index index.php;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
#    listen 8000;
#    listen somename:8080;
#    server_name somename alias another.alias;
#    root html;
#    index index.html index.htm;
#
#    location / {
#        try_files $uri $uri/ /index.html;
#    }
#}

# HTTPS server
#
#server {
#    listen 443;
#    server_name localhost;
#
#    root html;
#    index index.html index.htm;
#
#    ssl on;
#    ssl_certificate cert.pem;
#    ssl_certificate_key cert.key;
#
#    ssl_session_timeout 5m;
#
#    ssl_protocols SSLv3 TLSv1;
#    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
#    ssl_prefer_server_ciphers on;
#
#    location / {
#        try_files $uri $uri/ /index.html;
#    }
#}
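
Given the "Welcome to Nginx" page on static.mysitedomain.com, a plausible reading is that the update restored the stock default site and no remaining server block carries server_name static.mysitedomain.com, so those requests fall through to the catch-all above. A hedged sketch of the block that may be missing (the docroot is an assumption; point it at the directory the static files actually live in):

server {
    listen 80;
    server_name static.mysitedomain.com;
    root /var/www/mysitedomain;     # hypothetical path to the existing files
    location / {
        try_files $uri =404;        # serve files only; no index fallback needed here
    }
}

Saved under /etc/nginx/sites-available/ and symlinked into /etc/nginx/sites-enabled/, followed by nginx -t && service nginx reload, this is the usual Debian/Ubuntu routine.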

    Read the article

  • Sometimes big delay when using PHP mail()

    - by Robbert Dam
    I have a website which processes orders from a Windows application. This works as follows:

- User clicks "Order now" in the Windows app
- App uploads a file with POST to a PHP script
- The script immediately calls the PHP mail() function (the order is not stored in a db)

This works fine most of the time. However, sometimes a big delay occurs (several days). Customers call asking why the product has not yet been delivered. The e-mail headers of a delayed mail follow:

Microsoft Mail Internet Headers Version 2.0
Received: from barracuda.nkl.nl ([10.0.0.1]) by smtp.nkl.nl with Microsoft SMTPSVC(6.0.3790.3959); Wed, 26 May 2010 16:26:51 +0200
X-ASG-Debug-ID: 1274883818-2f8800000000-X58hIK
X-Barracuda-URL: http://10.0.0.1:8000/cgi-bin/mark.cgi
Received: from server45.firstfind.nl (localhost [127.0.0.1]) by barracuda.nkl.nl (Spam & Virus Firewall) with ESMTP id ECAFD15776A for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST)
Received: from server45.firstfind.nl (server45.firstfind.nl [93.94.226.76]) by barracuda.nkl.nl with ESMTP id 85bAT2AU58kkxjPb for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST)
X-Barracuda-Envelope-From: [email protected]
Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200
Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200
Date: Wed, 19 May 2010 11:47:10 +0200
Message-Id: <[email protected]>
To: [email protected]
X-ASG-Orig-Subj: easyfit - ref: Hoen3443
Subject: easyfit - ref: Hoen3443
X-PHP-Script: www.nklseminar.nl/emailer/upload.php for 77.61.220.217
From: [email protected]
Content-type: text/html
X-Virus-Scanned: by amavisd-new
X-Barracuda-Connect: server45.firstfind.nl[93.94.226.76]
X-Barracuda-Start-Time: 1274883820
X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210
X-Barracuda-Virus-Scanned: by Barracuda Spam & Virus Firewall at nkl.nl
X-Barracuda-Spam-Score: 0.92
X-Barracuda-Spam-Status: No, SCORE=0.92 using global scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests=DATE_IN_PAST_96_XX, DATE_IN_PAST_96_XX_2, HTML_MESSAGE, MIME_HEADER_CTYPE_ONLY, MIME_HTML_ONLY, NO_REAL_NAME
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.2.30817
Rule breakdown below
pts  rule name              description
---- ---------------------- --------------------------------------------------
0.00 NO_REAL_NAME           From: does not include a real name
0.01 DATE_IN_PAST_96_XX     Date: is 96 hours or more before Received: date
0.00 MIME_HTML_ONLY         BODY: Message only has text/html MIME parts
0.00 HTML_MESSAGE           BODY: HTML included in message
0.86 MIME_HEADER_CTYPE_ONLY 'Content-Type' found without required MIME headers
2.07 DATE_IN_PAST_96_XX_2   DATE_IN_PAST_96_XX_2
Return-Path: [email protected]
X-OriginalArrivalTime: 26 May 2010 14:26:51.0343 (UTC) FILETIME=[7B80DDF0:01CAFCDF]

The delay seems to occur here:

Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200
Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200

I've reported this issue various times to the web hosting service that hosts my website. They say the delay does not occur in their network (impossible).
But they do confirm that the e-mail is first seen on their mail server on May 26, which is 7 days after the mail was composed. The order is marked with the timestamp of the user's local PC, which also matches May 19 (so it's not a PC clock problem). It's also interesting to see that all the delayed mails (orders were placed on different days) come in at once. So I suddenly receive 14 e-mails in my mailbox from various days. Any idea where this delay may be introduced? Could there be a bug in my PHP code that causes this? (I cannot believe I can introduce a loop of 7 days in my PHP code.)
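
Reading those two Received: lines, they bracket the hop from the local sendmail submission queue into the ESMTP daemon on the same box, so the seven days look like time spent sitting in server45's own mail queue; a batch of old messages arriving at once is the classic signature of a stuck queue finally being flushed. Some hedged commands the host could run to confirm (standard sendmail tooling; the log path may vary by distribution):

mailq                              # alias for sendmail -bp: lists queued messages and why they wait
grep o4J9lA7M /var/log/mail.log    # follow this queue ID through the log
sendmail -q -v                     # force a verbose queue run

If orders pile up there, the fix belongs in the host's MTA configuration rather than in the PHP mail() call.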

    Read the article

  • web application with secured sections, sessions and related trouble

    - by spirytus
    I would like to create a web application with the admin/checkout sections being secured. Assuming I have SSL set up for subdomain.mydomain.com, I would like to make sure that all that top-secret stuff ;) like checkout pages and the admin section is transferred securely. Would it be OK to structure my application as below?

subdomain.mydomain.com
  adminSectionFolder
    adminPage1.php
    adminPage2.php
  checkoutPagesFolder
    checkoutPage1.php
    checkoutPage2.php
    checkoutPage3.php
  homepage.php
  loginPage.php
  someOtherPage.php
  someNonSecureFolder
    nonSecurePage1.php
    nonSecurePage2.php
    nonSecurePage3.php
  imagesFolder
    image1.jpg
    image2.jpg
    image3.jpg

Users would access my web application via http, as there is no need for SSL for the homepage and similar pages. Checkout/admin pages would have to be accessed via https though (that I would ensure via .htaccess redirects). I would also like to have a login form on every page of the site, including the non-secure pages. Now my questions are:

1. If I have a form on a non-secure page, e.g. http://subdomain.mydomain.com/homepage.php, and that form sends data to https://subdomain.mydomain.com/loginPage.php, is the data being sent encrypted as if it were sent from https://subdomain.mydomain.com/homepage.php? I do realize users will not see a padlock, but the browser should still encrypt it, right?

2. If on the secure page loginPage.php (or any other page accessed via https, for that matter) I created a session, a session ID would be assigned, and in the case of my web app something like the username of the logged-in user. Would I be able to access these session variables from http://subdomain.mydomain.com/homepage.php to, for example, display a greeting message? If the session ID is stored in cookies then it would be trouble, I assume, but could someone clarify how it should be done? It seems important to have the username and password sent over SSL.

3. Related to the above question, I think: would it actually make any sense to have the login secured via SSL, so the username/password would be transferred securely, and then have the session ID transferred with no SSL? I mean, wouldn't it be the same really if someone caught the username and password being transferred, or caught the session ID? Please let me know if I make sense here, because it feels like I'm missing something important. EDIT: I came up with an idea, but again, please let me know if it would work. Given the above, and assuming that sharing a session between http and https is only as secure as logging the user in via plain http (not https), I guess on all non-secure pages, like the homepage etc., I could check if the user is already logged in, and if so redirect from PHP to the https version of the same page. So the user fills in the login form on homepage.php, the details are sent over SSL to the backend, so probably https://.../homepage.php. Trying to access http://.../someOtherPage.php, the script would always check if a session exists and if so redirect the user to the https version of this page, so https://.../someOtherPage.php. Would that work? (See the sketch after this list.)

4. To avoid the browser popping up the message "this page contains non secure items...", should my links to CSS, images and all assets, e.g. in the case of https://subdomain.mydomain.com/checkoutPage1.php, be absolute, so "/images/image1.jpg", or relative, so "../images/image1.jpg"? I guess one of those would have to work :)

Wow, that's a long post; thanks for your patience if you got this far, and for any answers :) Oh yeah, and I use PHP/Apache on shared hosting.
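
As referenced in question 3, the redirect idea can be sketched as a small include at the top of every non-secure page. A minimal sketch (the filename and the 'username' session key are assumptions for illustration); note the caveat in the comments about the cookie flag:

<?php
// force-https.php: minimal sketch. Requires the session cookie NOT to be marked
// secure-only (session.cookie_secure = 0), or it will never arrive over plain http
// and this check will never fire.
session_start();

$isHttps = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';
if (!empty($_SESSION['username']) && !$isHttps) {
    // Already logged in but on http: bounce to the https version of the same URL.
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}

The flip side is exactly the concern in question 3: while the cookie is allowed over plain http, anyone who can sniff the session ID effectively owns the session, so this scheme protects the password but not the logged-in session itself.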

    Read the article

  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
    I'm setting up a couple of dedicated servers, and having problems setting up my nameservers properly. One of these is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5 with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM. So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end. Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve. Things I have tried:

- Going through standard procedures to set up DNS in WHM
- Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through the WHM procedures for setting up DNS
- Clearing all DNS information, and setting up BIND via shell (completely outside of cPanel) by using my config and zone files from the LEMP server as a template

named runs just fine, but nothing is resolving. When I "dig any example.com" I get a SERVFAIL message. Nslookups return no information. Here are my config and zone files.

named.conf:

controls {
    inet 127.0.0.1 allow { localhost; } keys { coretext-key; };
};

options {
    listen-on port 53 { any; };
    listen-on-v6 port 53 { ::1; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    // Those options should be used carefully because they disable port
    // randomization
    // query-source port 53;
    // query-source-v6 port 53;
    allow-query { any; };
    allow-query-cache { any; };
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

view "localhost_resolver" {
    match-clients { 127.0.0.0/24; };
    match-destinations { localhost; };
    recursion yes;
    //zone "." IN {
    //    type hint;
    //    file "/var/named/named.ca";
    //};
    include "/etc/named.rfc1912.zones";
};

view "internal" {
    /* This view will contain zones you want to serve only to "internal" clients
       that connect via your directly attached LAN interfaces - "localnets". */
    match-clients { localnets; };
    match-destinations { localnets; };
    recursion yes;
    zone "." IN {
        type hint;
        file "/var/named/named.ca";
    };
    // include "/var/named/named.rfc1912.zones";
    // you should not serve your rfc1912 names to non-localhost clients.
    // These are your "authoritative" internal zones, and would probably
    // also be included in the "localhost_resolver" view above:
    zone "example.com" {
        type master;
        file "data/db.example.com";
    };
    zone "3.2.1.in-addr.arpa" {
        type master;
        file "data/db.1.2.3";
    };
};

view "external" {
    /* This view will contain zones you want to serve only to "external" clients
     * that have addresses that are not on your directly attached LAN interface subnets: */
    match-clients { any; };
    match-destinations { any; };
    recursion no;
    // you'd probably want to deny recursion to external clients, so you don't
    // end up providing free DNS service to all takers
    allow-query-cache { none; };
    // Disable lookups for any cached data and root hints
    // all views must contain the root hints zone:
    //include "/etc/named.rfc1912.zones";
    zone "." IN {
        type hint;
        file "/var/named/named.ca";
    };
    zone "example.com" {
        type master;
        file "data/db.example.com";
    };
    zone "3.2.1.in-addr.arpa" {
        type master;
        file "data/db.1.2.3";
    };
};

include "/etc/rndc.key";

db.example.com:

$TTL 1D
;
; Zone file for example.com
;
; Mandatory minimum for a working domain
;
@ IN SOA ns1.example.com. contact.example.com. (
    2011042905 ; serial
    8H         ; refresh
    2H         ; retry
    4W         ; expire
    1D         ; default_ttl
)
    NS ns1.example.com.
    NS ns2.example.com.
ns1 A 1.2.3.4
ns2 A 1.2.3.5
example.com. A 1.2.3.4
localhost A 127.0.0.1
www CNAME example.com.
mail CNAME example.com.
;

db.1.2.3:

$TTL 1D
$ORIGIN 3.2.1.in-addr.arpa.
@ IN SOA ns1.example.com contact.example.com. (
    2011042908 ;
    8H ;
    2H ;
    4W ;
    1D ;
)
    NS ns1.example.com.
    NS ns2.example.com.
4 PTR hostname.example.com.
5 PTR hostname.example.com.
;

Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong, then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spends his whole shift asking all the same questions the last guy asked. So, in summary:

- Nameservers, with IPs, are correctly registered with the domain registrar
- named is configured and running
- ...and must not be configured correctly, because nothing resolves.

Any help would be great. I changed domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks!

UPDATE: I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with my two public IPs that I have named listening on.

resolv.conf:

search www.example.com example.com
nameserver 127.0.0.1
nameserver 7.8.9.10  ; was in here by default, authoritative nameserver of hosting company
nameserver 1.2.3.4   ; public IP #1
nameserver 1.2.3.5   ; public IP #2

Now when I dig example.com from the host, it resolves. If I try to dig from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.

    Read the article

  • nginx bad gateway 502 with mono fastcgi

    - by Bradley Lederholz Leatherwood
    Hello, so I have been trying to get my website to run on Mono (on Ubuntu Server) and I have followed these tutorials almost to the letter. However, when my directory is not blank, the fastcgi logs reveal this:

Notice Beginning to receive records on connection.
Error Failed to process connection. Reason: Exception has been thrown by the target of an invocation.

I am not really sure what this means, and depending on what I do I can get another error that tells me the resource cannot be found:

The resource cannot be found.
Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
Requested URL: /Default.aspx/
Version information: Mono Runtime Version: 2.10.8 (tarball Thu Aug 16 23:46:03 UTC 2012) ASP.NET Version: 4.0.30319.1

If I should provide some more information, please let me know. Edit: I am now getting an nginx gateway error. My nginx configuration file looks like this:

server {
    listen 2194;
    server_name localhost;
    access_log $HOME/WWW/nginx.log;
    location / {
        root $HOME/WWW/dev/;
        index index.html index.html default.aspx Default.aspx Index.cshtml;
        fastcgi_index Views/Home/;
        fastcgi_pass 127.0.0.1:8000;
        include /etc/nginx/fastcgi_params;
    }
}

Running the entire thing with xsp4, I have discovered what is behind the "Exception has been thrown by the target of an invocation":

Handling exception type TargetInvocationException
Message is Exception has been thrown by the target of an invocation.
IsTerminating is set to True
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
Server stack trace:
  at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
  at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <filename unknown>:0
  at System.Runtime.Serialization.ObjectRecord.LoadData (System.Runtime.Serialization.ObjectManager manager, ISurrogateSelector selector, StreamingContext context) [0x00000] in <filename unknown>:0
  at System.Runtime.Serialization.ObjectManager.DoFixups () [0x00000] in <filename unknown>:0
  at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadNextObject (System.IO.BinaryReader reader) [0x00000] in <filename unknown>:0
  at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectGraph (BinaryElement elem, System.IO.BinaryReader reader, Boolean readHeaders, System.Object& result, System.Runtime.Remoting.Messaging.Header[]& headers) [0x00000] in <filename unknown>:0
  at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.NoCheckDeserialize (System.IO.Stream serializationStream, System.Runtime.Remoting.Messaging.HeaderHandler handler) [0x00000] in <filename unknown>:0
  at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize (System.IO.Stream serializationStream) [0x00000] in <filename unknown>:0
  at System.Runtime.Remoting.RemotingServices.DeserializeCallData (System.Byte[] array) [0x00000] in <filename unknown>:0
  at (wrapper xdomain-dispatch) System.AppDomain:DoCallBack (object,byte[]&,byte[]&)
Exception rethrown at [0]:
--- System.ArgumentException: Couldn't bind to method 'SetHostingEnvironment'.
  at System.Delegate.GetCandidateMethod (System.Type type, System.Type target, System.String method, BindingFlags bflags, Boolean ignoreCase, Boolean throwOnBindFailure) [0x00000] in <filename unknown>:0
  at System.Delegate.CreateDelegate (System.Type type, System.Type target, System.String method, Boolean ignoreCase, Boolean throwOnBindFailure) [0x00000] in <filename unknown>:0
  at System.Delegate.CreateDelegate (System.Type type, System.Type target, System.String method) [0x00000] in <filename unknown>:0
  at System.DelegateSerializationHolder+DelegateEntry.DeserializeDelegate (System.Runtime.Serialization.SerializationInfo info) [0x00000] in <filename unknown>:0
  at System.DelegateSerializationHolder..ctor (System.Runtime.Serialization.SerializationInfo info, StreamingContext ctx) [0x00000] in <filename unknown>:0
  at (wrapper managed-to-native) System.Reflection.MonoCMethod:InternalInvoke (System.Reflection.MonoCMethod,object,object[],System.Exception&)
  at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
--- End of inner exception stack trace ---
  at (wrapper xdomain-invoke) System.AppDomain:DoCallBack (System.CrossAppDomainDelegate)
  at (wrapper remoting-invoke-with-check) System.AppDomain:DoCallBack (System.CrossAppDomainDelegate)
  at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000] in <filename unknown>:0
  at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000] in <filename unknown>:0
  at Mono.WebServer.XSP.Server.RealMain (System.String[] args, Boolean root, IApplicationHost ext_apphost, Boolean quiet) [0x00000] in <filename unknown>:0
  at Mono.WebServer.XSP.Server.Main (System.String[] args) [0x00000] in <filename unknown>:0
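
For comparison, a hedged sketch of starting the FastCGI backend that this nginx config proxies to (flag spellings follow the mono-server conventions; adjust the paths to the real application root):

MONO_OPTIONS=--debug fastcgi-mono-server4 \
    /applications=/:$HOME/WWW/dev \
    /socket=tcp:127.0.0.1:8000 \
    /printlog=True /loglevels=Debug

Verbose logging often turns the opaque "Exception has been thrown by the target of an invocation" into the inner error directly; here the trace already shows it is "Couldn't bind to method 'SetHostingEnvironment'", which is commonly reported when assemblies from different Mono/XSP releases end up mixed, so reinstalling mono and xsp from one matching release is a reasonable next step.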

    Read the article

  • Getting 404 in Android app while trying to get xml from localhost

    - by Patrick
    This must be something really stupid; I've been trying to solve this issue for a couple of days now and it's really not working. I searched everywhere and there probably is someone with the same problem, but I can't seem to find it. I'm working on an Android app, and this app pulls some XML from a website. Since this website is down a lot, I decided to save the XML and run it locally. Now what I did:

- I downloaded the kWs app for hosting the downloaded XML file
- I put the file in the right directory and could access it through the mobile browser, but not with my app (same code as I used when pulling it from some other website, not hosted by me; the only difference was the URL, obviously)

So I tried to host it on my PC and access it with my app from there. Again the same results: the mobile browsers had no problem finding it, but the app kept saying 404 Not Found: "The requested URL /test.xml&parama=Someone&paramb= was not found on this server." Note: don't mind the 2 parameters I am sending; I needed those to get the right stuff from the website that wasn't hosted by me. My code:

public String getStuff(String name){
    String URL = "http://10.0.0.8/test.xml";
    ArrayList<NameValuePair> params = new ArrayList<NameValuePair>(2);
    params.add(new BasicNameValuePair("parama", name));
    params.add(new BasicNameValuePair("paramb", ""));
    APIRequest request = new APIRequest(URL, params);
    try {
        RequestXML rxml = new RequestXML();
        AsyncTask<APIRequest, Void, String> a = rxml.execute(request);
        ...
    } catch(Exception e) {
        e.printStackTrace();
    }
    return null;
}

That should be working correctly. Now the RequestXML class part:

class RequestXML extends AsyncTask<APIRequest, Void, String>{
    @Override
    protected String doInBackground(APIRequest... uri) {
        HttpClient httpclient = new DefaultHttpClient();
        String completeUrl = uri[0].url;
        // ... Add parameters to URL ...
        HttpGet request = null;
        try {
            request = new HttpGet(new URI(completeUrl));
        } catch (URISyntaxException e1) {
            e1.printStackTrace();
        }
        HttpResponse response;
        String responseString = "";
        try {
            response = httpclient.execute(request);
            StatusLine statusLine = response.getStatusLine();
            if(statusLine.getStatusCode() == HttpStatus.SC_OK){
                // .. It crashes here, because statusLine.getStatusCode() returns a 404 instead of a 200.

The XML is just plain XML, nothing special about it. I changed the contents of the .htaccess file to "ALLOW FROM ALL" (this works, because the browser on my mobile device can access it and shows the correct XML). I am running Android 4.0.4 and I am using the default browser AND Chrome on my mobile device. I am using MoWeS to host the website on my PC. Any help would be appreciated, and if you need to know anything before you can find an answer to this problem, I'll be more than happy to give you that info. Thank you for your time! Cheers.
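
Looking closely at the failing request, the URL in the 404 body is /test.xml&parama=Someone&paramb= — the query string is joined with & instead of ?, so the local servers (kWs, MoWeS) look for a file literally named test.xml&parama=..., which doesn't exist; the original remote site presumably tolerated or rewrote it. A hedged sketch of building the URL with the Apache HttpClient helper already on the classpath (org.apache.http.client.utils.URLEncodedUtils):

// Inside doInBackground(), in place of the manual parameter appending.
// NOTE: "uri[0].params" is hypothetical — whatever field APIRequest uses to
// carry the NameValuePair list. The key fix is the '?' before the first parameter.
String query = URLEncodedUtils.format(uri[0].params, "UTF-8");
String completeUrl = uri[0].url + "?" + query;
HttpGet request = new HttpGet(completeUrl);

If that is the cause, both the PC and the kWs setups should start returning 200 without any server-side changes.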

    Read the article

  • Mail Server on Google Apps, Hotmail Blocks

    - by detay
    My company provides private research tools (sites) for companies. Each site has its own domain name and users. These sites need to be able to send outbound mail for notifications such as password reminders. We started using Google Apps for this purpose, but lately we are having tremendous problems with mailing, especially to Hotmail. I added the SPF record as required. Mxtoolbox says my domains are not in any blacklists. I was told to contact https://postmaster.live.com/snds/addnetwork.aspx and add my network. Given that I am using Google's mail servers for sending mail, should I add their IP address when registering my domain? Or should I add my server's IP, even though I am not hosting the mail server on it? Here's an example of a returned message:

Delivered-To: [email protected]
Received: by 10.112.1.195 with SMTP id 3csp62757lbo; Mon, 10 Dec 2012 10:42:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=gmail.com; s=20120113;
  h=mime-version:from:to:x-failed-recipients:subject:message-id:date
  :content-type;
  bh=x41JiCH75hsonh+5c/kzwIs7R8u4Hum2u396lkV3g2w=;
  b=v+bBPvufqfVkNz2UrnhyyoGj1Cuwf6x/yMj0IXJgH27uJU7TqBOvbwnNHf33sQ7B9T
  uGzxwryC2fmxBh72cGz1spwWK348nUp6KK73MXKnF8mpc3nPQ8Ke+EQpSPeJdq/7oZvd
  scZTGpy//IiGEUDU5bJ7YPqYXQRycY5N6AF5iI4mWWwbS4opybp3IKpDDktu11p/YEEO
  Fj9wnSCx3nLMXB/XZgSjjmnaluGhYNdh3JFtz83Vmr50qXTG/TuIXlkirP73GfGvIt2S
  7sy8/YdEbZR92mYe9wucditYr7MOuyjyYrZYCg+weXeTZiJX1PfBVieD+kD9MnCairnP
  rQeQ==
Received: by 10.14.215.197 with SMTP id e45mr52766237eep.0.1355164974784; Mon, 10 Dec 2012 10:42:54 -0800 (PST)
MIME-Version: 1.0
Return-Path: <>
Received: by 10.14.215.197 with SMTP id e45mr65950764eep.0; Mon, 10 Dec 2012 10:42:54 -0800 (PST)
From: Mail Delivery Subsystem <[email protected]>
To: [email protected]
X-Failed-Recipients: ********@hotmail.fr
Subject: Delivery Status Notification (Failure)
Message-ID: <[email protected]>
Date: Mon, 10 Dec 2012 18:42:54 +0000
Content-Type: text/plain; charset=ISO-8859-1

Delivery to the following recipient failed permanently:

*********@hotmail.fr

Technical details of permanent failure: Message rejected. See http://support.google.com/mail/bin/answer.py?answer=69585 for more information.

----- Original message -----

Received: by 10.14.215.197 with SMTP id e45mr52766228eep.0.1355164974700; Mon, 10 Dec 2012 10:42:54 -0800 (PST)
Return-Path: <[email protected]>
Received: from WIN-1BBHFMJGUGS ([91.93.102.12]) by mx.google.com with ESMTPS id l3sm26787704eem.14.2012.12.10.10.42.53 (version=TLSv1/SSLv3 cipher=OTHER); Mon, 10 Dec 2012 10:42:54 -0800 (PST)
Message-ID: <[email protected]>
MIME-Version: 1.0
From: [email protected]
To: *********@hotmail.fr
Date: Mon, 10 Dec 2012 10:42:54 -0800 (PST)
Subject: Rejoignez-nous sur achbanlek.com
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: base64

----- End of message -----
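
On the SNDS question: SNDS only lets you register IP space you actually control, and with Google Apps the IPs handing mail to Hotmail are Google's, so registering either your own server's IP or Google's ranges is unlikely to help. What you can control is authentication for each sending domain. A hedged sketch of the records Google's own guidance recommends (zone-file syntax; the domain and the key value are placeholders):

example-client-domain.com.  IN TXT  "v=spf1 include:_spf.google.com ~all"
; DKIM: generate the key in the Google Apps admin console, then publish it as
google._domainkey.example-client-domain.com.  IN TXT  "v=DKIM1; k=rsa; p=<key from the console>"

Per-domain DKIM signing set up in the Apps control panel, plus the SPF include above, is usually the first thing to verify when Hotmail rejects Apps-relayed mail.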

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index Welcome to the second article in this quick quide to Oracle IRM 11g. Hopefully you've just finished the first article which takes you through deploying the software onto a Linux server. This article walks you through the configuration of this new service and contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform specific information. Contents Introduction Create IRM WebLogic Domain Starting the Admin Server and initial configuration Introduction In the previous article the database was prepared, the WebLogic Application Server installed and the files required for an IRM server installed. But we don't actually have a configured system yet. We need to now create a WebLogic Domain in which the IRM server will run, then configure some of the settings and crypography so that we can create a context and be ready to seal some content and test it all works. This article doesn't cover the configuration of SSL communication from client to server. This is quite a big topic and a separate article has been dedicated for this area. In these articles I also use the hostname, irm.company.internal to reference the IRM server and later on use the hostname irm.company.com in reference to the public facing service. Create IRM WebLogic Domain First step is creating the WebLogic domain, in a console switch to the newly created IRM installation folder as shown below and we will run the domain configuration wizard. [oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin [oracle@irm bin]$ ./config.sh First thing the wizard will ask is if you wish to create a new or extend an existing domain. This guide is creating a standalone system so you should select to create a new domain. Next step is to choose what technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF" as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain or irm_domain and leave the others with their defaults. Now the domain will have a single user initially and by default this user is called "weblogic". I usually change this account name to "sysadmin" or "administrator", but in this guide lets just accept the default. With respects to the next dialog, again for eval or dev reasons, leave the server startup mode as development. The JDK should also be automatically detected. We now need to provide details of the database. This guide is using the Oracle 11gR2 database and the settings I used can be seen in the image to the right. There is a lot of configuration that can now be done for the admin server, any managed servers and where the deployments reside. In this guide I am leaving all of these to their defaults so do not check any of the boxes. However I will on this blog be detailing later how you can go back and setup things such as automated startup of an IRM server which require changes to these default settings. But for now, lets leave it all alone and just click next. Now we are ready to install. 
Note that from this dialog you can scroll the left window and see there are going to be two servers created from the defaults. The AdminServer which is where you modify settings for the WebLogic Server and also hosts the Oracle Enterprise Manager for IRM which allows to monitor the IRM service performance and also make service related settings (which we shortly do below) and the IRM_server1 which hosts the actual IRM services themselves. So go right ahead and hit create, the process is pretty quick and usually under 10 minutes. When the domain creation ends, it will give you the URL to the admin server. It's worth noting this down and the URL is usually; http://irm.company.internal:7001 Starting the Admin Server and initial configuration First thing to do is to start the WebLogic Admin server and review the initial IRM server settings. In this guide we are going to run the Admin server and IRM server in console windows, in another article I will discuss running these as background services. So for now, start a console and run the Admin server by doing the following. cd /oracle/middleware/user_projects/domains/irm_domain/ ./startWebLogic.sh Wait for the server to start, you are looking for the following line to be reported in the console window. <BEA-00360><Server started in RUNNING mode> First step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running you can point a browser at http://irm.company.internal:7001/em. Login with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server wide configuration. However finding where to access the pages with these settings can be a bit of a challenge. After logging in on the left you'll see a tree containing elements of the Enterprise Manager farm Farm_irm_domain. Open up Content Management, then Information Rights Management and finally select the IRM node. On the right then select the IRM menu item, navigate to the Administration section and now we have four options, for now, we are just going to look at General Settings. The image on the right proves that a picture is worth a thousand words (or 113 in this case). The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths or you need to comply to a regulation or government mandate, AES192 is a good start. You can change this later on without worry. The most important setting here we need to make is the Server URL. In this blog article I go over why this URL is so important, basically every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, then nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, then use http as the protocol instead of https. But I would recommend using SSL and setting this up is described in the next article. I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content onto 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation nor even production environments and my experience is that 3 is a better number. Next step is to create the keystore for the IRM server. 
The next step is to create the keystore for the IRM server. When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys which are stored externally to the server, either in a Java Key Store or an HSM. These keys need to be generated, and the following shows my commands and the resulting output. I have greyed out the responses from the commands so you can see the input a little easier.

[oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/
[oracle@irmsrv bin]$ ./setWLSEnv.sh
CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:
PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
Your environment has been set.
[oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
[oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: Simon Thorpe
What is the name of your organizational unit?
[Unknown]: Oracle
What is the name of your organization?
[Unknown]: Oracle
What is the name of your City or Locality?
[Unknown]: San Francisco
What is the name of your State or Province?
[Unknown]: CA
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct?
[no]: yes
Enter key password for <oracle.irm.wrap>
(RETURN if same as keystore password):

At this point we now have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is that this folder is backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the content keys are wrapped with these keys; lose them and you can't get access to any secured content. Pretty important.

Now that we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Back on the General Settings page in Enterprise Manager, scroll down to Keystore Settings. Leave the type as JKS but change the location to:

/oracle/Middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks

and hit Apply.
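If you ever want to confirm what ended up in the key store, keytool can list it. This is just a quick verification, not a required step; run it from the same fmwconfig directory:

[oracle@irmsrv fmwconfig]$ keytool -list -v -keystore irm.jks
Enter keystore password:

You should see a single PrivateKeyEntry with the alias oracle.irm.wrap and the distinguished name you entered above.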
The final step with regards to the key store is telling the server the password for the Java Key Store so that it can be opened and the keys accessed. Once more, fire up a console window and run these commands (again I've greyed out the clutter so the commands are easier to see). You will see dummy passed into the commands; this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and it isn't used.

[oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/
[oracle@irmsrv bin]$ ./wlst.sh
... lots of settings fly by ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
wls:/offline> connect('weblogic','password','t3://irmsrv.us.oracle.com:7001')
Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ...
Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'.
Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
wls:/irm_domain/serverConfig> createCred("IRM","keystore:irm.jks","dummy","password")
Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help(domainRuntime)
wls:/irm_domain/serverConfig> createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password")
Already in Domain Runtime Tree
wls:/irm_domain/serverConfig>

At last we are ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1 and we need to start it; use the following commands in a new console window.

cd /oracle/middleware/user_projects/domains/irm_domain/bin/
./startManagedWebLogic.sh IRM_server1

This starts the server in the console. Unlike the Admin server, you need to provide a username and password for the service to start; enter your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file (a minimal sketch is shown at the end of this article); read more about this in the WebLogic Server documentation. Once running, wait until you see the line:

<Notice> <WebLogicServer> <BEA-000360> <Server started in RUNNING mode>

At this point we can login to the Oracle IRM Management Website at the URL:

http://irm.company.internal:16100/irm_rights/

The server is configured for HTTP only at the moment, with no SSL involved; we just want to get a working system up and running. You should now see a login like the image on the right, and you can login using your weblogic username and password. The next article in this guide goes over adding SSL, and then testing your server by actually adding a few users, sealing some content and opening this content as a user.
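As promised above, here is a minimal boot.properties sketch so the managed server starts without prompting for credentials. The path follows the standard WebLogic convention of a security folder under the server directory (an assumption based on WebLogic behavior rather than a step from this guide); WebLogic encrypts the values on first boot.

mkdir -p /oracle/middleware/user_projects/domains/irm_domain/servers/IRM_server1/security
cat > /oracle/middleware/user_projects/domains/irm_domain/servers/IRM_server1/security/boot.properties <<'EOF'
# Plain text credentials; WebLogic replaces these with encrypted values on first start.
username=weblogic
password=password
EOF

The next time you run ./startManagedWebLogic.sh IRM_server1, the server should boot without asking for the username and password.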
