Search Results

Search found 5019 results on 201 pages for 'jakarta commons logging'.


  • C# Windows Service XML

    - by Goober
    Scenario: I have a Windows service written in C# that performs some processing based on parsing an XML file, and uses that data to carry out various tasks. The service also does various bits of logging, which uses settings from an App.config file.

    The Problem: When the service is compiled, installed and run, the XML file seems to disappear; I get the impression it is simply being ignored. So far I've tried using TWO App.config files: one named App.config that contains the settings for the service, and the other called MyService.exe.config that contains all of the data that was in the XML file (the idea being that I can parse the XML from a config file that actually gets compiled and appears in my installation directory). However, when I do this, only ONE config file appears (with the name MyService.exe.config), and it contains the contents of the App.config file, not the XML data that I want to parse.

    What I need: All I want is a config file for my settings, and an XML file for my data.

    Question: Is this possible? I know the application works, as it was originally built as a console application that ran fine.

    Other: The application has to be designed this way (as in, I need my data stored as XML, and my settings stored in a config file).

    Thoughts: If I could somehow combine the contents of the two files into ONE config file, that would be one way of solving the problem. However, I have tried this and of course I get a "Type Initialisation Exception", as the config system cannot interpret the XML data (probably because the tags are custom and do not form any part of the config schema).

    Ideas: Please could someone explain whether it is possible for me to have an XML file AND a config file that will actually be compiled and stored in my installation directory when the service is run?

    CODE

    Custom XML/data config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <servers>
            <SV066930>
              <add name="Name" value="SV066930" />
              <processes>
                <SimonTest1>
                  <add name="ProcessName" value="notepad.exe" />
                  <add name="CommandLine" value="C:\\WINDOWS\\system32\\notepad.exe C:\\WINDOWS\\Profiles\\TA2TOF1\\Desktop\\SimonTest1.txt" />
                </SimonTest1>
              </processes>
            </SV066930>
          </servers>
        </configuration>

    App.config settings file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxx" />
          </configSections>
          <connectionStrings>
            <add name="DB" connectionString="Data Source=etc......" />
          </connectionStrings>
        </configuration>

    Help greatly appreciated.
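    A note on what is probably happening (an assumption, since the project settings aren't shown): Windows services are started with System32 as their working directory, so a data file referenced by a relative path will not be found next to the installed .exe. The usual fix is to keep the data XML out of App.config entirely, mark it in Visual Studio as Content with "Copy to Output Directory" so it ships alongside the binary, and resolve it against the installation directory at runtime. The same pattern, sketched in Java (the file name ServersData.xml is hypothetical):

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class DataFileLoader {
            // Assumed layout: ServersData.xml sits next to the application binary.
            public static Document loadServersData() throws Exception {
                // Resolve against the install directory, not the current working
                // directory, because services are not started from their install folder.
                File installDir = new File(DataFileLoader.class.getProtectionDomain()
                        .getCodeSource().getLocation().toURI()).getParentFile();
                File dataFile = new File(installDir, "ServersData.xml");
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(dataFile);
                doc.getDocumentElement().normalize();
                return doc;
            }
        }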


  • Slowdowns when reading from a URLConnection's InputStream (even with byte[] and buffers)

    - by user342677
    OK, so after spending two days trying to figure out the problem, and reading about a zillion articles, I finally decided to man up and ask for some advice (my first time here). Now to the issue at hand: I am writing a program which will parse API data from a game, namely battle logs. There will be A LOT of entries in the database (20+ million), so the parsing speed of each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0 (view the source if using Chrome; it doesn't display the page right). Each has 1000 hit entries, followed by a little battle info (the last page will have <1000, obviously). On average, a page contains 175,000 characters, UTF-8 encoded, XML format (v1.0). The program will run locally on a good PC; memory is virtually unlimited (so creating a byte[250000] is quite OK). The format never changes, which is quite convenient. Now, I started off as usual:

        // global vars, class declaration skipped
        public WebObject(String url_string, int connection_timeout, int read_timeout,
                boolean redirects_allowed, String user_agent)
                throws java.net.MalformedURLException, java.io.IOException {
            // Open a URL connection
            java.net.URL url = new java.net.URL(url_string);
            java.net.URLConnection uconn = url.openConnection();
            if (!(uconn instanceof java.net.HttpURLConnection)) {
                throw new java.lang.IllegalArgumentException("URL protocol must be HTTP");
            }
            conn = (java.net.HttpURLConnection) uconn;
            conn.setConnectTimeout(connection_timeout);
            conn.setReadTimeout(read_timeout);
            conn.setInstanceFollowRedirects(redirects_allowed);
            conn.setRequestProperty("User-agent", user_agent);
        }

        public void executeConnection() throws IOException {
            try {
                is = conn.getInputStream(); // global var
                l = conn.getContentLength(); // global var
            } catch (Exception e) {
                // handling code skipped
            }
        }

        // getContentStream and getLength methods, which just return 'is' and 'l', are skipped

    Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long and what doesn't. The call to this method takes only 200 ms on average:

        public InputStream getWebPageAsStream(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 "
                    + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)");
            wobj.executeConnection();
            l = wobj.getContentLength(); // global variable
            return wobj.getContentStream(); // returns 'is' stream
        }

    200 ms is quite expected of a network operation, and I am fine with it. BUT when I parse the InputStream in any way (read it into a string, feed it to the Java XML parser, read it into another ByteArrayStream), the process takes over 1000 ms!

    For example, this code takes 1000 ms IF I pass the stream I got ('is') above from getContentStream() directly to this method:

        public static Document convertToXML(InputStream is)
                throws ParserConfigurationException, IOException, SAXException {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document doc = db.parse(is);
            doc.getDocumentElement().normalize();
            return doc;
        }

    This code, too, takes around 920 ms IF the initial InputStream 'is' is passed in (don't read into the code itself - it just extracts the data I need by directly counting the characters, which can be done thanks to the rigid API feed format):

        public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException {
            // Point A
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            LinkedList ll = new LinkedList();
            String str = br.readLine();
            while (str != null) {
                ll.add(str);
                str = br.readLine();
            }
            if (((String) ll.get(1)).indexOf("error") != -1) {
                return new parsedBattlePage(null, null, true, -1);
            }
            // Point B
            Iterator it = ll.iterator();
            it.next(); it.next(); it.next(); it.next();
            String[][] hits_arr = new String[1000][4];
            String t_str = (String) it.next();
            String tmp = null;
            int j = 0;
            for (int i = 0; t_str.indexOf("time") != -1; i++) {
                hits_arr[i][0] = t_str.substring(12, t_str.length() - 11);
                tmp = (String) it.next();
                hits_arr[i][1] = tmp.substring(14, tmp.length() - 9);
                tmp = (String) it.next();
                hits_arr[i][2] = tmp.substring(15, tmp.length() - 10);
                tmp = (String) it.next();
                hits_arr[i][3] = tmp.substring(18, tmp.length() - 13);
                it.next();
                it.next();
                t_str = (String) it.next();
                j++;
            }
            String[] b_info_arr = new String[9];
            int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13};
            for (int i = 0; i < space_nums.length; i++) {
                tmp = (String) it.next();
                b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1);
            }
            // Point C
            return new parsedBattlePage(hits_arr, b_info_arr, false, j);
        }

    I have tried replacing the default BufferedReader with

        BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000);

    That didn't change much. My second try was to replace the code between A and B with

        Iterator it = IOUtils.lineIterator(is, "UTF-8");

    Same result, except this time A-B was 0 ms and B-C was 1000 ms, so every call to it.next() must have been consuming some significant time (IOUtils is from the apache-commons-io library).

    And here is the culprit: the time taken to parse the stream into strings, whether by an iterator or a BufferedReader, was in ALL cases about 1000 ms, while the rest of the code took 0 ms (i.e. was irrelevant). This means that reading the stream into the LinkedList, or iterating over it, was for some reason eating up a lot of my system resources. The question was: why? Is it just the way Java is made? No, that would be silly, so I did another experiment. In my main method, after getWebPageAsStream(), I added:

        // Point A
        ba = new byte[l]; // 'l' comes from wobj.getContentLength above
        bytesRead = is.read(ba); // 'is' is our URLConnection's original InputStream
        offset = bytesRead;
        while (bytesRead != -1) {
            bytesRead = is.read(ba, offset - 1, l - offset);
            offset += bytesRead;
        }
        // Point B
        InputStream is2 = new ByteArrayInputStream(ba);
        // Now just working with 'is2' - the "copied" stream

    The InputStream-to-byte[] conversion again took 1000 ms - this is the way many people suggest reading an InputStream, and still it is slow.

    And guess what: the two parser methods above, convertToXML() and convertBattleToXMLWithoutDOM(), when passed 'is2' instead of 'is', took under 50 ms to complete in all four cases.

    I read a suggestion that the stream waits for the connection to close before unblocking, so I tried using HttpComponents HttpClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. For example, this code:

        public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            HttpClient httpclient = new DefaultHttpClient();
            HttpGet httpget = new HttpGet(url);
            HttpParams p = new BasicHttpParams();
            HttpConnectionParams.setSocketBufferSize(p, 250000);
            HttpConnectionParams.setStaleCheckingEnabled(p, false);
            HttpConnectionParams.setConnectionTimeout(p, 5000);
            httpget.setParams(p);
            HttpResponse response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            l = (int) entity.getContentLength();
            return entity.getContent();
        }

    took even longer to process (50 ms more, just for the network), and the stream parsing times remained the same. Obviously it could be restructured so as not to create the HttpClient and its properties every time (faster network time), but the stream issue won't be affected by that.

    So we come to the central problem: why does the initial URLConnection InputStream (or HttpClient InputStream) take so long to process, while a stream of the same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I can't see any good reason why it is processed so slowly compared to the same stream just created from a byte[]. Considering I have to parse millions of entries and thousands of pages like that, a total processing time of almost 1.5 s/page seems WAY, WAY too long. Any ideas?

    P.S. Please ask if any more code is required. The only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in batches of 1000+; performance there is OK, ~200 ms per 1000 entries, and could probably be optimized with more caching, but I didn't look into it much.
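    Two observations on the experiments above. First, the byte[] loop has an off-by-one: it reads into offset - 1, clobbering the last byte of the previous read, and the final read() that returns -1 also decrements offset. A correct "drain the response into memory first" helper is just a plain copy loop - a minimal sketch, assuming (as stated) the content always fits in memory:

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public final class StreamUtil {
            // Drains a stream fully into memory and returns a replayable copy.
            public static InputStream readFully(InputStream in) throws IOException {
                ByteArrayOutputStream out = new ByteArrayOutputStream(256 * 1024);
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) { // blocks until bytes arrive from the socket
                    out.write(buf, 0, n);
                }
                return new ByteArrayInputStream(out.toByteArray());
            }
        }

    Second, one plausible reading of the numbers: the ~1000 ms is the network transfer itself, billed to whichever code first pulls bytes off the socket. getWebPageAsStream() returns as soon as the response headers arrive, so it looks cheap; the body still has to stream in, which is why every consumer of 'is' pays the same cost, and why 'is2' - already fully in RAM - parses in under 50 ms.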


  • Code Access Security and Sharepoint WebParts

    - by Gordon Carpenter-Thompson
    I've got a vague handle on how Code Access Security works in Sharepoint. I have developed a custom webpart and setup a CAS policy in my Manifest <CodeAccessSecurity> <PolicyItem> <PermissionSet class="NamedPermissionSet" version="1" Description="Permission set for Okana"> <IPermission class="Microsoft.SharePoint.Security.SharePointPermission, Microsoft.SharePoint.Security, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" version="1" ObjectModel="True" Impersonate="True" /> <IPermission class="SecurityPermission" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration" /> <IPermission class="AspNetHostingPermission" version="1" Level="Medium" /> <IPermission class="DnsPermission" version="1" Unrestricted="true" /> <IPermission class="EventLogPermission" version="1" Unrestricted="true"> <Machine name="localhost" access="Administer" /> </IPermission> <IPermission class="EnvironmentPermission" version="1" Unrestricted="true" /> <IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/> <IPermission class="System.Net.WebPermission, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true" /> <IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" Unrestricted="true" /> <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true" PathDiscovery="*AllFiles*" /> <IPermission class="IsolatedStorageFilePermission" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807" /> <IPermission class="PrintingPermission" version="1" Level="DefaultPrinting" /> <IPermission class="PerformanceCounterPermission" version="1"> <Machine name="localhost"> <Category name="Enterprise Library Caching Counters" access="Write"/> <Category name="Enterprise Library Cryptography Counters" access="Write"/> <Category name="Enterprise Library Data Counters" access="Write"/> <Category name="Enterprise Library Exception Handling Counters" access="Write"/> <Category name="Enterprise Library Logging Counters" access="Write"/> <Category name="Enterprise Library Security Counters" access="Write"/> </Machine> </IPermission> <IPermission class="ReflectionPermission" version="1" Unrestricted="true"/> <IPermission class="SecurityPermission" version="1" Flags="SerializationFormatter, UnmanagedCode, Infrastructure, Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration, ControlAppDomain,ControlDomainPolicy" /> <IPermission class="SharePointPermission" version="1" ObjectModel="True" /> <IPermission class="SmtpPermission" version="1" Access="Connect" /> <IPermission class="SqlClientPermission" version="1" Unrestricted="true"/> <IPermission class="WebPartPermission" version="1" Connections="True" /> <IPermission class="WebPermission" version="1"> <ConnectAccess> <URI uri="$OriginHost$"/> </ConnectAccess> </IPermission> </PermissionSet> <Assemblies> .... </Assemblies> This is correctly converted into a wss_custom_wss_minimaltrust.config when it is deployed onto the Sharepoint server and mostly works. 
    To get the WebPart working fully, however, I find that I need to modify the wss_custom_wss_minimaltrust.config by hand after deployment and set Unrestricted="true" on the permission set, changing

        <PermissionSet class="NamedPermissionSet" version="1" Description="Permission set for MyApp" Name="mywebparts.wsp-86d8cae1-7db2-4057-8c17-dc551adb17a2-1">

    to

        <PermissionSet class="NamedPermissionSet" version="1" Description="Permission set for MyApp" Name="mywebparts.wsp-86d8cae1-7db2-4057-8c17-dc551adb17a2-1" Unrestricted="true">

    It's all because I'm loading a User Control from the WebPart. I don't believe there is a way to enable that using CAS, but I am willing to be proven wrong. Is there a way to set something in the manifest so that I don't need to make this fix by hand? Thanks


  • Mac won't boot into safe mode

    - by Stephen
    Mac boots fine normally, except when in safe mode. Holding down shift when booting gets me to the progress bar on the grey screen. Progress bar gets about half way before mac reboots. I modified nvram boot-args to get a better look: sudo nvram boot-args="-x -v" It definitely gets through fsck, skips loading kernel extensions (since it's in safe mode), does something with the network interfaces, then this is the last thing it wips through... Aug 22 11:56:21 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'com.apple.ServiceManagement.daemons.modify' by client '/usr/libexec/UserEventAgent' [10] for authorization created by '/usr/libexec/UserEventAgent' [10] (100012,0) Aug 22 11:56:22 Crockpot fseventsd[37]: event logs in /.fseventsd out of sync with volume. destroying old logs. (1 174 330) Aug 22 11:56:22 Crockpot fseventsd[37]: log dir: /.fseventsd getting new uuid: 5C379650-26FA-428F-B81F-4FE4349D50B3 Aug 22 11:56:23 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) starting OSXVers 12 Aug 22 11:56:23 Crockpot systemkeychain[35]: done file: /var/run/systemkeychaincheck.done Aug 22 11:56:23 Crockpot configd[17]: network changed: DNS* Aug 22 11:56:24 --- last message repeated 1 time --- Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2D_IPC: Loaded Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2DInitialize succeeded Aug 22 11:56:24 Crockpot mDNSResponder[39]: Adding registration domain 273025955.members.btmm.icloud.com. Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe MAC AUTH succeeded Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe Unsolicited Auth Aug 22 11:56:24 Crockpot kernel[0]: wlEvent: en1 en1 Link UP virtIf = 0 Aug 22 11:56:24 Crockpot kernel[0]: AirPort: Link Up on en1 Aug 22 11:56:24 Crockpot kernel[0]: en1: BSSID changed to 00:23:69:35:dc:fe Aug 22 11:56:24 Crockpot kernel[0]: en1::IO80211Interface::postMessage bssid changed Aug 22 11:56:24 Crockpot kernel[0]: AirPort: RSN handshake complete on en1 Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences failed to read preferences data. Errno was 21 Aug 22 11:56:25 --- last message repeated 1 time --- Aug 22 11:56:25 Crockpot airportd[30]: _doAutoJoin: Already associated to “burnum”. Bailing on auto-join. Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Can't load IOBluetoothSerialManager.kext - ineligible during safe boot. Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Load com.apple.iokit.IOBluetoothSerialManager failed; removing personalities from kernel. Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences: error renaming file blued.plist.HXuEmQn to blued.plist. Aug 22 11:56:27 Crockpot awacsd[52]: Starting awacsd connectivity-77 (Jun 20 2012 15:40:49) Aug 22 11:56:27 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'system.services.systemconfiguration.network' by client '/System/Library/Frameworks/SystemConfiguration.framework/Versions/A/Resources/SCHelper' [54] for authorization created by '/usr/sbin/awacsd' [52] (100003,0) Aug 22 11:56:27 --- last message repeated 1 time --- Aug 22 11:56:27 Crockpot awacsd[52]: Configuring lazy AWACS client: 273025955.p04.members.btmm.icloud.com. 
Aug 22 11:56:28 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102) Aug 22 11:56:32 --- last message repeated 1 time --- Aug 22 11:56:32 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:56:38 --- last message repeated 1 time --- Aug 22 11:56:38 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102) Aug 22 11:56:47 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:56:49 Crockpot configd[17]: subnet_route: write routing socket failed, Network is unreachable Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1+:169.254.80.161) DNS* Proxy+ SMB Aug 22 11:56:51 Crockpot UserEventAgent[10]: Captive: en1: Not probing 'burnum' (protected network) Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1:169.254.80.161) DNS Proxy SMB Aug 22 11:57:07 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:57:23 Crockpot fseventsd[37]: Logging disabled completely for device:1: /Volumes/Recovery HD Aug 22 11:57:25 Crockpot kernel[0]: Kext loading now disabled. Aug 22 11:57:25 Crockpot kernel[0]: Kext unloading now disabled. Aug 22 11:57:25 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) stopping Aug 22 11:57:25 Crockpot com.apple.SecurityServer[15]: Killing auth hosts Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 Crockpot configd[17]: dnssd_clientstub read_all(26) failed 0/28 0 Aug 22 11:57:25 Crockpot configd[17]: [0x7fb025119ff0] SCNetworkReachability _llq_callback w/error=-65563 Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2D_IPC: Terminated Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2DTerminate succeeded Aug 22 11:57:25 Crockpot awacsd[52]: dnssd_clientstub read_all(4) failed 0/28 0 Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 --- last message repeated 2 times --- Aug 22 11:57:25 Crockpot apsd[55]: dnssd_clientstub read_all(4) failed 0/28 0 Aug 22 11:57:25 Crockpot configd[17]: SCNC: stop, triggered by configd, type PPPSerial, reason Terminated All Aug 22 11:57:25 Crockpot configd[17]: _d2dCallback: D2D connection to mDNSResponder lost Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 --- last message repeated 4 times --- Aug 22 11:57:25 Crockpot kernel[0]: Kext autounloading now disabled. Aug 22 11:57:25 Crockpot kernel[0]: Kernel requests now disabled. ... before rebooting in the middle of the safe mode startup sequence. Aug 22 12:01:10 localhost bootlog[0]: BOOT_TIME 1345662070 0 Aug 22 12:01:32 localhost kernel[0]: PMAP: PCID enabled Aug 22 12:01:32 localhost kernel[0]: Darwin Kernel Version 12.0.0: Sun Jun 24 23:00:16 PDT 2012; root:xnu-2050.7.9~1/RELEASE_X86_64 Any ideas what's causing the safe mode boot to fail? System Info MacBook Pro 8,2 2.2 Ghz Core i7 4 GM Ram Mountain Lion 10.8 500GB TOSHIBA MK5065GSXF Serial-ATA rotational disk


  • Tomcat 7 on Ubuntu 12.04 with JRE 7 not starting

    - by Andreas Krueger
    I am running a virtual server in the web on Ubuntu 12.04 LTS / 32 Bit. After a clean install of JRE 7 and Tomcat 7, following the instructions on http://www.sysadminslife.com, I don't get Tomcat 7 up and running. > java -version java version "1.7.0_09" Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) Client VM (build 23.5-b02, mixed mode) > /etc/init.d/tomcat start Starting Tomcat Using CATALINA_BASE: /usr/local/tomcat Using CATALINA_HOME: /usr/local/tomcat Using CATALINA_TMPDIR: /usr/local/tomcat/temp Using JRE_HOME: /usr/lib/jvm/java-7-oracle Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar > telnet localhost 8080 Trying ::1... Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection refused netstat sometimes shows a Java process, most of the times not. If it does, nothing works either. Does anyone have a solution or encountered similar situations? Here are the contents of catalina.out: 16.11.2012 18:36:39 org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-6-oracle/lib/i386/client:/usr/lib/jvm/java-6-oracle/lib/i386:/usr/lib/jvm/java-6-oracle/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-bio-8080"] 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-bio-8009"] 16.11.2012 18:36:40 org.apache.catalina.startup.Catalina load INFO: Initialization processed in 1509 ms 16.11.2012 18:36:40 org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina 16.11.2012 18:36:40 org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.29 16.11.2012 18:36:40 org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /usr/local/tomcat/webapps/manager Here come the results of ps -ef, iptables --list and netstat -plut: > ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 Nov16 ? 00:00:00 init root 2 1 0 Nov16 ? 00:00:00 [kthreadd/206616] root 3 2 0 Nov16 ? 00:00:00 [khelper/2066167] root 4 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 5 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 6 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 7 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 8 2 0 Nov16 ? 00:00:00 [nfsiod/2066167] root 119 1 0 Nov16 ? 00:00:00 upstart-udev-bridge --daemon root 125 1 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 157 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 158 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 205 1 0 Nov16 ? 00:00:00 upstart-socket-bridge --daemon root 276 1 0 Nov16 ? 00:00:00 /usr/sbin/sshd -D root 335 1 0 Nov16 ? 00:00:00 /usr/sbin/xinetd -dontfork -pidfile /var/run/xinetd.pid -stayalive -inetd root 348 1 0 Nov16 ? 00:00:00 cron syslog 368 1 0 Nov16 ? 00:00:00 /sbin/syslogd -u syslog root 472 1 0 Nov16 ? 00:00:00 /usr/lib/postfix/master postfix 482 472 0 Nov16 ? 00:00:00 qmgr -l -t fifo -u root 520 1 0 Nov16 ? 00:00:04 /usr/sbin/apache2 -k start www-data 523 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 525 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 526 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start tomcat 1074 1 0 Nov16 ? 00:01:08 /usr/lib/jvm/java-6-oracle/bin/java -Djava.util.logging.config.file=/usr/ postfix 1351 472 0 Nov16 ? 
00:00:00 tlsmgr -l -t unix -u -c postfix 3413 472 0 17:00 ? 00:00:00 pickup -l -t fifo -u -c root 3457 276 0 17:31 ? 00:00:00 sshd: root@pts/0 root 3459 3457 0 17:31 pts/0 00:00:00 -bash root 3470 3459 0 17:31 pts/0 00:00:00 ps -ef > iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt ACCEPT tcp -- anywhere anywhere tcp dpt:8005 ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination > netstat -plut Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:smtp *:* LISTEN 472/master tcp 0 0 *:3213 *:* LISTEN 276/sshd tcp6 0 0 [::]:smtp [::]:* LISTEN 472/master tcp6 0 0 [::]:8009 [::]:* LISTEN 1074/java tcp6 0 0 [::]:3213 [::]:* LISTEN 276/sshd tcp6 0 0 [::]:http-alt [::]:* LISTEN 1074/java tcp6 0 0 [::]:http [::]:* LISTEN 520/apache2
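    One discrepancy worth checking first, based on the output above: the startup banner reports JRE_HOME as /usr/lib/jvm/java-7-oracle, but catalina.out shows a java.library.path under /usr/lib/jvm/java-6-oracle, and ps lists /usr/lib/jvm/java-6-oracle/bin/java as the running process - so the init script appears to launch a different JVM than the one it prints. A tiny diagnostic class (a sketch, not a fix) makes it easy to compare candidates; compile it once and run it with each java binary:

        public class WhichJava {
            public static void main(String[] args) {
                // Compare these against what catalina.out claims at startup
                System.out.println("java.home         = " + System.getProperty("java.home"));
                System.out.println("java.version      = " + System.getProperty("java.version"));
                System.out.println("java.library.path = " + System.getProperty("java.library.path"));
            }
        }

    For example, run /usr/lib/jvm/java-7-oracle/bin/java WhichJava and /usr/lib/jvm/java-6-oracle/bin/java WhichJava, then check which JAVA_HOME/JRE_HOME the /etc/init.d/tomcat script (or a setenv.sh, if present) actually exports.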


  • My server app works strangely. What could be the reason(s)?

    - by Poni
    Hi! I've written a server app (two parts actually: a proxy server and a game server) in C++ (it's a board game). It uses IOCP as the sockets interface. For that app I've also written a "client simulator" (hereafter "client") app that spawns many client connections, each of which plays at very high speed, pushing the CPU to 100% utilization.

    This is the topology:

    Game server - holds the game state. Real players do not connect to it directly but through the proxy server. When a player joins a game, the proxy asks for it on behalf of that player, and the game server spawns a "player instance" for that player; from then on, every notification between the game server and the player is passed through the proxy.

    Proxy server - holds the TCP connections with the real players. Players communicate with the game server through it only.

    Client simulator - connects to the proxy only.

    When running the server (again, it's actually two server apps) and the client locally, it all works just fine - I'm talking about 40k+ player instances, all of them active in a game. On the other hand, when running the server remotely with, say, 1000 clients playing, things get strange. For example, I run it as said above, then kill the client simulator app with Task Manager ("End Process Tree"). Then it looks as if the buffer of the remote server was modified by another thread - in other words, as if memory corruption occurred. The server crashes because it got an unknown message id (it's a custom protocol where each message has its own unique number).

    To make things clear, here is how I run the apps:

    PC1 - game server and client simulator (because the clients connect to the proxy).
    PC2 - proxy server.

    The strangest thing is this: only the remote side gets "corrupted" - remote meaning it's not the PC I use to code the apps (VC++ 2008). Let's call the PC I use to code the apps "PC1". Now, for example, if this time I run the game server on PC1 (meaning the proxy server is on PC2 and the client simulator on PC1), then the proxy server crashes with an "unknown message id" error. Another variation: when I run the proxy server on PC1 (again, the dev machine) and the game server and client simulator on PC2, then the game server on PC2 crashes.

    As for the IOCP config: the servers' internal connections use the default receive/send buffer sizes. I even tried setting them to 1 MB, but no luck.

    I have three PCs in total:

    2 x Vista 64-bit <<-- one of these is the dev machine; the other is connected through WiFi.
    1 x WinXP 32-bit

    They're all connected in a "full duplex" manner.

    What could be the reason? I've tried about everything: stack tracing, recording some actions (like read/write logging)... I want to stress that only the PC I'm not using to code the apps crashes (actually the server app "role" running on it - sometimes the game server and sometimes the proxy server). At first I thought that maybe the wireless PC has problems (it's wireless...), but: TCP has its own mechanisms to make sure packets are delivered properly. Also, a crash happens even when trying it with the two PCs that are physically connected (Vista vs. XP). Another option is that the Windows DLL versions might have problems, but then again, one of the tests is Vista vs. Vista, and the other is Vista vs. XP. Any ideas?


  • Spring Security and the Synchronizer Token J2EE pattern: problem when authentication fails

    - by dfuse
    Hey, we are using Spring Security 2.0.4. We have a TransactionTokenBean which generates a unique token each POST, the bean is session scoped. The token is used for the duplicate form submission problem (and security). The TransactionTokenBean is called from a Servlet filter. Our problem is the following, after a session timeout occured, when you do a POST in the application Spring Security redirects to the logon page, saving the original request. After logging on again the TransactionTokenBean is created again, since it is session scoped, but then Spring forwards to the originally accessed url, also sending the token that was generated at that time. Since the TransactionTokenBean is created again, the tokens do not match and our filter throws an Exception. I don't quite know how to handle this elegantly, (or for that matter, I can't even fix it with a hack), any ideas? This is the code of the TransactionTokenBean: public class TransactionTokenBean implements Serializable { public static final int TOKEN_LENGTH = 8; private RandomizerBean randomizer; private transient Logger logger; private String expectedToken; public String getUniqueToken() { return expectedToken; } public void init() { resetUniqueToken(); } public final void verifyAndResetUniqueToken(String actualToken) { verifyUniqueToken(actualToken); resetUniqueToken(); } public void resetUniqueToken() { expectedToken = randomizer.getRandomString(TOKEN_LENGTH, RandomizerBean.ALPHANUMERICS); getLogger().debug("reset token to: " + expectedToken); } public void verifyUniqueToken(String actualToken) { if (getLogger().isDebugEnabled()) { getLogger().debug("verifying token. expected=" + expectedToken + ", actual=" + actualToken); } if (expectedToken == null || actualToken == null || !isValidToken(actualToken)) { throw new IllegalArgumentException("missing or invalid transaction token"); } if (!expectedToken.equals(actualToken)) { throw new InvalidTokenException(); } } private boolean isValidToken(String actualToken) { return StringUtils.isAlphanumeric(actualToken); } public void setRandomizer(RandomizerBean randomizer) { this.randomizer = randomizer; } private Logger getLogger() { if (logger == null) { logger = Logger.getLogger(TransactionTokenBean.class); } return logger; } } and this is the Servlet filter (ignore the Ajax stuff): public class SecurityFilter implements Filter { static final String AJAX_TOKEN_PARAM = "ATXTOKEN"; static final String TOKEN_PARAM = "TXTOKEN"; private WebApplicationContext webApplicationContext; private Logger logger = Logger.getLogger(SecurityFilter.class); public void init(FilterConfig config) { setWebApplicationContext(WebApplicationContextUtils.getWebApplicationContext(config.getServletContext())); } public void destroy() { } public void doFilter(ServletRequest req, ServletResponse response, FilterChain chain) throws IOException, ServletException { HttpServletRequest request = (HttpServletRequest) req; if (isPostRequest(request)) { if (isAjaxRequest(request)) { log("verifying token for AJAX request " + request.getRequestURI()); getTransactionTokenBean(true).verifyUniqueToken(request.getParameter(AJAX_TOKEN_PARAM)); } else { log("verifying and resetting token for non-AJAX request " + request.getRequestURI()); getTransactionTokenBean(false).verifyAndResetUniqueToken(request.getParameter(TOKEN_PARAM)); } } chain.doFilter(request, response); } private void log(String line) { if (logger.isDebugEnabled()) { logger.debug(line); } } private boolean isPostRequest(HttpServletRequest request) { return 
"POST".equals(request.getMethod().toUpperCase()); } private boolean isAjaxRequest(HttpServletRequest request) { return request.getParameter("AJAXREQUEST") != null; } private TransactionTokenBean getTransactionTokenBean(boolean ajax) { return (TransactionTokenBean) webApplicationContext.getBean(ajax ? "ajaxTransactionTokenBean" : "transactionTokenBean"); } void setWebApplicationContext(WebApplicationContext context) { this.webApplicationContext = context; } }


  • Can't access shared drive when connecting over VPN

    - by evolvd
    I can ping all network devices but it doesn't seem that DNS is resolving their hostnames. ipconfig/ all is showing that I am pointing to the correct dns server. I can "ping "dnsname"" and it will resolve but it wont resolve any other names. Split tunnel is set up so outside DNS is resolving fine So one issue might be DNS but I have the IP address of the server share so I figure I could just get to it that way. example: \10.0.0.1\ well I can't get to it that way either and I get "the specified network name is no longer available" I can ping it but I can't open the share. Below is the ASA config : ASA Version 8.2(1) ! hostname KG-ASA domain-name example.com names ! interface Vlan1 nameif inside security-level 100 ip address 10.0.0.253 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address dhcp setroute ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns domain-lookup outside dns server-group DefaultDNS name-server 10.0.0.101 domain-name blah.com access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 10000 access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 8333 access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 902 access-list SPLIT-TUNNEL-VPN standard permit 10.0.0.0 255.0.0.0 access-list NONAT extended permit ip 10.0.0.0 255.255.255.0 10.0.1.0 255.255.255.0 pager lines 24 logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool IPSECVPN-POOL 10.0.1.2-10.0.1.50 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-621.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list NONAT nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface 10000 10.0.0.101 10000 netmask 255.255.255.255 static (inside,outside) tcp interface 8333 10.0.0.101 8333 netmask 255.255.255.255 static (inside,outside) tcp interface 902 10.0.0.101 902 netmask 255.255.255.255 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa authentication enable console LOCAL aaa authentication http console LOCAL aaa authentication serial console LOCAL aaa authentication ssh console LOCAL aaa authentication telnet console LOCAL http server enable http 10.0.0.0 255.255.0.0 inside http 0.0.0.0 0.0.0.0 outside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set myset esp-aes esp-sha-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map dynmap 1 set transform-set myset crypto dynamic-map dynmap 1 set reverse-route crypto map IPSEC-MAP 65535 ipsec-isakmp dynamic dynmap crypto map IPSEC-MAP interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 crypto isakmp policy 65535 
authentication pre-share encryption aes hash sha group 2 lifetime 86400 telnet 0.0.0.0 0.0.0.0 inside telnet timeout 5 ssh 0.0.0.0 0.0.0.0 inside ssh 70.60.228.0 255.255.255.0 outside ssh 74.102.150.0 255.255.254.0 outside ssh 74.122.164.0 255.255.252.0 outside ssh timeout 5 console timeout 0 dhcpd dns 10.0.0.101 dhcpd lease 7200 dhcpd domain blah.com ! dhcpd address 10.0.0.110-10.0.0.170 inside dhcpd enable inside ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ntp server 63.111.165.21 webvpn enable outside svc image disk0:/anyconnect-win-2.4.1012-k9.pkg 1 svc enable group-policy EASYVPN internal group-policy EASYVPN attributes dns-server value 10.0.0.101 vpn-tunnel-protocol IPSec l2tp-ipsec svc webvpn split-tunnel-policy tunnelspecified split-tunnel-network-list value SPLIT-TUNNEL-VPN ! tunnel-group client type remote-access tunnel-group client general-attributes address-pool (inside) IPSECVPN-POOL address-pool IPSECVPN-POOL default-group-policy EASYVPN dhcp-server 10.0.0.253 tunnel-group client ipsec-attributes pre-shared-key * tunnel-group CLIENTVPN type ipsec-l2l tunnel-group CLIENTVPN ipsec-attributes pre-shared-key * ! class-map inspection_default match default-inspection-traffic ! ! policy-map global_policy class inspection_default inspect icmp ! service-policy global_policy global prompt hostname context I'm not sure where I should go next with troubleshooting nslookup result: Default Server: blahname.blah.lan Address: 10.0.0.101


  • ADO.NET Fill method not throwing an error when running a stored procedure that does not exist

    - by Mike
    I am using a combination of the Enterprise Library and the original Fill method of ADO.NET. This is because I need to open and close the command connection myself, as I am capturing the InfoMessage event. Here is my code so far:

        // Set up command
        SqlDatabase db = new SqlDatabase(ConfigurationManager.ConnectionStrings[ConnectionName].ConnectionString);
        SqlCommand command = db.GetStoredProcCommand(StoredProcName) as SqlCommand;
        command.Connection = db.CreateConnection() as SqlConnection;

        // Set up events for logging
        command.StatementCompleted += new StatementCompletedEventHandler(command_StatementCompleted);
        command.Connection.FireInfoMessageEventOnUserErrors = true;
        command.Connection.InfoMessage += new SqlInfoMessageEventHandler(Connection_InfoMessage);

        // Add parameters
        foreach (Parameter parameter in Parameters)
        {
            db.AddInParameter(command, parameter.Name, (System.Data.DbType)Enum.Parse(typeof(System.Data.DbType), parameter.Type), parameter.Value);
        }

        // Use the old-style Fill to keep the connection open throughout the population
        // and manage the StatementCompleted and InfoMessage events
        SqlDataAdapter da = new SqlDataAdapter(command);
        DataSet ds = new DataSet();

        // Open connection
        command.Connection.Open();

        // Populate
        da.Fill(ds);

        // Dispose of the adapter
        if (da != null)
        {
            da.Dispose();
        }

        // If you do not explicitly close the connection here, it will leak!
        if (command.Connection.State == ConnectionState.Open)
        {
            command.Connection.Close();
        }
        ...

    Now, if I pass in StoredProcName = "ThisProcDoesNotExists" and run this piece of code, neither CreateCommand nor da.Fill throws an error. Why is this? The only way I can tell it did not run is that it returns a DataSet with 0 tables in it. When investigating the error, it is not apparent that the procedure does not exist.

    EDIT: Upon further investigation, command.Connection.FireInfoMessageEventOnUserErrors = true; is causing the error to be suppressed into the InfoMessage event. From BOL:

    When you set FireInfoMessageEventOnUserErrors to true, errors that were previously treated as exceptions are now handled as InfoMessage events. All events fire immediately and are handled by the event handler. If FireInfoMessageEventOnUserErrors is set to false, then InfoMessage events are handled at the end of the procedure.

    What I want is for each PRINT statement from SQL to create a new log record. Setting this property to false combines them into one big string. So if I leave the property set to true, the question now becomes: can I discern a print message from an error?

    ANOTHER EDIT: So now I have the flag set to true and check the error number in the handler:

        void Connection_InfoMessage(object sender, SqlInfoMessageEventArgs e)
        {
            // These are not really errors unless Number > 0;
            // if Number == 0, it is a print message
            foreach (SqlError sql in e.Errors)
            {
                if (sql.Number == 0)
                {
                    Logger.WriteInfo("Sql Message", sql.Message);
                }
                else
                {
                    // Whatever this was, it was an error
                    throw new DataException(String.Format("Message={0},Line={1},Number={2},State{3}", sql.Message, sql.LineNumber, sql.Number, sql.State));
                }
            }
        }

    The issue now is that when I throw the error, it does not bubble up to the statement that made the call, or even to the error handler above that; it just bombs out on that line. The populate looks like:

        // Populate
        try
        {
            da.Fill(ds);
        }
        catch (Exception e)
        {
            throw new Exception(e.Message, e);
        }

    Now, even though I can see the calling methods still in the call stack, this exception does not seem to bubble up. Why?


  • MySQL 100% CPU + slow query

    - by felipeclopes
    I'm using the RDS database from amazon with a some very big tables, and yesterday I started to face 100% CPU utilisation on the server and a bunch of slow query logs that were not happening before. I tried to check the queries that were running and faced this result from the explain command +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | businesses | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index; Using temporary; Using filesort | | 1 | SIMPLE | activities_businesses | ref | PRIMARY,index_activities_users_on_business_id,index_tweets_users_on_tweet_id_and_business_id | index_activities_users_on_business_id | 9 | const | 2252 | Using index condition; Using where | | 1 | SIMPLE | activities_b_taggings_975e9c4 | ref | taggings_idx | taggings_idx | 782 | const,myapp_production.activities_businesses.id,const | 1 | Using index condition; Using where | | 1 | SIMPLE | activities | eq_ref | PRIMARY,index_activities_on_created_at | PRIMARY | 8 | myapp_production.activities_businesses.activity_id | 1 | Using where | +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ Also checkin in the process list, I got something like this: +----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+ | 1 | my_app | my_ip:57152 | my_app_production | Sleep | 0 | | NULL | | 2 | my_app | my_ip:57153 | my_app_production | Sleep | 2 | | NULL | | 3 | rdsadmin | localhost:49441 | NULL | Sleep | 9 | | NULL | | 6 | my_app | my_other_ip:47802 | my_app_production | Sleep | 242 | | NULL | | 7 | my_app | my_other_ip:47807 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 8 | my_app | my_other_ip:47809 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 9 | my_app | my_other_ip:47810 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 10 | my_app | my_other_ip:47811 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 11 | my_app | my_other_ip:47813 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | ... 
So based on the numbers, it looks like there is no reason to have a slow query, since the worst execution plan is the one that goes through 2k rows which is not much. Edit 1 Another information that might be useful is the slow query_log SET timestamp=1401457485; SELECT my_query... # User@Host: myapp[myapp] @ ip-10-195-55-233.ec2.internal [IP] Id: 435 # Query_time: 95.830497 Lock_time: 0.000178 Rows_sent: 0 Rows_examined: 1129387 Edit 2 After profiling, I got this result. The result have approximately 250 rows with two columns each. +----------------------+----------+ | state | duration | +----------------------+----------+ | Sending data | 272 | | removing tmp table | 0 | | optimizing | 0 | | Creating sort index | 0 | | init | 0 | | cleaning up | 0 | | executing | 0 | | checking permissions | 0 | | freeing items | 0 | | Creating tmp table | 0 | | query end | 0 | | statistics | 0 | | end | 0 | | System lock | 0 | | Opening tables | 0 | | logging slow query | 0 | | Sorting result | 0 | | starting | 0 | | closing tables | 0 | | preparing | 0 | +----------------------+----------+ Edit 3 Adding query as requested SELECT activities.share_count, activities.created_at FROM `activities_businesses` INNER JOIN `businesses` ON `businesses`.`id` = `activities_businesses`.`business_id` INNER JOIN `activities` ON `activities`.`id` = `activities_businesses`.`activity_id` JOIN taggings activities_b_taggings_975e9c4 ON activities_b_taggings_975e9c4.taggable_id = activities_businesses.id AND activities_b_taggings_975e9c4.taggable_type = 'ActivitiesBusiness' AND activities_b_taggings_975e9c4.tag_id = 104 AND activities_b_taggings_975e9c4.created_at >= '2014-04-30 13:36:44' WHERE ( businesses.id = 1 ) AND ( activities.created_at > '2014-04-30 13:36:44' ) AND ( activities.created_at < '2014-05-30 12:27:03' ) ORDER BY activities.created_at; Edit 4 There may be a chance that the indexes are not being applied due to difference in column type between the taggings and the activities_businesses, on the taggable_id column. mysql> SHOW COLUMNS FROM activities_businesses; +-------------+------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | activity_id | bigint(20) | YES | MUL | NULL | | | business_id | bigint(20) | YES | MUL | NULL | | +-------------+------------+------+-----+---------+----------------+ 3 rows in set (0.01 sec) mysql> SHOW COLUMNS FROM taggings; +---------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | tag_id | int(11) | YES | MUL | NULL | | | taggable_id | bigint(20) | YES | | NULL | | | taggable_type | varchar(255) | YES | | NULL | | | tagger_id | int(11) | YES | | NULL | | | tagger_type | varchar(255) | YES | | NULL | | | context | varchar(128) | YES | | NULL | | | created_at | datetime | YES | | NULL | | +---------------+--------------+------+-----+---------+----------------+ So it is examining way more rows than it shows in the explain query, probably because some indexes are not being applied. Do you guys can help m with that?
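    On the Edit 4 hypothesis: if the int(11) vs. bigint(20) mismatch is what is defeating the index, the usual remedy is to give the joined columns the same type, optionally plus a composite index shaped like the join and filters in the slow query. A sketch via JDBC - the index name and the taggable_type prefix length are made up, the MySQL driver is assumed to be on the classpath, and on tables this size both statements rebuild data, so they need a maintenance window:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class AlignJoinTypes {
            public static void main(String[] args) throws Exception {
                try (Connection c = DriverManager.getConnection(
                        "jdbc:mysql://rds-host/myapp_production", "user", "pass");
                     Statement s = c.createStatement()) {
                    // Widen the PK so taggings.taggable_id joins against an equal type
                    s.executeUpdate("ALTER TABLE activities_businesses"
                            + " MODIFY id BIGINT NOT NULL AUTO_INCREMENT");
                    // Composite index matching the join + tag_id/type/created_at filters
                    s.executeUpdate("CREATE INDEX idx_taggings_lookup ON taggings"
                            + " (taggable_type(32), tag_id, taggable_id, created_at)");
                }
            }
        }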


  • Cisco PIX 515 doesn't seem to be passing traffic through according to static route

    - by Liquidkristal
    Ok, so I am having a spot of bother with a Cisco PIX515, I have posted the current running config below, now I am no cisco expert by any means although I can do basic stuff with them, now I am having trouble with traffic sent from the outside to address: 10.75.32.25 it just doesn't appear to be going anywhere. Now this firewall is deep inside a private network, with an upstream firewall that we don't manage. I have spoken to the people that look after that firewall and they say they they have traffic routing to 10.75.32.21 and 10.75.32.25 and thats it (although there is a website that runs from the server 172.16.102.5 which (if my understanding is correct) gets traffic via 10.75.32.23. Any ideas would be greatly appreciated as to me it should all just work, but its not (obviously if the config is all correct then there could be a problem with the web server that we are trying to access on 10.75.32.25, although the users say that they can get to it internally (172.16.102.8) which is even more confusing) PIX Version 6.3(3) interface ethernet0 auto interface ethernet1 auto interface ethernet2 auto nameif ethernet0 outside security0 nameif ethernet1 inside security100 nameif ethernet2 academic security50 fixup protocol dns maximum-length 512 fixup protocol ftp 21 fixup protocol h323 h225 1720 fixup protocol h323 ras 1718-1719 fixup protocol http 80 fixup protocol rsh 514 fixup protocol rtsp 554 fixup protocol sip 5060 fixup protocol sip udp 5060 fixup protocol skinny 2000 fixup protocol smtp 25 fixup protocol sqlnet 1521 fixup protocol tftp 69 names name 195.157.180.168 outsideNET name 195.157.180.170 globalNAT name 195.157.180.174 gateway name 195.157.180.173 Mail-Global name 172.30.31.240 Mail-Local name 10.75.32.20 outsideIF name 82.219.210.17 frogman1 name 212.69.230.79 frogman2 name 78.105.118.9 frogman3 name 172.16.0.0 acadNET name 172.16.100.254 acadIF access-list acl_outside permit icmp any any echo-reply access-list acl_outside permit icmp any any unreachable access-list acl_outside permit icmp any any time-exceeded access-list acl_outside permit tcp any host 10.75.32.22 eq smtp access-list acl_outside permit tcp any host 10.75.32.22 eq 8383 access-list acl_outside permit tcp any host 10.75.32.22 eq 8385 access-list acl_outside permit tcp any host 10.75.32.22 eq 8484 access-list acl_outside permit tcp any host 10.75.32.22 eq 8485 access-list acl_outside permit ip any host 10.75.32.30 access-list acl_outside permit tcp any host 10.75.32.25 eq https access-list acl_outside permit tcp any host 10.75.32.25 eq www access-list acl_outside permit tcp any host 10.75.32.23 eq www access-list acl_outside permit tcp any host 10.75.32.23 eq https access-list acl_outside permit tcp host frogman1 host 10.75.32.23 eq ssh access-list acl_outside permit tcp host frogman2 host 10.75.32.23 eq ssh access-list acl_outside permit tcp host frogman3 host 10.75.32.23 eq ssh access-list acl_outside permit tcp any host 10.75.32.23 eq 2001 access-list acl_outside permit tcp host frogman1 host 10.75.32.24 eq 8441 access-list acl_outside permit tcp host frogman2 host 10.75.32.24 eq 8441 access-list acl_outside permit tcp host frogman3 host 10.75.32.24 eq 8441 access-list acl_outside permit tcp host frogman1 host 10.75.32.24 eq 8442 access-list acl_outside permit tcp host frogman2 host 10.75.32.24 eq 8442 access-list acl_outside permit tcp host frogman3 host 10.75.32.24 eq 8442 access-list acl_outside permit tcp host frogman1 host 10.75.32.24 eq 8443 access-list acl_outside permit tcp host frogman2 host 
10.75.32.24 eq 8443 access-list acl_outside permit tcp host frogman3 host 10.75.32.24 eq 8443 access-list acl_outside permit tcp any host 10.75.32.23 eq smtp access-list acl_outside permit tcp any host 10.75.32.23 eq ssh access-list acl_outside permit tcp any host 10.75.32.24 eq ssh access-list acl_acad permit icmp any any echo-reply access-list acl_acad permit icmp any any unreachable access-list acl_acad permit icmp any any time-exceeded access-list acl_acad permit tcp any 10.0.0.0 255.0.0.0 eq www access-list acl_acad deny tcp any any eq www access-list acl_acad permit tcp any 10.0.0.0 255.0.0.0 eq https access-list acl_acad permit tcp any 10.0.0.0 255.0.0.0 eq 8080 access-list acl_acad permit tcp host 172.16.102.5 host 10.64.1.115 eq smtp pager lines 24 logging console debugging mtu outside 1500 mtu inside 1500 mtu academic 1500 ip address outside outsideIF 255.255.252.0 no ip address inside ip address academic acadIF 255.255.0.0 ip audit info action alarm ip audit attack action alarm pdm history enable arp timeout 14400 global (outside) 1 10.75.32.21 nat (academic) 1 acadNET 255.255.0.0 0 0 static (academic,outside) 10.75.32.22 Mail-Local netmask 255.255.255.255 0 0 static (academic,outside) 10.75.32.30 172.30.30.36 netmask 255.255.255.255 0 0 static (academic,outside) 10.75.32.23 172.16.102.5 netmask 255.255.255.255 0 0 static (academic,outside) 10.75.32.24 172.16.102.6 netmask 255.255.255.255 0 0 static (academic,outside) 10.75.32.25 172.16.102.8 netmask 255.255.255.255 0 0 access-group acl_outside in interface outside access-group acl_acad in interface academic route outside 0.0.0.0 0.0.0.0 10.75.32.1 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 1:00:00 timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00 timeout uauth 0:05:00 absolute aaa-server TACACS+ protocol tacacs+ aaa-server RADIUS protocol radius aaa-server LOCAL protocol local snmp-server host outside 172.31.10.153 snmp-server host outside 172.31.10.154 snmp-server host outside 172.31.10.155 no snmp-server location no snmp-server contact snmp-server community CPQ_HHS no snmp-server enable traps floodguard enable telnet 172.30.31.0 255.255.255.0 academic telnet timeout 5 ssh timeout 5 console timeout 0 terminal width 120 Cryptochecksum:hi2u : end PIX515#


  • Nginx serving no static files after update

    - by SomeoneS
    First, I must say that I am not an expert in server administration; my site was set up by hosting admins (whom I cannot contact anymore). A few days ago, I updated Nginx to the latest version (the admin told me it was safe to do). But after that, my site serves only HTML content - no CSS, images, or JS. If I try to open some image, I get the message "Welcome to nginx!" (same thing if I try to open static.mysitedomain.com). More details: the site has a static. subdomain, but the static files are in the same directory they were in before the subdomain was set up. I have been googling for solutions and tried changing things in /etc/nginx/, but no luck. I feel this is some minor configuration problem - any ideas?

    EDIT: Here is the /etc/nginx/nginx.conf file content:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Here is the /etc/nginx/sites-enabled/default file content:

        server {
            #listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6

            root /usr/share/nginx/www;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            # Only for nginx-naxsi : process denied requests
            #location /RequestDenied {
            #    # For example, return an error code
            #    return 418;
            #}

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            #location ~ \.php$ {
            #    fastcgi_split_path_info ^(.+\.php)(/.+)$;
            #    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            #
            #    # With php5-cgi alone:
            #    fastcgi_pass 127.0.0.1:9000;
            #    # With php5-fpm:
            #    fastcgi_pass unix:/var/run/php5-fpm.sock;
            #    fastcgi_index index.php;
            #    include fastcgi_params;
            #}

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

        # another virtual host using mix of IP-, name-, and port-based configuration
        #
        #server {
        #    listen 8000;
        #    listen somename:8080;
        #    server_name somename alias another.alias;
        #    root html;
        #    index index.html index.htm;
        #
        #    location / {
        #        try_files $uri $uri/ /index.html;
        #    }
        #}

        # HTTPS server
        #
        #server {
        #    listen 443;
        #    server_name localhost;
        #
        #    root html;
        #    index index.html index.htm;
        #
        #    ssl on;
        #    ssl_certificate cert.pem;
        #    ssl_certificate_key cert.key;
        #
        #    ssl_session_timeout 5m;
        #
        #    ssl_protocols SSLv3 TLSv1;
        #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
        #    ssl_prefer_server_ciphers on;
        #
        #    location / {
        #        try_files $uri $uri/ /index.html;
        #    }
        #}

    Read the article

  • C++ template function specialization using TCHAR on Visual Studio 2005

    - by Eli
    I'm writing a logging class that uses a templatized operator<< function. I'm specializing the template function on wide-character string so that I can do some wide-to-narrow translation before writing the log message. I can't get TCHAR to work properly - it doesn't use the specialization. Ideas? Here's the pertinent code: // Log.h header class Log { public: template <typename T> Log& operator<<( const T& x ); template <typename T> Log& operator<<( const T* x ); template <typename T> Log& operator<<( const T*& x ); ... } template <typename T> Log& Log::operator<<( const T& input ) { printf("ref"); } template <typename T> Log& Log::operator<<( const T* input ) { printf("ptr"); } template <> Log& Log::operator<<( const std::wstring& input ); template <> Log& Log::operator<<( const wchar_t* input ); And the source file // Log.cpp template <> Log& Log::operator<<( const std::wstring& input ) { printf("wstring ref"); } template <> Log& Log::operator<<( const wchar_t* input ) { printf("wchar_t ptr"); } template <> Log& Log::operator<<( const TCHAR*& input ) { printf("tchar ptr ref"); } Now, I use the following test program to exercise these functions // main.cpp - test program int main() { Log log; log << "test 1"; log << L"test 2"; std::string test3( "test3" ); log << test3; std::wstring test4( L"test4" ); log << test4; TCHAR* test5 = L"test5"; log << test4; } Running the above tests reveals the following: // Test results ptr wchar_t ptr ref wstring ref ref Unfortunately, that's not quite right. I'd really like the last one to be "TCHAR", so that I can convert it. According to Visual Studio's debugger, the when I step in to the function being called in test 5, the type is wchar_t*& - but it's not calling the appropriate specialization. Ideas? I'm not sure if it's pertinent or not, but this is on a Windows CE 5.0 device.

    Read the article

  • local user cannot access vsftpd server

    - by Zloy Smiertniy
    I'm currently running a vsftpd server and I added the necessary configurations in vsftpd.conf so that local users can use clients like FileZilla to manage their homes in a server. I found out that only users in the sudoers list access without a problem only they can't download the files, but users that are not sudoers cannot even access their homes from a client but they can access by a web browser using the FTP protocol and they can only access their home directories (as intented) Im running a fedora 14 on my server and my vsftpd.conf looks like this: # Example config file /etc/vsftpd/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=NO # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log #xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. 
vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. ascii_upload_enable=YES ascii_download_enable=YES # # You may fully customise the login banner string: ftpd_banner=Welcome to GAMBITA FTP service # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). chroot_local_user=YES chroot_list_enable=YES # (default follows) chroot_list_file=/etc/vsftpd/chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd with two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES use_localtime=YES Anyone has an idea of what might be happening? Nothing concerning vsftpd is written in any log

    Read the article

  • Router 2wire, Slackware desktop in DMZ mode, iptables policy aginst ping, but still pingable

    - by skriatok
    I'm in DMZ mode, so I'm firewalling myself, stealthy all ok, but I get faulty test results from Shields Up that there are pings. Yesterday I couldn't make a connection to game servers work, because ping block was enabled (on the router). I disabled it, but this persists even due to my firewall. What is the connection between me and my router in DMZ mode (for my machine, there is bunch of others too behind router firewall)? When it allows router affecting if I'm pingable or not and if router has setting not blocking ping, rules in my iptables for this scenario do not work. Please ignore commented rules, I do uncomment them as I want. These two should do the job right? iptables -A INPUT -p icmp --icmp-type echo-request -j DROP echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all Here are my iptables: #!/bin/sh # Begin /bin/firewall-start # Insert connection-tracking modules (not needed if built into the kernel). #modprobe ip_tables #modprobe iptable_filter #modprobe ip_conntrack #modprobe ip_conntrack_ftp #modprobe ipt_state #modprobe ipt_LOG # allow local-only connections iptables -A INPUT -i lo -j ACCEPT # free output on any interface to any ip for any service # (equal to -P ACCEPT) iptables -A OUTPUT -j ACCEPT # permit answers on already established connections # and permit new connections related to established ones (eg active-ftp) iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT #Gamespy&NWN #iptables -A INPUT -p tcp -m tcp -m multiport --ports 5120:5129 -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 6667 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 28910 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29900 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29901 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29920 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p udp -m udp -m multiport --ports 5120:5129 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 6500 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 27900 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 27901 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 29910 -j ACCEPT # Log everything else: What's Windows' latest exploitable vulnerability? iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT" # set a sane policy: everything not accepted > /dev/null iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT DROP iptables -A INPUT -p icmp --icmp-type echo-request -j DROP # be verbose on dynamic ip-addresses (not needed in case of static IP) echo 2 > /proc/sys/net/ipv4/ip_dynaddr # disable ExplicitCongestionNotification - too many routers are still # ignorant echo 0 > /proc/sys/net/ipv4/tcp_ecn #ping death echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all # If you are frequently accessing ftp-servers or enjoy chatting you might # notice certain delays because some implementations of these daemons have # the feature of querying an identd on your box for your username for # logging. Although there's really no harm in this, having an identd # running is not recommended because some implementations are known to be # vulnerable. 
# To avoid these delays you could reject the requests with a 'tcp-reset': #iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset #iptables -A OUTPUT -p tcp --sport 113 -m state --state RELATED -j ACCEPT # To log and drop invalid packets, mostly harmless packets that came in # after netfilter's timeout, sometimes scans: #iptables -I INPUT 1 -p tcp -m state --state INVALID -j LOG --log-prefix \ "FIREWALL:INVALID" #iptables -I INPUT 2 -p tcp -m state --state INVALID -j DROP # End /bin/firewall-start Active ruleset: bash-4.1# iptables -L -n -v Chain INPUT (policy DROP 38 packets, 2228 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 844 542K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 38 2228 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT' 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 38 2228 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT' Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 1158 111K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 Active ruleset: (after editing iptables into below sugested form) bash-4.1# iptables -L -n -v Chain INPUT (policy DROP 2567 packets, 172K bytes) pkts bytes target prot opt in out source destination 49 4157 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 412K 441M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2567 172K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT' 0 0 DROP icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmp type 8 Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 312K packets, 25M bytes) pkts bytes target prot opt in out source destination ping and syslog simultaneous screenshots from phone (pinger) and from laptop (being pinged) http://dl.dropbox.com/u/4160051/slckwr/pingfrom%20mobile.jpg http://dl.dropbox.com/u/4160051/slckwr/tailsyslog.jpg

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address, that will dispatch HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run on Ubuntu and are also running a nginx instance. I'm having troubles with caching on a particular website that is fully static: nginx keeps on serving me stale content after files updates, until I: Clear /var/cache/nginx/ and restart nginx or set proxy_cache off for this server and reload the config Here's the detail of my configuration: On the server (proxmox): /etc/nginx/nginx.conf: user www-data; worker_processes 8; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; use epoll; } http { ## # Basic Settings ## sendfile on; #tcp_nopush on; tcp_nodelay on; #keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; client_body_buffer_size 1k; client_max_body_size 8m; large_client_header_buffers 1 1K; ignore_invalid_headers on; client_body_timeout 5; client_header_timeout 5; keepalive_timeout 5 5; send_timeout 5; server_name_in_redirect off; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; # gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; limit_conn_zone $binary_remote_addr zone=gulag:1m; limit_conn gulag 50; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/conf.d/proxy.conf: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m; /etc/nginx/sites-available/my-domain.conf: server { listen 80; server_name .my-domain.com; access_log off; location / { proxy_pass http://my-domain.local:80/; proxy_cache cache; proxy_cache_valid 12h; expires 30d; proxy_cache_use_stale error timeout invalid_header updating; } } On the container (my-domain.local): nginx.conf: (everything is inside the main config file -- it's been done quickly...) user www-data; worker_processes 1; error_log logs/error.log; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; #tcp_nopush on; keepalive_timeout 65; gzip off; server { listen 80; server_name .my-domain.com; root /var/www; access_log logs/host.access.log; } } I've read many blog posts and answers before resolving to posting my own questions... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double checked my settings and all seems fine. So I'm wondering whether I am not expecting nginx's cache to do something it's not meant to...? 
Basically, I thought that if one of my static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requests it... But I now have the sentiment I misunderstood many things. Of all things, I now wonder how nginx on the server can know about a file in the container has changed? I have seen a directive proxy_header_pass (or something alike), should I use this to let the nginx instance from the container somehow inform the one in Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?

    Read the article

  • KB2667402 update fails to install with 0x80004005. Yet it's also showing as installed

    - by growse
    I've got 15 Windows 2008 R2 x64 servers that I manage with SCCM2012. I've noticed during my Windows updates reporting that there's two boxes that are showing as 'error' for total updates installed. Digging around, it looks like the update that is failing to install is KB2667402. The Software Centre on the server itself shows the following: The software change returned error code 0x80004005(-2147467259). So SCCM thinks it hasn't installed the update. However, if I go to the Programs and Features application and select 'Windows Updates', I can see an entry for KB2667402: If I try and uninstall this, I get an error: An error occurred. Not all of the updates were successfully uninstalled If I try and download the patch from Microsoft directly, I get the same error installing as displayed in Software Centre. The only odd thing about this setup that I can think would affect this is that I run the RDP service on a non-standard port. However, I do this across all the servers, so it seems odd that it would fail on just 2 out of 15. The tail of the WindowsUpdate.log file is below: 2012-06-26 15:33:53:184 3924 1608 COMAPI ------------- 2012-06-26 15:33:53:190 3924 1608 COMAPI -- START -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:53:190 3924 1608 COMAPI --------- 2012-06-26 15:33:53:190 3924 1608 COMAPI - Allow source prompts: No; Forced: No; Force quiet: Yes 2012-06-26 15:33:53:190 3924 1608 COMAPI - Updates in request: 1 2012-06-26 15:33:53:190 3924 1608 COMAPI - ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed 2012-06-26 15:33:53:199 860 1198 Agent ************* 2012-06-26 15:33:53:199 860 1198 Agent ** START ** Agent: Installing updates [CallerId = CcmExec] 2012-06-26 15:33:53:199 860 1198 Agent ********* 2012-06-26 15:33:53:199 860 1198 Agent * Updates to install = 1 2012-06-26 15:33:53:201 860 1198 Agent * Title = Security Update for Windows Server 2008 R2 x64 Edition (KB2667402) 2012-06-26 15:33:53:201 860 1198 Agent * UpdateId = {48859BE4-1331-4CD2-8E70-3B537180A0D0}.103 2012-06-26 15:33:53:201 860 1198 Agent * Bundles 1 updates: 2012-06-26 15:33:53:201 860 1198 Agent * {D854ECF1-99A7-4D67-B435-2D041BF79565}.103 2012-06-26 15:33:53:204 3924 1608 COMAPI - Updates to install = 1 2012-06-26 15:33:53:204 3924 1608 COMAPI <<-- SUBMITTED -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:53:221 860 1198 Agent WARNING: failed to calculate prior restore point time with error 0x80070002; setting restore point 2012-06-26 15:33:53:222 860 1198 Agent WARNING: LoadLibrary failed for srclient.dll with hr:8007007e 2012-06-26 15:33:53:322 860 1198 DnldMgr Preparing update for install, updateId = {D854ECF1-99A7-4D67-B435-2D041BF79565}.103. 
2012-06-26 15:33:53:325 5700 117c Misc =========== Logging initialized (build: 7.5.7601.17514, tz: +0100) =========== 2012-06-26 15:33:53:325 5700 117c Misc = Process: C:\Windows\system32\wuauclt.exe 2012-06-26 15:33:53:325 5700 117c Misc = Module: C:\Windows\system32\wuaueng.dll 2012-06-26 15:33:53:324 5700 117c Handler ::::::::::::: 2012-06-26 15:33:53:325 5700 117c Handler :: START :: Handler: CBS Install 2012-06-26 15:33:53:325 5700 117c Handler ::::::::: 2012-06-26 15:33:53:330 5700 117c Handler Starting install of CBS update D854ECF1-99A7-4D67-B435-2D041BF79565 2012-06-26 15:33:53:342 5700 117c Handler CBS package identity: Package_for_KB2667402~31bf3856ad364e35~amd64~~6.1.2.0 2012-06-26 15:33:53:366 5700 117c Handler Installing self-contained with source=C:\Windows\SoftwareDistribution\Download\44059e0415033d6f699a50ef69dd5ff2\windows6.1-kb2667402-v2-x64.cab, workingdir=C:\Windows\SoftwareDistribution\Download\44059e0415033d6f699a50ef69dd5ff2\inst 2012-06-26 15:33:56:270 5700 3b8 Handler FATAL: CBS called Error with 0x80004005, 2012-06-26 15:33:56:402 5700 117c Handler FATAL: Completed install of CBS update with type=0, requiresReboot=0, installerError=1, hr=0x80004005 2012-06-26 15:33:56:405 5700 117c Handler ::::::::: 2012-06-26 15:33:56:406 5700 117c Handler :: END :: Handler: CBS Install 2012-06-26 15:33:56:406 5700 117c Handler ::::::::::::: 2012-06-26 15:33:56:433 860 1198 Agent ********* 2012-06-26 15:33:56:433 860 1198 Agent ** END ** Agent: Installing updates [CallerId = CcmExec] 2012-06-26 15:33:56:433 860 1198 Agent ************* 2012-06-26 15:33:56:433 860 d14 AU Can not perform non-interactive scan if AU is interactive-only 2012-06-26 15:33:56:450 3924 e40 COMAPI >>-- RESUMED -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:56:450 3924 e40 COMAPI - Install call complete (succeeded = 0, succeeded with errors = 0, failed = 1, unaccounted = 0) 2012-06-26 15:33:56:450 3924 e40 COMAPI - Reboot required = No 2012-06-26 15:33:56:450 3924 e40 COMAPI - WARNING: Exit code = 0x00000000; Call error code = 0x80240022 2012-06-26 15:33:56:451 3924 e40 COMAPI --------- 2012-06-26 15:33:56:451 3924 e40 COMAPI -- END -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:56:451 3924 e40 COMAPI ------------- 2012-06-26 15:33:56:536 860 13a4 AU Triggering Offline detection (non-interactive) 2012-06-26 15:33:56:536 860 d14 AU ############# 2012-06-26 15:33:56:536 860 d14 AU ## START ## AU: Search for updates 2012-06-26 15:33:56:536 860 d14 AU ######### 2012-06-26 15:33:56:539 860 d14 AU <<## SUBMITTED ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}] 2012-06-26 15:33:56:539 860 1788 Agent ************* 2012-06-26 15:33:56:539 860 1788 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates] 2012-06-26 15:33:56:539 860 1788 Agent ********* 2012-06-26 15:33:56:539 860 1788 Agent * Online = No; Ignore download priority = No 2012-06-26 15:33:56:539 860 1788 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1" 2012-06-26 15:33:56:539 860 1788 Agent * ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed 2012-06-26 15:33:56:539 860 1788 Agent * Search Scope = {Machine} 2012-06-26 15:33:58:562 860 1788 Agent * Found 0 updates and 70 categories in search; evaluated appl. 
rules of 180 out of 1072 deployed entities 2012-06-26 15:33:58:565 860 1788 Agent ********* 2012-06-26 15:33:58:565 860 1788 Agent ** END ** Agent: Finding updates [CallerId = AutomaticUpdates] 2012-06-26 15:33:58:565 860 1788 Agent ************* 2012-06-26 15:33:58:650 860 f2c AU >>## RESUMED ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}] 2012-06-26 15:33:58:650 860 f2c AU # 0 updates detected 2012-06-26 15:33:58:650 860 f2c AU ######### 2012-06-26 15:33:58:650 860 f2c AU ## END ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}] 2012-06-26 15:33:58:650 860 f2c AU ############# 2012-06-26 15:33:58:650 860 f2c AU Featured notifications is disabled. 2012-06-26 15:33:58:651 860 f2c AU Successfully wrote event for AU health state:0 2012-06-26 15:33:58:652 860 f2c AU Successfully wrote event for AU health state:0

    Read the article

  • inserting new relationship data in core-data

    - by michael
    My app will allow users to create a personalised list of events from a large list of events. I have a table view which simply displays these events, tapping on one of them takes the user to the event details view, which has a button "add to my events". In this detailed view I own the original event object, retrieved via an NSFetchedResultsController and passed to the detailed view (via a table cell, the same as the core data recipes sample). I have no trouble retrieving/displaying information from this "event". I am then trying to add it to the list of MyEvents represented by a one to many (inverse) relationship: This code: NSManagedObjectContext *context = [event managedObjectContext]; MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context]; [myEvents addEventObject:event];//ERROR And this code (suggested below): //would this add to or overwrite the "list" i am attempting to maintain NSManagedObjectContext *context = [event managedObjectContext]; MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context]; NSMutableSet *myEvent = [myEvents mutableSetValueForKey:@"event"]; [myEvent addObject:event]; //ERROR Bot produce (at the line indicated by //ERROR): *** -[NSComparisonPredicate evaluateWithObject:]: message sent to deallocated instance Seems I may have missed something fundamental. I cant glean any more information through the use of debugging tools, with my knowledge of them. 1) Is this a valid way to compile and store an editable list like this? 2) Is there a better way? 3) What could possibly be the deallocated instance in error? -- I have now modified the Event entity to have a to-many relationship called "myEvents" which referrers to itself. I can add Events to this fine, and logging the object shows the correct memory addresses appearing for the relationship after a [event addMyEventObject:event];. The same failure happens right after this however. I am still at a loss to understand what is going wrong. This is the backtrace #0 0x01f753a7 in ___forwarding___ () #1 0x01f516c2 in __forwarding_prep_0___ () #2 0x01c5aa8f in -[NSFetchedResultsController(PrivateMethods) _preprocessUpdatedObjects:insertsInfo:deletesInfo:updatesInfo:sectionsWithDeletes:newSectionNames:treatAsRefreshes:] () #3 0x01c5d63b in -[NSFetchedResultsController(PrivateMethods) _managedObjectContextDidChange:] () #4 0x0002e63a in _nsnote_callback () #5 0x01f40005 in _CFXNotificationPostNotification () #6 0x0002bef0 in -[NSNotificationCenter postNotificationName:object:userInfo:] () #7 0x01bbe17d in -[NSManagedObjectContext(_NSInternalNotificationHandling) _postObjectsDidChangeNotificationWithUserInfo:] () #8 0x01c1d763 in -[NSManagedObjectContext(_NSInternalChangeProcessing) _createAndPostChangeNotification:withDeletions:withUpdates:withRefreshes:] () #9 0x01ba25ea in -[NSManagedObjectContext(_NSInternalChangeProcessing) _processRecentChanges:] () #10 0x01bdfb3a in -[NSManagedObjectContext processPendingChanges] () #11 0x01bd0957 in _performRunLoopAction () #12 0x01f4d252 in __CFRunLoopDoObservers () #13 0x01f4c65f in CFRunLoopRunSpecific () #14 0x01f4bc48 in CFRunLoopRunInMode () #15 0x0273878d in GSEventRunModal () #16 0x02738852 in GSEventRun () #17 0x002ba003 in UIApplicationMain () solution I managed to get to the bottom of this. I was fetching the event in question using a NSFetchedResultsController with a NSPredicate which I was releasing after I had the results. 
Retrieving values from the entities returned was no problem, but when I tried to update any of them it gave the error above. It should not have been released. oustanding part of my question What is a good way to create this sub list from a list of existing items in terms of a core data model. I don't believe its any of the ways I tried here. I need to show/edit it in another table view. Perhaps there is a better way than a boolean property on each event entity? The relationship idea above doesn't seem to work here (even though I can now create it). Cheers.

    Read the article

  • Android app hanging, sometimes until Force Close / Wait dialog appears

    - by fredley
    I'm making an app that records uncompressed (wav format) audio. I'm using this class to actually record the audio. Currently, my application records fine (I can play the file), however when I click the button to stop the recording, the app hangs for 10 seconds or so, with no log output or any signs of life. Finally it comes round, dumps a load of errors into the log, updates the UI etc. I'm using AsyncTasks to try and avoid this kind of thing but it's not working. Here's my code: //Called on clicks of the record button. rar is the instance of RehearsalAudioRecorder private OnClickListener RecordListener = new OnClickListener(){ @Override public void onClick(View v) { Log.d("Record","Click"); if (recording){ new stopRecordingTask().execute(rar,null,null); startStop.setText("Record"); statusBar.setText("Recording Finished, ready to Encode"); }else{ recording = true; new startRecordingTask().execute(rar,null,null); startStop.setText("Stop"); statusBar.setText("Recording Started"); } } }; private class startRecordingTask extends AsyncTask<RehearsalAudioRecorder,Void,Void>{ @Override protected Void doInBackground(RehearsalAudioRecorder... rs) { RehearsalAudioRecorder r = rs[0]; r.setOutputFile("/sdcard/rarOut.wav"); r.prepare(); r.start(); return null; } } private class stopRecordingTask extends AsyncTask<RehearsalAudioRecorder,Void,Void>{ @Override protected Void doInBackground(RehearsalAudioRecorder... rs) { RehearsalAudioRecorder r = rs[0]; r.stop(); r.reset(); return null; } } In Logcat, I always get output like this, which has me stumped. I have no idea what's causing it (I'm logging the RehearsalAudioRecorder class, and it's being started/stopped correctly by the button clicks. This output occurs after the log output for the button click and correct stop() method call) 12-19 11:59:11.172: ERROR/AudioRecord-JNI(22662): Unable to retrieve AudioRecord object, can't record 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): Error occured in updateListener, recording is aborted 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): stop() called on illegal state: STOPPED 12-19 11:59:11.172: ERROR/AudioRecord-JNI(22662): Unable to retrieve AudioRecord object, can't record 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): Error occured in updateListener, recording is aborted 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): stop() called on illegal state: ERROR 12-19 11:59:11.172: ERROR/AudioRecord-JNI(22662): Unable to retrieve AudioRecord object, can't record 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): Error occured in updateListener, recording is aborted 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): stop() called on illegal state: ERROR ... 10 or more times I've been fiddling with this all day and I'm not getting anywhere, any input would be greatly appreciated. Update I've replace the AsyncTasks with Threads, still doesn't work, the app completely hangs when I click record, despite the fact the Log indicates there's nothing going on in the main thread. Still completely stumped.

    Read the article

  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 * After monkeying with additional debug logging I see some log entries of interest. 27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more 27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors. ^^^ If I can remedy the above messages I think I'll be good to go ^^^ * EDIT 2 * Grasping at straws I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite documentation stating the files are created automatically and what I have seen before... dunno why but that did the trick. Also re-checked perms on the dir the files live in. As certain as I was, they were correct with named having rw. CentOS 6 (final) dhcpd 4.1.1-P1 named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6 Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created. chroot: /var/named/chroot I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic - no matter which dir with named owning and wide open perms I now get nowhere). Along the way I, at one point, got a permission denied when named tried to create the journal. Resolved the issue by: chown --recursive named:named /var/named chmod --recursive 777 /var/named The journal was then created and here's where things fell apart. I attempted to tame permissions to something more sane and broke it. Once changed and having restarted named it threw an error indicating the journal was out of sync (or something to that affect)... didn't matter since this is a new setup so I deleted it and now it is not recreated. Now though I see no errors in /var/log/messages, my chrooted /var/log/named.log, or chrooted /var/log/named.debug. I increased the debug level with 'rndc trace' - no love. Increased trace to 10, still nothing. SELinux is disabled... [root@server temp]# sestatus SELinux status: disabled dhcpd.conf... allow client-updates; ddns-update-style interim; subnet 192.168.111.0 netmask 255.255.255.224 { ... key dhcpudpate { algorithm hmac-md5; secret LDJMdPdEZED+/nN/AGO9ZA==; } zone example.lan. { primary 192.168.111.2; key dhcpudpate; } } named.conf... key dhcpudpate { algorithm hmac-md5; secret "LDJMdPdEZED+/nN/AGO9ZA=="; }; zone "example.lan" { type master; file "/var/named/dynamic/example.lan.db"; allow-transfer { none; }; allow-update { key dhcpudpate; }; notify false; check-names ignore; }; The following shows /var/log/named.log output of named starting up - no errors. 
27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601 27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601 27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0 27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0 27-Jul-2012 21:33:39.353 general: notice: running 27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10' 27-Jul-2012 21:34:03.825 general: info: debug level is now 10 ...and /var/log/messages for a named start... Jul 27 23:02:04 server named[9124]: ---------------------------------------------------- Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium, Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support Jul 27 23:02:04 server named[9124]: ---------------------------------------------------- Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576 Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads Jul 27 23:02:04 server named[9124]: using up to 4096 sockets Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf' Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535] Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535] Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53 Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys' Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys' Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953 What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot here and, if so, how? Many thanks.

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This ment doing much more ourselves--things like networking, storage and monitoring. As part the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassises, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and . It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also setup a Cacit monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday. Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate, there was not traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over, that went well. Within 20 minuts everything was confirmed to be back up and running perfectly. Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, backup an and running. In /var/syslog I found this scary looking entry: Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170 Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors Nov 15 06:49:45 umbilo smartd[2827]: Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error Nov 15 06:49:45 umbilo smartd[2827]: # 1 Short offline Completed: read failure 90% 6576 3421766910 Nov 15 06:49:45 umbilo smartd[2827]: # 2 Short offline Completed: read failure 90% 6087 3421766910 Nov 15 06:49:45 umbilo smartd[2827]: # 3 Short offline Completed: read failure 10% 5901 656821791 Nov 15 06:49:45 umbilo smartd[2827]: # 4 Short offline Completed: read failure 90% 5818 651637856 Nov 15 06:49:45 umbilo smartd[2827]: So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is. 
But we also see that disk 8's SMART Read Erros are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that: Disk 8 is experiencing an odd hardware fault that results in intermittent long operation times. Somehow this fault condition on the disk is locking up the entire array Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array. The Question(s) How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?

    Read the article

  • firefox, opera 'The connection was reset' on few POST method calls on Windows and Ubuntu

    - by Gopalakrishnan Subramani
    my website works well with GET method, also few POST methods. Some pages with POST method doesn't work. Some pages with POST work. For example, login page uses POST that works fine. When I post the data on webpage, firefox says "Connecting..." and finally report connection timed out error. The same behavior happens with Opera as well. However Google Chrome works fine. At the server side, I use nginx 1.2.4 with HTTPS and uwsgi for python (flask framework) app. I use geotrust certificate. The same behavior happens with Windows 7 and Ubuntu 12.04 on firefox. I tried firefox in safemode, but no luck. Set auto-detect proxy settings. no luck. Cleared all cookies. no luck Anyone help me to fix this issue? I am posting ngix config. shame on me. I use root, I know which is not advised. need to fix soon. user root; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; server { listen 80; server_name www.example.com; rewrite ^(.*) https://example.com$1 permanent; } server { listen 80; server_name example.com; rewrite ^ https://$server_name$request_uri? permanent; } server { listen 443; server_name example.com; keepalive_timeout 70; ssl on; ssl_certificate /root/cc.cert; ssl_certificate_key /root/cc.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; #ssl_ciphers HIGH:!aNULL:!MD5; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { try_files $uri @app; } location @app { include uwsgi_params; uwsgi_pass unix:/tmp/uwsgi.sock; } } } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #}

    Read the article

  • Spring MVC application - URL gives No file found (404)

    - by user1700184
    I created a Spring-MVC project. web.xml: <servlet> <servlet-name>mvc-dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>mvc-dispatcher</servlet-name> <url-pattern>/soundmails</url-pattern> </servlet-mapping> mvc-dispatcher-servlet.xml <?xml version="1.0"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:context="http://www.springframework.org/schema/context" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <mvc:annotation-driven /> <context:component-scan base-package="somepkg.controllers" /> <bean id="multipartResolver" class="org.gmr.web.multipart.GMultipartResolver"> <property name="maxUploadSize" value="1048576" /> </bean> <bean id="placeholderConfig" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <!-- property name="location"> <value>/WEB-INF/social.properties</value> </property--> </bean> <bean id="jacksonMessageConverter" class="org.springframework.http.converter.json.MappingJacksonHttpMessageConverter"></bean> <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"> <property name="messageConverters"> <list> <ref bean="jacksonMessageConverter"/> </list> </property> </bean> </beans> The controller has this code: ProjectController.java @Controller @RequestMapping("/soundmails") public class FileUploadController { @RequestMapping(value="/test", method=RequestMethod.GET) public @ResponseBody String test() { System.out.println("Hai"); return "Hai"; } } I am using Google App Engine in my local machine to test this. I am getting these in my log: [INFO] Oct 24, 2013 1:54:18 AM com.google.appengine.tools.development.LocalResourceFileServlet doGet [INFO] WARNING: No file found for: /soundmails/test I tried /soundmails/soundmails/test as well. That is also giving the same error. I am using Spring 3.1.0.RELEASE Can someone help me figure out what I am missing - /soundmails/test is giving 404 error. Edit I am unable to enable DEBUG logs for this. For some reason, it is not taking log level configured in logging.properties But I observed something interesting: 1) If I map the request to empty string (value = "") @RequestMapping(value="", method=RequestMethod.GET) public @ResponseBody String test() { System.out.println("Hai"); return "Hai"; } Then, when I try to access 127.0.0.1/soundmails, it works fine (returns string "Hai"). 2) When I have value="/test" @RequestMapping(value="/test", method=RequestMethod.GET) public @ResponseBody String test() { System.out.println("Hai"); return "Hai"; } and I try to access 127.0.0.1/soundmails/test, it is giving HTTP 404. This is weird.

    Read the article

  • Nginx & Apache Cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks not and for some reason I cannot seem to get nginx's try_files to work with my wordpress permalinks. I am hoping someone will be able to tell me where I am going wrong and also hopefully tell me if I made any major errors with my configurations as well (I am an nginx newbie... but learning :) ). Here are my Configuration files nginx.conf user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## # Defines the cache log format, cache log location # and the main access log location. log_format cache '***$time_local ' '$upstream_cache_status ' 'Cache-Control: $upstream_http_cache_control ' 'Expires: $upstream_http_expires ' '$host ' '"$request" ($status) ' '"$http_user_agent" ' ; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } mydomain.com.conf server { listen 123.456.78.901:80; # IP goes here. server_name www.mydomain.com mydomain.com; #root /var/www/mydomain.com/prod; index index.php; ## mydomain.com -> www.mydomain.com (301 - Permanent) if ($host !~* ^(www|dev)) { rewrite ^/(.*)$ $scheme://www.$host/$1 permanent; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; # All media (including uploaded) is under wp-content/ so # instead of caching the response from apache, we're just # going to use nginx to serve directly from there. location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$ root /var/www/mydomain.com/prod; } # Don't cache these pages. location ~* ^/(wp-admin|wp-login.php) { proxy_pass http://backend; } location / { if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") { set $do_not_cache 1; } proxy_cache_key "$scheme://$host$request_uri $do_not_cache"; proxy_cache main; proxy_pass http://backend; proxy_cache_valid 30m; # 200, 301 and 302 will be cached. # Fallback to stale cache on certain errors. # 503 is deliberately missing, if we're down for maintenance # we want the page to display. #try_files $uri $uri/ /index.php?q=$uri$args; #try_files $uri =404; proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404; } # Cache purge URL - works in tandem with WP plugin. # location ~ /purge(/.*) { # proxy_cache_purge main "$scheme://$host$1"; # } # No access to .htaccess files. location ~ /\.ht { deny all; } } # End server gzip.conf # Gzip Configuration. 
gzip on; gzip_disable msie6; gzip_static on; gzip_comp_level 4; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; proxy.conf # Set proxy headers for the passthrough proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; add_header X-Cache-Status $upstream_cache_status; backend.conf upstream backend { # Defines backends. # Extracting here makes it easier to load balance # in the future. Needs to be specific IP as Plesk # doesn't have Apache listening on localhost. ip_hash; server 127.0.0.1:8001; # IP goes here. } cache.conf # Proxy cache and temp configuration. proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m max_size=1g inactive=30m; proxy_temp_path /var/www/nginx_temp; proxy_cache_key "$scheme://$host$request_uri"; proxy_redirect off; # Cache different return codes for different lengths of time # We cached normal pages for 10 minutes proxy_cache_valid 200 302 10m; proxy_cache_valid 404 1m; The two commented out try_files in location \ of the mydomain config files are the ones I tried. This error I found in the error log can be found below. ...rewrite or internal redirection cycle while internally redirecting to "/index.php" Thanks in advance

    Read the article

  • Cisco PIX firewall blocking inbound Exchange email

    - by sumsaricum
    [Cisco PIX, SBS2003] I can telnet server port 25 from inside but not outside, hence all inbound email is blocked. (as an aside, inbox on iPhones do not list/update emails, but calendar works a charm) I'm inexperienced in Cisco PIX and looking for some assistance before mails start bouncing :/ interface ethernet0 auto interface ethernet1 100full nameif ethernet0 outside security0 nameif ethernet1 inside security100 hostname pixfirewall domain-name ciscopix.com fixup protocol dns maximum-length 512 fixup protocol ftp 21 fixup protocol h323 h225 1720 fixup protocol h323 ras 1718-1719 fixup protocol http 80 fixup protocol rsh 514 fixup protocol rtsp 554 fixup protocol sip 5060 fixup protocol sip udp 5060 fixup protocol skinny 2000 no fixup protocol smtp 25 fixup protocol sqlnet 1521 fixup protocol tftp 69 names name 192.168.1.10 SERVER access-list inside_outbound_nat0_acl permit ip 192.168.1.0 255.255.255.0 192.168.1.96 255.255.255.240 access-list outside_cryptomap_dyn_20 permit ip any 192.168.1.96 255.255.255.240 access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq 3389 access-list outside_acl permit tcp any interface outside eq ftp access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq https access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq www access-list outside_acl permit tcp any interface outside eq 993 access-list outside_acl permit tcp any interface outside eq imap4 access-list outside_acl permit tcp any interface outside eq 465 access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq smtp access-list outside_cryptomap_dyn_40 permit ip any 192.168.1.96 255.255.255.240 access-list COMPANYVPN_splitTunnelAcl permit ip 192.168.1.0 255.255.255.0 any access-list COMPANY_splitTunnelAcl permit ip 192.168.1.0 255.255.255.0 any access-list outside_cryptomap_dyn_60 permit ip any 192.168.1.96 255.255.255.240 access-list COMPANY_VPN_splitTunnelAcl permit ip 192.168.1.0 255.255.255.0 any access-list outside_cryptomap_dyn_80 permit ip any 192.168.1.96 255.255.255.240 pager lines 24 icmp permit host 217.157.xxx.xxx outside mtu outside 1500 mtu inside 1500 ip address outside 213.xxx.xxx.xxx 255.255.255.128 ip address inside 192.168.1.1 255.255.255.0 ip audit info action alarm ip audit attack action alarm ip local pool VPN 192.168.1.100-192.168.1.110 pdm location 0.0.0.0 255.255.255.128 outside pdm location 0.0.0.0 255.255.255.0 inside pdm location 217.yyy.yyy.yyy 255.255.255.255 outside pdm location SERVER 255.255.255.255 inside pdm logging informational 100 pdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_outbound_nat0_acl nat (inside) 1 0.0.0.0 0.0.0.0 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx 3389 SERVER 3389 netmask 255.255.255.255 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx smtp SERVER smtp netmask 255.255.255.255 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx https SERVER https netmask 255.255.255.255 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx www SERVER www netmask 255.255.255.255 0 0 static (inside,outside) tcp interface imap4 SERVER imap4 netmask 255.255.255.255 0 0 static (inside,outside) tcp interface 993 SERVER 993 netmask 255.255.255.255 0 0 static (inside,outside) tcp interface 465 SERVER 465 netmask 255.255.255.255 0 0 static (inside,outside) tcp interface ftp SERVER ftp netmask 255.255.255.255 0 0 access-group outside_acl in interface outside route outside 0.0.0.0 0.0.0.0 213.zzz.zzz.zzz timeout xlate 0:05:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 
1:00:00 timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00 timeout sip-disconnect 0:02:00 sip-invite 0:03:00 timeout uauth 0:05:00 absolute aaa-server TACACS+ protocol tacacs+ aaa-server TACACS+ max-failed-attempts 3 aaa-server TACACS+ deadtime 10 aaa-server RADIUS protocol radius aaa-server RADIUS max-failed-attempts 3 aaa-server RADIUS deadtime 10 aaa-server RADIUS (inside) host SERVER *** timeout 10 aaa-server LOCAL protocol local http server enable http 217.yyy.yyy.yyy 255.255.255.255 outside http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server community public no snmp-server enable traps floodguard enable sysopt connection permit-ipsec crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto dynamic-map outside_dyn_map 20 match address outside_cryptomap_dyn_20 crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-MD5 crypto dynamic-map outside_dyn_map 40 match address outside_cryptomap_dyn_40 crypto dynamic-map outside_dyn_map 40 set transform-set ESP-3DES-MD5 crypto dynamic-map outside_dyn_map 60 match address outside_cryptomap_dyn_60 crypto dynamic-map outside_dyn_map 60 set transform-set ESP-3DES-MD5 crypto dynamic-map outside_dyn_map 80 match address outside_cryptomap_dyn_80 crypto dynamic-map outside_dyn_map 80 set transform-set ESP-3DES-MD5 crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map client authentication RADIUS LOCAL crypto map outside_map interface outside isakmp enable outside isakmp policy 20 authentication pre-share isakmp policy 20 encryption 3des isakmp policy 20 hash md5 isakmp policy 20 group 2 isakmp policy 20 lifetime 86400 telnet 217.yyy.yyy.yyy 255.255.255.255 outside telnet 0.0.0.0 0.0.0.0 inside telnet timeout 5 ssh 217.yyy.yyy.yyy 255.255.255.255 outside ssh 0.0.0.0 255.255.255.0 inside ssh timeout 5 management-access inside console timeout 0 dhcpd address 192.168.1.20-192.168.1.40 inside dhcpd dns SERVER 195.184.xxx.xxx dhcpd wins SERVER dhcpd lease 3600 dhcpd ping_timeout 750 dhcpd auto_config outside dhcpd enable inside : end I have Kiwi SysLog running but could use some pointers in that regard to narrow down the torrent of log messages, if that helps?!

    Read the article

< Previous Page | 187 188 189 190 191 192 193 194 195 196 197 198  | Next Page >