Search Results

Search found 15423 results on 617 pages for 'uses clause'.

  • Varnish POST problem "9 FetchError c backend write error: 11" for application/x-www-form-urlencoded content

    - by ompap
    Cutting a longish story short, we have managed to get a more precise error out of varnishlog. It tells us that we are sending

        31 TxRequest    - POST
        31 TxHeader     - Content-Type: application/x-www-form-urlencoded

    but we are getting

        9  FetchError   c backend write error: 11
        31 BackendClose - [backend name]
        9  VCL_call     c error
        9  VCL_return   c deliver
        9  Length       c 488
        9  VCL_call     c deliver
        9  VCL_return   c deliver
        9  TxProtocol   c HTTP/1.1
        9  TxStatus     c 503

    We still do not know what this is exactly, but apparently Content-Type: application/x-www-form-urlencoded is not getting through as it should. Help still needed, please! Original message below. The title was "Varnish not letting Joomla users log in - 503 guru meditation error", but I changed it to draw attention to the problem rather than the symptoms.

    Hello. We have a production site for a local newspaper which is currently behind an Apache reverse proxy: basically the site on one server and the other reserved as a reverse proxy only (well, there is more, but that has no relevance here). Apache as a reverse proxy works, but could be faster. We want to replace the Apache reverse proxy with Varnish on an Ubuntu 10.04 server. Varnish is version 2.10, installed directly from the Ubuntu repos; Ubuntu 10.04 uses PHP 5.3.2.

    For anonymous surfers the site works wonderfully with Varnish - we get very good speed out of it. We just have a few problems with logging in or out. The big one: users cannot log in at all; they get a Varnish 503 error page every time. The logs do not reveal the cause. It feels as if the request never leaves Varnish, so we are merely guessing - not a strong starting point.

    We have gone through what has been suggested in various places on the web. We have increased the timeouts:

        backend xxx {
            .host = "xxx.xx";
            .port = "http";
            .connect_timeout = 60s;
            .first_byte_timeout = 60s;
            .between_bytes_timeout = 60s;
        }

    but we seem to get the 503 guru error page much faster than that, in approx. 5 seconds. We have increased the Varnish header size to 128 in the daemon options. In vcl_recv we have

        if (req.http.Authenticate || req.http.Authorization) {
            return(pass);
        }

    and in vcl_fetch

        ## authentication handling
        if (req.http.Authenticate || req.http.Authorization) {
            return(pass);
        }

    We do not strip cookies. We have tried to make sure that error pages are not cached. As said above, we cannot see anything in the backend Apache logs; apparently the backend never gets asked for Joomla user authentication. Varnish does not seem to get much mention in connection with Joomla. (We cannot dump Joomla - that selection has been made and we have to live with what we have been given.) Has anyone a working Varnish-Joomla combination? Thanks for reading. Please help - we need some hints, desperately. Any suggestions? ompap
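
    A hedged workaround sketch, not from the original post: since the failure is specific to form-encoded POST bodies, Varnish 2.x setups often bypass the cache for POSTs entirely. "pipe" hands the raw connection to the backend, which sidesteps request-body handling inside Varnish:

        # Sketch for Varnish 2.x VCL (assumes the default backend selection).
        # POSTs are never cacheable here anyway, so hand them straight through.
        sub vcl_recv {
            if (req.request == "POST") {
                return(pipe);
            }
        }

    If pipe works where pass fails, that points at Varnish's handling of the request body rather than at the backend.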

  • PC: Quick Freeze, then BSOD, then forced reboot, then freezes again

    - by cr0z3r
    Lately I have been experiencing a weird issue. My PC will hang for a second and then BSOD - and it stays there, so I have to reboot. Once Windows starts again and I'm logged in, after 1-10 min it freezes again, this time without a BSOD.

    Recap:

    1. one-second freeze
    2. BSOD, which hangs there
    3. I have to force-reboot
    4. once the PC has rebooted, I log back into Windows
    5. second freeze within a 1-10 min range, no BSOD (alternatively, a freeze with a constant sound/noise, no BSOD)

    I contacted my PC provider, who told me my graphics card might be flawed (Quadro 4000), so I used a Quadro 2000 that they lent me. The issue still occurred. The issue then seemed to be a flawed RAM module. Following my provider's steps, I removed all modules but the first in the left column and kept using my PC for a week or so without any issues. I then added the bottom-right module, and so on, until all modules were back inside - and I had no problems. It seemed that simply taking the RAM modules out and putting them back in had fixed the issue.

    However, after a few months the issue was back. I redid all the RAM swapping I had done before and concluded that the lower-right module was flawed. My provider exchanged it, and everything was great until now. My PC froze again for barely a second, hung on a BSOD, I rebooted it, logged in to Windows and got a freeze (without BSOD or reboot) 40 seconds later. Worth noting: every time I reboot after the BSOD, it is something within Chrome that freezes my PC (e.g. this time I clicked the "restore" button as Chrome mentioned it had exited unexpectedly - from the previous freeze, obviously - and it instantly froze). Finally, the Event Viewer lists 2 critical events in the past hour as "ID: 41, Type: Kernel-Power".

    PC specs: http://i.imgur.com/VZpbr.jpg
    Previous dump files: https://dl.dropbox.com/u/716600/DumpFiles_08_2012.zip

    I would like to thank anybody in advance for your help. You're great.

    UPDATE 1: I realized that I did not mention an awkward fact about this issue. After the one-second freeze followed by a BSOD, after rebooting because the BSOD hung, and after logging back in to get another (this time eternal) freeze and rebooting once again, the PC does not boot back up. The power light is on, but my monitor says "no signal", as if the PC weren't really turned on. This truly seems like a power-related issue, doesn't it?

    UPDATE 2: I just got a freeze, but without a BSOD. My screen froze (while on Chrome, which is starting to seem suspicious to me) with an ongoing sound/noise. I had to force-reboot my PC. I would say this is a graphics-card issue, but it also happened while I was using the Quadro 2000 from my provider.

    UPDATE 3: I just got a BSOD while trying to render something (quite heavy, actually) in 3ds Max 2012. I left the BSOD "running", as it said it was writing dump files to disk, but the percentage stayed at 0, so after 15 minutes I force-rebooted. I then used the software WhoCrashed (thank you Dave), which reported the following from the C:\Windows\Memory.dmp file:

        On Thu 22.11.2012 22:13:45 GMT your computer crashed
        crash dump file: C:\Windows\memory.dmp
        uptime: 01:05:27
        This was probably caused by the following module: Unknown ()
        Bugcheck code: 0x124 (0x0, 0xFFFFFA80275AC028, 0xF200001F, 0x100B2)
        Error: WHEA_UNCORRECTABLE_ERROR
        Bug check description: This bug check indicates that a fatal hardware
        error has occurred. This bug check uses the error data that is provided
        by the Windows Hardware Error Architecture (WHEA).
        This is likely to be caused by a hardware problem. This problem might
        be caused by a thermal issue.
        A third party driver was identified as the probable root cause of this
        system error. It is suggested you look for an update for the following
        driver: Unknown.
        Google query: Unknown WHEA_UNCORRECTABLE_ERROR

  • CakePHP on IIS: How can I edit the URL Rewrite module rules for SSL redirects?

    - by AdrianB
    I've not dealt much with IIS rewrites, but I was able to import (and edit) the rewrites found throughout the Cake structure (.htaccess files). I'll explain my configuration a little, then get to the meat of the problem.

    My CakePHP framework is working well, made possible by URL Rewrite Module 2.0, which I have successfully installed and configured for the site. The way Cake is set up, the webroot folder (for Cake, not IIS) is set as the default folder for the site and exists inside the following hierarchy:

        inetpub
        - wwwroot
        -- cakePhp root
        --- application
        ---- models
        ---- views
        ---- controllers
        ---- WEBROOT  // *** HERE ***
        --- cake core
        -- SomeOtherSite Folder

    For this implementation, the URL Rewrite module uses the following rules (from the web.config file):

        <rewrite>
          <rules>
            <rule name="Imported Rule 1" stopProcessing="true">
              <match url="^(.*)$" ignoreCase="false" />
              <conditions logicalGrouping="MatchAll">
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
            </rule>
            <rule name="Imported Rule 2" stopProcessing="true">
              <match url="^$" ignoreCase="false" />
              <action type="Rewrite" url="/" />
            </rule>
            <rule name="Imported Rule 3" stopProcessing="true">
              <match url="(.*)" ignoreCase="false" />
              <action type="Rewrite" url="/{R:1}" />
            </rule>
            <rule name="Imported Rule 4" stopProcessing="true">
              <match url="^(.*)$" ignoreCase="false" />
              <conditions logicalGrouping="MatchAll">
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
            </rule>
          </rules>
        </rewrite>

    I've installed my SSL certificate and created a site binding, so if I use the https:// protocol everything works fine within the site. I fear that my attempts at creating a redirect rule have been too far off base to give useful results. The rules need to switch protocol without affecting the current set of rules, which pass URL components along to index.php (Cake's entry point). My goal is to create a couple of rewrite rules that will [#1] redirect all user pages (in the general form http://domain.com/users/page/param/param/?querystring=value) to use SSL, and [#2] direct all other HTTPS requests back to HTTP (is this even necessary?). For example, http://domain.com/users/login, http://domain.com/users/profile/uid:12345 and http://domain.com/users/payments?firsttime=true should all become https://domain.com/users/login, https://domain.com/users/profile/uid:12345 and https://domain.com/users/payments?firsttime=true. Any help would be greatly appreciated.
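
    A hedged sketch of rule #1 (the rule name and the exact /users prefix are assumptions drawn from the URLs above): a Redirect action placed before the imported Cake rules, so the protocol switch happens before anything is rewritten to index.php.

        <!-- Sketch: force HTTPS for /users/* before the Cake rules run.
             Redirect (not Rewrite) makes the browser change protocol. -->
        <rule name="Users to HTTPS" stopProcessing="true">
          <match url="^users(/.*)?$" ignoreCase="true" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:0}" redirectType="Found" />
        </rule>

    Rule #2 would be the mirror image ({HTTPS} pattern ^ON$, a negated ^users match, redirecting to http://). Since stopProcessing only ends that request's pass through the rules, the redirected HTTPS request then falls through to the imported Cake rules as before.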

  • Google Apps e-mail being rejected from some domains

    - by Paul J. Lucas
    I'm migrating e-mail for my domains to Google Apps' e-mail. Most everything seems to work, except that e-mail sent to any user at (at least) sonic.net is rejected with a message of the form below (where any-address has been substituted for my friend's address):

        From: Mail Delivery Subsystem <[email protected]>
        Date: March 11, 2010 10:04:48 AM PST
        To: [email protected]
        Subject: Delivery Status Notification (Failure)
        Delivered-To: [email protected]
        Received: by 10.229.194.26 with SMTP id dw26cs8717qcb; Thu, 11 Mar 2010 10:04:48 -0800 (PST)
        Received: by 10.223.68.143 with SMTP id v15mr3841599fai.62.1268330688325; Thu, 11 Mar 2010 10:04:48 -0800 (PST)
        Received: by 10.223.68.143 with SMTP id v15mr5119424fai.62; Thu, 11 Mar 2010 10:04:48 -0800 (PST)
        Mime-Version: 1.0
        Return-Path: <>
        X-Failed-Recipients: [email protected]
        Message-Id: <[email protected]>
        Content-Type: text/plain; charset=ISO-8859-1
        Content-Transfer-Encoding: quoted-printable

        Delivery to the following recipient failed permanently: [email protected]

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the
        recipient domain. We recommend contacting the other email provider for
        further information about the cause of this error. The error that the
        other server returned was:
        550 550 5.1.1 <[email protected]>... No such user here (state 13).

    And here are the headers from the message it bounces back:

        Received: by 10.101.90.7 with SMTP id s7mr2515885anl.176.1267979929490; Sun, 07 Mar 2010 08:38:49 -0800 (PST)
        Return-Path: <[email protected]>
        Received: from [10.0.1.203] (adsl-76-201-171-194.dsl.pltn13.sbcglobal.net [76.201.171.194]) by mx.google.com with ESMTPS id 4sm1046550yxd.70.2010.03.07.08.38.48 (version=TLSv1/SSLv3 cipher=RC4-MD5); Sun, 07 Mar 2010 08:38:49 -0800 (PST)
        From: "Paul J. Lucas" <[email protected]>
        Content-Type: text/plain; charset=us-ascii
        Content-Transfer-Encoding: quoted-printable
        Subject: Some fascinating subject
        Date: Sun, 7 Mar 2010 08:38:46 -0800
        References: <[email protected]>
        To: [email protected]
        Message-Id: <[email protected]>
        Mime-Version: 1.0 (Apple Message framework v1077)
        X-Mailer: Apple Mail (2.1077)

    However, I am able to send mail to a user at sonic.net using my old e-mail account. Also, my company uses Google Apps for e-mail and I can send e-mail to a user at sonic.net from my company. The differences between my personal e-mail and my company's are:

    1. My company's domain has no SPF record, whereas mine does.
    2. My company's domain has an A record, whereas mine does not.

    My SPF record was initially as prescribed by Google here. However, this guy claims Google is wrong and gives a fix; I've tried it both ways with no difference. My SPF record is currently:

        v=spf1 mx include:aspmx.googlemail.com include:_spf.google.com ~all

    As for the lack of an A record, you wouldn't think that a mail host would care about that as long as MX records are defined. The funny thing is: if you look at the error message, why does Google state that the recipient's domain said "No such user here" for *my* address? That makes no sense - of course there is no user with my address at sonic.net. I also assume that I discovered the sonic.net failure by accident and that there are probably other domains I can't send e-mail to. So... anybody have any idea what's going on? And how I can get mail to users at sonic.net?
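
    A hedged aside (generic commands, not from the post; example.com stands in for the real domain): the records the receiving side actually sees can be checked from the command line.

        # MX records the recipient uses to find your inbound servers
        dig +short MX example.com
        # TXT records, which is where the SPF string lives
        dig +short TXT example.com

    Running both against the sending domain and against a domain that works (the company's) makes the SPF/A-record differences concrete.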

  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible, or a good idea at all, to deploy an enterprise Java web application to a cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such deployments; on the other hand it doesn't seem to be mature yet, and I'm not sure the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons?

    The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and it fetches information about Parts from the company's inventory system via a web service. Aside from external users it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product and should be reasonably scalable and available, though these qualities are only of medium importance at this stage.

    I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and HTTPS (in the cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be hardware), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his or her long-built product), and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and good scalability as long as a single RDBMS can keep up with the load - which should be OK for quite a while, because most operations are done in memory using a stateful bean and only occasionally stored to or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory web service, but with good caching of its outputs in the application it should be OK too.

    Unfortunately I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7 GB of RAM and a single 2-core "modern CPU" at around 2.5 GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large instance (64-bit, 7.5 GB RAM, 2 cores at 1 GHz).

    So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are real-world experiences with something similar. Thank you very much!

    /Jakub Holy

    PS: I've found the "JBoss EAP in a Cloud" case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(

  • mod_mono 'Service Temporarily Unavailable' issue

    - by Charlie Somerville
    I've deployed an ASP.NET web application on a Linux (Debian) server running Apache 2.2 and mod_mono 1.9. It's working well; however, Mono occasionally segfaults and uses the entire CPU, which causes the website to stop working and display 'Service Temporarily Unavailable'. Killing mono fixes it, but obviously this isn't a good solution. I tailed the system log after this happened and saw the following error messages from the kernel:

        Apr 20 01:49:37 charliesomerville kernel: [1596436.204158] mono[17909]: segfault at b645f671 ip b645f671 sp b4ffb604 error 4
        mono[19047]: segfault at b645f66e ip b645f66e sp b4bf7604 error 4
        mono[18017]: segfault at b645f66e ip b645f66e sp b52fe604 error 4
        mono[19668]: segfault at b645f5e6 ip b645f5e6 sp b48f4604 error 4
        mono[22565]: segfault at b645f674 ip b645f674 sp b45f1604 error 4
        mono[17700]: segfault at b645f661 ip b645f661 sp b51fd604 error 4
        mono[19596]: segfault at b645f5e6 ip b645f5e6 sp b49f5604 error 4
        Apr 20 01:49:37 charliesomerville kernel: [1596436.208172] mono[23219]: segfault at b645f66e ip b645f66e sp b44f0604 error 4

    At the end of Apache's error.log are the following errors:

        [Tue Apr 20 03:10:23 2010] [error] (70014)End of file found: read_data failed
        [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1
        [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1
        [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1
        System.ArgumentNullException: null key
        Parameter name: key
          at System.Collections.Hashtable.get_Item (System.Object key) [0x00000]
          at System.Runtime.Serialization.SerializationCallbacks.GetSerializationCallbacks (System.Type t) [0x00000]
          at System.Runtime.Serialization.ObjectManager.RaiseOnDeserializingEvent (System.Object obj) [0x00000]
          at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectContent (System.IO.BinaryReader reader, System.Runtime.Serialization.Formatters.Binary.TypeMetadata metadata, Int64 objectId, System.Object& objectInstance, System.Runtime.Serialization.SerializationInfo& info) [0x00000]
          at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectInstance (System.IO.BinaryReader reader, Boolean isRuntimeObject, Boolean hasTypeInfo, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000]
          at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObject (BinaryElement element, System.IO.BinaryReader reader, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000]
          at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadNextObject (System.IO.BinaryReader reader) [0x00000]
          at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectGraph (System.IO.BinaryReader reader, Boolean readHeaders, System.Object& result, System.Runtime.Remoting.Messaging.Header[]& headers) [0x00000]
          at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.NoCheckDeserialize (System.IO.Stream serializationStream, System.Runtime.Remoting.Messaging.HeaderHandler handler) [0x00000]
          at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize (System.IO.Stream serializationStream) [0x00000]
          at System.Runtime.Remoting.Channels.CADSerializer.DeserializeObject (System.IO.MemoryStream mem) [0x00000]
          at System.Runtime.Remoting.RemotingServices.GetDomainProxy (System.AppDomain domain) [0x00000]
          at System.AppDomain.CreateDomain (System.String friendlyName, System.Security.Policy.Evidence securityInfo, System.AppDomainSetup info) [0x00000]
          at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000]
          at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000]
          at Mono.WebServer.ApplicationServer.GetApplicationForPath (System.String vhost, Int32 port, System.String path, Boolean defaultToRoot) [0x00000]
          at (wrapper remoting-invoke-with-check) Mono.WebServer.ApplicationServer:GetApplicationForPath (string,int,string,bool)
          at Mono.WebServer.ModMonoWorker.GetOrCreateApplication (System.String vhost, Int32 port, System.String filepath, System.String virt) [0x00000]
          at Mono.WebServer.ModMonoWorker.InnerRun (System.Object state) [0x00000]
          at Mono.WebServer.ModMonoWorker.Run (System.Object state) [0x00000]
        [Tue Apr 20 03:10:26 2010] [error] (70014)End of file found: read_data failed
        [Tue Apr 20 03:10:26 2010] [error] Command stream corrupted, last command was -1

    Along with the above, Apache's error.log is packed with hundreds (if not thousands) of the following error:

        Maximum number (20) of concurrent mod_mono requests to /tmp/mod_mono_dashboard_default_2.lock reached. Droping request.

    At the moment, I'm thinking there might be something wrong with the configuration here (it's basically running on out-of-the-box config).
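
    A hedged sketch, not a confirmed fix: later mod_mono releases expose directives for raising the 20-request ceiling and for recycling the mono backend automatically; whether mod_mono 1.9 supports them is an assumption that needs checking against its documentation.

        # Apache vhost sketch -- directive availability is an assumption for
        # mod_mono 1.9; these are documented for the 2.x series.
        MonoMaxActiveRequests default 40       # raise the concurrent-request cap
        MonoMaxWaitingRequests default 40      # queue instead of dropping
        MonoAutoRestartMode default Requests   # recycle mono periodically...
        MonoAutoRestartRequests default 10000  # ...after this many requests

    Recycling only papers over the segfaults, though; upgrading Mono/mod_mono would address the crash itself.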

  • WDS DHCP same server on Windows Server 2008

    - by Richard
    I have been struggling with a problem on my Windows Server 2008 for the past 4-5 hours and cannot figure out what's wrong. I have tried pretty much everything I found on Google, and all the links are purple. Hopefully you guys can help me.

    I am running Windows Server 2008 Standard edition with the latest updates as of today, plus a Windows Server 2003. Both are virtual machines on my ESXi 5 server. My network is 192.168.10.0/24:

        W2k8: 192.168.10.251 - the PDC, running ADS, DHCP and WDS
        W2k3: 192.168.10.253 and 192.168.1.175 - running Routing and Remote Access and ISA 2006 Enterprise

    In my internal network (192.168.10.0/24) I have a client machine (192.168.10.10) that runs VMware Workstation. I am trying to deploy Windows 7 Home Premium to a virtual machine in that Workstation via PXE. I have set the Workstation VM's network adapter to "bridged" so that it uses the physical network adapter and is connected to my internal network. The DHCP pool is configured to give out IP addresses from 192.168.10.10-192.168.10.15 (it works for normal clients and is not used up). When I start my VM with PXE I get the error:

        PXE-E52: proxyDHCP offers were received. No DHCP offers were received

    Apparently this means "that WDS responded but the DHCP server did not". People suggested directing the traffic to both WDS and DHCP on the router; since everything is on the same subnet there is no need for that, as the broadcast is seen by everyone (WDS and DHCP). No reservation for the virtual MAC addresses is made on the DHCP. Furthermore, it was suggested to configure the DHCP options:

        Option 60 = PXEClient
        Option 66 = WDS server name or IP address
        Option 67 = Boot file name

    However, this is not recommended by Microsoft; I tried it and it did not solve my problem.

    The configuration on the WDS (my system is German, so the actual naming might differ):

    PXE Response tab: PXE responses is set to "All (known and unknown)".

    DHCP tab: "Do not listen on port 67" is NOT ticked - if I tick it, I do not get any responses at all and the PXE error becomes PXE-E51 (neither DHCP nor proxyDHCP offers were received). "DHCP option 60 for PXEClient" is ticked. The confusing part here is that the tab advises ticking the first option, since WDS and DHCP are on the same server.

    Network Configuration tab: use the following IP address range for multicast: 224.0.1.0-224.0.10.0. That's not the default one, but it is in the allowed range. The UDP port range is the default, since changing it is not advised. I also tried changing the "network profile" between 100 Mbit, 1 Gbit and custom. I am running a 1 Gbit network with Cat 6 cables and a 5-port 1 Gbit Netgear switch; everything is configured to use 1 Gbit. The WDS is authorized for the DHCP server.

    My ISA 2006 configuration: for the internal network, including the W2k3 host, I allow the following protocols: 67, 68, 53, ICMP, 4011 UDP receive, 64001-65000 UDP send/receive, 69 UDP send.

    Routing and Remote Access: I tried the DHCP relay agent configuration that was suggested as well, but that did not work either.

    I would highly appreciate any kind of help, because I am pretty much done here with my nerves. Thank you very much in advance.
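
    A hedged note on the standard co-location setup (wdsutil is the documented tool; behavior on a German-localized install is an assumption): when WDS and DHCP share one box, WDS should stop binding UDP port 67 and DHCP should advertise option 60, and the usual way to set both flags consistently in one step is:

        wdsutil /Set-Server /UseDhcpPorts:No /DhcpOption60:Yes

    If ticking "do not listen on port 67" by hand produced PXE-E51, it may be because option 60 was not set at the same moment, so clients never learned where the proxyDHCP listener was.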

  • Securing SMTP with login

    - by Paul Peelen
    I have an ISPConfig server, and it seems that someone is using it to send spam. I got about 130 "Mail Delivery System" e-mails about declined sends. The spammer uses my e-mail address as the From address, so I get all these bounces in my mailbox. I am using Postfix and Courier, and I installed my server a few months ago according to this guide: http://www.howtoforge.com/perfect-server-debian-lenny-ispconfig3-p3

    My question: can I secure my server to require login before sending e-mail, and if so... how? Thanks!

    EDIT: Some data from mail.log; these kinds of errors show up constantly:

        Jun 15 17:58:16 bolt postfix/qmgr[10712]: CC7DA1242AE: from=<paul@*****.se>, size=3782, nrcpt=1 (queue active)
        Jun 15 17:58:16 bolt postfix/smtp[11337]: CC7DA1242AE: to=<[email protected]>, relay=none, delay=4641, delays=4640/0.01/0.32/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=cmlisboa.pt type=MX: Host not found, try again)
        Jun 15 17:58:19 bolt postfix/smtpd[10836]: connect from static-200-105-220-154.acelerate.net[200.105.220.154]
        Jun 15 17:58:20 bolt postfix/smtpd[10836]: NOQUEUE: reject: RCPT from static-200-105-220-154.acelerate.net[200.105.220.154]: 550 5.1.1 <advertising@*****.com>: Recipient address rejected: User unknown in virtual mailbox table; from=<[email protected]> to=<advertising@*****.com> proto=ESMTP helo=<static-200-105-220-154.acelerate.net>
        Jun 15 17:58:20 bolt postfix/smtpd[10836]: lost connection after DATA (0 bytes) from static-200-105-220-154.acelerate.net[200.105.220.154]
        Jun 15 17:58:20 bolt postfix/smtpd[10836]: disconnect from static-200-105-220-154.acelerate.net[200.105.220.154]
        Jun 15 17:58:29 bolt postfix/smtpd[10834]: connect from unknown[62.176.172.226]
        Jun 15 17:58:32 bolt postfix/smtpd[10834]: 386791241F9: client=unknown[62.176.172.226]
        Jun 15 17:58:34 bolt postfix/cleanup[10975]: 386791241F9: message-id=<[email protected]>
        Jun 15 17:58:34 bolt postfix/qmgr[10712]: 386791241F9: from=<[email protected]>, size=867, nrcpt=1 (queue active)
        Jun 15 17:58:35 bolt postfix/smtpd[10834]: disconnect from unknown[62.176.172.226]
        Jun 15 17:58:35 bolt amavis[11084]: (11084-17) Blocked SPAM, [62.176.172.226] [62.176.172.226] <[email protected]> -> <*****@*****>, Message-ID: <[email protected]>, mail_id: XczovKoMBYNr, Hits: 18.471, size: 867, 833 ms
        Jun 15 17:58:35 bolt postfix/smtp[10732]: 386791241F9: to=<*****@*****>, relay=127.0.0.1[127.0.0.1]:10024, delay=3.5, delays=2.7/0/0/0.83, dsn=2.7.0, status=sent (250 2.7.0 Ok, discarded, id=11084-17 - SPAM)
        Jun 15 17:58:35 bolt postfix/qmgr[10712]: 386791241F9: removed
        Jun 15 17:58:43 bolt postfix/smtpd[10836]: warning: 178.121.154.194: address not listed for hostname mm-194-154-121-178.dynamic.pppoe.mgts.by
        Jun 15 17:58:43 bolt postfix/smtpd[10836]: connect from unknown[178.121.154.194]
        Jun 15 17:58:45 bolt postfix/smtpd[10727]: connect from unknown[180.134.223.86]

    EDIT 2: Some more info from the logs; this is a send request:

        mail.info.1: Jun 15 16:41:57 bolt amavis[5399]: (05399-06) Passed CLEAN, [110.139.48.64] [110.139.48.64] <paul@*****.se> -> <[email protected]>, Message-ID: <CHILKAT-MID-7c54ebcf-5501-de9b-f0b1-4f0234290d8d@HP-IRISH>, mail_id: 35l56Ramx6Nc, Hits: -2.941, size: 3329, queued_as: 2485770086, 136 ms
        mail.info.1: Jun 15 16:41:57 bolt postfix/smtp[4743]: 375C570082: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=4.8, delays=4.7/0/0/0.14, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=05399-06, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 2485770086)

    which apparently got through. Any ideas how to restrict this?
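
    A hedged sketch of the usual Postfix lockdown (the parameter names are standard Postfix; whether your ISPConfig build already sets some of them is an assumption to verify in /etc/postfix/main.cf):

        # main.cf sketch: only the LAN and SASL-authenticated clients may
        # relay; everyone else can only deliver to local/virtual mailboxes.
        smtpd_sasl_auth_enable = yes
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination
        # make sure mynetworks is not wide open
        mynetworks = 127.0.0.0/8

    Apply with `postfix reload`. One caveat: the "Passed CLEAN ... <paul@...>" line may just be backscatter - if spammers merely forge your address as the envelope sender on other servers, your box isn't the relay at all, and the remedy is an SPF record plus discarding the bounces.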

  • Secure method of changing a user's password via Python script/non-interactively

    - by Matthew Rankin
    I've created a Python script using Fabric to configure a freshly built Slicehost Ubuntu slice. In case you're not familiar with Fabric, it uses Paramiko, a Python SSH2 client, to provide remote access "for application deployment or systems administration tasks." One of the first things I have the Fabric script do is create a new admin user and set their password. Unlike Pexpect, Fabric cannot handle interactive commands on the remote system, so I need to set the user's password non-interactively. At present, I'm using the chpasswd command to change the password. This transmits the password as clear text over SSH to the remote system.

    Questions:

    1. Is my current method of setting the password a security concern? Currently, the drawback I see is that Fabric shows the password as clear text on my local system, as follows: [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd. Since I only run the Fabric script from my laptop, I don't think this is a security issue, but I'm interested in others' input.
    2. Is there a better method for setting the user's password non-interactively? Another option would be to use Pexpect from within the Fabric script to set the password.

    Current code:

        # Fabric imports and host configuration excluded for brevity

        root_password = getpass.getpass("Root's password given by SliceManager: ")
        admin_username = prompt("Enter a username for the admin user to create: ")
        admin_password = getpass.getpass("Enter a password for the admin user: ")

        env.user = 'root'
        env.password = root_password

        # Create the admin group and add it to the sudoers file
        admin_group = 'admin'
        run('addgroup {group}'.format(group=admin_group))
        run('echo "%{group} ALL=(ALL) ALL" >> /etc/sudoers'.format(
            group=admin_group)
        )

        # Create the new admin user (default group=username); add to admin group
        run('adduser {username} --disabled-password --gecos ""'.format(
            username=admin_username)
        )
        run('adduser {username} {group}'.format(
            username=admin_username, group=admin_group)
        )

        # Set the password for the new admin user
        run('echo "{username}:{password}" | chpasswd'.format(
            username=admin_username, password=admin_password)
        )

    Local system terminal I/O:

        $ fab config_rebuilt_slice
        Root's password given by SliceManager:
        Enter a username for the admin user to create: johnsmith
        Enter a password for the admin user:
        [xxx.xx.xx.xxx] run: addgroup admin
        [xxx.xx.xx.xxx] out: Adding group `admin' (GID 1000) ...
        [xxx.xx.xx.xxx] out: Done.
        [xxx.xx.xx.xxx] run: echo "%admin ALL=(ALL) ALL" >> /etc/sudoers
        [xxx.xx.xx.xxx] run: adduser johnsmith --disabled-password --gecos ""
        [xxx.xx.xx.xxx] out: Adding user `johnsmith' ...
        [xxx.xx.xx.xxx] out: Adding new group `johnsmith' (1001) ...
        [xxx.xx.xx.xxx] out: Adding new user `johnsmith' (1000) with group `johnsmith' ...
        [xxx.xx.xx.xxx] out: Creating home directory `/home/johnsmith' ...
        [xxx.xx.xx.xxx] out: Copying files from `/etc/skel' ...
        [xxx.xx.xx.xxx] run: adduser johnsmith admin
        [xxx.xx.xx.xxx] out: Adding user `johnsmith' to group `admin' ...
        [xxx.xx.xx.xxx] out: Adding user johnsmith to group admin
        [xxx.xx.xx.xxx] out: Done.
        [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd
        [xxx.xx.xx.xxx] run: passwd --lock root
        [xxx.xx.xx.xxx] out: passwd: password expiry information changed.
        Done.
        Disconnecting from [email protected]... done.
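
    A hedged alternative sketch (it assumes the local Python's crypt module supports SHA-512 "$6$" salts, which glibc does but e.g. OS X's libc does not): hash the password locally, so only the hash appears in Fabric's echoed command and crosses the wire.

        import crypt, random, string

        def sha512_crypt(password):
            # Build an 8-character salt and produce a $6$ (SHA-512) crypt hash.
            salt = ''.join(random.choice(string.ascii_letters + string.digits)
                           for _ in range(8))
            return crypt.crypt(password, '$6$' + salt)

        # usermod -p stores the pre-hashed value directly in /etc/shadow,
        # so the plaintext never leaves the local machine.
        run("usermod -p '{hash}' {username}".format(
            hash=sha512_crypt(admin_password), username=admin_username))

    The hash is still visible in the local echo and in the remote shell history, but a salted crypt hash is far less sensitive than the plaintext.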

  • Mac OS X 10.8 VPN Server: Bypass VPN for LAN traffic (routing LAN traffic to secondary connection)

    - by Dan Robson
    I have somewhat of an odd setup for a VPN server with OS X Mountain Lion. It's essentially being used as a bridge to bypass my company's firewall to our extranet connection - certain things our team needs to do require unfettered access to the outside, and changing IT policies to allow traffic through the main firewall is just not an option.

    The extranet connection is provided through a Wireless-N router (let's call it Wi-Fi X). My Mac mini server is configured with the connection to this router as the primary connection, and thus has unfettered access to the internet via the router. Connections to this device on the immediate subnet are possible through the LAN port, but outside the subnet things are less reliable. I was able to configure the VPN server to provide IP addresses to clients in the 192.168.11.150-192.168.11.200 range using both PPTP and L2TP, and I'm able to connect to the extranet through the VPN using the standard OS X VPN client in System Preferences. Unsurprisingly, however, a local address (let's call it internal.company.com) returns nothing.

    I tried to bypass the limitation of the VPN server by setting up routes in the VPN settings. Our company uses 13.x.x.x for all internal traffic instead of 10.x.x.x, so the routing table looked something like this:

        IP Address     Subnet Mask    Configuration
        0.0.0.0        248.0.0.0      Private
        8.0.0.0        252.0.0.0      Private
        12.0.0.0       255.0.0.0      Private
        13.0.0.0       255.0.0.0      Public
        14.0.0.0       254.0.0.0      Private
        16.0.0.0       240.0.0.0      Private
        32.0.0.0       224.0.0.0      Private
        64.0.0.0       192.0.0.0      Private
        128.0.0.0      128.0.0.0      Private

    I was under the impression that if nothing was entered here, all traffic was routed through the VPN; with something entered, only traffic specifically marked to go through the VPN would do so, and all other traffic would be up to the client to route over its own default connection. This is why I had to specifically mark every subnet except 13.x.x.x as Private.

    My suspicion is that since I can't reach the VPN server from outside the local subnet, it's not making a connection to the main DNS server and thus can't be reached on the larger network. I'm thinking that hostnames like internal.company.com aren't kicked back to the client to resolve, because the server has no idea that the IP address falls in the public range - I suspect (I should ping-test it, but don't have access right now) that it can't reach the DNS server to find out anything about that hostname.

    It seems all my options boil down to the same type of solution: figure out how to reach the DNS over the secondary connection on the server. If I could do [something] to get my server to recognize that it should also check my local gateway (say, server IP == 13.100.100.50 and gateway IP == 13.100.100.1), then the gateway could point it to the DNS server at 13.1.1.1 for information about the internal network. I'm very confused about this path - really not sure I'm even making sense. I thought about doing this client-side, but that doesn't make sense either, since it would add time to each and every client setup. Plus, it seems more logical to solve it on the server - I could either get rid of my routing table altogether or keep it; I think the only difference would be that internal traffic would also go through the server, probably an unnecessary burden. Any help out there? Or am I in over my head?

    A forward proxy or transparent proxy is also an option for me, although I have no idea how to set either of those up. (I know, Google is my friend.)
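
    A hedged diagnostic sketch (the 13.x addresses below are the hypothetical ones from the post): one way to test the "server can't reach internal DNS" theory is to add a static route for the internal range via the wired gateway and then query that DNS directly.

        # Route the company's internal range via the LAN gateway...
        sudo route -n add -net 13.0.0.0/8 13.100.100.1
        # ...then ask the internal DNS server directly:
        dig @13.1.1.1 internal.company.com

    If that resolves, persisting the route on the server (and listing 13.1.1.1 as a resolver for the company domain) should let it hand working answers to VPN clients.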

  • Multihomed Windows 2008 Server / Windows 7 Client

    - by Daniel Scott
    I have a small Windows 2008 network with some Windows 7 clients. The clients are both laptops with docking stations, and I would like them to communicate with the Windows 2008 server (for file sharing) through the wired network while they're docked. Internet connectivity for all machines (clients and server) is via a wireless LAN, so the wireless adapter in the Windows 7 clients stays active while they're docked. When the laptops are undocked, it would be nice to still be able to contact the Windows 2008 server for print sharing (and slower file sharing) - hence the server also being on the wireless LAN.

    The Windows 2008 server runs Active Directory, DHCP and DNS. It controls DHCP leases on the wired network and holds the DNS records for "myserver.mycompany.local", which is what the file-sharing clients connect to. Ideally I'd like the DNS records to return the wired IP first, so that this is the address the laptops attempt initially - but there doesn't seem to be a way to do that. At present the server's wireless-LAN IP comes out of an nslookup above the wired-LAN IP.

    The multihoming works perfectly - but in the wrong order! Switch on the wireless LAN and ping myserver, and it goes to the wireless IP. Disable the wireless on the client and do the same ping again, and after a couple of seconds it starts pinging the wired address. Does anyone have any suggestions on how to make this work in a predictable order - or even whether it can work?

    Alternative 1: if it can't work, would this? Remove the wireless adapter from the server, put a wireless router/bridge on the wired network (set up to route to/from the wireless LAN's subnet), then configure the clients with two routes to the (now) single IP of the server, with metrics favouring direct communication over the wired LAN.

    Alternative 2: should I instead single-home the laptops so all of their connectivity is via the wired LAN while they're docked (and route via the Windows 2008 server - or a dedicated wireless bridge/router)? My concern here is that I'd like undocking to be seamless: if the clients are in the middle of downloading something from the internet, I wouldn't want it interrupted as they switch IP addresses onto the wireless network. Perhaps this isn't the case and I'm concerned over nothing. Any thoughts? :)

    UPDATE: I seem to have cracked it (at least, DNS entries come out in the order I hoped for, and pinging the server with various combinations of wired, wireless and both interfaces enabled uses the IP I want). I set the binding order of the NICs on the server (which acts as domain controller, DHCP and DNS server) so that the wired NIC comes before the wireless adapter (Start -- type "Network Interfaces" -- select "View Network Connections" -- press Alt to show the classic dropdown menus -- Advanced -- Advanced Settings). Now an nslookup (from the client) of the server's hostname returns the wired IP first, followed by the wireless IP, and the wired IP now seems to be used whenever it's contactable. Incidentally, the metrics on the client's wired and wireless routes also favour the wired LAN (based on Windows' automatically assigned metrics), but this was always the case, even when the wired IP wasn't being favoured. I'm not entirely sure whether this is coincidence, or whether a DNS server running on Windows, handing back IP addresses for itself, does actually take the binding order of its own network interfaces into account. It would be interesting to hear from someone who can confirm or deny that (or confirm that the binding order on the server plays a role for some other reason).

  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in one and the same folder - a folder that could be seen as an inbox, where most (non-installation) files enter my system. This way I end up with a big collection of files that is hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which isn't efficient enough to use productively. I'd rather do a few clicks than have to search through the files, whether with a software product or by looking through the folder. Often the file names themselves aren't proper either, so it would be easier to recognize files if there were a few in a folder rather than thousands.

    "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows:

        The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

        [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
        [2] S. Lawrence and C. L. Giles, Nature 400, 107-109 (1999).
        [3] R. Ferrer i Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
        [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    The paper goes on to describe how data is usually organized, taking a general view of it, but neither the abstract nor the conclusion offers an approach that results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific one. Searching further, I don't seem to find any useful or free papers that approach this problem, so I might be looking in the wrong place. I've also noted that there are different ways to phrase this problem, which lead to different search results; perhaps a paper is out there, using more scientific terms than mine.

    I once heard a story about an advocate with a laptop who simply outperformed an advocate with tons of papers - which shows how proper organization leads to productivity - but that story didn't share details on how the advocate used the laptop or how he had organized his data. In any case, it was far more useful than how most of us organize our data these days...

    Advise me how I should organize my data - I'm not looking for mere suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm they help me reach my goal.

  • Issue setting up a VPN connection (IKEv1) between Android (ICS VPN client) and a strongSwan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues setting up a VPN connection (IKEv1) between an Android device (ICS VPN client) and a strongSwan 4.5.0 server. Below is the setup:

    The strongSwan server runs on an Ubuntu Linux machine connected to a wifi hotspot. Using the steps in this guide [link], I generated the CA, server and client certificates. Once generated, clientCert.p12 and caCert.pem were sent to the phone via mail and installed on the Android device. IP addresses: the Linux server's wlan0 interface, where the server runs, is 192.168.43.212; the Android device's eth0 interface is 192.168.43.62. The Android device is attached to the same wifi hotspot. On the device I use the "IPsec Xauth RSA" option for the VPN authentication configuration. I am using the following ipsec.conf:

        # basic configuration
        config setup
            plutodebug=all
            # crlcheckinterval=600
            # strictcrlpolicy=yes
            # cachecrls=yes
            nat_traversal=yes
            # charonstart=yes
            plutostart=yes

        # Sample VPN connections
        conn ios1
            keyexchange=ikev1
            authby=xauthrsasig
            xauth=server
            left=%defaultroute
            leftsubnet=0.0.0.0/0
            leftfirewall=yes
            leftcert=serverCert.pem
            right=192.168.43.62
            rightsubnet=10.0.0.0/24
            rightsourceip=10.0.0.2
            rightcert=clientCert.pem
            pfs=no
            auto=add

    With the above configuration, when I enable the VPN on the Android device the connection is not successful: it times out in the authentication phase. I ran wireshark on both the device and the strongSwan server; from the tcpdump, these are the observations:

    1. Initially, Identity Protection (Main Mode) exchanges happen between device and server, and all are successful.
    2. After the successful Main Mode exchanges, the server sends a Transaction (Config Mode) message to the device.
    3. In reply, the device sends an Informational message instead of a Transaction (Config Mode) message.
    4. The server keeps re-sending the Transaction (Config Mode) message, while the device again sends Main Mode messages.
    5. Finally a timeout occurs and the connection fails.

    I also captured the strongSwan server logs; the snippets below show the same thing:

        Apr 27 21:09:40 Linux pluto[12105]: | **parse ISAKMP Message:
        Apr 27 21:09:40 Linux pluto[12105]: |   initiator cookie:
        Apr 27 21:09:40 Linux pluto[12105]: |   06 fd 61 b8 86 82 df ed
        Apr 27 21:09:40 Linux pluto[12105]: |   responder cookie:
        Apr 27 21:09:40 Linux pluto[12105]: |   73 7a af 76 74 f0 39 8b
        Apr 27 21:09:40 Linux pluto[12105]: |   next payload type: ISAKMP_NEXT_HASH
        Apr 27 21:09:40 Linux pluto[12105]: |   ISAKMP version: ISAKMP Version 1.0
        Apr 27 21:09:40 Linux pluto[12105]: |   exchange type: ISAKMP_XCHG_INFO
        Apr 27 21:09:40 Linux pluto[12105]: |   flags: ISAKMP_FLAG_ENCRYPTION
        Apr 27 21:09:40 Linux pluto[12105]: |   message ID: a2 80 ad 82
        Apr 27 21:09:40 Linux pluto[12105]: |   length: 92
        Apr 27 21:09:40 Linux pluto[12105]: | ICOOKIE: 06 fd 61 b8 86 82 df ed
        Apr 27 21:09:40 Linux pluto[12105]: | RCOOKIE: 73 7a af 76 74 f0 39 8b
        Apr 27 21:09:40 Linux pluto[12105]: | peer: c0 a8 2b 3e
        Apr 27 21:09:40 Linux pluto[12105]: | state hash entry 25
        Apr 27 21:09:40 Linux pluto[12105]: | state object not found
        Apr 27 21:09:40 Linux pluto[12105]: packet from 192.168.43.62:500: Informational Exchange is for an unknown (expired?) SA
        Apr 27 21:09:40 Linux pluto[12105]: | next event EVENT_RETRANSMIT in 10 seconds for #9

    Can anyone provide an update on this issue - why does the VPN connection time out, and why don't the ISAKMP exchanges between Android and the strongSwan server complete?
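
    A hedged configuration sketch, not a confirmed fix: the conn above pins right= to one address and hands out a single rightsourceip while also setting rightsubnet, which is unusual for a road-warrior XAuth setup. The conventional IKEv1 XAuth stanza looks more like this (the pool size is an assumption):

        conn android
            keyexchange=ikev1
            authby=xauthrsasig
            xauth=server
            left=%defaultroute
            leftsubnet=0.0.0.0/0
            leftfirewall=yes
            leftcert=serverCert.pem
            right=%any                   # accept any roaming client address
            rightsourceip=10.0.0.0/24    # virtual-IP pool for clients
            rightcert=clientCert.pem
            pfs=no
            auto=add

    The device replying with an Informational message exactly where Config Mode (mode-config) starts is consistent with it rejecting the proposed virtual-IP/subnet parameters, so that is the part worth varying first.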

  • Troubleshooting unwanted NTP Traffic

    - by Jaxaeon
    A domain controller running Windows Server 2012 is sending NTP and NetBIOS traffic to an address that has never been configured as a time provider. The server logs give no indication that any NTP traffic is failing. The only place I see any evidence of this traffic is in the pfSense system logs:

        (Blocked) Jun 9 08:48:50  DOMAIN  10.0.1.100:123  192.128.127.254:123  UDP
        (Blocked) Jun 9 08:48:53  DOMAIN  10.0.1.100:137  192.128.127.254:137  UDP

    As far as I can tell, the NTP service is working normally otherwise:

        DC2.domain.com[10.0.1.101:123]:
            ICMP: 0ms delay
            NTP: -0.0131705s offset from DC1.domain.com
            RefID: DC1.domain.com [10.0.1.100]
            Stratum: 3
        DC1.domain.com *** PDC ***[10.0.1.100:123]:
            ICMP: 0ms delay
            NTP: +0.0000000s offset from DC1.domain.com
            RefID: clock1.albyny.inoc.net [64.246.132.14]
            Stratum: 2

        The time provider NtpClient is currently receiving valid time data from 1.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->204.2.134.163:123).
        The time provider NtpClient is currently receiving valid time data from 0.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123).
        The time service is now synchronizing the system time with the time source 0.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123).

    I've been inside and out of the NTP configuration and cannot find any reason for this traffic. Reverse DNS points the destination address to nothing.attdns.com. Pinging nothing.attdns.com from the domain controller in question leads to a response from loopback (127.0.0.2), which makes my head hurt. Any ideas?

    EDIT 1: It should probably be noted that after a DNS flush, nslookup 192.128.127.254 returns nothing.attdns.com. 192.128.127.254 is not present in domain.com DNS records, the attdns.com domain is not present in cached lookups, and 127.in-addr.arpa is clean of any funkiness.

    EDIT 2: The loopback ping response from nothing.attdns.com is possibly unrelated; machines on other networks also display this behavior.

    EDIT 3: As mentioned in the comments, I tracked the problem network adapter back to my pfSense VM hosted in ESXi 5.5 (I know, shame on me for virtualizing a firewall). pfSense was configured to use DC1.domain.com as its primary time provider, but upon changing it back to pool.ntp.org the problem persists. pfSense logs give no indication of NTP misconfiguration. Everywhere I can think to look, this VM is identified as 10.0.1.253, so I still have no idea why it's sending NTP requests as 192.128... Since this firewall was a temporary solution to a problem that no longer exists, I am going to decommission it.

    EDIT 4: The queries were coming from another machine sharing the same virtual adapter as the firewall. The machine has two local adapters: one for the LAN, and the other for attached hardware that uses an Ethernet connection. That hardware sits in the mystery subnet, and the machine was broadcasting NTP requests over both adapters.
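
    Hedged, generic diagnostics (not from the post) for enumerating every time source the Windows time service actually knows about:

        rem Full effective configuration, including where each setting comes from
        w32tm /query /configuration /verbose
        rem Every peer the service is currently tracking
        w32tm /query /peers /verbose

    Comparing that list against the destination logged by pfSense quickly shows whether w32time is the sender at all, or whether some other process owns the socket (an elevated "netstat -bano" can attribute port 123 to a process).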

  • Process not listed by ps or in /proc/

    - by Hammer Bro.
    I'm trying to figure out how to operate a rather large Java program, 'prog'. I go to its /bin/ dir and configure its setenv.sh and prog.sh to use local directories and my current user account, then I try to run it via "./prog.sh start". Here are all the relevant bits of prog.sh:

        USER=(my current account)

        _CMD="/opt/jdk/bin/java -server -Xmx768m -classpath "${CLASSPATH}" -jar "${DIR}/prog.jar""

        case "${ACTION}" in
        start)
            nohup su ${USER} -c "exec ${_CMD} >>${_LOGFILE} 2>&1" >/dev/null &
            echo $! >${_PID}
            echo "Prog running. PID="`cat ${_PID}`
            ;;
        stop)
            PID=`cat ${_PID} 2>/dev/null`
            echo "Shutting down prog: ${PID}"
            kill -QUIT ${PID} 2>/dev/null
            kill ${PID} 2>/dev/null
            kill -KILL ${PID} 2>/dev/null
            rm -f ${_PID}
            echo "STOPPED `date`" >>${_LOGFILE}
            ;;

    When I actually do ./prog.sh start, it starts. But I can't find it at all in the process list, nor can I kill it manually using the same commands the shell script uses. Yet I can tell it's running, because if I do ./prog.sh stop, it stops (and some temporary files elsewhere clean themselves out).

        ./prog.sh start
        Prog running. PID=1234

        ps eaux | grep 1234
        ps eaux | grep -i prog.jar
        ps eaux >> pslist.txt
        (It's not there, either by PID or by any clear name I can find: prog, java or jar.)

        cd /proc/1234/
        -bash: cd: /proc/1234/: No such file or directory

        kill -QUIT 1234
        kill 1234
        kill -KILL 1234
        -bash: kill: (1234) - No such process

        ./prog.sh stop
        Shutting down prog: 1234

    As far as I can tell, the process is running yet not in any way listed by the system. I can't find it in ps or /proc/, nor can I kill it. But the shell script can still stop it properly. So my question is: how can something like this happen? Is the process supremely hidden, actually unlisted, or am I just missing it in some fashion? I'm trying to figure out what makes this program tick, and I can barely prove that it's ticking!

    Edit: ps eu | grep prog.sh (after having restarted, so a random PID):

        50038 19381 0.0 0.0 4412 632 pts/3 S+ 16:09 0:00 grep prog.sh HOSTNAME=machine.server.com TERM=vt100 SHELL=/bin/bash HISTSIZE=1000 SSH_CLIENT=::[STUFF] 1754 22 CVSROOT=:[DIR] SSH_TTY=/dev/pts/3 ANT_HOME=/opt/apache-ant-1.7.1 USER=[USER] LS_COLORS=[COLORS] SSH_AUTH_SOCK=[DIR] KDEDIR=/usr MAIL=[DIR] PATH=[DIRS] INPUTRC=/etc/inputrc PWD=[PWD] JAVA_HOME=/opt/jdk1.6.0_21 LANG=en_US.UTF-8 SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass M2_HOME=/opt/apache-maven-2.2.1 SHLVL=1 HOME=[~] LOGNAME=[USER] SSH_CONNECTION=::[STUFF] LESSOPEN=|/usr/bin/lesspipe.sh %s G_BROKEN_FILENAMES=1 _=/bin/grep OLDPWD=[DIR]

    I just realized that the stop) branch of prog.sh isn't actually a guarantee that the process it claims to be stopping is running - it just tries to kill the PID, suppresses all output, deletes the temporary file and manually inserts STOPPED into the log file. So I'm no longer certain that the process is always running when I ps for it, although the code sample above indicates that it at least runs erratically. I'll continue looking into this undocumented behemoth when I return to work tomorrow.
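
    Hedged, generic ways to hunt for the JVM (not from the post). Note that $! after `nohup su ... &` is the PID of the su wrapper, not of the java child, so the PID file may never have named the actual JVM in the first place:

        # -f matches against the full command line, so the jar name is found
        # even when the process name is simply "java"
        pgrep -fl prog.jar

        # the [p] trick stops grep from matching its own process entry;
        # "ww" prevents ps from truncating long java command lines
        ps auxww | grep [p]rog.jar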

  • Screen -X exec commands not working until manually attached

    - by James Watt
    I have a batch script that starts a Java server application inside a screen. The command looks like this:

        cd /dir/ && screen -A -m -d -S javascreen java -Xms640M -Xmx1024M -jar javaserverapp.jar nogui

    After I run the batch script, it starts the server and puts it inside the correct screen. If I list my screens afterwards, I see something like this:

        user@gtwy /dir $ screen -list
        There is a screen on:
            16180.javascreen (Detached)
        1 Socket in /var/run/screen/S-user.

    However, I have a second batch script that sends automated commands to this server and runs on a different crontab interval. Because of the way the application works, I send commands to it like this (this one tells it to alert connected users "testing 123"):

        screen -X exec .\!\! echo say testing 123

    I've also tried:

        screen -R -X exec .\!\! echo say testing 123
        screen -S javascreen -X exec .\!\! echo say testing 123

    Unfortunately, these commands DO NOT WORK. They don't even give me an error message; they just do nothing. HOWEVER - if I manually attach to the screen first (with the command below) and then detach, I can run any of the above commands flawlessly. I can demonstrate this with a video if I wasn't clear enough here.

        screen -r -d

    Thanks in advance.

    Update: here are the important parts of /etc/screenrc. It should be totally vanilla; I've never edited this file.

        # VARIABLES
        # ===============================================================
        # No annoying audible bell, using "visual bell"
        # vbell on                         # default: off
        # vbell_msg " -- Bell,Bell!! -- "  # default: "Wuff,Wuff!!"

        # Automatically detach on hangup.
        autodetach on                      # default: on

        # Don't display the copyright page
        startup_message off                # default: on

        # Uses nethack-style messages
        # nethack on                       # default: off

        # Affects the copying of text regions
        crlf off                           # default: off

        # Enable/disable multiuser mode. Standard screen operation is singleuser.
        # In multiuser mode the commands acladd, aclchg, aclgrp and acldel can be used
        # to enable (and disable) other user accessing this screen session.
        # Requires suid-root.
        multiuser off

        # Change default scrollback value for new windows
        defscrollback 1000                 # default: 100

        # Define the time that all windows monitored for silence should
        # wait before displaying a message. Default 30 seconds.
        silencewait 15                     # default: 30

        # bufferfile: The file to use for commands
        # "readbuf" ('<') and "writebuf" ('>'):
        bufferfile $HOME/.screen_exchange

        # hardcopydir: The directory which contains all hardcopies.
        # hardcopydir ~/.hardcopy
        # hardcopydir ~/.screen

        # shell: Default process started in screen's windows.
        # Makes it possible to use a different shell inside screen
        # than is set as the default login shell.
        # If begins with a '-' character, the shell will be started as a login shell.
        # shell zsh
        # shell bash
        # shell ksh
        shell -$SHELL
        # shellaka '> |tcsh'
        # shelltitle '$ |bash'

        # emulate .logout message
        pow_detach_msg "Screen session of \$LOGNAME \$:cr:\$:nl:ended."

        # caption always " %w --- %c:%s"
        # caption always "%3n %t%? @%u%?%? [%h]%?%=%c"

        # advertise hardstatus support to $TERMCAP
        # termcapinfo * '' 'hs:ts=\E_:fs=\E\\:ds=\E_\E\\'

        # set every new windows hardstatus line to somenthing descriptive
        # defhstatus "screen: ^En (^Et)"

        # don't kill window after the process died
        # zombie "^["
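
    A hedged sketch (the session name javascreen and the "say" command come from the post; window 0 is an assumption): addressing the session by name and a window by number, and using "stuff" to type into the program as if at the keyboard, is usually more reliable than exec for this.

        # -S picks the session, -p 0 picks window 0, and stuff injects
        # keystrokes; the trailing carriage return presses Enter.
        screen -S javascreen -p 0 -X stuff "say testing 123$(printf '\r')"

    The -p flag matters from cron: with no controlling terminal, screen may not preselect any window, which can make -X commands silently do nothing - consistent with the behavior described above.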
  • mod_rewrite / Apache ProxyPass?

    - by Anon
I have two websites, OLDSITE and NEWSITE. OLDSITE has 120 IP addresses tied to it, and NEWSITE has 5. I want to be able to separate everything from OLDSITE and NEWSITE so they are not tied together, but use them on the same Linux computer. My current apache setup is this:

    Listen 80
    NameVirtualHost *

    <VirtualHost *>
        ServerName oldsite.com
        ServerAdmin [email protected]
        DocumentRoot /var/www/
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^([^.]+)\.oldsite\.com$
        RewriteCond /home/%1/ -d
        RewriteRule ^(.+) %{HTTP_HOST}$1
        RewriteRule ^([^.]+)\.oldsite\.com/media/(.*) /home/$1/dir/media/$2
        RewriteRule ^([^.]+)\.oldsite\.com/(.*) /home/$1/www/$2
    </VirtualHost>

    <VirtualHost newsite.com>
        ServerName newsite.com
        ServerAdmin [email protected]
        DocumentRoot /var/newsite/
        <Directory /var/newsite/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^([^.]+)\.newsite\.com$
        RewriteCond /home/%1/ -d
        RewriteRule ^(.+) %{HTTP_HOST}$1
        RewriteRule ^([^.]+)\.newsite\.com/media/(.*) /home/$1/dir/media/$2
        RewriteRule ^([^.]+)\.newsite\.com/(.*) /home/$1/www/$2
    </VirtualHost>

    <VirtualHost *>
        ServerName panel.oldsite.com
        ProxyPass / http://panel.oldsite.com:10000/
        ProxyPassReverse / http://panel.oldsite.com:10000/
        <Proxy *>
            allow from all
        </Proxy>
    </VirtualHost>

    <VirtualHost *>
        ServerName panel.newsite.com
        ProxyPass / http://panel.newsite.com:10000/
        ProxyPassReverse / http://panel.newsite.com:10000/
        <Proxy *>
            allow from all
        </Proxy>
    </VirtualHost>

I want anything under newsite.com to go to /var/newsite unless there is a home directory, and if it's panel.newsite.com I want it to automatically do a ProxyPass to panel.newsite.com:10000.

With this setup it works perfectly for oldsite.com -- both the proxy and the webpages. However, having the VirtualHost set to newsite.com renders the ProxyPass worthless. If I change the VirtualHost for newsite.com to a wildcard, the ProxyPass will work, but anything that's a subdomain of newsite.com won't: newsite.com will load, but www.newsite.com will not.

I am assuming that when everything is wildcarded, the ServerName somewhat acts like a RewriteCond and just applies the stuff to that URL. The first VirtualHost * (oldsite.com) lets ANYTHING.oldsite.com work, but the second VirtualHost * (newsite.com) only matches newsite.com itself -- www.newsite.com does not work. If I change their order, the opposite is true. So apparently it doesn't like me using two wildcards. I tried just making the ServerName *.newsite.com ... but that would be too easy.

I am not sure what I can do to get what I want. Perhaps I should include the ProxyPass in the VirtualHosts and use something like:

    RewriteCond %{HTTP_HOST} ^panel\.newsite\.com$ [NC]
    RewriteRule ^(.*)$ http://panel.newsite.com:10000/ [P]
    ProxyPassReverse / http://panel.newsite.com:10000/

but that doesn't seem to want to log in to webmin; it loads the login page but isn't working the way the ProxyPass & ProxyPassReverse setup does.
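For what it's worth, a hypothetical sketch of the usual way out (untested here): keep every name-based vhost on <VirtualHost *>, give the catch-all site an explicit wildcard ServerAlias, and list the exact-name proxy vhost first, since Apache picks the first vhost whose ServerName/ServerAlias matches the Host header:

    # exact name first, so panel.newsite.com wins over the wildcard alias
    <VirtualHost *>
        ServerName panel.newsite.com
        ProxyPass / http://panel.newsite.com:10000/
        ProxyPassReverse / http://panel.newsite.com:10000/
    </VirtualHost>

    <VirtualHost *>
        ServerName newsite.com
        ServerAlias *.newsite.com
        DocumentRoot /var/newsite/
        # ... same rewrite rules as above ...
    </VirtualHost>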

    Read the article

  • Local DNS server (bind) and the router DHCP

    - by Luca
I just set up an internal HTTP server for internal use (Redmine) in a small network (30 or so PCs). I set up the HTTP server on a VirtualBox Ubuntu guest that also runs the DNS server (bind). In the DNS lookup I added the Redmine server name (redmine.engserver <- 192.168.1.14) and, as forwarders, the outside ISP DNS IP addresses. I am using a small wi-fi router (ASUS RT-N66U) as DHCP server (and as gateway). In the DHCP config page I set the DNS to the Ubuntu server's IP (it is fixed: 192.168.1.14).

Now when I connect a new PC to the network, the DHCP router issues its new IP, and as DNS servers it issues primary 192.168.1.14 (the Ubuntu machine) and secondary 192.168.1.1 (the router itself):

    ipconfig /all

    Default Gateway . . . . . . . . . : 192.168.1.1
    DHCP Server . . . . . . . . . . . : 192.168.1.1
    DHCPv6 IAID . . . . . . . . . . . : 248539109
    DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-15-AA-3F-D0-67-E5-49-A7-EF
    DNS Servers . . . . . . . . . . . : 192.168.1.14
                                        192.168.1.1
    NetBIOS over Tcpip. . . . . . . . : Enabled

Before changing the DHCP setting on the router, I would always get only one DNS server: 192.168.1.1 (which probably uses DNS forwarding to external public DNS services).

The problem is this: if in my browser I type www.google.com, it works all the time. If I type http://redmine.engserver/ it works most of the time, but sometimes it ends up on a Yahoo search page or something else, and in the DNS cache it shows as (Server not found):

    ipconfig /displaydns

I looked with Wireshark, and it seems like the client PC sometimes interrogates the secondary DNS (192.168.1.1) instead of the primary (192.168.1.14). Obviously that one forwards to public DNS and does not have the redmine.engserver entry.

What is wrong in this configuration? Is it even legitimate to have two DNS servers (one internal and one forwarded by the router) which are inconsistent? Is there another way to have a local name service in a small office network? Why is the router's DHCP issuing itself as DNS?
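A small sketch for confirming the diagnosis from a client (the names are taken from the question; commands are standard). Windows clients can switch to the secondary DNS server after a timeout and keep using it, so the internal name has to resolve on every advertised server, or the router has to stop advertising itself:

    # on a Windows client: query each configured server directly
    nslookup redmine.engserver 192.168.1.14   # should return 192.168.1.14
    nslookup redmine.engserver 192.168.1.1    # likely fails: router forwards to public DNS

    # same check from the Ubuntu box
    dig @192.168.1.14 redmine.engserver +short
    dig @192.168.1.1  redmine.engserver +short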

    Read the article

  • Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)?

    - by András Szepesházi
I'm a web application developer using my notebook as a standalone development environment (WAMP stack). I just switched from a Core2 Duo Vista 32-bit notebook with 2GB RAM and a SATA HDD, to an i5-2520M Win7 64-bit notebook with 4GB RAM and a 128GB SSD (Corsair P3 128). My initial experience was what I expected: fast boot, quick load of all applications (Eclipse now takes 5 seconds as opposed to 30s on my old notebook), overall a great experience.

Then I started to build up my development stack, both as LAMP (using VirtualBox with a Debian guest) and WAMP (Windows-native Apache + MySQL + PHP), because I wanted to compare the two. This all still worked out great, until I started to pull my projects into these stacks. And here came the nasty surprise: one of those projects produced much worse response times than on my old notebook (true for both the VirtualBox and WAMP stacks). Apache, PHP and MySQL configurations were practically identical in all environments. I started to do a lot of benchmarking and profiling, and here is what I've found:

All general benchmarks (Performance Test 7.0, HDTune Pro, wPrime2 and some more) gave a big advantage to the new notebook. Nothing surprising here.

Disc-specific tests showed that read/write operations peaked around 380M/160M for the SSD, and all the different-sized block operations also performed very well.

Apache performance benchmarking with Apache Benchmark for a small static HTML file (10 concurrent threads, 500 iterations):

    Old notebook:                 min 47ms,   median 111ms,  max 156ms
    New WAMP stack:               min 71ms,   median 135ms,  max 296ms
    New LAMP stack (VirtualBox):  min 6ms,    median 46ms,   max 175ms

Right here I don't get why the native WAMP stack performed so badly, but at least the LAMP environment brought the expected speed.

Apache performance measurement for non-cached PHP content. The PHP runs a loop of 1000 and generates sha1(uniqid()) inside. Again, 10 concurrent threads, 500 iterations:

    Old notebook:                 min 0ms,    median 39ms,   max 218ms
    New WAMP stack:               min 20ms,   median 61ms,   max 186ms
    New LAMP stack (VirtualBox):  min 124ms,  median 704ms,  max 2463ms

What the hell? The new LAMP stack performed miserably, and even the new native WAMP was outperformed by the old notebook.

PHP + MySQL test. The test consists of connecting to a database and reading a single record from a table using INNER JOIN on 3 more (indexed) tables, repeated 100 times within a loop. Databases were identical. 10 concurrent threads, 100 iterations:

    Old notebook:                 min 1201ms, median 1734ms, max 3728ms
    New WAMP stack:               min 367ms,  median 675ms,  max 1893ms
    New LAMP stack (VirtualBox):  min 1410ms, median 3659ms, max 5045ms

And the same test with concurrency set to 1 (instead of 10):

    Old notebook:                 min 1201ms, median 1261ms, max 1357ms
    New WAMP stack:               min 399ms,  median 483ms,  max 539ms
    New LAMP stack (VirtualBox):  min 285ms,  median 348ms,  max 444ms

Strictly for my purposes, as I'm using a self-contained development environment (= low concurrency), I could be satisfied with that last result. Though I have no idea why the VirtualBox environment performed so badly with higher concurrency.

Finally I performed a test of including many PHP files. The application I mentioned at the beginning, the one performing so badly, has a heavy bootstrap: it loads hundreds of small library and configuration files while initializing. So this test does nothing but include about 100 files. Concurrency set to 1, 100 iterations:

    Old notebook:                 min 140ms,  median 168ms,  max 406ms
    New WAMP stack:               min 434ms,  median 488ms,  max 604ms
    New LAMP stack (VirtualBox):  min 413ms,  median 1040ms, max 1921ms

Even if I consider that VirtualBox reached those files via shared folders, which slows things down a bit, I still don't see how the old notebook could outperform both new configurations so heavily. And I think this is the real root of the slow performance, as the application uses even more includes, and the whole bootstrap occurs several times within a page request (for each ajax call, for example).

To sum it up, here I am with a brand new high-performance notebook that loads the same page in 20 seconds that my old notebook can do in 5-7 seconds. Needless to say, I'm not a very happy person right now. Why do you think I experience these poor performance values? What are my options to remedy this situation?
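One knob worth ruling out for the include-heavy case (an assumption on my part, since no PHP settings are shown): PHP's realpath cache, which is small by default and gets hammered by hundreds of tiny includes, particularly over VirtualBox shared folders. A sketch of values to experiment with, plus a way to rerun the same benchmark (the URL is hypothetical):

    ; php.ini -- values to experiment with, not tuned recommendations
    realpath_cache_size = 4M
    realpath_cache_ttl  = 600

    # rerun the include-heavy benchmark after each change
    ab -n 100 -c 1 http://localhost/include-test.php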

    Read the article

  • synchronization of file locations between two machines

    - by intuited
Although similar threads have been asked on this site and its siblings before, I've not managed to glean the answer to this persistent question. Any help is much appreciated.

The situation: I've got two laptops; both contain a ton of music. Sometimes I move these music files to different locations, or change the metadata in them, or convert them to a different format. I might do any of these things on either machine. I rarely do all of them at once -- i.e. it's unlikely that I'll convert a file's format and move it to a different location all in one go. I'd like to be able to synchronize these changes without having to sift through everything that was renamed or moved.

I'm familiar with rsync but I find it inadequate, because:

- although it can compute checksums, it doesn't have any way to store them, so if a file differs it can't figure out which side it changed on; this also means that it can't attempt to match a missing file to a new one with the same checksum (i.e. a move)
- if the filesize and date are the same, it skips the file, so it takes an epoch to do a sync on a large repository; I would like to only check the checksum if the files ...
- even if you turn on checksumming, it still doesn't use it intelligently: i.e. it checksums files even if the sizes differ, IIRC
- it's not able to use file metadata as a means of file comparison; this is sort of a wishlist item, but it seems doable

I've also looked into rsnapshot, but its requirement to create a full backup is impractical in this situation. I don't need a backup; I just need a record of what file with each hash was where, and when. Unison seems like it might be able to do something vaguely along these lines, but I'm loath to spend hours wading through its details only to discover that it's sadly lacking. Plus, it's fun asking questions on here.

What I'd like is a tool that does something along these lines:

- keeps track of file checksums or of actual renames, possibly using inotify to greatly reduce resource consumption/latency
- stores a database containing this info, along with other pertinencies like the file format and metadata, the actual inode, the filename history, etc.
- uses this info to provide more-intelligent synchronization with a counterpart on the other side. So for example: if a file has been converted from flac to ogg but kept the same base filename, or the same metadata, it should be able to send the new version over, and the other side should delete the original. (Probably it should actually sequester it somewhere in case they or you screwed up, but that's a detail.) And then when the transaction is done, the state is logged so that the next time the two interact they can work out their differences.

Maybe all this metadata stuff is a fancy pipe dream. I would actually be pretty happy if there was something out there that could just use checksums in an intelligent way. This would be sort of like having the intelligence of something like git, minus the need to duplicate data in an index/backup/etc (and branching, and checkouts, and all the other great stuff that RCSs do; basically just fast-forward commit pushes are all I want, with maybe the option to roll back). So is there something out there that can do this? If not, can someone suggest a good way to start making it?
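For what it's worth, a minimal sketch of the checksum-index idea in plain shell (paths and hostnames are hypothetical): build a hash-to-path index on each machine, then join the two indexes to spot files whose content matches but whose path differs, i.e. candidates for a local rename rather than a re-transfer:

    # on each machine: index every file under the music tree by checksum
    find ~/music -type f -print0 | xargs -0 md5sum | sort > /tmp/index-$(hostname).txt

    # copy the other machine's index over, then list same-hash/different-path pairs
    # (naive: assumes unique hashes and paths without spaces)
    join /tmp/index-laptop-a.txt /tmp/index-laptop-b.txt | awk '$2 != $3'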

    Read the article

  • overriding new ubuntu installation

    - by tkoomzaaskz
I've got an Ubuntu 11.10 install which lost its support in May 2013; now I'd like to reinstall to the most up-to-date LTS, which is 12.04. My question concerns my current partitions and doing backups. Is there a safe way to back up my data onto some local partition instead of copying files to DVDs/external drives (which is very uncomfortable in my situation)? The following system commands show my disk:

    $ lsblk
    NAME   MAJ:MIN RM   SIZE RO MOUNTPOINT
    sda      8:0    0 232,9G  0
    +-sda1   8:1    0  48,8G  0
    +-sda2   8:2    0    63G  0
    +-sda3   8:3    0     1K  0
    +-sda4   8:4    0  53,7G  0 /
    +-sda5   8:5    0  18,6G  0
    +-sda6   8:6    0  25,5G  0
    +-sda7   8:7    0  23,3G  0 [SWAP]
    sr0     11:0    1  1024M  0

and

    $ sudo fdisk -l
    [sudo] password for xyz:

    Disk /dev/sda: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders, 488397168 sectors total
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc3ffc3ff

    Device Boot      Beginning        End     Blocks  ID  System
    /dev/sda1   *         2048  102402047   51200000   7  HPFS/NTFS/exFAT
    /dev/sda2        215044096  347080703   66018304   7  HPFS/NTFS/exFAT
    /dev/sda3        347082750  488392064   70654657+  5  Extended
    /dev/sda4        102402048  215042047   56320000  83  Linux
    /dev/sda5        395905923  434975939   19535008+ 83  Linux
    /dev/sda6        434976003  488392064   26708031  83  Linux
    /dev/sda7        347082752  395905023   24411136  82  Linux swap / Solaris

In the beginning I had Windows Vista pre-installed on the machine when it was bought (damn!), and I then installed the Linux I have now. The Windows program in the master boot record has been overridden by grub, and now I can boot both Windows and Linux. This is the list of mounted devices:

    $ mount
    /dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
    fusectl on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    udev on /dev type devtmpfs (rw,mode=0755)
    devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
    none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
    none on /run/shm type tmpfs (rw,nosuid,nodev)
    binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
    gvfs-fuse-daemon on /home/tomasz/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tomasz)

It's strange (I don't remember setting it up that way) that my current Linux uses only one partition (/dev/sda4). But anyway, it seems like that.

My final question is: am I able to use one of the existing Linux partitions for a backup and install Ubuntu 12.04 without removing either Windows or Ubuntu 11.10? I mean, will grub automatically accept both the old Windows Vista and the two Linuxes (the old 11.10 and the "new" 12.04)? Is there any hidden operation done during installation that could harm my custom backup partition?

My fstab file:

    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/sda4 during installation
    UUID=d44e89f5-9da2-48eb-83b3-887652ec95d2 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sda7 during installation
    UUID=bbe50535-ba57-434a-9272-211d859f0e00 none swap sw 0 0

sda5 and sda6 are trash partitions created during an unsuccessful Linux installation (the one before my current installation). I didn't delete these partitions, and I have access to them (and can use them as backup partitions).

Edit: A second question: why does lsblk show /dev/sda as having 232,9G while fdisk says it has 250.1GB? Where does the difference come from?
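On the size question, the two tools simply use different units: fdisk reports decimal gigabytes while lsblk reports binary gibibytes, and 250059350016 bytes / 1024^3 is roughly 232.9 GiB, so nothing is missing. For the backup itself, a minimal sketch using one of the leftover partitions (device names taken from the listing above; double-check them before running anything):

    sudo mkdir -p /mnt/backup
    sudo mount /dev/sda6 /mnt/backup            # assumes sda6 carries a usable filesystem
    sudo rsync -avh /home/ /mnt/backup/home-backup/

The installer should only touch partitions you explicitly tell it to format or mount, so an unselected backup partition ought to survive, but that is worth verifying on the manual-partitioning screen rather than taking as a guarantee.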

    Read the article

  • Computer makes hissing noise, turns off after few seconds

    - by Kaustubh P
I have a problem similar to the questions posted here and here. This is my config: Asus M3N78-EM with AMD Phenom X3 720 2800 Black Edition, 4GB Transcend DDR2 RAM, Nvidia 9400GT. The HD is a 160GB IDE, plus an LG IDE DVD-ROM. The power button is a bit off; I have removed the cover of the switch, and the only way it turns on is by giving the "stick" under the cover a gentle press. It turns on sometimes; at other times I have to cut the power from the PSU and try again.

I will describe my problem in as much detail as possible, please bear with me:

The problem started in the last week, a few months after I changed to the power-switch arrangement described above. The PC makes a hissing noise, and I wasn't able to pinpoint the noise source because of the various other fans. At first, removing the HD, rebooting without the HD, turning it off, then reconnecting and booting made the problem go away. But of late, that doesn't work anymore.

As suggested in the other questions, I tried reducing the load by disconnecting both IDE drives, and the problem (noise + turn-off) still occurs. I also connected another 80GB IDE HD this morning, and it still made that noise and turned off. I also opened up the PSU, but I couldn't see any fault in it; I tried rotating the fan by blowing into the blades and with my fingers, but the hissing noise didn't come from there. Or maybe the speed wasn't enough to evoke that noise.

A few weeks ago I had cleaned the cabinet and repasted the processor and its fan using some thermal paste. Could that be at fault? I also used a vacuum to blow the dust out of the PSU; could the airflow have been too much, enough to maybe offset the fan or something? A label on the PSU says it uses a ball-bearing fan.

That only leaves me with the processor fan and the processor itself. I didn't try removing the processor fan and processor from the motherboard and then turning the PC on, fearing damage. Will doing so cause any damage? What can I do to localize and pinpoint the problem?

Also, after a few tries, the computer starts up. Sometimes it turns off within 2 seconds, sometimes after the POST. Once it turned off at the grub menu. Another time it booted completely and then turned off. The only way to ensure that the PC won't turn off is if the hissing noise stops.

EDIT: I suspect the processor/processor fan, owing to the source of the noise. All the components, except for the cabinet, are just over a year old.

EDIT2: I also just remembered that I had set "On-power resume" to turn on, i.e. if I supply the PC with power, it will turn itself on without me needing to press the switch. I had done that to work around the faulty power switch, as noted above.

EDIT3: I calculated the power my system needs on the Antec site, and arrived at 292W.

    Read the article

  • conflict in debian packages

    - by Alaa Alomari
I have a Debian 4 server (I know it is very old):

    cat /etc/issue
    Debian GNU/Linux 4.0 \n \l

I have the following in /etc/apt/sources.list:

    deb http://debian.uchicago.edu/debian/ stable main
    deb http://ftp.debian.org/debian/ stable main
    deb-src http://ftp.debian.org/debian/ stable main
    deb http://security.debian.org/ stable/updates main

    apt-get upgrade
    Reading package lists... Done
    Building dependency tree... Done
    You might want to run 'apt-get -f install' to correct these.
    The following packages have unmet dependencies.
      libt1-5: Depends: libc6 (>= 2.7) but 2.3.6.ds1-13etch10+b1 is installed
      locales: Depends: glibc-2.11-1 but it is not installable
    E: Unmet dependencies. Try using -f.

Now it shows that I have Debian 6!!

    cat /etc/issue
    Debian GNU/Linux 6.0 \n \l

EDIT

I have tried:

    apt-get update
    Get: 1 http://debian.uchicago.edu stable Release.gpg [1672B]
    Hit http://debian.uchicago.edu stable Release
    Ign http://debian.uchicago.edu stable/main Packages/DiffIndex
    Hit http://debian.uchicago.edu stable/main Packages
    Get: 2 http://security.debian.org stable/updates Release.gpg [836B]
    Hit http://security.debian.org stable/updates Release
    Get: 3 http://ftp.debian.org stable Release.gpg [1672B]
    Ign http://security.debian.org stable/updates/main Packages/DiffIndex
    Hit http://security.debian.org stable/updates/main Packages
    Hit http://ftp.debian.org stable Release
    Ign http://ftp.debian.org stable/main Packages/DiffIndex
    Ign http://ftp.debian.org stable/main Sources/DiffIndex
    Hit http://ftp.debian.org stable/main Packages
    Hit http://ftp.debian.org stable/main Sources
    Fetched 3B in 0s (3B/s)
    Reading package lists... Done

    apt-get dist-upgrade
    Reading package lists... Done
    Building dependency tree... Done
    You might want to run 'apt-get -f install' to correct these.
    The following packages have unmet dependencies.
      libt1-5: Depends: libc6 (>= 2.7) but 2.3.6.ds1-13etch10+b1 is installed
      locales: Depends: glibc-2.11-1
    E: Unmet dependencies. Try using -f.

    apt-get -f install
    Reading package lists... Done
    Building dependency tree... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin libc6
    Suggested packages:
      glibc-doc
    Recommended packages:
      libc6-i686
    The following packages will be REMOVED
      libc6-dev libedit-dev libexpat1-dev libgcrypt11-dev libjpeg62-dev
      libmcal0-dev libmhash-dev libncurses5-dev libpam0g-dev libsablot0-dev
      libtool libttf-dev
    The following NEW packages will be installed
      gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin
    The following packages will be upgraded:
      libc6
    1 upgraded, 5 newly installed, 12 to remove and 349 not upgraded.
    7 not fully installed or removed.
    Need to get 0B/5050kB of archives.
    After unpacking 23.1MB disk space will be freed.
    Do you want to continue [Y/n]? y
    Preconfiguring packages ...
    dpkg: regarding .../libc-bin_2.11.3-2_i386.deb containing libc-bin:
     package uses Breaks; not supported in this dpkg
    dpkg: error processing /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb (--unpack):
     unsupported dependency problem - not installing libc-bin
    Errors were encountered while processing:
     /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

Now it seems there is a conflict!! How can I fix it? And is it true that the server has become Debian 6?? Thanks for your help.
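A note on the likely root cause, with a hedged sketch of the first step: "stable" is a moving alias, so when Debian 6 (squeeze) was released, those sources.list lines silently pointed an etch box at squeeze packages; /etc/issue now claims 6.0 presumably because base-files was among the bits that did get upgraded. A hypothetical sources.list for pinning back to the release actually installed (etch now lives on the archive host):

    deb http://archive.debian.org/debian/ etch main
    deb-src http://archive.debian.org/debian/ etch main
    deb http://archive.debian.org/debian-security/ etch/updates main

    apt-get update

After that, the supported route forward is a staged dist-upgrade, etch to lenny to squeeze, one release at a time, rather than jumping straight to squeeze's libc.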

    Read the article

  • In what (small) ways can I modify Octave's compile options to enhance it without breaking it?

    - by irrational John
If the title to this question seems a bit vague, I am sorry, but I wasn't sure how to distill what I am attempting to do into a single sentence.

A few weeks back I learned that I could build and install recent releases of Octave on an Ubuntu 12.04 system by following the steps below.

Install the tools needed to compile, link, and run Octave. For Ubuntu the commands below have worked for me:

    sudo apt-get build-dep octave3.2
    sudo apt-get install build-essential gnuplot gtk2-engines-pixbuf
    sudo apt-get install libfontconfig-dev bison

Next, download the source code for an Octave release from the GNU Project Archives for Octave and unpack the archive into a folder on your system. Then use the commands below to build, check, and install Octave:

    ./configure
    make
    make check
    sudo make install

Unfortunately it turns out that the above builds an Octave that contains all the debugging symbol tables; the object files alone are huge, taking up around 1.7GB. The current Octave documentation suggests that to compile without debugging symbols you should try

    make CFLAGS=-O CXXFLAGS=-O LDFLAGS=

instead of just make. However, when I tried this it did not work: the -g option was still used for the compiles. For the heck of it I instead tried

    ./configure CFLAGS=-O CXXFLAGS=-O

and this did work. (Instead of ~1.7GB, the result of the build now takes up around 253MB.)

My questions are:

1. Is this actually the correct (recommended?) method to compile Octave without debugging symbols (i.e. without -g)?

2. How would I compile Octave so it uses x86_64 rather than x86? Note: I am not asking how to compile Octave to use the (experimental) 64-bit integers for array dimensions. I just want to allow the compiler to use the extra registers and word sizes available when an app runs in 64-bit mode.

3. Is a (more) complete list available of the directives used with the Octave Makefile? I have only seen make, make check, and make install documented, but apparently make distclean is also allowed (it removes the compilation results so you can do a complete rebuild of everything). I'm wondering what else might be available.

FWIW, I have tried using

    ./configure CFLAGS="-O3 -mtune=core2 -m64" CXXFLAGS="-O3 -mtune=core2 -m64"

and, surprisingly, it not only appeared to build, but also ran and passed the make check tests. The ./configure script even gave me the (deceptively?) reassuring message "Octave is now configured for x86_64-unknown-linux-gnu". But of course that's not the same thing as saying it actually "works". Is there a recommended way to enable Octave to run as an x86_64 app?

I have also tried looking inside the Octave Makefile to see if I could decipher what command-line directives it accepts. I got nowhere; I have not a single clue as to how that Makefile does whatever it is that it does.
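A few quick checks, sketched under the assumption of a standard autotools build (which Octave's is): flags chosen at configure time get baked into the generated Makefile, which would explain why overriding CFLAGS at configure worked while overriding it on the make command line did not, and the architecture of the finished binary can be verified directly:

    grep -m1 '^CFLAGS'   Makefile   # the flags configure actually recorded
    grep -m1 '^CXXFLAGS' Makefile
    file src/octave                 # path varies by release; "ELF 64-bit ... x86-64" means a 64-bit build
    make V=1                        # with automake silent rules, V=1 echoes the full compile lines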

    Read the article

  • How to automate org-refile for multiple todo

    - by lawlist
I'm looking to automate org-refile so that it will find all of the matches and refile them to a specific location (but not archive). I found a fully automated method of archiving multiple todos, and I am hopeful to find or create (with some help) something similar to this awesome function (but for a different heading/location other than archiving): https://github.com/tonyday567/jwiegley-dot-emacs/blob/master/dot-org.el

    (defun org-archive-done-tasks ()
      (interactive)
      (save-excursion
        (goto-char (point-min))
        (while (re-search-forward "\* \\(None\\|Someday\\) " nil t)
          (if (save-restriction
                (save-excursion
                  (org-narrow-to-subtree)
                  (search-forward ":LOGBOOK:" nil t)))
              (forward-line)
            (org-archive-subtree)
            (goto-char (line-beginning-position))))))

I also found this (written by aculich), which is a step in the right direction, but still requires repeating the function manually: http://stackoverflow.com/questions/7509463/how-to-move-a-subtree-to-another-subtree-in-org-mode-emacs

    ;; I also wanted a way for org-refile to refile easily to a subtree, so I
    ;; wrote some code and generalized it so that it will set an arbitrary
    ;; immediate target anywhere (not just in the same file).

    ;; Basic usage is to move somewhere in Tree B and type C-c C-x C-m to mark
    ;; the target for refiling, then move to the entry in Tree A that you want
    ;; to refile and type C-c C-w, which will immediately refile into the
    ;; target location you set in Tree B without prompting you, unless you
    ;; called org-refile-immediate-target with a prefix arg C-u C-c C-x C-m.

    ;; Note that if you press C-c C-w in rapid succession to refile multiple
    ;; entries it will preserve the order of your entries even if
    ;; org-reverse-note-order is set to t, but you can turn it off to respect
    ;; the setting of org-reverse-note-order with a double prefix arg
    ;; C-u C-u C-c C-x C-m.

    (defvar org-refile-immediate nil
      "Refile immediately using `org-refile-immediate-target' instead of prompting.")
    (make-local-variable 'org-refile-immediate)

    (defvar org-refile-immediate-preserve-order t
      "If last command was also `org-refile' then preserve ordering.")
    (make-local-variable 'org-refile-immediate-preserve-order)

    (defvar org-refile-immediate-target nil
      "Value uses the same format as an item in `org-refile-targets'.")
    (make-local-variable 'org-refile-immediate-target)

    (defadvice org-refile (around org-immediate activate)
      (if (not org-refile-immediate)
          ad-do-it
        ;; if last command was `org-refile' then preserve ordering
        (let ((org-reverse-note-order
               (if (and org-refile-immediate-preserve-order
                        (eq last-command 'org-refile))
                   nil
                 org-reverse-note-order)))
          (ad-set-arg 2 (assoc org-refile-immediate-target (org-refile-get-targets)))
          (prog1 ad-do-it
            (setq this-command 'org-refile)))))

    (defadvice org-refile-cache-clear (after org-refile-history-clear activate)
      (setq org-refile-targets (default-value 'org-refile-targets))
      (setq org-refile-immediate nil)
      (setq org-refile-immediate-target nil)
      (setq org-refile-history nil))

    ;;;###autoload
    (defun org-refile-immediate-target (&optional arg)
      "Set current entry as `org-refile' target.
    Non-nil turns off `org-refile-immediate', otherwise `org-refile'
    will immediately refile without prompting for target using most
    recent entry in `org-refile-targets' that matches
    `org-refile-immediate-target' as the default."
      (interactive "P")
      (if (equal arg '(16))
          (progn
            (setq org-refile-immediate-preserve-order
                  (not org-refile-immediate-preserve-order))
            (message "Order preserving is turned: %s"
                     (if org-refile-immediate-preserve-order "on" "off")))
        (setq org-refile-immediate (unless arg t))
        (make-local-variable 'org-refile-targets)
        (let* ((components (org-heading-components))
               (level (first components))
               (heading (nth 4 components))
               (string (substring-no-properties heading)))
          (add-to-list 'org-refile-targets
                       (append (list (buffer-file-name))
                               (cons :regexp (format "^%s %s$"
                                                     (make-string level ?*)
                                                     string))))
          (setq org-refile-immediate-target heading))))

    (define-key org-mode-map "\C-c\C-x\C-m" 'org-refile-immediate-target)

It sure would be helpful if aculich, or some other maven, could please create a variable similar to

    (setq org-archive-location "~/0.todo.org::* Archived Tasks")

so users can specify the file and heading, which is already part of the org-archive-subtree functionality. I'm doing a search-and-mark because I don't have the wherewithal to create something like org-archive-location for this setup.

EDIT: One step closer -- almost home free . . .

    (defun lawlist-auto-refile ()
      (interactive)
      (beginning-of-buffer)
      (re-search-forward "\* UNDATED")
      (org-refile-immediate-target) ;; cursor must be on a heading to work.
      (save-excursion
        (re-search-backward "\* UNDATED")
        ;; must be written in such a way that sub-entries of * UNDATED are
        ;; not searched, or else: infinite loop.
        (while (re-search-backward "\* \\(None\\|Someday\\) " nil t)
          (org-refile))))

    Read the article
