Search Results

Search found 15209 results on 609 pages for 'configuration'.


  • Glassfish V3 won't start

    - by Zakaria
    Hi everybody, I installed NetBeans 6.8 and tried to run the GlassFish V3 server. I'm working under Windows Vista 32 bits. First, it won't run. Then I modified the c:\Windows\System32\drivers\etc\hosts file and put the following line into it:

        127.0.0.1 localhost

    And when I run the GlassFish V3 server, no error is shown but only "INFOs" are displayed:

        3 avr. 2010 19:23:19 com.sun.enterprise.glassfish.bootstrap.ASMain main
        INFO: Launching GlassFish on Felix platform
        Welcome to Felix
        ================
        INFO: Perform lazy SSL initialization for the listener 'http-listener-2'
        INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:24 CEST 2010
        INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:25 CEST 2010
        INFO: Grizzly Framework 1.9.18-k started in: 423ms listening on port 35127
        INFO: GlassFish v3 (74.2) startup time : Felix(4456ms) startup services(1709ms) total(6165ms)
        INFO: Grizzly Framework 1.9.18-k started in: 459ms listening on port 35116
        INFO: Grizzly Framework 1.9.18-k started in: 428ms listening on port 35155
        INFO: Grizzly Framework 1.9.18-k started in: 470ms listening on port 35160
        INFO: Grizzly Framework 1.9.18-k started in: 513ms listening on port 35159
        INFO: javassist.util.proxy.ProxyFactory.classLoaderProvider = org.glassfish.weld.WeldActivator$GlassFishClassLoaderProvider@5be8f4
        INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2
        INFO: Binding RMI port to *:35165
        INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver.
        INFO: JMXStartupService: Started JMXConnector, JMXService URL = service:jmx:rmi://PC-de-Charlotte:35165/jndi/rmi://PC-de-Charlotte:35165/jmxrmi
        INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate
        INFO: [Thread[GlassFish Kernel Main Thread,5,main]] started
        INFO: Grizzly Framework 1.9.18-k started in: 150ms listening on port 35159
        INFO: Perform lazy SSL initialization for the listener 'http-listener-2'
        INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Program Files\sges-v3\glassfish\modules\autostart, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-330907148519261411, felix.fileinstall.filter = null}
        INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-2938963288421854459, felix.fileinstall.filter = null}
        INFO: Grizzly Framework 1.9.18-k started in: 95ms listening on port 35160
        INFO: Updating configuration from org.apache.felix.fileinstall-autodeploy-bundles.cfg
        INFO: Installed C:\Program Files\sges-v3\glassfish\modules\autostart\org.apache.felix.fileinstall-autodeploy-bundles.cfg
        INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-6474085409014899009, felix.fileinstall.filter = null}

    And there is no message such as "GlassFish started"! So when I try to access the admin web interface at localhost:4848, localhost:8080 or localhost:8181, it doesn't work. What should I do? Thank you very much, Regards.
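    For reference, one way to get more detail out of a GlassFish v3 domain that seems to stall at startup is to start it from the command line with verbose logging and then check whether the admin listener ever opens. A hedged sketch, assuming the default domain name domain1 and the bundled asadmin tool on the PATH:

        REM Start the default domain in the foreground with full log output
        asadmin start-domain --verbose domain1

        REM In a second console, check whether anything is listening on the admin port
        netstat -an | findstr 4848

    If port 4848 never shows up as LISTENING, the domain really did not finish starting, and the verbose log usually ends at the component that is blocking.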

    Read the article

  • Basic Auth on DirectoryIndex Only

    - by Brad
    I am trying to configure basic auth for my index file, and only my index file. I have configured it like so:

        <Files index.htm>
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Some Auth"
            AuthUserFile "C:/path/to/my/.htpasswd"
            Require valid-user
        </Files>

    When I visit the page, 401 Authorization Required is returned as expected, but the browser doesn't prompt for the username/password. Some further inspection has revealed that Apache is not sending the WWW-Authenticate header.

        GET http://myhost/ HTTP/1.1
        Host: myhost
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        HTTP/1.1 401 Authorization Required
        Date: Tue, 21 Jun 2011 21:36:48 GMT
        Server: Apache/2.2.16 (Win32)
        Content-Length: 401
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>401 Authorization Required</title>
        </head><body>
        <h1>Authorization Required</h1>
        <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p>
        </body></html>

    Why is Apache doing this? How can I configure it to send that header appropriately? It is worth noting that this exact same set of directives works fine if I set them for a whole directory. It is only when I configure them for a directory index that they do not work. This is how I know my .htpasswd and such are fine. I am using Apache 2.2 on Windows. On another note, I found this listed as a bug in Apache 1.3. This leads me to believe that this is actually a configuration problem on my end.
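    For reference, a workaround that is sometimes suggested for this situation is to attach the auth directives to the URL path rather than to the file name, so that the request for "/" (which is what the browser actually sends before mod_dir maps it to index.htm) is itself protected. A hedged, untested sketch; the path and htpasswd location are placeholders:

        # Protect both "/" and "/index.htm", but nothing else
        <LocationMatch "^/(index\.htm)?$">
            AuthType Basic
            AuthName "Some Auth"
            AuthUserFile "C:/path/to/my/.htpasswd"
            Require valid-user
        </LocationMatch>

    Whether this sends the WWW-Authenticate header correctly in a given Apache 2.2 build would need to be verified, but it avoids tying the auth check to a <Files> match on the index file.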

    Read the article

  • ESXi 5.1 ghettoVCB stuck at Clone: 10% done

    - by stormdrain
    Trying to run ghettoVCB for the first time here. I am using a NAS that is set up as a datastore on the host. I did a dry run and it completed without error. The VM is ~500GB and there is only one on the host that I'm trying to back up. I proceeded to start the actual backup:

        ./ghettoVCB.sh -m vmname -g ghettoVCB.conf

    It goes through the config and looks like it's taking off:

        2013-10-24 11:43:19 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ghettoVCB.conf
        2013-10-24 11:43:19 -- info: CONFIG - VERSION = 2013_01_11_0
        2013-10-24 11:43:19 -- info: CONFIG - GHETTOVCB_PID = 17398616
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas2tb-001/esxi4
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2013-10-24_11-43-18
        2013-10-24 11:43:19 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
        2013-10-24 11:43:19 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2013-10-24 11:43:19 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2013-10-24 11:43:19 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
        2013-10-24 11:43:19 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2013-10-24 11:43:19 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2013-10-24 11:43:19 -- info: CONFIG - LOG_LEVEL = info
        2013-10-24 11:43:19 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2013-10-24_11-43-18-17398616.log
        2013-10-24 11:43:19 -- info: CONFIG - ENABLE_COMPRESSION = 0
        2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2013-10-24 11:43:19 -- info: CONFIG - ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP = 0
        2013-10-24 11:43:19 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        2013-10-24 11:43:19 -- info: CONFIG - VM_SHUTDOWN_ORDER =
        2013-10-24 11:43:19 -- info: CONFIG - VM_STARTUP_ORDER =
        2013-10-24 11:43:19 -- info: CONFIG - EMAIL_LOG = 0
        2013-10-24 11:43:19 -- info:
        2013-10-24 11:43:22 -- info: Initiate backup for vmname
        2013-10-24 11:43:22 -- info: Creating Snapshot "ghettoVCB-snapshot-2013-10-24" for serv2
        Destination disk format: VMFS thin-provisioned
        Cloning disk '/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk'...
        Clone: 10% done.

    ...and it's been that way for over an hour now, stuck at "Clone: 10% done.". Thing is: I can see the vmdk on the NAS, and it looks like almost the whole thing is there. On the NAS it's showing ~430GB, but in the vSphere Client summary it shows as 507GB. I don't see the vmdk on the NAS growing any more. The logfile mimics some of the above and is sitting at "Creating Snapshot..." and nothing else is coming in.

    Is the vmdk on the NAS showing all those GB because of the provisioning or something? i.e. is the size of the file not necessarily indicative of the amount of actual data that has been copied? Is there a reason it might be "stuck" at 10%? i.e. could it really be taking this long? Any other tips? Thanks.

    Edit: as soon as I hit the Submit button, I glance over to see that it has incremented to 11% done. Good to know it'll be complete sometime around when the sun explodes.
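    For reference, one low-impact way to tell whether a long-running clone like this is still writing data is to watch the destination file's allocated size on the datastore rather than the percentage counter. A hedged sketch, run from the ESXi shell; the per-VM backup subdirectory name is an assumption based on the VM_BACKUP_VOLUME shown in the log:

        # Re-run every 60 seconds; if the allocated block count (du) keeps climbing,
        # the clone is still making progress even though the percentage looks stuck
        while true; do
            ls -l /vmfs/volumes/nas2tb-001/esxi4/vmname/*.vmdk
            du /vmfs/volumes/nas2tb-001/esxi4/vmname
            sleep 60
        done

    With a thin destination disk, the size reported by ls can sit near the provisioned size while du (actual allocated blocks) is the number that should keep growing.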

    Read the article

  • Old operational master still thinks it is the "one"

    - by Doug
    Hi there, I have a domain with 3 AD servers. For now I'll just call them:

        AD01 (Win 2008 GC, Operations master)
        AD02 (Win 2008 GC)
        AD03 (Win 2003 GC)

    A couple of months ago there were some hardware issues with AD01, so the operations master, PDC and Infrastructure Master roles were moved to AD02. All machines were on while this was happening.

        AD01 (Win 2008 GC)
        AD02 (Win 2008 GC, Operations master)
        AD03 (Win 2003 GC)

    AD01 was then shut down for a month. Upon starting this machine up with replaced hardware (NIC and RAID card) I now have a weird problem:

        AD01 thinks it is still the operations master in AD on the local box
        AD02 & AD03 think AD02 is the operations master in AD on both boxes

    When running DCDIAG on AD01 I get a number of issues (listed below).

    When running "dcdiag /test:advertising" on AD01:

        Doing primary tests
        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Running partition tests on : ForestDnsZones
        Running partition tests on : DomainDnsZones
        Running partition tests on : Schema
        Running partition tests on : Configuration
        Running partition tests on : domain
        Running enterprise tests on : domain.local

    When running "dcdiag" on AD01 I get the following errors (excerpt of the final output):

        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Starting test: FrsEvent
        There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems.
        Starting test: NCSecDesc
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=domain,DC=local
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=domain,DC=local
        Starting test: Replications
        [Replications Check,Replications Check] Inbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL"
        [Replications Check,AD01] Outbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL"

    So the problem appears to be that when I moved the operations master, AD01 never got the memo, and now that it's started up, all the other AD servers don't think it's the boss anymore when it tries to replicate etc. So I really need to manually update AD01 so that it knows who the operations master, infrastructure master and PDC is - but I'm not having any luck. I've been googling for nearly a day and all solutions lead to "the cake is a lie". Your ninja skills will be greatly appreciated.
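    For reference, two commands commonly used to see which DC each box currently believes holds the FSMO roles, and whether this DC can even locate the role holders, are shown below. A hedged sketch, run from an elevated prompt on each domain controller (netdom ships with Server 2008; on the 2003 box it comes from the Support Tools):

        REM Show which servers this DC believes hold the five FSMO roles
        netdom query fsmo

        REM Ask dcdiag specifically whether this DC can locate the role holders
        dcdiag /test:knowsofroleholders /v

    If AD01 reports itself as a role holder while AD02/AD03 report AD02, that confirms the stale view on AD01 and points toward re-enabling replication on it (per the repadmin hints in the dcdiag output above) rather than moving the roles again.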

    Read the article

  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/], and proxied with an Apache http server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out myself what is the most sensible way to do this. My setup has evolved through 2 scenarios and I'm considering a third for maximal separation of the distinct webapps.

    [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps

        -tomcat
         |_ webapps
            |_ webapp1
            |_ webapp2
            |_ webapp[n]
            |_ WEB-INF (with Cocoon libs)

    This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, and I did not encounter any memory issues. However, this poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario.

    [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp2
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, this soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario.

    [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp2
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be instantiated multiple times with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp (see the sketch below). I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply?

    On the other hand, this approach would complicate management of the Apache http frontend, as it will require the AJP connectors of the different Tomcat instances to listen on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added. And there seems to be no way to reload worker.properties without restarting the entire Apache http server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron
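    For reference, a minimal sketch of how a second Tomcat instance is usually run from a single distribution via CATALINA_BASE. The paths, port remarks and instance names here are placeholders, not part of the original setup:

        # One shared binary install, one base directory per webapp
        CATALINA_HOME=/opt/tomcat            # shared distribution (bin/, lib/)
        INSTANCE=/srv/tomcat-webapp1         # per-instance conf/, logs/, temp/, webapps/, work/

        mkdir -p $INSTANCE/{conf,logs,temp,webapps,work}
        cp $CATALINA_HOME/conf/* $INSTANCE/conf/
        # edit $INSTANCE/conf/server.xml so the HTTP, AJP and shutdown ports
        # don't collide with the other instances

        # start this instance against its own base
        CATALINA_BASE=$INSTANCE $CATALINA_HOME/bin/startup.sh

    Each instance gets its own JVM (and therefore its own PermGen), which is the property that makes scenario [3] attractive compared to [2].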

    Read the article

  • Confusion on networking service start/stop in Ubuntu

    - by Daniel Ball
    I'm preparing to move and took down two of my servers, leaving only one with some essential services running. What I neglected to consider was that one was the DHCP server (which I realized when somebody contacted me saying they couldn't connect. Whups). So because I only have a few hosts on this small network, I opted to just statically configure them for now. One of these is a new Ubuntu 11.04 server, where I have very little experience. I edited /etc/network/interfaces and /etc/hosts to reflect my changes. I ran:

        $ sudo /etc/init.d/networking stop
         * deconfiguring network interfaces ...

    So yay. Then I try to start, it gives me the mumbo jumbo about using services (why didn't it do that for the stop?). So instead I run:

        $ sudo service networking start
        networking stop/waiting

    Now, to me that says the status of the service is stopped. But when I ping another computer, I get a successful reply. So is it not actually stopped? More importantly, am I doing something wrong?

    Edit

        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking stop
        stop: Unknown instance:
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking start
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting

    So you can see why I ran /etc/init.d/networking stop instead. For some reason upstart (that is what "services" is, right?) isn't working with stop.

        cat /etc/hosts
        127.0.0.1       localhost
        127.0.1.1       FOOBAR
        198.3.9.2       FOOBAR    #Added entry July 19 2011
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

        cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).
        # The loopback network interface
        auto lo
        iface lo inet loopback
        # The primary network interface
        #auto eth0
        #iface eth0 inet dhcp
        # hostname FOOBAR
        auto eth0
        iface eth0 inet static
            address 198.3.9.2
            netmask 255.255.255.0
            network 198.3.9.0
            broadcast 198.3.9.255
            gateway 198.3.9.15

    No I didn't save backups, it was just a minor change so I just commented out the old DHCP setting.

    Edit

    I set everything back to original settings and set up a DHCP server. "Starting" networking does the same thing. I can only assume this is normal, I just don't know WHY. It can't be anything to do with the configuration files, since they've been restored.
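    For reference, on Ubuntu releases of that era the "networking" job is a one-shot upstart task, so "stop/waiting" appears to be its normal resting state rather than an error; individual interfaces are usually cycled with ifdown/ifup instead. A hedged sketch, assuming the eth0 stanza from /etc/network/interfaces above:

        # take eth0 down and bring it back up with the settings in /etc/network/interfaces
        sudo ifdown eth0
        sudo ifup eth0

        # confirm the address actually applied
        ip addr show eth0

    This avoids relying on the networking job's status output, which does not reflect whether the interfaces themselves are configured.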

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an http post request to my apache web server, very big files (i.e. 30MB) are silently discarded. On the server side all looks as if the attached file was received with 0 bytes size. On the client side all looks like it had been uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was ok (a post request and a 200 ok response).

    These uploads are being posted to a php script. In the php script, if I print_r $_FILES, I see the following information for the relevant file:

        [file5] => Array
        (
            [name] => MOV023.3gp
            [type] => video/3gpp
            [tmp_name] => /tmp/phpgOdvYQ
            [error] => 0
            [size] => 0
        )

    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My php script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files.

    I've already changed the php directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since this is the time taken to parse the input data, not the time taken to upload it.

    Is there any apache configuration, prior to php, that could be causing these files to be discarded even prior to php execution? Some limit in size or in upload time? I've read about a default 300 seconds timeout limit, but this should apply to the time the connection is idle, not the time it takes while actually transferring data, right?

    Needless to say, uploads with all exactly identical conditions (including file format, client and everything) except smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
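    For reference, a quick way to see which size limits are actually in effect on the server, both on the Apache side and the values PHP ended up with after all ini files and .htaccess overrides. A hedged sketch; the config paths may differ on other layouts:

        # Any request-body cap configured in Apache itself?
        grep -Ri "LimitRequestBody" /etc/apache2/ /etc/httpd/ 2>/dev/null

        # What PHP really sees at runtime (ini files can be overridden per vhost/.htaccess)
        php -r 'echo ini_get("upload_max_filesize"), " ", ini_get("post_max_size"), " ", ini_get("max_input_time"), "\n";'

    Note that the CLI values can differ from the mod_php/Apache values, so printing the same ini_get() calls from a small test page served by Apache is the more reliable check.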

    Read the article

  • Can't connect to DeploymentShare$ from PC attempting to MDT, but can other PCs on the network

    - by Moman10
    I am in the process of setting up MDT and have run across a problem. MDT is installed on a Windows 2012 server, MDT version 6.2.5019.0, using WDS as well. Active Directory domain, and the server is up to date and on the network.

    I boot up the PC, it gets an address from DHCP, pulls down the LiteTouchPE_x64.wim image and goes into the MS Solution Accelerators screen, the Processing Bootstrap Settings box comes up and processes for a couple of seconds, then goes away. It sits there for another minute or so and then gives the error:

        A connection to the deployment share (\\Acme-MDT\DeploymentShare$) could not be made.
        Can not reach the DeployRoot.
        Possible Cause: Network Routing error or Network Configuration Error.

    I can then retry or cancel. I have seen this error online but so far nothing that helps fix it; it seems to be an issue with the FQDN. I verified that I am getting an IP address and that I can successfully ping the MDT server if I use the FQDN, but cannot just by its A record of Acme-MDT.

    I tried manually mapping the network share using net use and it works if I use the FQDN, but it fails with an error code 53, "Network path not found", if I just use the A record of Acme-MDT. Here is the net use command I'm using:

        net use * \\Acme-MDT\DeploymentShare$ /u:Domain\Administrator

    It gives the error System Error 53, Network path not found (and doesn't prompt for a password), but if I use the FQDN of \\Acme-MDT.domain.com\DeploymentShare$ it works fine to map the drive.

    I guess the problem is, when it tries to load the image, it is trying to start from \\Acme-MDT\DeploymentShare$ and I need it to start from \\Acme-MDT.domain.com\DeploymentShare$, but I'm not sure how to get it to do that. I've put the fully qualified path in CustomSettings.ini and bootstrap, updated the deployment share, regenerated the boot image and replaced the boot wim in WDS. Or, if someone has an idea as to why it's acting this way and knows a way around it. The end result is what matters! :)

    I did verify in DNS that Acme-MDT is there, with the proper IP, and I can successfully use the net use command to map this drive from a couple other computers that are already on the network. I am assuming it has something to do with that computer not already being part of the domain, but I'm honestly at a loss as to how to fix it. Any ideas are appreciated, thanks in advance for your help!
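    For reference, the DeployRoot that the boot media uses comes from Bootstrap.ini (not CustomSettings.ini), and the boot image has to be regenerated after changing it. A hedged sketch of what a Bootstrap.ini with the fully qualified path might look like; the credentials section is illustrative only:

        [Settings]
        Priority=Default

        [Default]
        DeployRoot=\\Acme-MDT.domain.com\DeploymentShare$
        UserDomain=domain.com
        UserID=mdt_build_account
        UserPassword=********
        ; Appending the DNS suffix in WinPE can also help short names resolve
        ; (via the WinPE network settings or a DNSSuffixSearchOrder entry, if supported)

    After editing Bootstrap.ini, the usual sequence is to update the deployment share with "Completely regenerate the boot images" and then replace the LiteTouch WIM in WDS, since Bootstrap.ini is baked into the boot image.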

    Read the article

  • Graphite SQLite3 DatabaseError: attempt to write a readonly database

    - by Anadi Misra
    Running graphite under apache httpd, with a sqlite database, I have the correct folder permissions:

        [root@liaan55 httpd]# ls -ltr /var/lib | grep graphite
        drwxr-xr-x. 2 apache apache 4096 Aug 23 19:36 graphite-web

    and

        [root@liaan55 httpd]# ls -ltr /var/lib/graphite-web/
        total 68
        -rw-r--r--. 1 apache apache 65536 Aug 23 19:46 graphite.db

    syncdb also seems to have gone fine:

        [root@liaan55 httpd]# sudo -su apache
        bash-4.1$ whoami
        apache
        bash-4.1$ python /usr/lib/python2.6/site-packages/graphite/manage.py syncdb
        /usr/lib/python2.6/site-packages/graphite/settings.py:231: UserWarning: SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security
          warn('SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security')
        /usr/lib/python2.6/site-packages/django/conf/__init__.py:75: DeprecationWarning: The ADMIN_MEDIA_PREFIX setting has been removed; use STATIC_URL instead.
          "use STATIC_URL instead.", DeprecationWarning)
        /usr/lib/python2.6/site-packages/django/core/cache/__init__.py:82: DeprecationWarning: settings.CACHE_* is deprecated; use settings.CACHES instead.
          DeprecationWarning
        Creating tables ...
        Creating table account_profile
        Creating table account_variable
        Creating table account_view
        Creating table account_window
        Creating table account_mygraph
        Creating table dashboard_dashboard_owners
        Creating table dashboard_dashboard
        Creating table events_event
        Creating table auth_permission
        Creating table auth_group_permissions
        Creating table auth_group
        Creating table auth_user_user_permissions
        Creating table auth_user_groups
        Creating table auth_user
        Creating table django_session
        Creating table django_admin_log
        Creating table django_content_type
        Creating table tagging_tag
        Creating table tagging_taggeditem
        You just installed Django's auth system, which means you don't have any superusers defined.
        Would you like to create one now? (yes/no): yes
        Username (leave blank to use 'apache'): root
        E-mail address: [email protected]
        Password:
        Password (again):
        Superuser created successfully.
        Installing custom SQL ...
        Installing indexes ...
        Installed 0 object(s) from 0 fixture(s)
        bash-4.1$ exit

    and the local_settings.py file is as follows:

        STORAGE_DIR = '/var/lib/graphite-web'
        INDEX_FILE = '/var/lib/graphite-web/index'
        DATABASES = {
            'default': {
                'NAME': '/var/lib/graphite-web/graphite.db',
                'ENGINE': 'django.db.backends.sqlite3',
                'USER': '',
                'PASSWORD': '',
                'HOST': '',
                'PORT': ''
            }
        }

    I still get this error:

        [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] File "/usr/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 344, in execute
        [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] return Database.Cursor.execute(self, query, params)
        [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] DatabaseError: attempt to write a readonly database

    Not sure what is missing in this configuration.
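    For reference, SQLite needs write access not just to the .db file but to the directory that contains it (it creates a journal file alongside the database on every write), and SELinux on CentOS/RHEL can also force the file read-only for httpd even when the Unix permissions look right. A hedged sketch of checks that cover both, assuming the paths above:

        # directory and db must be writable by the apache user
        sudo -u apache test -w /var/lib/graphite-web && echo "dir writable" || echo "dir NOT writable"
        sudo -u apache test -w /var/lib/graphite-web/graphite.db && echo "db writable" || echo "db NOT writable"

        # is SELinux enforcing, and has it logged denials for httpd?
        getenforce
        ausearch -m avc -ts recent 2>/dev/null | grep -i httpd

    If SELinux turns out to be the blocker, the usual fix is to label the directory with a writable httpd context (e.g. httpd_sys_rw_content_t via semanage/restorecon) rather than disabling SELinux.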

    Read the article

  • GRUB 2 freezing at OS selection screen, what could be the cause?

    - by Michael Kjörling
    Mains power is somewhat unreliable where I live, so every now and then, the computer gets rebooted when the PSU can't maintain proper voltage during a brown-out or momentary black-out. It's happened a few times recently that when power is restored, the BIOS POST completes successfully, GRUB starts to load and then freezes. I've seen this at the Welcome to GRUB! message, but it seems to happen more often just past the switch to the graphical OS list. At this point, the computer will not respond to anything (arrow keys, control commands, Ctrl+Alt+Del, ...) - it simply sits there displaying this image, seemingly doing nothing more.

    At that point, turning the computer off using the power button and letting it sit for a while (cooling down?) has allowed it to boot successfully. Turning the computer off and immediately back on seems to give the same result (successful POST then freeze in GRUB). This behavior began recently, although does not seem to be directly correlated with my hard disk woes (although it may be relevant that GRUB resides on that physical disk, I don't know). Once the computer has booted, it runs without a hitch.

    I know that a "proper" solution would be to invest in a UPS, but what might be causing behavior like this? I was thinking in terms of perhaps the CPU shutting down as a thermal control measure, but if that was the cause then wouldn't I see similar freezes during use (which I do not)? What else could cause freezes apparently closely but not perfectly related to the BIOS handover from POST to OS bootloader?

    The BIOS settings are to reset to previous power status after a power loss. Since the PC in question is almost always turned on, this means restore to full power status. I have no expansion cards installed that make any BIOS extensions known by screen output during the boot process, at least, but I do have a few expansion cards installed. Haven't made any changes in that regard in a long time, now. I haven't touched GRUB itself for a long time, whether configuration or binaries, so I don't think that's the problem. Also, it doesn't really make sense that a bug in GRUB would manifest itself only once in a blue moon but significantly more often after a power failure.

    Read the article

  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kinda stuck with a problem I have been trying to wrap my head around for days now. Here is what I am doing: using Veewee, I create a VirtualBox image and then I create a Vagrant box from it. See here, here. Finally I run puppet from Vagrant to install RabbitMQ, see here. Veewee, Vagrant and VirtualBox all run on MacOS X 10.7.4. The vagrant box itself is CentOS 6.2.

    This worked fine for quite some time, until I recreated the VirtualBox image a couple of days ago. During installation of the rabbitmq-plugins during my puppet run I now get the following error:

        /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set

    My RabbitMQ puppet configuration can be found on my GitHub repo for that project, but here is the most important part:

        $version = "2.8.7"
        $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm"

        package{"erlang":
          ensure => "present",
        }

        package{"rabbitmq-server":
          provider => "rpm",
          source   => $url,
          require  => Package["erlang"]
        }

        exec{"rabbitmq-plugins":
          path    => "/usr/bin:/usr/sbin:/bin",
          command => "rabbitmq-plugins enable rabbitmq_management",
          require => Package["rabbitmq-server"]
        }

    My additional repositories, e.g. epel, are defined in veewee's postinstall.sh right at the top of the file. Finally, this is what I get when I do '/etc/init.d/rabbitmq-server status':

        [{pid,2834},
         {running_applications,[{rabbit,"RabbitMQ","2.8.7"},
                                {ssl,"Erlang/OTP SSL application","4.1.6"},
                                {public_key,"Public key infrastructure","0.13"},
                                {crypto,"CRYPTO version 2","2.0.4"},
                                {mnesia,"MNESIA CXC 138 12","4.5"},
                                {os_mon,"CPO CXC 138 46","2.2.7"},
                                {sasl,"SASL CXC 138 11","2.1.10"},
                                {stdlib,"ERTS CXC 138 10","1.17.5"},
                                {kernel,"ERTS CXC 138 10","2.14.5"}]},
         {os,{unix,linux}},
         {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"},
         {memory,[{total,24993120},
                  {processes,10328496},
                  {processes_used,10321296},
                  {system,14664624},
                  {atom,1175905},
                  {atom_used,1143841},
                  {binary,17192},
                  {code,11416020},
                  {ets,766168}]},
         {vm_memory_high_watermark,0.4},
         {vm_memory_limit,205851852},
         {disk_free_limit,1000000000},
         {disk_free,7089795072},
         {file_descriptors,[{total_limit,924},
                            {total_used,4},
                            {sockets_limit,829},
                            {sockets_used,2}]},
         {processes,[{limit,1048576},{used,131}]},
         {run_queue,0},
         {uptime,6}]

    Sources on the web suggest that I have to set HOME. Of course I checked, when logging into the box, whether HOME was set: for user vagrant it was '/home/vagrant' and for root it was 'root'. As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot! Cheers, Tobi
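    For reference, the error comes from erlexec complaining that HOME is unset in the environment Puppet gives to the exec, not in an interactive login shell, so one commonly suggested workaround is to set HOME explicitly on that resource. A hedged sketch against the exec shown above; whether /root is the right value depends on which user the agent runs the command as:

        exec{"rabbitmq-plugins":
          path        => "/usr/bin:/usr/sbin:/bin",
          command     => "rabbitmq-plugins enable rabbitmq_management",
          environment => ["HOME=/root"],
          require     => Package["rabbitmq-server"]
        }

    The environment parameter on exec is standard Puppet; the only assumption here is the HOME value itself.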

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configuration. One in my office has an MD1220 attached. Another one, in the datacenter of my hosting services vendor, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

        Seq. writes on MD1220 in my office: 1.1 GB/s - bonnie++, 1.3 GB/s - dd
        Seq. writes on MD3200 at my vendor's: 240 MB/s - bonnie++, 310 MB/s - dd

    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my good-performing environment is cheaper than the bad-performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully, somebody more familiar with DAS performance can look at it and tell me if I'm missing something here and my expectations are too high.

    To summarize, the question here is: is it reasonable to expect about 100 MB/s of sequential write throughput per each couple of drives in RAID10 on the MD3200? Is there any trick to enable such performance in the MD3200 with dual controllers, as opposed to the simple MD1220 with a single H800 adapter?

    More details about the configurations. A good one in my office:

        Dell R710, 2 CPU X5650 @ 2.67GHz, 12 cores, 96GB DDR3
        OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64
        20x300GB 2.5" SAS 10K in a single RAID10, 1MB chunk size, on MD1220 + Dell H800 I/O controller with 1GB cache in the host

    Not so good one at my vendor's:

        Dell R710, 2 CPU L5520 @ 2.27GHz, 8 cores, 144GB DDR3
        OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64
        20x146GB 2.5" SAS 15K in a single RAID10, 512KB chunk size, Dell MD3200, 2 I/O controllers in array with 1GB cache each

    Additional information: I've also run the same tests on the same vendor's host, but the storage was two RAIDs of 14x146GB 15K RPM drives in RAID10, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200 despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives in RAID1, Perc 6i) I got about 128 MB/s seq. writes. Just two internal drives gave me about half of 20 drives' throughput on the MD3200.

    The random I/O performance of the MD3200 setup is OK, it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards, Igor

    Read the article

  • UNIX - mount: only root can do that

    - by Travesty3
    I need to allow a non-root user to mount/unmount a device. I am a total noob when it comes to UNIX, so please dumb it down for me. I've been looking all over teh interwebz to find an answer and it seems everyone is giving the same one, which is to modify /etc/fstab to include that device with the 'user' option (or 'users', tried both). Cool, well I did that and it still says "mount: only root can do that". Here are the contents of my fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'vol_id --uuid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # proc /proc proc defaults 0 0
        # / was on /dev/mapper/minicc-root during installation
        UUID=1a69f02a-a049-4411-8c57-ff4ebd8bb933 /      ext3        relatime,errors=remount-ro 0 1
        # /boot was on /dev/sda5 during installation
        UUID=038498fe-1267-44c4-8788-e1354d71faf5 /boot  ext2        relatime 0 2
        # swap was on /dev/mapper/minicc-swap_1 during installation
        UUID=0bb583aa-84a8-43ef-98c4-c6cb25d20715 none   swap        sw 0 0
        /dev/scd0  /media/cdrom0   udf,iso9660  user,noauto,exec,utf8 0 0
        /dev/scd0  /media/floppy0  auto         rw,user,noauto,exec,utf8 0 0
        /dev/sdb1  /mnt/sdcard     auto         auto,user,rw,exec 0 0

    My thumb drive partition shows up as /dev/sdb1. I'm pretty sure my fstab is set up OK, but everyone on the other posts seems to fail to mention how they actually call the 'mount' command once this entry is in the fstab file. I think this is where my problem may be. The command I use to mount the drive is:

        $ mount /dev/sdb1 /mnt/sdcard

    /bin/mount is owned by root and is in the root group and has 4755 permissions. /bin/umount is owned by root and is in the root group and has 4755 permissions. /mnt/sdcard is owned by me and is in one of my groups and has 0755 permissions. My mount command works fine if I use sudo, but I need to be able to do this without sudo (need to be able to do it from a PHP script using shell_exec).

    Any suggestions? Sorry for making you read so much... just trying to get as much info in the initial post as possible to preemptively answer questions about configuration stuff. If I missed anything tho, ask away. Thanks! -Travis
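    For reference, the user/users options in fstab only take effect when the non-root user invokes mount with a single argument that matches the fstab entry (either the device or the mount point); passing both the device and the mount point makes mount treat it as an ad-hoc mount, which stays root-only. A hedged note, assuming the fstab line above stays as is:

        # as the ordinary user: either of these should consult the fstab entry
        mount /mnt/sdcard
        # or
        mount /dev/sdb1

        # unmounting follows the same rule ('user' lets only the mounting user unmount;
        # 'users' lets anyone unmount)
        umount /mnt/sdcard

    Worth verifying on the system in question, since the behaviour is implemented by mount's fstab lookup rather than by the kernel.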

    Read the article

  • Outlook Anywhere inconsistencies with authentication methods

    - by gravyface
    So I've read this question and attempted just about every other workaround I've found online. The problem seems completely illogical to me, anyways: SBS 2011, vanilla install; haven't touched anything in IIS or Exchange outside of what's been done through the checklist (brand new domain, completely new customer) except to import an existing wildcard certificate for *.example.com (which is valid; Remote Web Workplace and Outlook Web Access work fine).

    On the two test machines and one production machine running a mixture of Windows XP Pro, Windows 7 and Outlook 2003 through to 2010, I've had no problem saving the password after configuring Outlook Anywhere using the wrong authentication method. I repeat, I have had no issues using the wrong authentication method on these test machines; the password saves the first time, no problem, and I can verify it exists in the credentials manager (Start > Run > control userpasswords2). Close Outlook, reboot, go make a sammie, come back, credentials are still saved. When I say wrong, it's because I was choosing NTLM and Exchange (under Exchange Console > Server Configuration > Client Access) was set by default to use Basic.

    On two completely different machines set up by a co-worker, they had (under my guidance) used NTLM as well... except that frustratingly, Outlook would always ask for a password. One machine was Windows XP with Outlook 2010, the other was Windows 7 with Outlook 2003. When these two machines were set to use Basic -- the correct settings -- the option to save was there and now works without issue.

    Puzzled by how my machines could possibly work with the wrong authentication, I then went into one of them and changed the authentication method to Basic. Now here's where it gets a little crazy: if I go under Outlook and change the authentication to use the correct setting (Basic) it fails to save the password and Outlook prompts every time (without a "remember me" checkbox). I have not had a chance to change it to Basic on the other two machines to see if this is just a fluke or not, but something just isn't right here.

    My two hunches are either a missing/installed KB update or perhaps a local security policy. I should add that none of the 5 test machines in the equation here have ever been joined to the domain.

    Read the article

  • kernel panic after LVM setup

    - by Manuel Sopena Ballesteros
    I broke my webserver... My setup is:

        VMWare ESXi environment
        CPanel installed
        CentOS release 6.5 (Final)
        4 CPUs
        2G RAM
        2x VM disks 100G each
        LVM system

    This was my previous storage setup (the server was working fine at this time):

        # df -h
        Filesystem                     Size  Used Avail Use% Mounted on
        /dev/mapper/vg_test01-lv_root   95G  1.4G   88G   2% /
        tmpfs                          939M     0  939M   0% /dev/shm
        /dev/sdb1                       99G  188M   94G   1% /tmp
        /dev/sda1                      485M   54M  407M  12% /boot

    My web developer asked me to merge the /tmp and / disks, so this is what I did:

        1. Delete the /dev/sdb1 partition using fdisk
        2. Create a new partition as LVM on /dev/sdb1 using fdisk
        3. Create a new physical volume -- pvcreate /dev/sdb1
        4. Extend the volume group -- vgextend vg_test01 /dev/sdb1
        5. Extend the logical volume -- lvextend -l +100%FREE /dev/vg_test01/lv_root
        6. Resize the filesystem -- resize2fs /dev/vg_test01/lv_root

    This is the new configuration:

        # df -h
        Filesystem                     Size  Used Avail Use% Mounted on
        /dev/mapper/vg_test01-lv_root  213G  105G   97G  52% /
        tmpfs                          939M     0  939M   0% /dev/shm
        /dev/sda1                      485M   54M  407M  12% /boot
        /usr/tmpDSK                    4.0G  145M  3.6G   4% /tmp

    Since I have the new settings, my web server has been throwing kernel panics quite often (around every 2 days). The message says:

        INFO: task <taskName>:<pid> blocked for more than 120 seconds.

    The list of affected processes that I can see from the console is:

        mysqld
        queueprocd
        httpd
        suphp
        vmtoolsd
        loop0
        auditd

    The only way I can fix this is resetting (cold reboot) the VM. I don't think it is a hardware issue, as sar is not showing any bottleneck:

        Linux 2.6.32-431.3.1.el6.x86_64 (test01)   08/22/2014   _x86_64_   (4 CPU)
        12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
        12:10:01 AM     all     26.86      0.01      0.98      0.57      0.00     71.57
        12:20:01 AM     all      1.78      0.02      1.03      0.08      0.00     97.09
        12:30:01 AM     all     26.34      0.02      0.85      0.05      0.00     72.74
        12:40:01 AM     all     27.12      0.01      1.11      1.22      0.00     70.54
        12:50:01 AM     all      1.59      0.02      0.94      0.13      0.00     97.32
        01:00:01 AM     all     26.10      0.01      0.77      0.04      0.00     73.07
        01:10:01 AM     all     27.51      0.01      1.16      0.14      0.00     71.18
        01:20:01 AM     all      1.80      0.07      1.06      0.08      0.00     96.99
        01:30:01 AM     all     26.19      0.01      0.78      0.05      0.00     72.96
        01:40:01 AM     all     26.62      0.02      0.87      0.05      0.00     72.45
        01:50:02 AM     all      1.35      0.01      0.87      0.02      0.00     97.75
        02:00:01 AM     all     26.11      0.02      0.69      0.02      0.00     73.17
        02:10:01 AM     all     26.73      0.02      0.89      0.14      0.00     72.21
        02:20:01 AM     all      1.45      0.01      0.92      0.04      0.00     97.58
        02:30:01 AM     all     26.59      0.01      1.06      0.03      0.00     72.31
        02:40:01 AM     all     26.27      0.01      0.72      0.05      0.00     72.95
        02:50:01 AM     all      0.86      0.01      0.50      0.09      0.00     98.53
        03:00:01 AM     all     25.61      0.02      0.39      0.03      0.00     73.96
        03:10:01 AM     all     26.30      0.08      0.66      0.14      0.00     72.82
        03:20:01 AM     all      0.81      0.01      0.51      0.04      0.00     98.63
        03:30:02 AM     all     26.15      0.02      0.53      0.07      0.00     73.24
        03:40:01 AM     all     26.06      0.01      0.47      0.04      0.00     73.42
        03:50:01 AM     all      0.96      0.02      0.51      0.03      0.00     98.48
        Average:        all     17.69      0.02      0.79      0.14      0.00     81.36

        06:58:14 AM       LINUX RESTART

        07:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
        07:10:01 AM     all      1.04      0.02      0.57      0.95      0.00     97.42
        07:20:02 AM     all      0.66      0.01      0.39      0.06      0.00     98.87
        07:30:01 AM     all     25.71      0.01      0.45      0.16      0.00     73.67
        07:40:01 AM     all     25.88      0.01      0.35      0.08      0.00     73.68
        07:50:01 AM     all      1.13      0.02      0.55      0.11      0.00     98.19

    As you can see, the server became unresponsive at 03:50 AM and I had to reset the VM at 06:58 AM to bring the website up again. I would appreciate any help/assistance to fix this issue. Thank you very much.
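    For reference, when hung-task warnings start right after a volume-group extension like the one above, a first step that is sometimes suggested is to confirm that the new physical volume, the volume group and the resized filesystem all agree, and to look for I/O errors on the underlying virtual disks. A hedged sketch of read-only checks:

        # LVM's view of the new layout
        pvs
        vgs vg_test01
        lvs vg_test01

        # filesystem size vs. LV size, without touching anything
        lvdisplay /dev/vg_test01/lv_root
        dumpe2fs -h /dev/vg_test01/lv_root | grep -i "block count"

        # any I/O errors or SCSI timeouts on the vmdk-backed disks?
        dmesg | grep -iE "i/o error|sd[ab]|task abort"

    None of these change state; they just establish whether the problem is the LVM/filesystem change itself or the storage underneath it.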

    Read the article

  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box via SSH & Samba that is hidden/connected behind another one.

    Setup:

        A           switch      B           C
        |----|      |---|       |----|      |----|
        |eth0|------|   |-------|eth0|      |    |
        |----|      |---|       |eth1|------|eth1|
                                |----|      |----|

    Eg, SSH/Samba from A to C. How does one go about this? I was thinking that it cannot be done via IP alone? Or can it? Could B say "hi on eth0, if you're looking for 192.168.0.2, it's here on eth1"? Is this NAT? This is a large private network, so what about if another PC has that IP?! More likely it would be PAT? A would say "hi 192.168.109.15:1234", B would say "hi on eth0, traffic for port 1234 goes on here eth1". How could that be done? And would the SSH/Samba daemons see the correct packet header info and work?

    IP info:

        A - eth0 - 192.168.109.2
        B - eth0 - B1 = 192.168.109.15
                   B2 = 172.24.40.130
          - eth1 - 192.168.0.1
        C - eth1 - 192.168.0.2

    A, B & C are RHEL, but Windows computers can be connected to the switch. I configured the 192.168.0.* IPs; they are changeable.

    Update after response from Eddie

    A few problems (and machine B's IP is different!). From A,

        ssh 172.24.40.130

    works OK (can get to B2), but

        ssh 172.24.40.130 -p 2022 -vv

    times out with:

        OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 172.24.40.130 [172.24.40.130] port 2022.
        ...wait ages...
        debug1: connect to address 172.24.40.130 port 2022: Connection timed out
        ssh: connect to host 172.24.40.130 port 2022: Connection timed out

    From B2:

        $ service iptables status
        Table: filter
        Chain INPUT (policy ACCEPT)
        num  target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        num  target     prot opt source               destination
        1    ACCEPT     tcp  --  0.0.0.0/0            192.168.0.2          tcp dpt:22
        Chain OUTPUT (policy ACCEPT)
        num  target     prot opt source               destination
        Table: nat
        Chain PREROUTING (policy ACCEPT)
        num  target     prot opt source               destination
        1    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:2022 to:192.168.0.2:22
        Chain POSTROUTING (policy ACCEPT)
        num  target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        num  target     prot opt source               destination

    And ssh from B2 to C works fine:

        $ ssh 192.168.0.2

    Route info:

        $ route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.0.0     *               255.255.255.0   U     0      0        0 eth1
        172.24.40.0     *               255.255.255.0   U     0      0        0 eth0
        169.254.0.0     *               255.255.0.0     U     0      0        0 eth1
        default         172.24.40.1     0.0.0.0         UG    0      0        0 eth0

        $ ip route
        192.168.0.0/24 dev eth1  proto kernel  scope link  src 192.168.0.1
        172.24.40.0/24 dev eth0  proto kernel  scope link  src 172.24.40.130
        169.254.0.0/16 dev eth1  scope link
        default via 172.24.40.1 dev eth0

    So I just don't know why the port forward doesn't work from A to B2?
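    For reference, a DNAT rule on B only rewrites the destination; for the forwarded connection to actually work, B also has to have IP forwarding enabled and the replies from C have to come back through B. A hedged sketch of the pieces that are usually needed alongside the rules already shown, with addresses taken from the setup above:

        # on B: allow the kernel to forward packets between eth0 and eth1
        sysctl -w net.ipv4.ip_forward=1          # persist via /etc/sysctl.conf

        # DNAT: anything arriving for port 2022 goes to C:22 (already present above)
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2022 -j DNAT --to-destination 192.168.0.2:22

        # if C's default gateway is NOT B (192.168.0.1), masquerade the forwarded traffic
        # so C sends its replies back to B's eth1 address
        iptables -t nat -A POSTROUTING -o eth1 -p tcp -d 192.168.0.2 --dport 22 -j MASQUERADE

        # make sure the FORWARD chain permits it (the policy is ACCEPT above, so this is belt-and-braces)
        iptables -A FORWARD -p tcp -d 192.168.0.2 --dport 22 -j ACCEPT

    The ip_forward sysctl is the most common missing piece when a DNAT-only configuration times out exactly as described.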

    Read the article

  • ffmpeg hangs when creating a video

    - by FearUs
    I am trying to insert an audio channel into a video. First of all I extract the audio from the original video for processing:

        ffmpeg -i lotr.mp4 lotr.wav

    I then extract all frames for later processing too:

        ffmpeg -i lotr.mp4 -f image2 %d.jpg

    When done processing the audio and video streams, I try to create the video:

        ffmpeg -f image2 -r 15 -i %d.jpg new.mp4

    then merge it with the audio:

        ffmpeg -i new.mp4 -i lotr.wav -map 0:0 -map 1:0 new_w_audio.mp4

    Result: CPU activity = 100%, the process hangs and never returns. PS: I even tried it without modifying the images or the audio (so just trying to unpack then repack the video) but still the same output.

        FFmpeg version SVN-r26400, Copyright (c) 2000-2011 the FFmpeg developers
          built on Jan 18 2011 04:07:05 with gcc 4.4.2
          configuration: --enable-gpl --enable-version3 --enable-libgsm --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libschroedinger --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-libvpx --disable-decoder=libvpx --arch=x86 --enable-runtime-cpudetect --enable-libxvid --enable-libx264 --enable-librtmp --extra-libs='-lrtmp -lpolarssl -lws2_32 -lwinmm' --target-os=mingw32 --enable-avisynth --enable-w32threads --cross-prefix=i686-mingw32- --cc='ccache i686-mingw32-gcc' --enable-memalign-hack
          libavutil     50.36. 0 / 50.36. 0
          libavcore      0.16. 1 /  0.16. 1
          libavcodec    52.108. 0 / 52.108. 0
          libavformat   52.93. 0 / 52.93. 0
          libavdevice   52. 2. 3 / 52. 2. 3
          libavfilter    1.74. 0 /  1.74. 0
          libswscale     0.12. 0 /  0.12. 0
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'new.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf52.93.0
          Duration: 00:00:29.66, start: 0.000000, bitrate: 193 kb/s
            Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], 192 kb/s, 15 fps, 15 tbr, 15 tbn, 15 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
        [wav @ 01fed010] max_analyze_duration reached
        Input #1, wav, from 'lotr.wav':
          Duration: 00:00:29.90, bitrate: 176 kb/s
            Stream #1.0: Audio: pcm_s16le, 11025 Hz, 1 channels, s16, 176 kb/s
        File 'new_w_audio.mp4' already exists. Overwrite ? [y/N] y
        [buffer @ 01b03820] w:200 h:134 pixfmt:yuv420p
        Output #0, mp4, to 'new_w_audio.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf52.93.0
            Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], q=2-31, 200 kb/s, 15 tbn, 15 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
            Stream #0.1: Audio: aac, 11025 Hz, 1 channels, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #1.0 -> #0.1
        Press [q] to stop encoding
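    For reference, one hedged variant that is often suggested for this kind of mux is to cap the output at the shorter input and to state the codecs explicitly (stream copy for the video that was just encoded, plus an explicit audio encoder), since older builds can otherwise keep running past the end of the shorter stream. File names are as in the question; the flags are as understood for builds of that era and would need testing against this exact revision:

        ffmpeg -i new.mp4 -i lotr.wav \
               -map 0:0 -map 1:0 \
               -vcodec copy -acodec libfaac \
               -shortest new_w_audio.mp4

    If libfaac isn't compiled in (the configuration banner above doesn't list it), the built-in aac encoder is what the original command fell back to, and -acodec aac together with -strict experimental would be the equivalent spelling to try.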

    Read the article

  • TPROXY Not working with HAProxy, Ubuntu 14.04

    - by Nyxynyx
    I'm trying to use HAProxy as a fully transparent proxy using TPROXY in Ubuntu 14.04. HAProxy will be set up on the first server with eth1 111.111.250.250 and eth0 10.111.128.134. The single balanced server has eth1 and eth0 as well. eth1 is the public-facing network interface while eth0 is for the private network which both servers are in.

    Problem: I'm able to connect to the balanced server's port 1234 directly (via eth1) but am not able to reach the balanced server via the HAProxy port 1234 (which redirects to 1234 via eth0). Am I missing something in this configuration?

    On the HAProxy server

    The current kernel is:

        Linux extremehash-lb2 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    The kernel appears to have TPROXY support:

        # grep TPROXY /boot/config-3.13.0-24-generic
        CONFIG_NETFILTER_XT_TARGET_TPROXY=m

    HAProxy was compiled with TPROXY support:

        haproxy -vv
        HA-Proxy version 1.5.3 2014/07/25
        Copyright 2000-2014 Willy Tarreau <[email protected]>

        Build options :
          TARGET  = linux26
          CPU     = x86_64
          CC      = gcc
          CFLAGS  = -g -fno-strict-aliasing
          OPTIONS = USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 USE_STATIC_PCRE=1

        Default settings :
          maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

        Encrypted password support via crypt(3): yes
        Built without zlib support (USE_ZLIB not set)
        Compression algorithms supported : identity
        Built without OpenSSL support (USE_OPENSSL not set)
        Built with PCRE version : 8.31 2012-07-06
        PCRE library supports JIT : no (USE_PCRE_JIT not set)
        Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

        Available polling systems :
              epoll : pref=300,  test result OK
               poll : pref=200,  test result OK
             select : pref=150,  test result OK
        Total: 3 (3 usable), will use epoll.

    In /etc/haproxy/haproxy.cfg, I've configured a port to have the following options:

        listen test1235 :1234
          mode tcp
          option tcplog
          balance leastconn
          source 0.0.0.0 usesrc clientip
          server balanced1 10.111.163.76:1234 check inter 5s rise 2 fall 4 weight 4

    On the balanced server

    In /etc/network/interfaces I've set the gateway for eth0 to be the HAProxy box 10.111.128.134 and restarted networking.

        auto eth0 eth1
        iface eth0 inet static
          address 111.111.250.250
          netmask 255.255.224.0
          gateway 111.131.224.1
          dns-nameservers 8.8.4.4 8.8.8.8 209.244.0.3
        iface eth1 inet static
          address 10.111.163.76
          netmask 255.255.0.0
          gateway 10.111.128.134

    ip route gives:

        default via 111.111.224.1 dev eth0
        10.111.0.0/16 dev eth1  proto kernel  scope link  src 10.111.163.76
        111.111.224.0/19 dev eth0  proto kernel  scope link  src 111.111.250.250
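    For reference, the usesrc clientip (full transparent proxy) mode generally needs more than kernel and HAProxy support: the traffic returning from the backend has to be routed back into the proxy box and handed to the local socket, which is normally done with a socket-match/mark rule set plus a policy route. A hedged sketch of the commonly used rules, assuming they are not already present on the HAProxy server:

        # on the HAProxy server
        iptables -t mangle -N DIVERT
        iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
        iptables -t mangle -A DIVERT -j MARK --set-mark 1
        iptables -t mangle -A DIVERT -j ACCEPT

        ip rule add fwmark 1 lookup 100
        ip route add local 0.0.0.0/0 dev lo table 100

        # IP forwarding plus non-local binds are usually required as well
        sysctl -w net.ipv4.ip_forward=1
        sysctl -w net.ipv4.ip_nonlocal_bind=1

    These follow the stock transparent-proxy pattern from the HAProxy documentation; how to persist them across reboots (ufw, /etc/network/interfaces post-up hooks, etc.) is left out here.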

    Read the article

  • NTPD seems to delete all network interfaces

    - by Aurelin
    We have a couple of virtual interfaces configured on eth0 on a CentOS box, and every now and then they go down seemingly out of the blue. After going through the log files, I found out that apparently ntpd deletes all eth0 interfaces, and that dhclient automatically brings eth0 back up. The virtual interfaces, however, stay down, which causes several of our websites to be inaccessible. Can someone explain to me why ntpd deletes interfaces? Can / should that be turned off, or can / should I configure dhclient to bring the virtual interfaces back up automatically, too?

    EDIT// The log files that I should've posted:

        Nov 12 13:10:28 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6a825e97)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPOFFER from 96.126.108.78
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPACK from 96.126.108.78 (xid=0x24554092)
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #31 eth0, 50.116.50.97#123, interface stats: received=3255, sent=3256, dropped=0, active_time=1559394 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #32 eth0:0, 50.116.53.56#123, interface stats: received=3, sent=0, dropped=0, active_time=1559391 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #33 eth0:1, 66.175.211.192#123, interface stats: received=2, sent=0, dropped=0, active_time=1559389 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #34 eth0:2, 50.116.53.95#123, interface stats: received=3, sent=0, dropped=0, active_time=1559387 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #35 eth0:3, 97.107.132.32#123, interface stats: received=2, sent=0, dropped=0, active_time=1559385 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #36 eth0:4, 50.116.56.201#123, interface stats: received=2, sent=0, dropped=0, active_time=1559383 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #37 eth0:5, 66.175.212.121#123, interface stats: received=2, sent=0, dropped=0, active_time=1559381 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #38 eth0:6, 66.175.215.137#123, interface stats: received=2, sent=0, dropped=0, active_time=1559379 secs
        Nov 12 13:10:44 raptor NET[1573]: /sbin/dhclient-script : updated /etc/resolv.conf
        Nov 12 13:10:44 raptor dhclient[20048]: bound to 50.116.50.97 -- renewal in 32692 seconds.
        Nov 12 13:10:45 raptor ntpd[2109]: Listening on interface #39 eth0, 50.116.50.97#123 Enabled

    The eth0 config:

        DEVICE="eth0"
        ONBOOT="yes"
        BOOTPROTO="dhcp"
        IPV6INIT="no"
        IPADDR=50.116.50.97
        NETMASK=255.255.255.0
        GATEWAY=50.116.50.1

    And the virtual interfaces (I posted the first one only, they look the same for the most part):

        # Configuration for eth0:0
        DEVICE=eth0:0
        BOOTPROTO=none
        # This line ensures that the interface will be brought up during boot.
        ONBOOT=yes
        # eth0:0
        IPADDR=50.116.53.56
        NETMASK=255.255.255.0
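    For reference, the log suggests the sequence is: the DHCP lease renewal briefly reconfigures eth0, ntpd notices the addresses disappearing (the "Deleting interface" lines are ntpd reacting to the change, not causing the outage), and the eth0:N aliases are simply never re-added. One hedged way to have them re-applied automatically is a dhclient exit hook that re-runs ifup for the aliases after each renewal; the hook path and the list of alias names below are assumptions to verify against the distribution in use:

        # /etc/dhcp/dhclient-exit-hooks  (sourced by dhclient-script on RHEL/CentOS-style systems)
        # re-create the eth0:N aliases after a BOUND/RENEW/REBIND on eth0
        if [ "$interface" = "eth0" ]; then
            case "$reason" in
                BOUND|RENEW|REBIND|REBOOT)
                    for alias in eth0:0 eth0:1 eth0:2 eth0:3 eth0:4 eth0:5 eth0:6; do
                        /sbin/ifup "$alias" >/dev/null 2>&1
                    done
                    ;;
            esac
        fi

    $interface and $reason are variables dhclient-script exports to its hooks; the script needs to be readable (and on some systems executable) by root.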

    Read the article

  • Connectivity issues with dual NIC machine in EC2

    - by Matt Sieker
    I'm trying to get some servers set up in EC2 in a Virtual Private Cloud. To do this, I have two subnets:

        10.0.42.0/24 - Public subnet
        10.0.83.0/24 - Private subnet

    To bridge these two, I have a Funtoo instance with a pair of NICs:

        eth0 10.0.42.10
        eth1 10.0.83.10

    which has the following routing table:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.0.83.0       *               255.255.255.0   U     0      0        0 eth1
        10.0.83.0       *               255.255.255.0   U     203    0        0 eth1
        10.0.42.0       *               255.255.255.0   U     202    0        0 eth0
        loopback        *               255.0.0.0       U     0      0        0 lo
        default         10.0.42.1       0.0.0.0         UG    0      0        0 eth0
        default         10.0.42.1       0.0.0.0         UG    202    0        0 eth0

    An elastic IP is attached to the eth0 interface, and I can connect to it fine remotely. However, I cannot ping anything in the 10.0.83.0 subnet. For now iptables is not set up on the box, so there are no rules that would get in the way (eventually this will be managed by Shorewall, but I should get basic connectivity done first).

    Subnet details from the VPC interface:

        CIDR: 10.0.83.0/24
        Destination     Target
        10.0.0.0/16     local
        0.0.0.0/0       [ID of eth1 on NAT box]
        Network ACL: Default
        Inbound:
        Rule #  Port (Service)  Protocol  Source       Allow/Deny
        100     ALL             ALL       0.0.0.0/0    ALLOW
        *       ALL             ALL       0.0.0.0/0    DENY
        Outbound:
        Rule #  Port (Service)  Protocol  Destination  Allow/Deny
        100     ALL             ALL       0.0.0.0/0    ALLOW
        *       ALL             ALL       0.0.0.0/0    DENY

        CIDR: 10.0.83.0/24
        VPC:
        Destination     Target
        10.0.0.0/16     local
        0.0.0.0/0       [Internet Gateway ID]
        Network ACL: Default (replace)
        Inbound:
        Rule #  Port (Service)  Protocol  Source       Allow/Deny
        100     ALL             ALL       0.0.0.0/0    ALLOW
        *       ALL             ALL       0.0.0.0/0    DENY
        Outbound:
        Rule #  Port (Service)  Protocol  Destination  Allow/Deny
        100     ALL             ALL       0.0.0.0/0    ALLOW
        *       ALL             ALL       0.0.0.0/0    DENY

    I've been trying to work this out most of the evening, but I'm just stuck. I'm either missing something obvious, or am doing something very wrong. I would think I'd be able to ping from either interface on this box without issue. Hopefully some more pairs of eyes on this configuration will help.

    EDIT: I am an idiot. After I bothered to install nmap to run some more tests, I discovered I can see the ports and connect to them; pings are just being blocked.

    Read the article

  • setting up a basic mod_proxy virtual host

    - by SevenProxies
    I'm trying to set up a basic virtual host to proxy all requests to test.local to a WEBrick server I have running on 127.0.0.1:8080, while keeping all requests to localhost going to my static files in /var/www. I'm running Ubuntu 10.04. I have libapache2-mod-proxy-html installed and I have the module enabled with a2enmod proxy. I also have my virtual host enabled. However, whenever I go to test.local I always get a cryptic 500 server error, and all my logs are telling me is:

        [Thu Mar 03 01:43:10 2011] [warn] proxy: No protocol handler was valid for the URL /. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.

    Here's my virtual host:

        <VirtualHost test.local:80>
            LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
            ServerAdmin webmaster@localhost
            ServerName test.local

            ProxyPreserveHost On
            # prevents this folder from being proxied
            ProxyPass /static !

            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>

            ProxyPass / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    and here are my settings for mod_proxy:

        <IfModule mod_proxy.c>
            # turning ProxyRequests on and allowing proxying from all may allow
            # spammers to use your proxy to send email.
            ProxyRequests Off

            <Proxy *>
                # default settings
                #AddDefaultCharset off
                #Order deny,allow
                #Deny from all
                ##Allow from .example.com
                AddDefaultCharset off
                Order allow,deny
                Allow from all
            </Proxy>

            # Enable/disable the handling of HTTP/1.1 "Via:" headers.
            # ("Full" adds the server version; "Block" removes all outgoing Via: headers)
            # Set to one of: Off | On | Full | Block
            ProxyVia On
        </IfModule>

    Does anybody know what I'm doing wrong? Thanks
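    For reference, that particular warning ("No protocol handler was valid for the URL /") is usually reported when mod_proxy itself is loaded but the submodule that speaks the target protocol is not; for an http:// backend that submodule is mod_proxy_http. A hedged check/fix sketch for Ubuntu 10.04's Apache layout:

        # is the http submodule actually enabled?
        ls /etc/apache2/mods-enabled/ | grep proxy

        # if proxy.load is there but proxy_http.load is not:
        sudo a2enmod proxy_http
        sudo /etc/init.d/apache2 restart

    With the submodule enabled through mods-enabled, the LoadModule line inside the <VirtualHost> block also becomes unnecessary, since the module is already loaded globally.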

    Read the article

  • Apache2 & .htaccess : Apache ignoring AccessFile

    - by Elyx0
    Hi there, here is my server configuration: Debian 32 bits / PHP 5 / Apache. Server version: Apache/2.2.3 - Server built: Mar 22 2008 09:29:10

    The AccessFileName directives:

    grep -ni AccessFileName *
    apache2.conf:134:AccessFileName .htaccess
    apache2.conf:667:AccessFileName .httpdoverride

    All the AllowOverride statements in my apache2/ folder:

    mods-available/userdir.conf:6:  AllowOverride Indexes AuthConfig Limit
    mods-available/userdir.conf:16: AllowOverride FileInfo AuthConfig Limit
    mods-enabled/userdir.conf:6:    AllowOverride Indexes AuthConfig Limit
    mods-enabled/userdir.conf:16:   AllowOverride FileInfo AuthConfig Limit
    sites-enabled/default:8:        AllowOverride All
    sites-enabled/default:14:       AllowOverride All
    sites-enabled/default:19:       AllowOverride All
    sites-enabled/default:24:       AllowOverride All
    sites-enabled/default:42:       AllowOverride All

    The sites-enabled/default file:

    <VirtualHost *>
        ServerAdmin [email protected]
        ServerName mysite.com
        ServerAlias mysite.com
        DocumentRoot /var/www/mysite.com/
        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order Deny,Allow
            Deny from all
        </Directory>
        <Directory /var/www/mysite.com/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        <Directory /var/www/mysite.com/test/>
            AllowOverride All
        </Directory>

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride All
            Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>

        ErrorLog /var/log/apache2/error.log

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        CustomLog /var/log/apache2/access.log combined
        ServerSignature Off

        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
    </VirtualHost>

    If I change any "Allow from all" to "Deny from all", it works wherever I put it. I've got one .htaccess at /mysite.com/.htaccess and one at /mysite.com/test/.htaccess with:

    Order Deny,Allow
    Deny from all

    Neither of them works; I can still see my website. I've got mod_rewrite enabled, but I don't think it does anything here. I've tried almost everything :/ It works in my local environment (MAMP) but fails on my Debian server.
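
    Since flipping Allow/Deny in sites-enabled/default does take effect, Apache is clearly reading that vhost; when per-directory files are ignored despite AllowOverride All, the usual questions are whether the request is really served from /var/www/mysite.com (apache2ctl -S shows which vhost wins) and whether the .htaccess files actually exist and are readable under that DocumentRoot. A small sketch along those lines (paths taken from the vhost above; the check itself is only a suggestion):

    #!/usr/bin/env python3
    """Sanity-check that the .htaccess files sit under the served DocumentRoot."""
    import os

    DOCROOT = "/var/www/mysite.com"          # from the vhost above
    CANDIDATES = [
        os.path.join(DOCROOT, ".htaccess"),
        os.path.join(DOCROOT, "test", ".htaccess"),
    ]

    for path in CANDIDATES:
        if not os.path.isfile(path):
            print(f"missing: {path}")
        elif not os.access(path, os.R_OK):
            print(f"present but not readable: {path}")
        else:
            with open(path) as fh:
                print(f"ok: {path}\n{fh.read().strip()}\n")

    If both files are present and readable yet still ignored, comparing this DocumentRoot against the vhost reported by apache2ctl -S usually reveals that the request is being answered by a different virtual host.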

    Read the article

  • Trac problem: AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule"

    - by Janosch
    This problem has already been described multiple times on different mailing lists, but no solution has been published yet. My original setup is as follows (but in the meantime I have a simpler one on Windows 7):

    Ubuntu server with Apache 2.2 and Python 2.7
    virtual Python environment created with virtualenv
    installed babel, genshi and trac, in this order, using pip in the virtual environment

    Trac seems to run fine with tracd, but when visiting it through Apache, I get the following error on an official Trac error page:

    AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule"

    The stacktrace looks like this:

    Traceback (most recent call last):
      File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py", line 473, in _dispatch_request
        dispatcher.dispatch(req)
      File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py", line 154, in dispatch
        chosen_handler = self.default_handler
      File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/config.py", line 691, in __get__
        self.section, self.name))
    AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule". Please update the option trac.default_handler in trac.ini.

    I have already tried a lot to get down to the root of the problem; to me it looks as if all native Trac components refuse to load. When one explicitly imports these components in the WSGI handler, some of them start to work somehow. Since I suspected the virtual environment, I dropped it, manually copied all dependencies (babel, genshi, trac, ...) into one directory, and added this directory to sys.path in the WSGI handler. I get exactly the same error. Since this setup is now independent of the environment, one can easily try it out on any other machine (Windows or Linux) running Apache 2 and Python 2.7. On my Windows 7 machine, I got exactly the same problem. I zipped together the whole bundle; one can download it from http://www.xterity.de/tmp/trac-installation.zip .

    In the Apache configuration (Windows 7 machine) I use the following settings:

    Alias /trac/chrome/common "D:/workspace/trac-installation/trac-resources/common"
    Alias /trac/chrome/site "D:/workspace/trac-installation/trac-resources/site"
    WSGIScriptAlias /trac "D:/workspace/trac-installation/apache/handler.wsgi"

    <Directory "D:/workspace/trac-installation/trac-resources">
        Order allow,deny
        Allow from all
    </Directory>
    <Directory "D:/workspace/trac-installation">
        Order allow,deny
        Allow from all
    </Directory>
    <Location "/trac">
        Order allow,deny
        Allow from all
    </Location>

    And my handler.wsgi looks like this:

    import os
    import sys

    sys.path.append('D:/workspace/trac-installation/dependencies/')
    os.environ['TRAC_ENV'] = 'D:/workspace/trac-installation/trac-environments/Esp004'
    os.environ['PYTHON_EGG_CACHE'] = 'D:/workspace/trac-installation/eggs'

    import trac.web.main
    application = trac.web.main.dispatch_request

    Has anybody got an idea what the problem could be, or how to find out where it comes from?
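
    Errors like this usually mean Trac's built-in components are not being discovered when running under Apache, even though the trac package itself imports. A quick way to narrow it down is to replicate the handler's sys.path and environment setup in a plain script and see whether the WikiModule component can be imported at all. A rough sketch, under the assumption that WikiModule lives in trac.wiki.web_ui (the paths are copied from the handler above):

    #!/usr/bin/env python
    """Reproduce the WSGI handler's import environment outside Apache."""
    import os
    import sys

    # Same path/env setup as handler.wsgi (adjust to your layout).
    sys.path.append('D:/workspace/trac-installation/dependencies/')
    os.environ['TRAC_ENV'] = 'D:/workspace/trac-installation/trac-environments/Esp004'
    os.environ['PYTHON_EGG_CACHE'] = 'D:/workspace/trac-installation/eggs'

    import trac
    print("trac loaded from:", trac.__file__)

    try:
        # If this fails here as well, the path/egg setup is broken,
        # not Apache or mod_wsgi.
        from trac.wiki.web_ui import WikiModule
        print("WikiModule importable:", WikiModule)
    except ImportError as exc:
        print("WikiModule import failed:", exc)

    If the import works from the command line but not under Apache, the difference is usually the user Apache runs as (egg cache permissions) or mod_wsgi picking up a different Python than the one the dependencies were installed for.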

    Read the article

  • Authenticating Apache HTTPd against multiple LDAP servers with expired accounts

    - by Brian Bassett
    We're using mod_authnz_ldap and mod_authn_alias in Apache 2.2.9 (as shipped in Debian 5.0, 2.2.9-10+lenny7) to authenticate against multiple Active Directory domains for hosting a Subversion repository. Our current configuration is:

    # Turn up logging
    LogLevel debug

    # Define authentication providers
    <AuthnProviderAlias ldap alpha>
        AuthLDAPBindDN "CN=Subversion,OU=Service Accounts,O=Alpha"
        AuthLDAPBindPassword [[REDACTED]]
        AuthLDAPURL ldap://dc01.alpha:3268/?sAMAccountName?sub?
    </AuthnProviderAlias>
    <AuthnProviderAlias ldap beta>
        AuthLDAPBindDN "CN=LDAPAuth,OU=Service Accounts,O=Beta"
        AuthLDAPBindPassword [[REDACTED]]
        AuthLDAPURL ldap://ldap.beta:3268/?sAMAccountName?sub?
    </AuthnProviderAlias>

    # Subversion Repository
    <Location /svn>
        DAV svn
        SVNPath /opt/svn/repo
        AuthName "Subversion"
        AuthType Basic
        AuthBasicProvider alpha beta
        AuthzLDAPAuthoritative off
        AuthzSVNAccessFile /opt/svn/authz
        require valid-user
    </Location>

    We're encountering issues with users that have accounts in both Alpha and Beta, especially when their accounts in Alpha are expired (but still present; company policy is that the accounts live on for a minimum of 1 year). For example, when the user x (which has an expired account in Alpha and a valid account in Beta) tries to authenticate, the Apache error log reports the following:

    [Tue May 11 13:42:07 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14817] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub?
    [Tue May 11 13:42:08 2010] [warn] [client 10.1.1.104] [14817] auth_ldap authenticate: user x authentication failed; URI /svn/ [ldap_simple_bind_s() to check user credentials failed][Invalid credentials]
    [Tue May 11 13:42:08 2010] [error] [client 10.1.1.104] user x: authentication failure for "/svn/": Password Mismatch
    [Tue May 11 13:42:08 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/

    Attempting to authenticate as a non-existent user (nobodycool) results in the correct behavior of querying both LDAP servers:

    [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub?
    [Tue May 11 13:42:40 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object]
    [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://ldap.beta:3268/?sAMAccountName?sub?
    [Tue May 11 13:42:44 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object]
    [Tue May 11 13:42:44 2010] [error] [client 10.1.1.104] user nobodycool not found: /svn/
    [Tue May 11 13:42:44 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/

    How do I configure Apache to correctly query Beta if it encounters an expired account in Alpha?
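
    The two log excerpts show the difference: for the expired account, Alpha answers the bind with "Invalid credentials", so mod_authnz_ldap treats it as a password mismatch and never falls through to Beta; fall-through only happens when the user is not found at all. One way to confirm that diagnosis outside Apache is to attempt the same simple bind against each domain controller directly. A rough sketch using the third-party ldap3 package (server names are from the question; installing ldap3 and the test credentials are assumptions):

    #!/usr/bin/env python3
    """Try the same simple bind Apache performs, against each AD server."""
    from ldap3 import Server, Connection

    SERVERS = {
        "alpha": "ldap://dc01.alpha:3268",
        "beta":  "ldap://ldap.beta:3268",
    }

    # Hypothetical test credentials for user x; replace before running.
    USER = "x@alpha"        # UPN or DN appropriate to each domain
    PASSWORD = "secret"

    for name, uri in SERVERS.items():
        conn = Connection(Server(uri), user=USER, password=PASSWORD)
        ok = conn.bind()
        # result['description'] distinguishes invalidCredentials from other errors
        print(f"{name}: bind={'ok' if ok else 'failed'} ({conn.result.get('description')})")
        conn.unbind()

    Active Directory typically reports an expired account as a failed bind (invalid credentials with a sub-code), which is why Apache cannot tell it apart from a wrong password and stops at the first provider.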

    Read the article

  • NRPE Warning threshold must be a positive integer

    - by Frida
    OS: Ubuntu 12.10 Server 64 bits. I've installed Icinga, with ido2db, pnp4nagios and icinga-web (latest release, following the instructions given in the documentation, installation with apt, etc.). I am using icinga-web to monitor my hosts. For the moment, I have just my localhost, and all is perfect. I am trying to add a host and monitor it with NRPE (version 2.12):

    root@server:/etc/icinga# /usr/lib/nagios/plugins/check_nrpe -H client
    NRPE v2.12

    The configuration looks good. I've created a file in /etc/icinga/objects/client.cfg as below on the server:

    root@server:/etc/icinga/objects# cat client.cfg
    define host{
        use         generic-host   ; Name of host template to use
        host_name   client
        alias       client.toto
        address     xx.xx.xx.xx
    }

    # Service Definitions
    define service{
        use                  generic-service
        host_name            client
        service_description  CPU Load
        check_command        check_nrpe_1arg!check_load
    }

    define service{
        use                  generic-service
        host_name            client
        service_description  Number of Users
        check_command        check_nrpe_1arg!check_users
    }

    And added to my /etc/icinga/commands.cfg:

    # this command runs a program $ARG1$ with arguments $ARG2$
    define command {
        command_name  check_nrpe
        command_line  /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$
    }

    # this command runs a program $ARG1$ with no arguments
    define command {
        command_name  check_nrpe_1arg
        command_line  /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
    }

    But it does not work. These are the logs from the client:

    Dec  3 19:45:12 client nrpe[604]: Connection from xx.xx.xx.xx port 32641
    Dec  3 19:45:12 client nrpe[604]: Host address is in allowed_hosts
    Dec  3 19:45:12 client nrpe[604]: Handling the connection...
    Dec  3 19:45:12 client nrpe[604]: Host is asking for command 'check_users' to be run...
    Dec  3 19:45:12 client nrpe[604]: Running command: /usr/lib/nagios/plugins/check_users -w -c
    Dec  3 19:45:12 client nrpe[604]: Command completed with return code 3 and output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w <users> -c <users>
    Dec  3 19:45:12 client nrpe[604]: Return Code: 3, Output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w <users> -c <users>
    Dec  3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx port 32129
    Dec  3 19:44:49 client nrpe[32582]: Host address is in allowed_hosts
    Dec  3 19:44:49 client nrpe[32582]: Handling the connection...
    Dec  3 19:44:49 client nrpe[32582]: Host is asking for command 'check_load' to be run...
    Dec  3 19:44:49 client nrpe[32582]: Running command: /usr/lib/nagios/plugins/check_load -w -c
    Dec  3 19:44:49 client nrpe[32582]: Command completed with return code 3 and output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
    Dec  3 19:44:49 client nrpe[32582]: Return Code: 3, Output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
    Dec  3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx closed.

    Do you have any ideas?
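
    The client-side logs already point at the root cause: NRPE is invoking the plugins with bare -w and -c flags, so the command definitions in the client's nrpe.cfg are missing the threshold values. Running the plugins by hand with explicit thresholds confirms they behave once values are supplied. A small sketch (the threshold values are illustrative, not taken from the question):

    #!/usr/bin/env python3
    """Run the NRPE plugins locally with explicit thresholds to confirm the fix."""
    import subprocess

    PLUGINS = [
        # Example thresholds only; tune them for the monitored host.
        ["/usr/lib/nagios/plugins/check_users", "-w", "5", "-c", "10"],
        ["/usr/lib/nagios/plugins/check_load", "-w", "15,10,5", "-c", "30,25,20"],
    ]

    for cmd in PLUGINS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        # Nagios plugin exit codes: 0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN
        print(f"{' '.join(cmd)}\n  exit={result.returncode} output={result.stdout.strip()}")

    Once that works, putting the same thresholds into the client's nrpe.cfg command definitions (for example command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20) should make the server-side check_nrpe_1arg checks return real data instead of UNKNOWN.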

    Read the article

< Previous Page | 542 543 544 545 546 547 548 549 550 551 552 553  | Next Page >