Search Results

Search found 18126 results on 726 pages for 'core pro'.

  • How do I determine if my controller is in IDE or AHCI mode in Linux?

    - by philcolbourn
    I have an old MacBook Pro 4,1 (early 2008) - but I suspect an answer would apply to many MacBook Pros. It has an Intel IDE/SATA controller (ICH8M/ICH8M-E). I have installed a patched MBR that is supposed to put my controller into AHCI mode. It does this by setting some controller port value that I don't understand. This seems to work as I get this from lspci: 00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03) 00:1f.2 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03) Now most, perhaps all, sites that provide a solution (enabling AHCI) suggest that after a sleep/wake cycle that a controller will revert to IDE mode due to how Apple support Windows. They recommend disabling sleep. From author of patchedcode.bin I think Enabling AHCI for Windows on MacBooks NB: I do not have bootcamp installed and I do not have Windows installed. Is there a way to prove that my controller is in IDE or AHCI mode? Background Data Using patchedcode.bin MBR I get this in syslog: Jun 12 22:33:22 max kernel: [ 1.860955] ahci 0000:00:1f.2: version 3.0 Jun 12 22:33:22 max kernel: [ 1.861052] ahci 0000:00:1f.2: irq 45 for MSI/MSI-X Jun 12 22:33:22 max kernel: [ 1.861117] ahci 0000:00:1f.2: AHCI 0001.0100 32 slots 3 ports 1.5 Gbps 0x1 impl SATA mode Jun 12 22:33:22 max kernel: [ 1.861120] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ccc ems Jun 12 22:33:22 max kernel: [ 1.861130] ahci 0000:00:1f.2: setting latency timer to 64 Jun 12 22:33:22 max kernel: [ 1.880880] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) Jun 12 22:33:22 max kernel: [ 1.880983] scsi2 : ahci Jun 12 22:33:22 max kernel: [ 1.884552] scsi3 : ahci Jun 12 22:33:22 max kernel: [ 1.886932] scsi4 : ahci Jun 12 22:33:22 max kernel: [ 1.886998] ata3: SATA max UDMA/133 abar m2048@0xdb504000 port 0xdb504100 irq 45 Jun 12 22:33:22 max kernel: [ 1.887000] ata4: DUMMY Jun 12 22:33:22 max kernel: [ 1.887002] ata5: DUMMY Jun 12 22:33:22 max kernel: [ 2.204103] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 12 22:33:22 max kernel: [ 2.204656] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100 Jun 12 22:33:22 max kernel: [ 2.204662] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 31/32), AA Jun 12 22:33:22 max kernel: [ 2.205324] ata3.00: configured for UDMA/100 Jun 12 22:33:22 max kernel: [ 2.205554] scsi 2:0:0:0: Direct-Access ATA FUJITSU MHY2200B 0081 PQ: 0 ANSI: 5 Using my original MBR I get this from syslog: Jun 13 18:07:13 max kernel: [ 0.622861] ata_piix 0000:00:1f.1: version 2.13 Jun 13 18:07:13 max kernel: [ 0.622869] ata_piix 0000:00:1f.1: power state changed by ACPI to D0 Jun 13 18:07:13 max kernel: [ 0.622924] ata_piix 0000:00:1f.1: setting latency timer to 64 Jun 13 18:07:13 max kernel: [ 0.623339] scsi0 : ata_piix Jun 13 18:07:13 max kernel: [ 0.623730] scsi1 : ata_piix Jun 13 18:07:13 max kernel: [ 0.623765] ata1: PATA max UDMA/100 cmd 0x8108 ctl 0x811c bmdma 0x80e0 irq 21 Jun 13 18:07:13 max kernel: [ 0.623767] ata2: PATA max UDMA/100 cmd 0x8100 ctl 0x8118 bmdma 0x80e8 irq 21 Jun 13 18:07:13 max kernel: [ 0.623810] ata_piix 0000:00:1f.2: MAP [ Jun 13 18:07:13 max kernel: [ 0.623811] P0 -- -- -- ] Jun 13 18:07:13 max kernel: [ 0.623866] ata_piix 0000:00:1f.2: setting latency timer to 64 Jun 13 18:07:13 max kernel: [ 0.624241] scsi2 : ata_piix Jun 13 18:07:13 max kernel: [ 0.624558] scsi3 : ata_piix Jun 13 18:07:13 max kernel: [ 0.624862] ata3: SATA max UDMA/133 cmd 0x80f8 ctl 0x8114 bmdma 0x8020 irq 
18 Jun 13 18:07:13 max kernel: [ 0.624865] ata4: SATA max UDMA/133 cmd 0x80f0 ctl 0x8110 bmdma 0x8028 irq 18 Jun 13 18:07:13 max kernel: [ 1.208879] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100 Jun 13 18:07:13 max kernel: [ 1.208882] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 0/32) Jun 13 18:07:13 max kernel: [ 1.208961] ata1.01: ATAPI: MATSHITA DVD+/-RW UJ-867S, 1.00, max UDMA/33 Jun 13 18:07:13 max kernel: [ 1.216186] ata3.00: configured for UDMA/100 Jun 13 18:07:13 max kernel: [ 1.224396] ata1.01: configured for UDMA/33
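    A quick, generic way to confirm which mode the controller is actually running in is to ask lspci which kernel driver has bound to it; this is standard pciutils output and not specific to the MacBook patch:

        lspci -k -s 00:1f.2
        # "Kernel driver in use: ahci"     -> the controller is in AHCI mode
        # "Kernel driver in use: ata_piix" -> the controller is in legacy IDE mode

    The same distinction is already visible in the two syslog excerpts above: the ahci driver claiming 00:1f.2 in the first boot versus ata_piix in the second.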

  • Trying to use Digest Authentication for Folder Protection

    - by Jon Hazlett
    StackOverflow users suggested I try my question here. I'm using Server 2008 EE and IIS 7. I've got a site that I've migrated over from XP Pro using IIS 5. On the old system, I was using IIS Password to use simple .htaccess files to control a couple of folders that I didn't want to be publicly viewable. Now that I'm running a full-blown DC with a more powerful version of IIS, I decided it'd be a good idea to start using something slightly more sophisticated. After doing my research and trying to keep things as cheap as possible with a touch of extra security, I decided that Digest Authentication would be the best way to go. My issue is this: With Anon access disabled and Digest enabled, I am never prompted for credentials. when on the server, viewing domain[dot]com/example will simply show my 401.htm page without prompting me for credentials. when on a different network/computer, viewing domain[dot]com/example again shows my 401.htm without prompting for credentials. At the site level I only have Anon enabled. Every subfolder, unless I want it protected, has just Anon enabled. Only the folders I want protected have Anon disabled and Digest enabled. I have tried editing the bindings to see if that would spark any kind of change... www.domain.com, domain.com, and localhost have all been tried. There was never a change in behavior at any permutation (aside from the page not being found when I un-bound localhost to the site). I might have screwed up when I deleted the default site from IIS. I didn't think I'd actually need it for anything, but some of what I have read online is telling me otherwise now. As for Digest settings, I have it pointed to local.domain.com, which is the name assigned to my AD Domain. I'm guessing that's right, but honestly have no clue about what a realm actually is. Would it matter that I have an A record for local.domain.com pointing to my IP address? I had problems initially with an absolute link for 401.htm pages, but have since resolved that. Instead of D:\HTTP\401.htm I've used /401.htm and all is well. I used to get error 500's because it couldn't find the custom 401.htm file, but now it loads just fine. As for some data, I was getting entries like this from access logs: 2009-07-10 17:34:12 10.0.0.10 GET /example/ - 80 - [workip] Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+1.1.4322;+.NET+CLR+2.0.50727;+InfoPath.2) 401 2 5 132 But after correcting my 401.htm links now get logs like this: 2009-07-10 18:56:25 10.0.0.10 GET /example - 80 - [workip] Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US;+rv:1.9.0.11)+Gecko/2009060215+Firefox/3.0.11 200 0 0 146 I don't know if that means anything or not. I still don't get any credential challenges, regardless of where I try to sign in from ( my workstation, my server, my cellphone even ). The only thing that's seemed to work is viewing localhost and I donno what could be preventing authentication from finding it's way out of the server. Thanks for any help! Jon
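    One sanity check worth running from an elevated command prompt is to ask IIS what it thinks is configured for the protected folder; the site and folder names below are placeholders for the real ones, and the section paths are the standard IIS 7 authentication sections:

        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/example" /section:system.webServer/security/authentication/digestAuthentication
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/example" /section:system.webServer/security/authentication/anonymousAuthentication

    If digest shows enabled="false" at that path, the setting is being applied somewhere other than the folder actually being requested, which would explain why no challenge is ever sent.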

  • Graphite SQLite3 DatabaseError: attempt to write a readonly database

    - by Anadi Misra
    Running graphite under apache httpd, with slqite database, I have the correct folder permissions [root@liaan55 httpd]# ls -ltr /var/lib | grep graphite drwxr-xr-x. 2 apache apache 4096 Aug 23 19:36 graphite-web and [root@liaan55 httpd]# ls -ltr /var/lib/graphite-web/ total 68 -rw-r--r--. 1 apache apache 65536 Aug 23 19:46 graphite.db syncdb also seems to have gone fine [root@liaan55 httpd]# sudo -su apache bash-4.1$ whoami apache bash-4.1$ python /usr/lib/python2.6/site-packages/graphite/manage.py syncdb /usr/lib/python2.6/site-packages/graphite/settings.py:231: UserWarning: SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security warn('SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security') /usr/lib/python2.6/site-packages/django/conf/__init__.py:75: DeprecationWarning: The ADMIN_MEDIA_PREFIX setting has been removed; use STATIC_URL instead. "use STATIC_URL instead.", DeprecationWarning) /usr/lib/python2.6/site-packages/django/core/cache/__init__.py:82: DeprecationWarning: settings.CACHE_* is deprecated; use settings.CACHES instead. DeprecationWarning Creating tables ... Creating table account_profile Creating table account_variable Creating table account_view Creating table account_window Creating table account_mygraph Creating table dashboard_dashboard_owners Creating table dashboard_dashboard Creating table events_event Creating table auth_permission Creating table auth_group_permissions Creating table auth_group Creating table auth_user_user_permissions Creating table auth_user_groups Creating table auth_user Creating table django_session Creating table django_admin_log Creating table django_content_type Creating table tagging_tag Creating table tagging_taggeditem You just installed Django's auth system, which means you don't have any superusers defined. Would you like to create one now? (yes/no): yes Username (leave blank to use 'apache'): root E-mail address: [email protected] Password: Password (again): Superuser created successfully. Installing custom SQL ... Installing indexes ... Installed 0 object(s) from 0 fixture(s) bash-4.1$ exit and the local-settings.py file is as follows STORAGE_DIR = '/var/lib/graphite-web' INDEX_FILE = '/var/lib/graphite-web/index' DATABASES = { 'default': { 'NAME': '/var/lib/graphite-web/graphite.db', 'ENGINE': 'django.db.backends.sqlite3', 'USER': '', 'PASSWORD': '', 'HOST': '', 'PORT': '' } } I still get this error [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] File "/usr/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 344, in execute [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] return Database.Cursor.execute(self, query, params) [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] DatabaseError: attempt to write a readonly database not sure what is missing in this configuration
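    When the Unix permissions look correct but SQLite still reports a read-only database under Apache, SELinux is a common culprit on Red Hat style systems; a hedged way to test that theory (directory path taken from the question) is:

        getenforce
        ls -ldZ /var/lib/graphite-web /var/lib/graphite-web/graphite.db
        # If enforcing, label the directory writable for httpd rather than disabling SELinux:
        semanage fcontext -a -t httpd_sys_rw_content_t "/var/lib/graphite-web(/.*)?"
        restorecon -Rv /var/lib/graphite-web

    Note that SQLite also needs write access to the directory itself (it creates journal files next to graphite.db), so the directory, not just the .db file, has to be writable by the apache user.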

  • Configuring OpenLDAP as an Active Directory Proxy

    - by vadensumbra
    We try to set up an Active Directory server for company-wide authentication. Some of the servers that should authenticate against the AD are placed in a DMZ, so we thought of using a LDAP-server as a proxy, so that only 1 server in the DMZ has to connect to the LAN where the AD-server is placed). With some googling it was no problem to configure the slapd (see slapd.conf below) and it seemed to work when using the ldapsearch tool, so we tried to use it in apache2 htaccess to authenticate the user over the LDAP-proxy. And here comes the problem: We found out the username in the AD is stored in the attribute 'sAMAccountName' so we configured it in .htaccess (see below) but the login didn't work. In the syslog we found out that the filter for the ldapsearch was not (like it should be) '(&(objectClass=*)(sAMAccountName=authtest01))' but '(&(objectClass=*)(?=undefined))' which we found out is slapd's way to show that the attribute do not exists or the value is syntactically wrong for this attribute. We thought of a missing schema and found the microsoft.schema (and the .std / .ext ones of it) and tried to include them in the slapd.conf. Which does not work. We found no working schemata so we just picked out the part about the sAMAccountName and build a microsoft.minimal.schema (see below) that we included. Now we get the more precise log in the syslog: Jun 16 13:32:04 breauthsrv01 slapd[21229]: get_ava: illegal value for attributeType sAMAccountName Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH base="ou=oraise,dc=int,dc=oraise,dc=de" scope=2 deref=3 filter="(&(objectClass=\*)(?sAMAccountName=authtest01))" Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH attr=sAMAccountName Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text= Using our Apache htaccess directly with the AD via LDAP works though. Anyone got a working setup? Thanks for any help in advance: slapd.conf: allow bind_v2 include /etc/ldap/schema/core.schema ... include /etc/ldap/schema/microsoft.minimal.schema ... backend ldap database ldap suffix "ou=xxx,dc=int,dc=xxx,dc=de" uri "ldap://80.156.177.161:389" acl-bind bindmethod=simple binddn="CN=authtest01,ou=GPO-Test,ou=xxx,dc=int,dc=xxx,dc=de" credentials=xxxxx .htaccess: AuthBasicProvider ldap AuthType basic AuthName "AuthTest" AuthLDAPURL "ldap://breauthsrv01.xxx.de:389/OU=xxx,DC=int,DC=xxx,DC=de?sAMAccountName?sub" AuthzLDAPAuthoritative On AuthLDAPGroupAttribute member AuthLDAPBindDN CN=authtest02,OU=GPO-Test,OU=xxx,DC=int,DC=xxx,DC=de AuthLDAPBindPassword test123 Require valid-user microsoft.minimal.schema: attributetype ( 1.2.840.113556.1.4.221 NAME 'sAMAccountName' SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )
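    The "(?sAMAccountName=...)" form in the filter log usually means slapd cannot evaluate an equality match on that attribute, which happens when the attributetype is declared without an EQUALITY rule. A minimal definition that adds the matching rules (an assumption based on the standard directoryString syntax, not copied from Microsoft's schema files) would look like:

        attributetype ( 1.2.840.113556.1.4.221
            NAME 'sAMAccountName'
            EQUALITY caseIgnoreMatch
            SUBSTR caseIgnoreSubstringsMatch
            SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
            SINGLE-VALUE )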

  • How can I troubleshoot a "Hardware Malfunction" blue screen?

    - by AaronSieb
    My computer has suddenly started crashing to a blue screen with the following text: hardware malfunction call your hardware vendor for support *the system has halted* The crash occurs randomly during normal use. I have thus far always been able to reproduce it by transferring the contents of a large folder... But I'm not sure if this is caused by the file transfer, or simply because the transfer takes long enough for something else to trigger it.
    A bit about my hardware: I have a dual-core Intel CPU and an Asus motherboard. The video card is by nVidia and connects via PCIe. My hard drives are in pairs and connect via SATA to a RAID controller on the motherboard. They are configured as RAID0.
    What I've tried so far: There is nothing in the Windows Event Log. WhoCrashed was unable to find any crash records. ScanDisk runs to completion (it launches prior to Windows load) and reports no errors. MemTest reports no errors (to 200% coverage). System temperatures are in the range of 40 to 50 degrees Celsius, with video card temperatures in the range of 60 to 80 degrees Celsius. I have stripped the system down to a minimal configuration (hard drive, video card, one memory module, motherboard, CPU, power supply). The problem still occurs. However, this has allowed me to rule out a few components: It is not the video card, because the problem still occurred after replacing the video card with another one I had on hand. It is not the hard drive or anything software related, because the problem occurred after a fresh installation of Windows on a replacement hard drive. It is not the hard drive cables, because I replaced those with new ones and still had the problem. It is not the power supply, because the problem still occurred after replacing the power supply with another one I had on hand. It is probably not the memory, because I've tried three different memory modules in three different memory slots and was still able to replicate the issue.
    Is there anything I can do to confirm what's causing the issue? At the moment it seems as though it must be either the motherboard or the CPU, but those are both difficult components to replace... In addition, both components are relatively new (two to three years old). I will gladly edit in any additional information I can get my hands on, and/or focus the question as I find more details...

  • Django rewrites URL as IP address in browser - why?

    - by Mitch
    I am using django, nginx and apache. When I access my site with a URL (e.g., http://www.foo.com/) what appears in my browser address is the IP address with admin appended (e.g., http://123.45.67.890/admin/). When I access the site by IP, it is redirected as expected by django's urls.py (e.g., http://123.45.67.890/ - http://123.45.67.890/accounts/login/?next=/) I would like to have the name URL act the same way as the IP. That is, if the URL goes to a new view, the host in the browser address should remain the same and not change to the IP address. Where should I be looking to fix this? My files: ; cpa.com (apache) NameVirtualHost *:8080 <VirtualHost *:8080> AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript application/x-javascript BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4\.0[678] no-gzip BrowserMatch \bMSIE !no-gzip !gzip-only-text/htm DocumentRoot /path/to/root ServerName www.foo.com <IfModule mod_rpaf.c> RPAFenable On RPAFsethostname On RPAFproxy_ips 127.0.0.1 </IfModule> <Directory /public/static> AllowOverride None AddHandler mod_python .py PythonHandler mod_python.publisher </Directory> Alias / /dj <Location /> SetHandler python-program PythonPath "['/usr/lib/python2.5/site-packages/django', '/usr/lib/python2.5/site-packages/django/forms'] + sys.path" PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE dj.settings PythonDebug On </Location> </VirtualHost> ; ; ports.conf (apache) Listen 127.0.0.1:8080 ; ; cpa.conf (nginx) server { listen 80; server_name www.foo.com; location /static { root /var/public; index index.html; } location /cpa/js { root /var/public/js; } location /cpa/css { root /var/public/css; } location /djmedia { alias "/usr/lib/python2.5/site-packages/django/contrib/admin/media/"; } location / { include /etc/nginx/proxy.conf; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8080; } } ; ; proxy.conf (nginx) proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 500; proxy_buffers 32 4k;
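    One thing to rule out is whether redirects generated by the backend are being passed through with the backend's own address in them; note that the included proxy.conf currently sets proxy_redirect off, which leaves backend-issued Location headers untouched. A hedged variant of the proxy block (the directives are standard nginx, the values are guesses for this setup):

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # rewrite redirects the backend issues against its own address
            proxy_redirect http://127.0.0.1:8080/ http://$host/;
            proxy_pass http://127.0.0.1:8080;
        }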

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for some opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd and I have a moderate tolerance for setup of the solution. I would prefer if maintenance of the system is somewhat low once it is setup, but I am willing to accept some tradeoffs. Existing equipment: Router - Netgear WNDR3700 (gigabit) Router - DLink Gamerlounge DGL-4300 (gigabit) Switch - 16 port Trendnet green switch (gigabit) Switch - 5 port Trendnet green (gigabit) Computer - i7-950 office computer (gigabit ethernet) Computer - Q6600 quad core media center, hooked up to TV, records shows (gigabit ethernet) Computer - Acer 1810T ultraportable laptop (gigabit and N ethernet) NAS - Intel SS4200-E (gigabit) External hard drive - 2TB WD Green drive (esata) All kinds of miscellaneous network connected TV, Bluray, Verizon network extender, HDhomerun TV tuners, etc. Requirements: -Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB. (Including offsite backup) -Central location for all user's files, while also keeping them secure from each other. -Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually) -Possibility to host files to friends and family easily Nice to have: -Backup of terabytes of movie backups Intriguing possibilities: -Capability to have users' Windows desktops and files look the same from all network computers I am not sure if the new Windows Home Server 2011 would fit into this well, if I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility that I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, but important files would also be backed up off site via carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files. I can handle some degree of risk with the movies and other files. I'm looking for critiques on this idea as well as other possibilities. To summarize, my goal is high functionality, media capable, and robust backup of irreplaceable files.

  • ActiveMQ Pure Master / Slave - Out of sync

    - by pico
    What i have : 1 master broker and 1 slave broker both in ActiveMQ 5.4.0 What i use : waitForSlave on master side and failover uri on slave side (in the master connector URI) What i want to do : I want to wait some delay (like 5 seconds) in case of a tiny network failures between master and slave before starting slave transpôrt connectors So i put this in slave config : <broker xmlns="http://activemq.apache.org/schema/core" brokerName="slave" dataDirectory="${activemq.base}/data" useJmx="true" persistent="true" populateJMSXUserID="true" masterConnectorURI="failover://(tcp://master:61616)?initialReconnectDelay=1000&amp;maxReconnectDelay=30000" shutdownOnMasterFailure="false" advisorySupport="false"> It seems to work but after a network hang between master and slave, the slave reconnect successfully and then the master logs a lot of : 2010-10-18 17:08:44,421 | ERROR | Slave Failed | org.apache.activemq.broker.ft.MasterBroker | ActiveMQ Task java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1040-634226732611718750-0:0 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:561) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:76) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) On the slave side everything is fine. So after that, i've tried to stop the master to see if the slave is capable of turning master after these "network hangs". 
The master took long time to shutdown (10 seconds) and then some error message appears in slave logs : 2010-10-18 17:09:32,915 | WARN | Async error occurred: java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 | org.apache.activemq.broker.TransportConnection.Service | VMTransport: vm://slave#5 java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:600) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) Are they any ways to keep my kaha stores (they are individual stores) synchronised? The main problem is that the slave never turn master after a master failure, it stay block on this message : 2010-10-18 17:09:33,681 | WARN | Transport (master/172.21.60.61:61616) failed to tcp://master:61616 , attempting to automatically reconnect due to: java.net.SocketException: Software caused connection abort: socket write error | org.apache.activemq.transport.failover.FailoverTransport | ActiveMQ Transport: tcp://master/172.21.60.61:61616 I'm totally stuck with these syncs problems, any help welcome! Regards

  • Can't install new database in OpenLDAP 2.4 with BDB on Debian

    - by Timothy High
    I'm trying to install an openldap server (slapd) on a Debian EC2 instance. I have followed all the instructions I can find, and am using the recommended slapd-config approach to configuration. It all seems to be just fine, except that for some reason it can't create my new database. ldap.conf.bak (renamed to ensure it's not being used): ########## # Basics # ########## include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema pidfile /var/run/slapd/slapd.pid argsfile /var/run/slapd/slapd.args loglevel none modulepath /usr/lib/ldap # modulepath /usr/local/libexec/openldap moduleload back_bdb.la database config #rootdn "cn=admin,cn=config" rootpw secret database bdb suffix "dc=example,dc=com" rootdn "cn=manager,dc=example,dc=com" rootpw secret directory /usr/local/var/openldap-data ######## # ACLs # ######## access to attrs=userPassword by anonymous auth by self write by * none access to * by self write by * none When I run slaptest on it, it complains that it couldn't find the id2entry.bdb file: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d bdb_db_open: database "dc=example,dc=com": db_open(/usr/local/var/openldap-data/id2entry.bdb) failed: No such file or directory (2). backend_startup_one (type=bdb, suffix="dc=example,dc=com"): bi_db_open failed! (2) slap_startup failed (test would succeed using the -u switch) Using the -u switch it works, of course. But that merely creates the configuration. It doesn't resolve the underlying problem: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d -u config file testing succeeded Looking in the database directory, the basic files are there (with right ownership, after a manual chown), but the dbd file wasn't created: root@server:/etc/ldap# ls -al /usr/local/var/openldap-data total 4328 drwxr-sr-x 2 openldap openldap 4096 Mar 1 15:23 . drwxr-sr-x 4 root staff 4096 Mar 1 13:50 .. -rw-r--r-- 1 openldap openldap 3080 Mar 1 14:35 DB_CONFIG -rw------- 1 openldap openldap 24576 Mar 1 15:23 __db.001 -rw------- 1 openldap openldap 843776 Mar 1 15:23 __db.002 -rw------- 1 openldap openldap 2629632 Mar 1 15:23 __db.003 -rw------- 1 openldap openldap 655360 Mar 1 14:35 __db.004 -rw------- 1 openldap openldap 4431872 Mar 1 15:23 __db.005 -rw------- 1 openldap openldap 32768 Mar 1 15:23 __db.006 -rw-r--r-- 1 openldap openldap 2048 Mar 1 15:23 alock (note that, because I'm doing this as root, I had to also change ownership of some of the files created by slaptest) Finally, I can start the slapd service, but it dies in the attempt (text from syslog): Mar 1 15:06:23 server slapd[21160]: @(#) $OpenLDAP: slapd 2.4.23 (Jun 15 2011 13:31:57) $#012#011@incagijs:/home/thijs/debian/p-u/openldap-2.4.23/debian/build/servers/slapd Mar 1 15:06:23 server slapd[21160]: config error processing olcDatabase={1}bdb,cn=config: Mar 1 15:06:23 server slapd[21160]: slapd stopped. Mar 1 15:06:23 server slapd[21160]: connections_destroy: nothing to destroy. I manually checked the olcDatabase={1}bdb file, and it looks fine to my amateur eye. All my specific configs are there. Unfortunately, syslog isn't reporting a specific error in this case (if it were a file permission error, it would say). I've tried uninstalling and reinstalling slapd, changing permissions, Googling my wits out, but I'm tapped out. Any OpenLDAP genius out there would be greatly appreciated!
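    If the data directory has never held a database there is nothing for slaptest to open, which matches the missing id2entry.bdb; one common way to create the initial BDB files is to load the suffix entry offline with slapadd, run as the user slapd runs as (the DN and file contents below are illustrative):

        # /tmp/base.ldif -- a minimal suffix entry (illustrative):
        #   dn: dc=example,dc=com
        #   objectClass: top
        #   objectClass: dcObject
        #   objectClass: organization
        #   o: Example
        #   dc: example
        sudo -u openldap slapadd -F /etc/ldap/slapd.d -b "dc=example,dc=com" -l /tmp/base.ldif
        ls -l /usr/local/var/openldap-data/    # id2entry.bdb should now exist

    On Debian it is also worth double-checking that the directory in the config (/usr/local/var/openldap-data) is really the one the packaged slapd is pointed at, since the stock package defaults to /var/lib/ldap.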

  • Nginx and PHP-FPM running out of connections

    - by E3pO
    I keep running into errors like these, [02-Jun-2012 01:52:04] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 19 idle, and 49 total children [02-Jun-2012 01:52:05] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 19 idle, and 50 total children [02-Jun-2012 01:52:06] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 19 idle, and 51 total children [02-Jun-2012 03:10:51] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 18 idle, and 91 total children I changed my settings for php-fpm to these, pm.max_children = 150 (It was at 100, i got a max_children reached and upped to 150) pm.start_servers = 75 pm.min_spare_servers = 20 pm.max_spare_servers = 150 Resulting in [02-Jun-2012 01:39:19] WARNING: [pool www] server reached pm.max_children setting (150), consider raising it I've just launched a new website that is getting a conciderable amount of traffic on it. This traffic is legitimate and users are getting 504 gateway timeouts when the limit is reached. I have limited connections to my server with IPTABLES and I'm running fail2ban and keeping track of nginx access logs. The traffic is all legitimate, i'm just running out of room for users. I'm currently running on a dual core box with ubuntu 64bit. free total used free shared buffers cached Mem: 6114284 5726984 387300 0 141612 4985384 -/+ buffers/cache: 599988 5514296 Swap: 524284 5804 518480 My php.ini max_input_time = 60 My nginx config is worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 19000; # multi_accept on; } worker_rlimit_nofile 20000; #each connection needs a filehandle (or 2 if you are proxying) client_max_body_size 30M; client_body_timeout 10; client_header_timeout 10; keepalive_timeout 5 5; send_timeout 10; location ~ \.php$ { try_files $uri /er/error.php; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; fastcgi_intercept_errors on; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } What can I do to stop running out of connections? Why does this keep occurring? I'm monitoring my traffic on Google Analytics realtime and when the user count gets above about 120 my php-fpm.log is full of these warnings..
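    With the pool hitting max_children under real traffic, the usual levers are the process-manager limits and the socket backlog rather than nginx; a hedged pool fragment (the directive names are standard php-fpm, the numbers are starting points to size against available RAM, not recommendations for this exact box):

        ; in the [www] pool configuration
        pm = dynamic
        pm.max_children = 150        ; roughly: free RAM / average per-worker RSS
        pm.start_servers = 40
        pm.min_spare_servers = 20
        pm.max_spare_servers = 60
        pm.max_requests = 500        ; recycle workers to contain memory growth
        listen.backlog = 1024        ; deeper accept queue on /tmp/php5-fpm.sock

    If the workers themselves are slow (long database calls, external APIs), raising the limits only delays the 504s, so it is worth checking request duration in the php-fpm slow log as well.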

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C Michael Pilato of the core subversion team posted a mail to the subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible which is why I'm posting this here - to elicit some suggestions and contributions from you, the administrators of Subversion. Any comments are welcome, and I shall feedback a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on StackOverflow to get feedback from the programmer/user side of things too. So, without further ado: Vision The first thing on his "vision statement" is: Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent. There's no need to suggest distributed features for subversion. If you want a DVCS, there should be no ill-feeling if you migrate to Git, Mercurial or Bazaar. As he says, its pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targetting. The vision for Subversion is: Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations. Roadmap Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are: Obliterate Shelve/Checkpoint Repository-dictated Configuration Rename Tracking Improved Merging Improved Tree Conflict Handling Enterprise Authentication Mechanisms Forward History Searching Log Message Templates Repository-dictated Configuration If anyone has suggestions to add, or comments on these, the subversion community would welcome all of them. Community And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server I can create VM's (Virtual Machine) and network interfaces fine but I cannot seem to add more than three network interfaces. As soon as I have a VM with four network interfaces it gets stuck on startup at the starting SeaBIOS page with this message: Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer) So far I've verified this with two VM's, a Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VM's doesn't seem to matter. I'm trying to have one bridged interface and three private networks using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces? Edit: Additionally the KVM thread responsible for the specific VM hangs using ~100% CPU (i.e. one core). Here's the command for the process that is hanging: /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name vyatta -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait -mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 -net tap,fd=97,vlan=0,name=hostnet0 -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 -net tap,fd=98,vlan=1,name=hostnet1 -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 -net tap,fd=99,vlan=2,name=hostnet2 -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 -net tap,fd=101,vlan=3,name=hostnet3 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode): 14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed I've also tried disabling app armor but that doesn't seem to make a difference.
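    Since the hanging command shows four emulated rtl8139 devices, one low-risk experiment is to change the NIC model on the guest; editing the domain XML via virsh is the stable way to do that (model availability depends on the guest having the matching driver, e.g. virtio support in Vyatta):

        virsh edit vyatta
        # for each <interface> element, try a different model, for example:
        #   <interface type='network'>
        #     <source network='default'/>
        #     <model type='virtio'/>     <!-- or type='e1000' -->
        #   </interface>

    If one model boots with four interfaces and another does not, that narrows the problem to the emulated device rather than to KVM or SeaBIOS itself.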

  • How to publish an ASP.NET MVC application to a free host

    - by Lirik
    Hi, I'm using a free web host (0000free) which supports ASP.NET MVC, but it uses Mono. This is the first time I deploy an MVC application, so I'm a little confused as to where I need to deploy it. I have Visual Studio 2010 and I used its Publish Feature (i.e. right click on the project name and click publish) and I tried several things: Publish method: FTP to the root folder. Publish method: FTP to the publich_html folder. Publish method: File System to the root folder. Publish method: File System to the publich_html folder. Publish method: File System to a local directory on my computer and then FTP to root and also tried the public_html folder. I went into the cPanel (control panel) to try and see if ASP.NET has to be added/enabled for my web site, but I didn't see anything there. I can't browse to Index.aspx nor can I redirect to it from index.html (as suggested from other posts on the host forum), right now I have a link from index.html to Index.aspx but it's not working either (see http://www.mydevarmy.com) I've also tried renaming Index.aspx to Default.aspx, but that doesn't work either. The search utility of the forum of the host is somewhat weak, so I use google to search their forum: http://www.google.com/search?q=publish+asp.net+site%3A0000free.com%2Fforum%2F&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a I've been reading Pro ASP.NET MVC Framework and they have a chapter about publishing, but it doesn't provide any specific information with respect to the location of publishing, this is all they say (and it's not very helpful in my case): Where Should I Put My Application? You can deploy your application to any folder on the server. When IIS first installs, it automatically creates a folder for a web site called Default Web Site at c:\Inetpub\wwwroot\, but you shouldn’t feel any obligation to put your application files there. It’s very common to host applications on a different physical drive from the operating system (e.g., in e:\websites\ example.com). It’s entirely up to you, and may be influenced by concerns such as how you plan to back up the server. Here is the exception I get when I try to view my Index.aspx page: Unrecognized attribute 'targetFramework'. (/home/devarmy/public_html/Web.config line 1) Description: HTTP 500. Error processing request. Stack Trace: System.Configuration.ConfigurationErrorsException: Unrecognized attribute 'targetFramework'. 
(/home/devarmy/public_html/Web.config line 1) at System.Configuration.ConfigurationElement.DeserializeElement (System.Xml.XmlReader reader, Boolean serializeCollectionKey) [0x00000] in <filename unknown>:0 at System.Configuration.ConfigurationSection.DoDeserializeSection (System.Xml.XmlReader reader) [0x00000] in <filename unknown>:0 at System.Configuration.ConfigurationSection.DeserializeSection (System.Xml.XmlReader reader) [0x00000] in <filename unknown>:0 at System.Configuration.Configuration.GetSectionInstance (System.Configuration.SectionInfo config, Boolean createDefaultInstance) [0x00000] in <filename unknown>:0 at System.Configuration.ConfigurationSectionCollection.get_Item (System.String name) [0x00000] in <filename unknown>:0 at System.Configuration.Configuration.GetSection (System.String path) [0x00000] in <filename unknown>:0 at System.Web.Configuration.WebConfigurationManager.GetSection (System.String sectionName, System.String path, System.Web.HttpContext context) [0x00000] in <filename unknown>:0 at System.Web.Configuration.WebConfigurationManager.GetSection (System.String sectionName, System.String path) [0x00000] in <filename unknown>:0 at System.Web.Configuration.WebConfigurationManager.GetWebApplicationSection (System.String sectionName) [0x00000] in <filename unknown>:0 at System.Web.Compilation.BuildManager.get_CompilationConfig () [0x00000] in <filename unknown>:0 at System.Web.Compilation.BuildManager.Build (System.Web.VirtualPath vp) [0x00000] in <filename unknown>:0 at System.Web.Compilation.BuildManager.GetCompiledType (System.Web.VirtualPath virtualPath) [0x00000] in <filename unknown>:0 at System.Web.Compilation.BuildManager.GetCompiledType (System.String virtualPath) [0x00000] in <filename unknown>:0 at System.Web.HttpApplicationFactory.InitType (System.Web.HttpContext context) [0x00000] in <filename unknown>:0
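    The stack trace points at the first line of Web.config: the targetFramework attribute is only understood by the .NET 4.0 configuration system, and a Mono host running the older 2.0/3.5 ASP.NET profile rejects it exactly like this. A hedged sketch of the change, assuming the host cannot be switched to a 4.0-capable Mono profile (the project itself would also need to be retargeted to 3.5 in Visual Studio):

        <!-- generated by Visual Studio 2010 for .NET 4.0: -->
        <compilation debug="false" targetFramework="4.0" />
        <!-- retargeted for a 2.0/3.5-only host: -->
        <compilation debug="false" />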

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than their user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've setup a DRBD test and measured throughput of disk and network without DRBD: dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s / is a logical volume on the disk I'm testing with, mounted without DRBD iperf: [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s. So, with the raw drbd device, I get dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s which is slower than I would expect. Then, once I format the device with ext4, I get dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s This doesn't seem right. There must be some other factor playing into this that I'm not aware of. global_common.conf global { usage-count yes; } common { protocol C; } syncer { al-extents 1801; rate 33M; } data_mirror.res resource data_mirror { device /dev/drbd1; disk /dev/sdb1; meta-disk internal; on cluster1 { address 192.168.33.10:7789; } on cluster2 { address 192.168.33.12:7789; } } For the hardware I have two identical machines: 6 GB RAM Quad core AMD Phenom 3.2Ghz Motherboard SATA controller 7200 RPM 64MB cache 1TB WD drive The network is 1Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference? Edited I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got: avg ~450Mbits writing to ext4 avg ~800Mbits writing to raw device It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device so there's a bottleneck that is not the network.
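    Since the network side looks close to line rate, the usual DRBD 8.3 suspects for the remaining gap are write barriers/flushes and buffer sizing; a hedged fragment to experiment with (option names are from the 8.3 configuration documentation, and disabling barriers/flushes is only safe with a battery-backed or non-volatile write cache):

        resource data_mirror {
          disk {
            no-disk-barrier;
            no-disk-flushes;
          }
          net {
            max-buffers     8000;
            max-epoch-size  8000;
            sndbuf-size     512k;
          }
        }

    The syncer rate of 33M only limits background resynchronisation, not normal replication, so it should not be the cause of the slow dd runs.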

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    Running very high volume traffic on these servers configured with django, gunicorn, supervisor and nginx. But a lot of times I tend to see 502 errors. So I checked the nginx logs to see what error and this is what is recorded: [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream Can anyone help debug what might be causing this to happen? This is our nginx configuration: sendfile on; tcp_nopush on; tcp_nodelay off; listen 80 default_server; server_name imp.ourapp.com; access_log /mnt/ebs/nginx-log/ourapp-access.log; error_log /mnt/ebs/nginx-log/ourapp-error.log; charset utf-8; keepalive_timeout 60; client_max_body_size 8m; gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json; location / { proxy_pass http://unix:/tmp/gunicorn-ourapp.socket; proxy_pass_request_headers on; proxy_read_timeout 600s; proxy_connect_timeout 600s; proxy_redirect http://localhost/ http://imp.ourapp.com/; #proxy_set_header Host $host; #proxy_set_header X-Real-IP $remote_addr; #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-Forwarded-Proto $my_scheme; #proxy_set_header X-Forwarded-Ssl $my_ssl; } We have configure Django to run in Gunicorn as a generic WSGI application. Supervisord is used to launch the gunicorn workers: home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application This is what the gunicorn.conf.py looks like: import multiprocessing bind = 'unix:/tmp/gunicorn-ourapp.socket' workers = multiprocessing.cpu_count() * 3 + 1 timeout = 600 graceful_timeout = 40 Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 59481 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 50000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
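    "Resource temporarily unavailable" (EAGAIN) on the unix socket generally means the socket's accept queue fills faster than the workers can drain it; two hedged places to raise it (values are illustrative):

        # kernel cap on any listen() backlog, unix sockets included
        sysctl -w net.core.somaxconn=2048
        # and in gunicorn.conf.py ask for a deeper queue explicitly:
        #     backlog = 2048

    If the queue still fills, the workers themselves are the bottleneck (slow views, external calls), and raising the backlog only postpones the 502s.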

  • Error installing scipy on Mountain Lion with Xcode 4.5.1

    - by Xster
    Environment: Mountain Lion 10.8.2, Xcode 4.5.1 command line tools, Python 2.7.3, virtualenv 1.8.2 and numpy 1.6.2 When installing scipy with pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev" on a fresh virtualenv. llvm-gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:51:23: error: immintrin.h: No such file or directory In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vceilf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:53: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vfloorf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:54: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: ‘_MM_FROUND_TRUNC’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: (Each undeclared identifier is reported only once /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: for each function it appears in.) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vnintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: ‘_MM_FROUND_NINT’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: incompatible types in return In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:51:23: error: immintrin.h: No such file or directory In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vceilf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:53: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vfloorf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:54: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: ‘_MM_FROUND_TRUNC’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: (Each undeclared identifier is reported only once /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: for each function it appears in.) 
/System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vnintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: ‘_MM_FROUND_NINT’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: incompatible types in return error: Command "/usr/bin/llvm-gcc -fno-strict-aliasing -Os -w -pipe -march=core2 -msse4 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -I/Users/xiao/.virtualenv/lib/python2.7/site-packages/numpy/core/include -c scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c -o build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.o" failed with exit status 1 Is it supposed to be looking for headers from my system frameworks? Is the development version of scipy no longer good for the latest version of Mountain Lion/Xcode?
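    The compiler failing here is llvm-gcc, which cannot parse the Accelerate/vecLib headers shipped with Xcode 4.5; a commonly suggested workaround (a sketch, not guaranteed for every scipy revision) is to build with clang and the f2c-compatible Fortran flags instead:

        export CC=clang
        export CXX=clang
        export FFLAGS=-ff2c
        pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev"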

  • Ubuntu hard drive repartition without uninstalling Ubuntu or Windows 7 or losing data on the hard drive

    - by user141692
    I have and asus r500v with 750 gb gpt system uefi motherboard core i7 3610qm, nvidia geforce gt, with ubuntu and w7 dual boot, I had problems installing ubuntu because of the grub but I fix it with https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/807801, but I still have the problem of "warning: the partition is misaligned by 3072 bytes. this may result iin very poor performance. Repartitioning is suggested" in every linux partitioin I made and my 750 gb is not being used at the maximun capacity it only uses 698 gb. I want to make partitions so that the warning doesnt show up and I can use the maximum capacity of the HDD, as I did with another dual boot laptop (compaq presario cq40). I have the following partitions: unknown 1.0Mb: partition type: lynux Basic DAta partition, device: /dev/sda2 Usage: --, Partition flags: --, partition label:-- warning: the partition is misaligned by 3072 bytes. this may result in very poor performance. repartitioning is suggested. -system 210 Mb FAt, usage: Filesystem, partition type: EFI system Partition, Partition Flags:--, Label: system, Device: /dev/sda1, partition label: EFI system partition, Capacity 210MB, avilable:--, Mount Point: mounted at /boot/efi -134 Mb NTFS, usage: filesystem, partition type: linux basic data partition, partition flags:.--, device: /dev/sda7, partition label: --, capacity: 134MB,available:--, mount point: not mounted -OS 250 GB NTFS, usage: file system, partititon type: linux basic data partition, partition flags: --, type: NTFS, label: OS, device: /dev/sda3, partition label: basic data partition, capacity: 250 GB, available:-, mount point: not mounted -10GB FAT 32, usage: filesystem, partition type: EFI system partition, partition flags:--, type: FAT 32, label: --, device: /dev/sda4, partition label: --, capacity: 10GB, available:--, mount point: not mounted warning: the partition is misaligned by 3072 bytes. this may result in very poor performance. repartitioning is suggested. -10gb ext 4, usage: file system, partition type: linux basic data partition, partition flags:--, type: EXT4(version1) label:--, device: /dev/sda9, partition label:--, capacity: 10 GB, available:--, mount point at / warning: the partition is misaligned by 1536 bytes. this may result in very poor performance. repartitioning is suggested. -478GB ext4, usage: filesystem, partition type: linux basic data partition, partition flags:--, type: EXT4, label:--, device: /dev/sda5, partition label:--, capacity: 478gb, available:--, mount point: mounted at /home warning: the partition is misaligned by 512 bytes. this may result in very poor performance. repartitioning is suggested. -2.0gb Swap 2.0Gb, usage: swap space, partition type: linux swap partitioin, partition flags:-, device: /dev/sda6, partition label: capacity: 2.0gb warning: the partition is misaligned by 512 bytes. this may result in very poor performance. repartitioning is suggested. and as you can see it is not well organized so please help me to organize the partitions witahout uninstalling the w7, and if possible the grub2
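    Before moving anything, parted can report exactly which partitions it considers misaligned, which is a safer first step than repartitioning; a hedged check (device name from the question, partition numbers as listed above):

        sudo parted /dev/sda unit s print      # aligned partitions typically start on multiples of 2048 sectors
        for p in 1 2 3 4 5 6 7 9; do sudo parted /dev/sda align-check optimal $p; done

    Fixing the alignment without reinstalling generally means moving partition starts with GParted from a live USB, which preserves data but still warrants a full backup first.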

  • Elasticsearch won't start anymore

    - by Oleander
    I restarted my elasticsearch instance 5 days ago and I haven't manage to start it since then. I get no output in the log file /var/log/elasticsearch/ nor does the elasticsearch binary print any information when running at using elasticsearch -f. I once manage to get this output. [2012-11-15 22:51:18,427][INFO ][node ] [Piper] {0.19.11}[29584]: initializing ... [2012-11-15 22:51:18,433][INFO ][plugins ] [Piper] loaded [], sites [] Running curl http://localhost:9200 resulted in curl: (7) couldn't connect to host. I've tried increasing the memory from 3gb to 10gb, but that didn't make any diffrence. Running /etc/init.d/elasticsearch start takes 30 seconds. ps aux | grep elasticsearch results in this output. /usr/local/share/elasticsearch/bin/service/exec/elasticsearch-linux-x86-64 /usr/local/share/elasticsearch/bin/service/elasticsearch.conf wrapper.syslog.ident=elasticsearch wrapper.pidfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.pid wrapper.name=elasticsearch wrapper.displayname=ElasticSearch wrapper.daemonize=TRUE wrapper.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.status wrapper.java.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.java.status wrapper.script.version=3.5.14 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Delasticsearch-service -Des.path.home=/usr/local/share/elasticsearch -Xss256k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.awt.headless=true -Xms1024m -Xmx1024m -Djava.library.path=/usr/local/share/elasticsearch/bin/service/lib -classpath /usr/local/share/elasticsearch/bin/service/lib/wrapper.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/jna-3.3.0.jar:/usr/local/share/elasticsearch/lib/log4j-1.2.17.jar:/usr/local/share/elasticsearch/lib/lucene-analyzers-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-core-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-highlighter-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-memory-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-queries-3.6.1.jar:/usr/local/share/elasticsearch/lib/snappy-java-1.0.4.1.jar:/usr/local/share/elasticsearch/lib/sigar/sigar-1.6.4.jar -Dwrapper.key=k7r81VpK3_Bb3N_5 -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=23888 -Dwrapper.version=3.5.14 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp org.elasticsearch.bootstrap.ElasticSearchF My current system: ElasticSearch Version: 0.19.11, JVM: 23.2-b09 Ubuntu 12.04 LTS I've tried re-install elasticsearch, removing old directories. Why can't I get it to start?
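    Because the Tanuki service wrapper swallows most startup output, a hedged next step is to bypass it and run the process in the foreground as whatever user the service normally runs as, and to check for stale wrapper state (paths are the ones visible in the ps output above):

        bin/elasticsearch -f                      # run from /usr/local/share/elasticsearch, as the service user
        ls -ld data logs                          # ownership of the data/log directories, if they exist
        ls -l bin/service/elasticsearch.pid bin/service/elasticsearch.status
        # a leftover pid/status file from a previous run can block the wrapper from starting

    If the foreground run also prints nothing, raising the log level in config/logging.yml before starting again usually surfaces the real failure.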

  • How to diagnose repeated "Starting up database '<dbname>'"

    - by Richard Slater
    I have a SQL 2008 server which is predominantly used as a development server. In the last two weeks it has been having occasional "fits", and I have isolated the cause of these fits as CHECKDB being run almost continuously; the following log information is logged to the Windows Event Log (Source: MSSQLSERVER, Category: Server): Event: 1073758961, Message: Starting up database 'DBName1'. Event: 1073758961, Message: Starting up database 'DBName2'. Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required. Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required. This is repeated every 1-2 seconds until SQL Server is restarted or the offending databases are detached. I initially thought that it was a problem with the databases, so I took a backup and restored them to a SQL Express instance; all of the data is intact, and CHECKDB runs without problem. The two databases that were causing a problem last week were not being used, so I took full backups of them and detached the databases, and this resolved the problem. However, at 0100 GMT this morning two other totally unrelated databases started showing the same problems. There is nothing in the event log to suggest that something happened to the server such as a restart, and there are no messages about processes crashing or issues being detected with the storage controller. Speaking to the owner of the company, this computer has suffered from "gremlins" in the past; however, advice was taken and the motherboard was replaced and the computer rebuilt; memory and processor are the same. Stats: O/S: Windows 2008 Standard Build 6002, CPU: 2x Pentium Dual-Core E5200 @ 2.5GHz, RAM: 2GB, SQL: 2008 Standard 10.0.2531. Edit: someone posted then deleted a comment about AutoClose; it was turned on for the databases affected. It seems that best practice is to disable it, so I have done that with the following. EXECUTE sp_MSforeachdb 'IF (''?'' NOT IN (''master'', ''tempdb'', ''msdb'', ''model'')) EXECUTE (''ALTER DATABASE [?] SET AUTO_CLOSE OFF WITH NO_WAIT'')' I won't know if the problem recurs for some time, so I am still open to further answers.
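    Since the "Starting up database" messages are what SQL Server logs each time an auto-closed database is re-opened, it may be worth confirming that AUTO_CLOSE is now off everywhere; this is only a sketch, run from a command prompt on the server, and the instance name is an assumption.
    sqlcmd -S localhost -E -Q "SELECT name, is_auto_close_on FROM sys.databases ORDER BY name"
    Any database still showing is_auto_close_on = 1 will keep logging these start-up messages whenever connections come and go.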

    Read the article

  • Chrome does not re-draw properly on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta, but all this happened before the update also) on Windows 8. Problems and explanations are listed below: Google Chrome re-draw time: When I switch tabs, the window retains the content of the previous tab and displays that even if I move my mouse; it only refreshes (re-draws) when there is a change on the webpage (like on hover) or I do a select-all (or scroll). One thing to note is that the hover and select happen on the real page and not on the retained image-like thing of the older webpage. Chrome is slow and laggy: Websites such as Facebook and Twitter (and more) have gone extremely laggy on Chrome (Win 8). When I was using Windows 7, I never experienced any lag. Also, when using HTML5 websites, the transition (the -webkit-transition in CSS) goes extremely slow at times. Plugins crash: Plugins like Flash Player, Shockwave Player, and more that are built into Chrome crash a lot, even when doing simple tasks like playing YouTube videos or displaying ads. Chrome crashes: Chrome has crashed over 100 times in the past month. Google Chrome just crashes randomly, or I don't know the reason. Random page crashes: Chrome shows chrome://crash/ (copy-paste this into the address bar) on random pages even when the page has just loaded. I understand that this can happen on heavy HTML5 or JS websites, but what about HTML-only websites? Computer freeze: Chrome sometimes, randomly, freezes my computer. Freeze in the sense that none of the other apps are working either; it's like the whole system freezes, and I cannot even switch to other apps. I am sure that this is because of Chrome, since this happens only when Chrome is active. Most of the things above happen on Super User also; Super User never had any problem when I was using Chrome on Windows 7. UPDATE 1: @magicandre1981 commented suggesting I try disabling Hardware Acceleration. I tried it; it somewhat solved the problem but didn't fix it. I am still experiencing all the above issues, but less frequently (maybe because Chrome restarted completely). UPDATE 2: @avirk asked me to try a stable version of Chrome and Firefox. I didn't experience any lag in Firefox, and only a little (negligible) lag in Chrome 22 (maybe because it's a new copy of Chrome; I haven't used it much). UPDATE 3: @NothingsImpossible said that he is also experiencing the same problem on Windows Server 2008! This seems to be a major issue now. He also said that GPU load is high at the same time! I saw the same thing. UPDATE 4: Recently, Chrome updated to v24 Stable (I have been using stable for a long time now). I was experiencing this problem a lot less in Chrome 23, but it is back in Chrome 24. Seems like Chrome 24 is the most affected by this bug, as this same problem was frequent in the Chrome 24 beta also. UPDATE 5: Chrome was updated to v25 Stable. This problem is 99% gone, but it is still there in 1% of cases. One such example is when I leave Chrome inactive for a while with a few tabs open: the tabs go black and no activity can get them back to an active state. If I open a new tab, the new tab is OK, but the others are still black and I need to close all those tabs. UPDATE 6: Chrome updated to the v27 Stable channel; this problem is nearly gone. It does happen occasionally, but not as frequently as in earlier versions of Chrome. UPDATE 7: I am on Chrome v35.0.1916.114 Stable, Windows 8.1 Pro Update 1. Some of the other problems appear to be back. Chrome is slow and laggy again. Re-draw time is getting worse. Is anybody else experiencing such issues? Does anybody have a solution to any of these?
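    Since UPDATE 1 mentions that disabling hardware acceleration changed the behaviour, one low-effort test is to start Chrome with GPU compositing turned off and compare; the install path below is the default location and is an assumption.
    "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-gpu
    Visiting chrome://gpu in that session shows which acceleration features are active, which can help narrow the problem down to the graphics driver.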

    Read the article

  • Cannot install passenger with Nginx

    - by Luc
    Hello, I have a Rack application that I want to migrate from Ruby 1.8.7 + Apache + Passenger to Ruby 1.9.1 + Nginx + Passenger. I have made up the following script for a quick all-in-one install, and it raises an error... Here is the installation script (a basic one with all the steps I need to install everything on a fresh Ubuntu 10.04 Lucid Lynx box):
    # Nginx sources
    cd /tmp
    wget http://nginx.org/download/nginx-0.7.66.tar.gz
    tar xzf nginx-0.7.66.tar.gz
    cd nginx-0.7.66
    # openssl required for SSL/TLS
    sudo apt-get install openssl
    sudo apt-get install libssl-dev
    # Compilation stuff
    sudo apt-get install zlib1g-dev
    # Ruby interpreter 1.9.1
    sudo apt-get install ruby1.9.1 ruby1.9.1-dev rubygems1.9.1 irb1.9.1 ri1.9.1 rdoc1.9.1 build-essential nginx libopenssl-ruby1.9.1
    # Make sure default ruby uses version 1.9.1
    sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 400 --slave /usr/share/man/man1/ruby.1.gz ruby.1.gz /usr/share/man/man1/ruby1.9.1.1.gz --slave /usr/bin/ri ri /usr/bin/ri1.9.1 --slave /usr/bin/irb irb /usr/bin/irb1.9.1 --slave /usr/bin/rdoc rdoc /usr/bin/rdoc1.9.1
    sudo update-alternatives --config ruby
    # Passenger (rake-0.8.7, fastthread-1.0.7, rack-1.1.0, passenger-2.2.14)
    sudo gem install passenger
    # Activate Passenger in nginx, select option 2 to use the nginx sources downloaded above
    cd /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin
    sudo ./passenger-install-nginx-module
    And this is the error message I got:
    /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/ContentHandler.c gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /tmp/pcre-8.00 -I objs -I src/http -I src/http/modules -I src/mail \ -o objs/addon/nginx/StaticContentHandler.o \ /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c: In function ‘passenger_static_content_handler’: /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c:71: error: ‘ngx_http_request_t’ has no member named ‘zero_in_uri’ make[1]: *** [objs/addon/nginx/StaticContentHandler.o] Error 1 make[1]: Leaving directory `/tmp/nginx-0.7.66' make: *** [build] Error 2 -------------------------------------------- It looks like something went wrong Please read our Users guide for troubleshooting tips: /var/lib/gems/1.9.1/gems/passenger-2.2.14/doc/Users guide Nginx.html
    I do not understand the reason for this error. Is this a compatibility problem? I hope you have some clues :) Thanks a lot, Luc
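    The failing line references zero_in_uri, a field that newer nginx releases removed from ngx_http_request_t, which is why Passenger 2.2.14's nginx module no longer compiles against them. Two hedged ways around it, assuming a later Passenger gem is available or an older nginx tarball is acceptable:
    # Option 1: install a later Passenger (2.2.15 or newer) and re-run its passenger-install-nginx-module
    sudo gem install passenger --version '>= 2.2.15'
    # Option 2: build the module against an older nginx release that still has the field, e.g. 0.7.61
    wget http://nginx.org/download/nginx-0.7.61.tar.gz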

    Read the article

  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all, I have a strange issue with my virtualenv + gunicorn setup, only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord and I would appreciate any feedback on a better place to ask for help... In a nutshell : when I run gunicorn from my user shell, inside my virtualenv, everything is working flawlessly. I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at the system startup, everything is OK. But, if I have to kill the gunicorn_django processes, or if I perform a supervisord restart, once that gunicorn_django has relaunched, every request is answered with a weird Traceback : (...) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in connection = connections[DEFAULT_DB_ALIAS] File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__ backend = load_backend(db['ENGINE']) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend raise ImproperlyConfigured(error_msg) TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: cannot import name utils Full stack available here : http://pastebin.com/BJ5tNQ2N I'm running... Ubuntu/maverick (up-to-date) Python = 2.6.6 virtualenv = 1.5.1 gunicorn = 0.12.0 Django = 1.2.5 psycopg2 = '2.4-beta2 (dt dec pq3 ext)' gunicorn configuration : backlog = 2048 bind = "127.0.0.1:8000" pidfile = "/tmp/gunicorn-hc.pid" daemon = True debug = True workers = 3 logfile = "/home/hc/prod/log/gunicorn.log" loglevel = "info" supervisord configuration : [program:gunicorn] directory=/home/hc/prod/hc command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py user=hc umask=022 autostart=True autorestart=True redirect_stderr=True Any advice ? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special : $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 20 file size (blocks, -f) unlimited pending signals (-i) 16382 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Thank you.
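    One thing worth ruling out is that supervisord starts the process with a different environment (and therefore potentially a different Python and site-packages) than the interactive shell where everything works. Below is a sketch of a supervisord stanza that pins the process to the virtualenv, using the paths from the question; the added command prefix and environment line are assumptions, not the original config.
    [program:gunicorn]
    directory=/home/hc/prod/hc
    ; call the venv's interpreter explicitly and put the venv's bin dir first in the child's PATH
    command=/home/hc/prod/venv/bin/python /home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
    environment=PATH="/home/hc/prod/venv/bin:/usr/local/bin:/usr/bin:/bin"
    user=hc
    umask=022
    autostart=true
    autorestart=true
    redirect_stderr=true
    After editing, supervisorctl reread and supervisorctl update (or a full supervisord restart) are needed for the change to take effect.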

    Read the article

  • Xen kernel can't see 2 of 6 1TB disks, does it have a limitation?

    - by PartySoft
    Linux gentoo-xen 2.6.18-xen-r12 #3 SMP Tue Oct 5 09:28:53 PDT 2010 x86_64 Intel(R) Xeon(R) CPU E5506 @ 2.13GHz GenuineIntel GNU/Linux I have 6 disks of 1 TB and I can't see all of them, only 4. Can anyone give me an idea of what I can do? Filesystem Size Used Avail Use% Mounted on rootfs 886G 4.4G 836G 1% / /dev/sda3 886G 4.4G 836G 1% / rc-svcdir 1.0M 44K 980K 5% /lib64/rc/init.d shm 7.9G 0 7.9G 0% /dev/shm /dev/sdb1 917G 200M 871G 1% /home2 /dev/sdc1 917G 200M 871G 1% /home3 /dev/sdd1 917G 200M 871G 1% /home4 The hardware is dual Xeon E5506 processors on a Supermicro X8DTL mobo. [ 4.346585] ata3.00: ATA-8, max UDMA/133, 1953525168 sectors: LBA48 NCQ (depth 0/32) [ 4.346588] ata3.00: ata3: dev 0 multi count 16 [ 4.352861] ata3.00: configured for UDMA/133 [ 4.352867] scsi3 : ata_piix [ 4.352875] PM: Adding info for No Bus:host3 [ 4.510584] ata4.00: ATA-8, max UDMA/133, 1953525168 sectors: LBA48 NCQ (depth 0/32) [ 4.510587] ata4.00: ata4: dev 0 multi count 16 [ 4.516848] ata4.00: configured for UDMA/133 [ 4.516861] PM: Adding info for No Bus:target2:0:0 [ 4.516905] Vendor: ATA Model: SAMSUNG HD103SJ Rev: 1AJ1 [ 4.516910] Type: Direct-Access ANSI SCSI revision: 05 [ 4.516920] PM: Adding info for scsi:2:0:0:0 [ 4.517452] SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.517460] sde: Write Protect is off [ 4.517461] sde: Mode Sense: 00 3a 00 00 [ 4.517478] SCSI device sde: drive cache: write back [ 4.517514] SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.517521] sde: Write Protect is off [ 4.517522] sde: Mode Sense: 00 3a 00 00 [ 4.517532] SCSI device sde: drive cache: write back [ 4.517534] sde: sde1 [ 4.524551] sd 2:0:0:0: Attached scsi disk sde [ 4.524855] sd 2:0:0:0: Attached scsi generic sg4 type 0 [ 4.524874] PM: Adding info for No Bus:target3:0:0 [ 4.524928] Vendor: ATA Model: SAMSUNG HD103SJ Rev: 1AJ1 [ 4.524933] Type: Direct-Access ANSI SCSI revision: 05 [ 4.524946] PM: Adding info for scsi:3:0:0:0 [ 4.525216] SCSI device sdf: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.525227] sdf: Write Protect is off [ 4.525228] sdf: Mode Sense: 00 3a 00 00 [ 4.525242] SCSI device sdf: drive cache: write back [ 4.525280] SCSI device sdf: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.525286] sdf: Write Protect is off [ 4.525289] sdf: Mode Sense: 00 3a 00 00 [ 4.525301] SCSI device sdf: drive cache: write back [ 4.525302] sdf: sdf1 [ 4.532691] sd 3:0:0:0: Attached scsi disk sdf [ 4.533010] sd 3:0:0:0: Attached scsi generic sg5 type 0 [ 4.977669] scsi: <fdomain> Detection failed (no card) [ 5.030479] GDT-HA: Storage RAID Controller Driver. 
Version: 3.05 [ 5.030635] GDT-HA: Found 0 PCI Storage RAID Controllers [ 5.372350] Fusion MPT base driver 3.04.01 [ 5.372358] Copyright (c) 1999-2005 LSI Logic Corporation [ 5.579176] Fusion MPT SPI Host driver 3.04.01 [ 5.881777] ieee1394: Initialized config rom entry `ip1394' [ 6.166745] ieee1394: sbp2: Driver forced to serialize I/O (serialize_io=1) [ 6.166748] ieee1394: sbp2: Try serialize_io=0 for better performance [ 6.428866] md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27 [ 6.428872] md: bitmap version 4.39 [ 6.431518] md: raid0 personality registered for level 0 [ 6.495979] md: raid1 personality registered for level 1 [ 6.570270] raid5: automatically using best checksumming function: generic_sse [ 6.575523] generic_sse: 6608.000 MB/sec [ 6.575526] raid5: using function: generic_sse (6608.000 MB/sec) [ 6.596226] raid6: int64x1 1835 MB/s [ 6.613231] raid6: int64x2 1773 MB/s [ 6.630256] raid6: int64x4 1675 MB/s [ 6.647296] raid6: int64x8 1027 MB/s [ 6.664267] raid6: sse2x1 3578 MB/s [ 6.681268] raid6: sse2x2 4207 MB/s [ 6.698280] raid6: sse2x4 4625 MB/s [ 6.698281] raid6: using algorithm sse2x4 (4625 MB/s) [ 6.698285] md: raid6 personality registered for level 6 [ 6.698286] md: raid5 personality registered for level 5 [ 6.698288] md: raid4 personality registered for level 4 [ 6.781090] md: raid10 personality registered for level 10 [ 7.007043] Intel(R) PRO/1000 Network Driver - version 7.1.9-k4 [ 7.007046] Copyright (c) 1999-2006 Intel Corporation. [ 9.229465] kjournald starting. Commit interval 5 seconds [ 9.229476] EXT3-fs: mounted filesystem with ordered data mode.
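    For what it's worth, df only lists mounted filesystems, and the dmesg excerpt above shows the kernel attaching sde and sdf (each with an sde1/sdf1 partition), so the two "missing" disks are likely detected but simply not formatted or mounted; the commands below are a sketch, with device names taken from the log and mount points assumed.
    # Show every block device the kernel knows about, mounted or not
    cat /proc/partitions
    # Check whether the partitions already carry a filesystem before touching them
    blkid /dev/sde1 /dev/sdf1
    # If they are empty, create filesystems and mount them (destructive if data exists!)
    mkfs.ext3 /dev/sde1 && mkdir -p /home5 && mount /dev/sde1 /home5
    mkfs.ext3 /dev/sdf1 && mkdir -p /home6 && mount /dev/sdf1 /home6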

    Read the article

  • Windows 7 Bluescreen: IRQL_NOT_LESS_OR_EQUAL | athrxusb.sys

    - by wretrOvian
    I'd left my system on last night, and found the bluescreen in the morning. This has been happening occasionally, over the past few days. Details: ================================================== Dump File : 022710-18236-01.dmp Crash Time : 2/27/2010 8:46:44 AM Bug Check String : DRIVER_IRQL_NOT_LESS_OR_EQUAL Bug Check Code : 0x000000d1 Parameter 1 : 00000000`00001001 Parameter 2 : 00000000`00000002 Parameter 3 : 00000000`00000000 Parameter 4 : fffff880`06b5c0e1 Caused By Driver : athrxusb.sys Caused By Address : athrxusb.sys+760e1 File Description : Product Name : Company : File Version : Processor : x64 Computer Name : Full Path : C:\Windows\minidump\022710-18236-01.dmp Processors Count : 2 Major Version : 15 Minor Version : 7600 ================================================== HiJackThis ("[...]" indicates removed text; full log [posted to pastebin][1]): Logfile of Trend Micro HijackThis v2.0.2 Scan saved at 8:49:15 AM, on 2/27/2010 Platform: Unknown Windows (WinNT 6.01.3504) MSInternet Explorer: Internet Explorer v8.00 (8.00.7600.16385) Boot mode: Normal Running processes: C:\Windows\DAODx.exe C:\Program Files (x86)\Asus\EPU\EPU.exe C:\Program Files\Asus\TurboV\TurboV.exe C:\Program Files (x86)\PowerISO\PWRISOVM.EXE C:\Program Files (x86)\OpenOffice.org 3\program\soffice.exe C:\Program Files (x86)\OpenOffice.org 3\program\soffice.bin D:\Downloads\HijackThis.exe C:\Program Files (x86)\uTorrent\uTorrent.exe R1 - HKCU\Software\Microsoft\Internet Explorer\[...] [...] O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files (x86)\Java\jre6\bin\jp2ssv.dll O4 - HKLM\..\Run: [HDAudDeck] C:\Program Files (x86)\VIA\VIAudioi\VDeck\VDeck.exe -r O4 - HKLM\..\Run: [StartCCC] "C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static\CLIStart.exe" MSRun O4 - HKLM\..\Run: [TurboV] "C:\Program Files\Asus\TurboV\TurboV.exe" O4 - HKLM\..\Run: [PWRISOVM.EXE] C:\Program Files (x86)\PowerISO\PWRISOVM.EXE O4 - HKLM\..\Run: [googletalk] C:\Program Files (x86)\Google\Google Talk\googletalk.exe /autostart O4 - HKLM\..\Run: [AdobeCS4ServiceManager] "C:\Program Files (x86)\Common Files\Adobe\CS4ServiceManager\CS4ServiceManager.exe" -launchedbylogin O4 - HKCU\..\Run: [uTorrent] "C:\Program Files (x86)\uTorrent\uTorrent.exe" O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-19\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'NETWORK SERVICE') O4 - HKUS\S-1-5-20\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'NETWORK SERVICE') O4 - Startup: OpenOffice.org 3.1.lnk = C:\Program Files (x86)\OpenOffice.org 3\program\quickstart.exe O13 - Gopher Prefix: O23 - Service: @%SystemRoot%\system32\Alg.exe,-112 (ALG) - Unknown owner - C:\Windows\System32\alg.exe (file missing) O23 - Service: AMD External Events Utility - Unknown owner - C:\Windows\system32\atiesrxx.exe (file missing) O23 - Service: Asus System Control Service (AsSysCtrlService) - Unknown owner - C:\Program Files (x86)\Asus\AsSysCtrlService\1.00.02\AsSysCtrlService.exe O23 - Service: DeviceVM Meta Data Export Service (DvmMDES) - DeviceVM - C:\Asus.SYS\config\DVMExportService.exe O23 - Service: @%SystemRoot%\system32\efssvc.dll,-100 (EFS) - Unknown owner - C:\Windows\System32\lsass.exe (file missing) O23 - Service: ESET HTTP Server (EhttpSrv) - ESET - C:\Program Files\ESET\ESET NOD32 
Antivirus\EHttpSrv.exe O23 - Service: ESET Service (ekrn) - ESET - C:\Program Files\ESET\ESET NOD32 Antivirus\x86\ekrn.exe O23 - Service: @%systemroot%\system32\fxsresm.dll,-118 (Fax) - Unknown owner - C:\Windows\system32\fxssvc.exe (file missing) O23 - Service: FLEXnet Licens

    Read the article

  • Network Restructure Method for Double-NAT network

    - by Adrian
    Due to a series of poor network design decisions (mostly) made many years ago in order to save a few bucks here and there, I have a network that is decidedly sub-optimally architected. I'm looking for suggestions to improve this less-than-pleasant situation. We're a non-profit with a Linux-based IT department and a limited budget. (Note: none of the Windows equipment we run does anything that talks to the Internet, nor do we have any Windows admins on staff.) Key points:
    - We have a main office and about 12 remote sites that essentially double-NAT their subnets with physically segregated switches. (No VLANing, and limited ability to do so with the current switches.)
    - These locations have a "DMZ" subnet that is NAT'd onto an identically assigned 10.0.0.0/24 subnet at each site. These subnets cannot talk to DMZs at any other location because we don't route them anywhere except between the server and the adjacent "firewall".
    - Some of these locations have multiple ISP connections (T1, cable, and/or DSL) that we manually route using IP tools in Linux.
    - These firewalls all run on the (10.0.0.0/24) network and are mostly "pro-sumer" grade firewalls (Linksys, Netgear, etc.) or ISP-provided DSL modems.
    - Connecting these firewalls (via simple unmanaged switches) are one or more servers that must be publicly accessible.
    - Connected to the main office's 10.0.0.0/24 subnet are servers for email, the telecommuter VPN, the remote-office VPN server, and the primary router to the internal 192.168.x/24 subnets. These have to be accessed from specific ISP connections based on traffic type and connection source.
    - All our routing is done manually or with OpenVPN route statements. Inter-office traffic goes through the OpenVPN service on the main 'Router' server, which has its own NATing involved.
    - Remote sites only have one server installed at each site and cannot afford multiple servers due to budget constraints. These servers are all LTSP servers serving 5-20 terminals.
    - The 192.168.2/24 and 192.168.3/24 subnets are mostly but NOT entirely on Cisco 2960 switches that can do VLANs. The remainder are D-Link DGS-1248 switches that I am not sure I trust well enough to use with VLANs. There is also some remaining internal concern about VLANs, since only the senior networking staff person understands how they work.
    - All regular internet traffic goes through the CentOS 5 router server, which in turn NATs the 192.168.x/24 subnets to the 10.0.0.0/24 subnets according to the manually configured routing rules that we use to point outbound traffic to the proper internet connection based on '-host' routing statements.
    I want to simplify this and ready All Of The Things for ESXi virtualization, including these public-facing services. Is there a no- or low-cost solution that would get rid of the double NAT and restore a little sanity to this mess so that my future replacement doesn't hunt me down? Basic diagram for the main office:
    These are my goals:
    - Public-facing servers with interfaces on that middle 10.0.0.0/24 network to be moved into the 192.168.2/24 subnet on ESXi servers.
    - Get rid of the double NAT and get our entire network onto one single subnet. My understanding is that this is something we'll need to do under IPv6 anyway, but I think this mess is standing in the way.
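    As a point of reference for the "one subnet, single NAT" goal: once the pro-sumer firewalls are out of the path, a single Linux edge router can NAT the internal range straight to each ISP link, which removes the intermediate 10.0.0.0/24 hop entirely. The lines below are only a sketch; the interface name and address range are assumptions, not taken from the actual network.
    # Allow the edge router to forward packets
    sysctl -w net.ipv4.ip_forward=1
    # Masquerade everything from the internal range out of the ISP-facing interface (assumed eth0 here)
    iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eth0 -j MASQUERADE
    Policy routing (ip rule with separate routing tables) can then steer particular sources or traffic types to particular ISP links without another layer of NAT.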

    Read the article
