Search Results

Search found 7217 results on 289 pages for 'jboss cache'.

Page 233/289

  • Zero downtime uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do:
    1.) Push a changeset to a site in IIS.
    2.) Don't interrupt the users.
    3.) Be able to roll back effortlessly.
    So, there are a few things that I know have to happen:
    1.) Out-of-proc session - handled
    2.) Out-of-proc cache - handled
    So the questions that remain:
    1.) How do I keep from interrupting the users? If I just upload the files to bin, the app pool recycles and takes 10+ seconds to come back online.
    2.) How do I roll back effortlessly?
    I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to private and get warmed up. After warmup, the sites are swapped. A rollback only entails swapping to private without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?
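
    A minimal Python sketch of the warm-up step in the two-site swap described above. The staging URL and page list are made-up placeholders, and the actual binding swap (appcmd, the IIS console, or a load balancer) is intentionally left out:
        import time
        import urllib.request

        STAGING = "http://staging.example.internal:8080"   # hypothetical private/staging binding
        PATHS = ["/", "/login", "/search?q=test"]          # placeholder pages worth pre-compiling

        def fetch(url):
            # Time one request; the first hit after a deploy pays the recycle/JIT cost.
            start = time.time()
            with urllib.request.urlopen(url) as resp:
                resp.read()
                status = resp.status
            return status, (time.time() - start) * 1000

        for path in PATHS:
            fetch(STAGING + path)                    # first hit triggers app-pool start/compilation
            status, ms = fetch(STAGING + path)       # second hit should be fast once warm
            print(f"{path}: HTTP {status}, warm response in {ms:.0f} ms")

        print("If all warm responses look healthy, swap the public/private bindings.")
    Rolling back is then just the same swap in reverse, since the previously public site is still warm.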

    Read the article

  • Server load high, CPU idle. NFS the cause?

    - by Mech Software
    I am running into a scenario where I'm seeing a high server load (sometimes upwards of 20 or 30) and a very low CPU usage (98% idle). I'm wondering if these wait states are coming as part of an NFS filesystem connection. Here is what I see in VMStat:
        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
        r b swpd free buff cache si so bi bo in cs us sy id wa st
        2 1 0 1298784 0 0 0 0 16 5 0 9 1 1 97 2 0
        0 1 0 1308016 0 0 0 0 0 0 0 3882 4 3 80 13 0
        0 1 0 1307960 0 0 0 0 120 0 0 2960 0 0 88 12 0
        0 1 0 1295868 0 0 0 0 4 0 0 4235 1 2 84 13 0
        6 0 0 1292740 0 0 0 0 0 0 0 5003 1 1 98 0 0
        4 0 0 1300860 0 0 0 0 0 120 0 11194 4 3 93 0 0
        4 1 0 1304576 0 0 0 0 240 0 0 11259 4 3 88 6 0
        3 1 0 1298952 0 0 0 0 0 0 0 9268 7 5 70 19 0
        3 1 0 1303740 0 0 0 0 88 8 0 8088 4 3 81 13 0
        5 0 0 1304052 0 0 0 0 0 0 0 6348 4 4 93 0 0
        0 0 0 1307952 0 0 0 0 0 0 0 7366 5 4 91 0 0
        0 0 0 1307744 0 0 0 0 0 0 0 3201 0 0 100 0 0
        4 0 0 1294644 0 0 0 0 0 0 0 5514 1 2 97 0 0
        3 0 0 1301272 0 0 0 0 0 0 0 11508 4 3 93 0 0
        3 0 0 1307788 0 0 0 0 0 0 0 11822 5 3 92 0 0
    From what I can tell, when the IO goes up the waits go up. Could NFS be the cause here or should I be worried about something else? This is a VPS box on a fiber channel SAN. I'd think the bottleneck wouldn't be the SAN. Comments?
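
    A rough sketch of one way to confirm the suspicion: load average counts processes in uninterruptible sleep (state 'D'), which is the classic signature of threads stuck waiting on NFS or other I/O while the CPU sits idle. This just scans /proc; nothing here is specific to the box in the question:
        import os

        def d_state_processes():
            # Return (pid, command) for every process currently in uninterruptible sleep.
            hung = []
            for pid in filter(str.isdigit, os.listdir("/proc")):
                try:
                    with open(f"/proc/{pid}/stat") as f:
                        fields = f.read().split()
                    comm, state = fields[1].strip("()"), fields[2]
                    if state == "D":
                        hung.append((int(pid), comm))
                except OSError:
                    continue  # process exited (or is unreadable) while scanning
            return hung

        if __name__ == "__main__":
            for pid, comm in d_state_processes():
                print(pid, comm)
    If the processes listed are the ones touching the NFS mount, the load numbers are explained by NFS wait, not CPU.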

    Read the article

  • django : Serving static files through nginx

    - by PlanetUnknown
    I'm using apache+mod_wsgi for Django, and all css/js/images are served through nginx. For some odd reason, when others/friends/colleagues try accessing the site, jquery/css is not getting loaded for them, hence the page looks jumbled up. My html files use code like this:
        <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
        <script type="text/javascript" src="http://1x.x.x.x:8000/js/custom.js"></script>
    My nginx configuration in sites-available is like this:
        server {
            listen 8000;
            server_name localhost;
            access_log /var/log/nginx/aa8000.access.log;
            error_log /var/log/nginx/aa8000.error.log;
            location / {
                index index.html index.htm;
            }
            location /static/ {
                autoindex on;
                root /opt/aa/webroot/;
            }
        }
    There is a directory /opt/aa/webroot/static/ which has corresponding css & js directories. The odd thing is that the pages show fine when I access them. I have cleared my cache etc., but the page loads fine for me, from various browsers. Also, I don't see any 404 or other errors in the nginx log files. Actually, the logs for nginx are not getting refreshed at all. I restarted the nginx server as root; is that incorrect? There is a user www-data defined in the nginx configuration file. Any pointers would be great.
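
    One thing worth checking, hedged since only the snippets above are known: the markup requests /css/... and /js/..., while the nginx server block only has a location for /static/, so from another machine those URLs may simply not match any location. A quick Python sketch to see what remote clients actually get back for the asset URLs (the addresses mirror the masked x.x.x.x:8000 form used in the question):
        import urllib.request
        import urllib.error

        ASSETS = [
            "http://x.x.x.x:8000/static/css/custom.css",
            "http://x.x.x.x:8000/static/js/custom.js",
        ]

        for url in ASSETS:
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    print(url, resp.status, resp.headers.get("Content-Type"))
            except urllib.error.HTTPError as err:
                print(url, "HTTP error", err.code)   # e.g. 404 if the location/root don't line up
            except urllib.error.URLError as err:
                print(url, "unreachable:", err.reason)
    Running this from an outside machine also tells you whether port 8000 is reachable at all, which would explain why it works locally but not for others.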

    Read the article

  • Online resizing of kvm guest root filesystem?

    - by Bittrance
    I have a Linux guest that uses an LVM volume directly as root file system (that is, there is no partition table). libvirt config looks thus:
        <os>
          <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
          <kernel>/boot/vmlinuz-X.Y.Z.el6.x86_64</kernel>
          <initrd>/boot/initramfs-X.Y.Z.el6.x86_64.img</initrd>
          <cmdline>console=ttyS0 root=/dev/vda</cmdline>
          <boot dev='hd'/>
        </os>
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none' io='native'/>
          <source dev='/dev/vg/guest'/>
          <target dev='vda' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </disk>
    From inside the guest:
        $ mount
        /dev/vda on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    Is it possible to resize the guest's root partition without rebooting the guest? Just doing lvextend on the host and resize2fs from the guest does not seem to be enough.
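
    A hedged sketch of the usual no-reboot grow sequence, assuming a libvirt recent enough to provide `virsh blockresize` (the step that tells the running guest the virtio disk changed size, which is what plain lvextend + resize2fs misses). Domain name and sizes below are placeholders, not taken from the question:
        import subprocess

        DOMAIN, HOST_LV, NEW_SIZE = "guest", "/dev/vg/guest", "60G"

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # 1) On the host: grow the backing logical volume to the new absolute size.
        run(["lvextend", "-L", NEW_SIZE, HOST_LV])
        # 2) On the host: notify qemu/the guest that vda is now bigger.
        run(["virsh", "blockresize", DOMAIN, HOST_LV, NEW_SIZE])
        # 3) Inside the guest: grow the ext4 filesystem online.
        #    run(["resize2fs", "/dev/vda"])   # execute this one in the guest, not on the host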

    Read the article

  • Some can reach bidmail.com, others can't

    - by user69426
    On a Windows 7 Professional machine, in Chrome, one of our estimating assistants can't get to www.bidmail.com; the other three can. On his machine I ran nslookup for bidmail.com and it fails to find it. I then went to a machine that could reach bidmail and did nslookup there. It can't find it either. I was skeptical and thought maybe it was a cached page, so I cleared the cache, went back to bidmail.com, and was able to get to the page, log in, look up a newly posted bid and download the file. Yet I cannot resolve www.bidmail.com through nslookup, I can't ping it, and I can't trace it. I remoted to our other warehouse, which is set up as a workgroup, and attempted to nslookup bidmail; that nslookup fails too... and that machine, which has never been to bidmail before, was able to connect to the website! I am totally confused: if I can't ping it and I can't use nslookup to get there, how in the hell is Chrome getting to the page, and how do I get this guy back on? Also, while typing this I took a new laptop out of the box, plugged it in with no updates, and it can get to bidmail! omg!
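
    A quick sketch to separate "the name won't resolve" from "the site is down": compare what the machine's own resolver returns with what explicit DNS servers return. The public resolver IPs are just examples, and this assumes nslookup is on the PATH (it is on Windows):
        import socket
        import subprocess

        NAME = "www.bidmail.com"

        try:
            print("system resolver:", socket.gethostbyname(NAME))
        except socket.gaierror as err:
            print("system resolver failed:", err)

        for server in ["8.8.8.8", "208.67.222.222"]:   # example public resolvers
            out = subprocess.run(["nslookup", NAME, server],
                                 capture_output=True, text=True)
            print(f"--- nslookup via {server} ---")
            print(out.stdout or out.stderr)
    If the explicit servers answer but the system resolver doesn't, the problem is the configured DNS (or its cache), which would also explain why a browser with its own DNS cache or a proxy still reaches the page.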

    Read the article

  • Users database empty after Samba3 to Samba4 migration on different servers

    - by ouzmoutous
    I have to migrate a Samba 3 server to a new Samba 4 server. My problem is that the database on the Samba 3 server seems a bit empty. The secrets.tdb file is only 20K, whereas the "pdbedit -L | wc -l" command gives me 16970 lines. On my Samba 3 server /var/lib/samba is 1.5M. After I migrated the database (following the instructions on http://dev.tranquil.it/index.php/SAMBA_-_Migration_Samba3_Samba4), the "pdbedit -L" command on the new server gives me only: SAMBA4$, Administrator, dns-samba4, krbtgt and nobody. So I tried to create a VM with a Samba 3. I added some users, did the same things I did for the migration, and now I can see the users created on the VM. It's like the users on the Samba 3 server are in a sort of cache. I already migrated the /etc/{passwd,shadow,group} files and I can see the users with the "getent passwd" command. Any ideas why my users are present when I use pdbedit but the database is so empty? The global part of my smb.conf on the Samba 3 server:
        [global]
        workgroup = INTERNET
        netbios name = PDC-SMB3
        server string = %h server
        interfaces = eth0
        obey pam restrictions = Yes
        passdb backend = smbpasswd
        passwd program = /usr/bin/passwd %u
        passwd chat = *new* %n\n *Re* %n\n *pa*
        username map = /etc/samba/smbusers
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%U
        max log size = 1000
        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
        add user script = /usr/sbin/useradd -s /bin/false -m '%u' -g users
        delete user script = /usr/sbin/userdel -r '%u'
        add group script = /usr/sbin/groupadd '%g'
        delete group script = /usr/sbin/groupdel '%g'
        add user to group script = /usr/sbin/usermod -G '%g' '%u'
        add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null '%u' -g machines
        logon script = logon.cmd
        logon home = \\$L\%U
        domain logons = Yes
        os level = 255
        preferred master = Yes
        local master = Yes
        domain master = Yes
        dns proxy = No
        ldap ssl = no
        panic action = /usr/share/samba/panic-action %d
        invalid users = root
        admin users = admin, root, administrateur
        log level = 2

    Read the article

  • setting up git on cygwin - openssl

    - by user23020
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked into the directory git-1.7.1.1. When I run make install from within that directory, I get:
        CC fast-import.o
        In file included from builtin.h:4,
                         from fast-import.c:147:
        git-compat-util.h:136:19: iconv.h: No such file or directory
        git-compat-util.h:140:25: openssl/ssl.h: No such file or directory
        git-compat-util.h:141:25: openssl/err.h: No such file or directory
        In file included from builtin.h:6,
                         from fast-import.c:147:
        cache.h:9:21: openssl/sha.h: No such file or directory
        In file included from fast-import.c:156:
        csum-file.h:10: error: parse error before "SHA_CTX"
        csum-file.h:10: warning: no semicolon at end of struct or union
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:17: error: parse error before '}' token
        fast-import.c: In function `store_object':
        fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:995: error: (Each undeclared identifier is reported only once
        fast-import.c:995: error: for each function it appears in.)
        fast-import.c:995: error: parse error before "c"
        fast-import.c:1000: warning: implicit declaration of function `SHA1_Init'
        fast-import.c:1000: error: `c' undeclared (first use in this function)
        fast-import.c:1001: warning: implicit declaration of function `SHA1_Update'
        fast-import.c:1003: warning: implicit declaration of function `SHA1_Final'
        fast-import.c: At top level:
        fast-import.c:1118: error: parse error before "SHA_CTX"
        fast-import.c: In function `truncate_pack':
        fast-import.c:1120: error: `to' undeclared (first use in this function)
        fast-import.c:1126: error: dereferencing pointer to incomplete type
        fast-import.c:1127: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: `ctx' undeclared (first use in this function)
        fast-import.c: In function `stream_blob':
        fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:1140: error: parse error before "c"
        fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function)
        fast-import.c:1154: error: dereferencing pointer to incomplete type
        fast-import.c:1160: error: `c' undeclared (first use in this function)
        make: *** [fast-import.o] Error 1
    I'm guessing that most of these errors are due to the iconv.h and openssl headers which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.

    Read the article

  • Why can't I get rid of default index.html even if I disable the default virtual host in Apache2?

    - by Emre Sevinç
    I have created a virtual host settings file and I disabled the default settings by using a2dissite default (this is a pretty standard Ubuntu 10.04 installation). But no matter what I try, my Apache2 server simply keeps on displaying the default index.html page instead of the index.php page that I set up in the virtual host file. Can someone help me figure out what I'm missing? Details follow.
    No default settings:
        ls -l /etc/apache2/sites-enabled/
        total 0
        lrwxrwxrwx 1 root root 51 May  5 13:32 webmin.1273066327.conf -> /etc/apache2/sites-available/webmin.1273066327.conf
        lrwxrwxrwx 1 root root 34 May 30 11:03 www.accontax.be -> ../sites-available/www.accontax.be
    Contents of the relevant virtual host:
        cat /etc/apache2/sites-enabled/www.accontax.be
        <VirtualHost *>
            ServerName www.accontax.be
            ServerAlias accontax.be
            DirectoryIndex index.php
            DocumentRoot /var/www/drupal/
            <Directory /var/www/drupal/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
    Contents of httpd.conf:
        cat /etc/apache2/httpd.conf
        Listen 80
        NameVirtualHost *
    I also have those relevant lines in my apache2.conf:
        # Include generic snippets of statements
        Include /etc/apache2/conf.d/
        # Include the virtual host configurations:
        Include /etc/apache2/sites-enabled/
    When I visit http://www.accontax.be I expect the apache2 server to go to the /var/www/drupal subdirectory and start serving index.php, but it simply keeps on serving index.html from the /var/www directory. I have reloaded the configuration, restarted the server, and deleted my browser cache. Nothing changed. Probably I'm missing a simple yet crucial step, but I just could not find it.
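
    A small sketch for narrowing this down: hit the server's IP directly and vary only the Host header, to see which virtual host Apache actually selects for each name (with name-based vhosts, an unmatched Host falls through to the first vhost loaded, e.g. the webmin one). SERVER_IP is a placeholder; the host names come from the question:
        import http.client

        SERVER_IP = "127.0.0.1"   # replace with the real server address

        for host in ["www.accontax.be", "accontax.be", "nonexistent.example"]:
            conn = http.client.HTTPConnection(SERVER_IP, 80, timeout=5)
            conn.request("GET", "/", headers={"Host": host})
            resp = conn.getresponse()
            body = resp.read(200)
            print(f"{host}: HTTP {resp.status}, first bytes: {body[:60]!r}")
            conn.close()
    If all three Hosts return the same default index.html, Apache is not matching the vhost at all; if only the last one does, the vhost is matching and the problem is the DocumentRoot/DirectoryIndex side.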

    Read the article

  • ajax.googleapis.com stopping my Firefox

    - by Oscar Reyes
    Today, for some strange reason, Firefox stopped working properly because it is trying to fetch something from ajax.googleapis.com. Is there something I can do to avoid this? Safari and Chrome work just fine. I tried uninstalling Firebug and clearing the cache. The only thing that worked was disabling JavaScript altogether. This seems to be the culprit link: http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js What can I do?
    EDIT: I think I have found where the problem is. My proxy is serving the file one byte at a time, so Firefox consumes it at that pace. What I don't understand is why Safari and Chrome take it right away. What I did last night was leave Firefox open all night to give it a chance to load the file; my hope was that it would get cached and the next time there would be no need to go for it. This morning the page loaded successfully, but it was not cached, because the next request failed the same way. Here's a video showing the problem:
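
    A sketch to measure how fast that CDN file actually arrives, chunk by chunk, which would confirm or rule out the "one byte at a time" proxy theory. Note that Python's urllib honours the http_proxy environment variable, which may or may not be the same proxy the browser is configured to use:
        import time
        import urllib.request

        URL = "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"

        start = time.time()
        received = 0
        with urllib.request.urlopen(URL, timeout=30) as resp:
            while True:
                chunk = resp.read(8192)
                if not chunk:
                    break
                received += len(chunk)
                print(f"{received:>7} bytes after {time.time() - start:5.1f}s")
        print(f"total {received} bytes in {time.time() - start:.1f}s")
    If the transfer crawls here too, the proxy is throttling the CDN for everything and the browsers only differ in how they cope (timeouts, parallel connections, cached copies).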

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as Samba master PDC with an LDAP backend. One share holds database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow but works. The client computers have an executable which opens, reads and writes the database files from the server share.
    The problem: when running Samba with the LDAP password backend, the client application runs VERY SLOW, with a maximum transfer rate of 2500 Kbit per second. If I disable LDAP, the client app speed increases 20x, with a transfer rate of 50 Mbit/sec, and it runs smoothly. I'm doing tests with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here.
    The suspects: LDAP and the smb.conf [global] section configuration.
    The question: what can I do? I've googled a lot, but still have no answer.
    Slow smb.conf WITH LDAP:
        [global]
        workgroup = zmartsoft.local
        passdb backend = ldapsam:ldap://127.0.0.1
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = Yes
        add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
        domain logons = Yes
        domain master = Yes
        local master = Yes
        netbios name = server
        os level = 65
        preferred master = Yes
        security = user
        wins support = Yes
        idmap backend = ldap:ldap://127.0.0.1
        ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
        ldap group suffix = ou=Groups
        ldap idmap suffix = ou=Idmap
        ldap machine suffix = ou=Machines
        ldap passwd sync = Yes
        ldap ssl = Off
        ldap suffix = dc=zmartsoft,dc=local
        ldap user suffix = ou=Users

    Read the article

  • Adding new SPNs to existing service ids

    - by jmh
    We have a Tomcat server using spring-security Kerberos to authenticate users to the webpage against Active Directory. There are around 25 domain controllers. The site has two CNAME-based DNS aliases. The site currently has one service ID with SPNs registered for the DNS A record as well as each of the CNAMEs. While everything is working right now, I don't know how to reliably change this configuration without possible downtime. The reason is that clients cache Kerberos tickets (http://www.juniper.net/techpubs/en_US/uac4.2/topics/concept/user-role-active-directory-about.html):
        The 'kerbtray.exe' program is helpful for viewing and deleting Kerberos tickets on the endpoint. Old tickets must be purged from the endpoint if SPNs are updated or passwords are changed (assuming the endpoint still has a cached copy of the ticket from a prior SPNEGO request to the MAG Series device). During testing, you should purge tickets before each authentication request.
    Description of the "klist" program used to inspect/delete cached tickets: http://technet.microsoft.com/en-us/library/hh134826.aspx
    So if each of the clients (users running Windows) who connect to my web server has Kerberos tickets that become invalid as soon as I update the SPNs or passwords, how do I ensure changes are seamless? Are there any operations that can be done safely? I can't just ask all of the users to install klist and delete their old tickets.

    Read the article

  • Adobe premiere CS5 problem with the display driver

    - by user30179
    This error is really hindering our project. It started showing up June 16th, 2010. There are no Windows updates on the same date as the error, other than Windows Defender. It seems to happen when working with image overlays.
    ERROR: "The NVIDIA OpenGL driver detected a problem with the display driver and is unable to continue. The application must close."
    We opened the side of the case in case there is an overheating problem. NVIDIA driver ver 8.16.11.9175 (nVidia Quadro FX 1700).
    I am running:
        Windows 7 x64
        Adobe Premiere CS5 Production
        nVidia Quadro FX 1700 (MRGA14L)
        4 GB RAM
        RAID 10, 2x 750GB drives
        Duo core 3.0, 6MB L2 Cache
    At least three other people have come across this error: NVidia Forum, EVGA Forum, NVidia Forum.
    UPDATE: Having the case open did not help. I also installed new NVIDIA drivers; now I get a different error:
    ERROR: "Your hardware configuration does not meet minimum specifications needed to run the application. The application must close."
    I ran Windows Update and installed all four updates, so now I am waiting to see if the error occurs again. Beyond this I am out of options.

    Read the article

  • HTTP responses curl and wget different results

    - by Fab
    To check the HTTP response header for a set of URLs, I send the following request headers with curl:
        foreach ( $urls as $url ) {
            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: "; // browsers keep this blank.
            curl_setopt( $ch, CURLOPT_URL, $url );
            curl_setopt( $ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)');
            curl_setopt( $ch, CURLOPT_HTTPHEADER, $header);
            curl_setopt( $ch, CURLOPT_REFERER, 'http://www.google.com');
            curl_setopt( $ch, CURLOPT_HEADER, true );
            curl_setopt( $ch, CURLOPT_NOBODY, true );
            curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
            curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true );
            curl_setopt( $ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY );
            curl_setopt( $ch, CURLOPT_TIMEOUT, 10 ); // timeout 10 seconds
        }
    Sometimes I receive 200 OK, which is good; other times 301, 302, 307, which I consider good as well; but other times I receive weird statuses such as 406, 500, 504, which should identify an invalid URL. But when I open them in the browser they are fine. For example, the script returns:
        http://www.awe.co.uk/ => HTTP/1.1 406 Not Acceptable
    and wget returns:
        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26--  http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK
    Does anyone know which request header I am missing or adding in excess?
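
    A sketch, independent of the PHP above, comparing the status the server returns for a browser-ish request versus the Googlebot User-Agent used in the curl snippet. Some servers (mod_security rules, bot filters) answer 406/500 only for bot-looking headers, which would explain why wget and a normal browser get 200 for the same URL; the header values here are illustrative, not definitive:
        import urllib.request
        import urllib.error

        URL = "http://www.awe.co.uk/"
        AGENTS = {
            "browser":   "Mozilla/5.0 (Windows NT 6.1) Firefox/2.0.0.6",
            "googlebot": "Googlebot/2.1 (+http://www.google.com/bot.html)",
        }

        for label, ua in AGENTS.items():
            req = urllib.request.Request(URL, headers={"User-Agent": ua})
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    print(label, resp.status)
            except urllib.error.HTTPError as err:
                print(label, err.code)
    If only the Googlebot variant gets the 406, the "missing header" is really the User-Agent (and possibly the HEAD-style CURLOPT_NOBODY request), not the Accept lines.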

    Read the article

  • Cannot find FIS partition 'initramfs'......... need help!!!

    - by vikramtheone
    Hi guys, I have Ubuntu 9.04 Linux running on Freescale's i.MX515 (ARM Cortex based) board. There were about 250 updates pending and I applied them today; some of the updates failed because of the infamous errors:
        E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
        E: Couldn't rebuild package cache
        E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
    So, when I do 'sudo dpkg --configure -a' I get new errors related to the FIS partition:
        Cannot find FIS partition 'initramfs'
        User postinst hook script [/usr/sbin/flash-kernel] exited with value 1
        dpkg: error processing linux-image-2.6.28-18-imx51 (--configure):
         subprocess post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of linux-image-imx51:
         linux-image-imx51 depends on linux-image-2.6.28-18-imx51; however:
          Package linux-image-2.6.28-18-imx51 is not configured yet.
        dpkg: error processing linux-image-imx51 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-imx51:
         linux-imx51 depends on linux-image-imx51 (= 2.6.28.18.23); however:
          Package linux-image-imx51 is not configured yet.
        dpkg: error processing linux-imx51 (--configure):
         dependency problems - leaving unconfigured
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-2.6.28-18-imx51
        Cannot find FIS partition 'initramfs'
        dpkg: subprocess post-installation script returned error exit status 1
    What's going wrong here? Need help! I'm a newbie.
    Regards, Vikram

    Read the article

  • apache performance timing out

    - by Mike
    I'm running a webserver where I'm hosting about 6-7 websites. Most of these websites get their content from MySQL, which is hosted on the same server. Traffic averages about 500-600 unique visitors per day, about 150K hits per week. But for some reason, sometimes websites time out, or sometimes websites don't load all images. I know that I should perhaps separate static content from dynamic content, but for now I think that's not a possibility. I would appreciate any suggestions on how I could improve the performance of Apache so it doesn't keep timing out.
    The server is running on a Sempron LE 1300 (2.3GHz, 512K cache), 2GB RAM, 10Mbps/1Mbps. Services: MySQL, ProFTPD, Apache.
        Private  +   Shared   =  RAM used    Program
        ----------------------------------------------------
        1.2 MiB  +  54.0 KiB  =   1.2 MiB    proftpd
        4.1 MiB  +  23.0 KiB  =   4.1 MiB    munin-node
        20.8 MiB + 120.5 KiB  =  20.9 MiB    mysqld
        47.3 MiB +   9.9 MiB  =  57.3 MiB    apache2 (22)
        top: Mem: 2075356k total, 1826196k used, 249160k free
    Relevant Apache settings:
        Timeout 35
        KeepAlive On
        MaxKeepAliveRequests 300
        KeepAliveTimeout 5
        <IfModule mpm_prefork_module>
            StartServers 10
            MinSpareServers 20
            MaxSpareServers 20
            MaxClients 60
            MaxRequestsPerChild 1000
        </IfModule>
        <IfModule mpm_worker_module>
            StartServers 2
            MaxClients 150
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadsPerChild 25
            MaxRequestsPerChild 0
        </IfModule>
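
    A back-of-the-envelope worked example for sizing MaxClients on a box like this: the question is simply how many Apache children fit in RAM once MySQL and everything else are accounted for. The numbers below are rough reads from the memory snippet above; swap in real values measured under load:
        total_ram_mb      = 2048          # 2 GB box
        other_services_mb = 256           # MySQL, ProFTPD, OS, etc. (rough allowance)
        apache_rss_mb     = 57.3 / 22     # ~2.6 MiB/child per the snippet above; mod_php
                                          # setups are often 20-50 MiB, so measure yours

        max_clients = int((total_ram_mb - other_services_mb) / apache_rss_mb)
        print(f"Roughly {max_clients} children fit in RAM "
              f"(at {apache_rss_mb:.1f} MiB per child)")

        # If the real per-child figure is ~30 MiB, that's (2048 - 256) / 30 ≈ 59 children,
        # which is right at the configured MaxClients of 60 - so when 60 children are busy,
        # new requests queue and appear to "time out" even though the CPU is fine.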

    Read the article

  • VMWare Workstation Linux Host performance tuning

    - by Hoghweed
    I need to improve my Linux-hosted VMware Workstation setup for running multiple virtual machines at the same time. I feel very stupid: I lost a great blog post link which I found last month (and I'm not able to find it again), so I'm asking here if anyone can help me. This is my host (laptop):
        16GB DDR3 RAM
        Hybrid HDD 750GB 7200 rpm (8GB SSD cache)
        Mint 15 x64, kernel 3.9.7
        swappiness set to 10
    Those are the important things about the host. My need is the ability to run 2 or 3 VMs at the same time. The lack of performance is about the disk. Following that blog post I lost, I had set up /tmp to be mounted as a memory partition, and in my previous installation that worked well; now I'm not able to find a good solution to tweak things. I think with 16GB of RAM there should be no problem running multiple VMs, but when they start to swap or use /tmp things go bad (guest cursor going too fast after a freeze, guest freezes and so on). Can anyone help me find a good host tweak and configuration to get better performance? Thanks in advance.

    Read the article

  • Is there a command like pstree for libraries?

    - by flashnode
    I need to determine whether a library named libunaSA.so is being called directly by the process or by another library called libtoki2.so. I guess what I'm looking for is a pstree for libraries. The system is running RHEL 5.3 Beta.
    This output shows the two libraries in the process map:
        # grep -e toki -e una /proc/2335/maps
        0043f000-004ad000 r-xp 00000000 08:02 543465 /usr/lib/libtoki2.so
        004ad000-004c5000 rwxp 0006d000 08:02 543465 /usr/lib/libtoki2.so
        01185000-01397000 r-xp 00000000 08:02 543503 /usr/lib/libunaSA.so
        01397000-013dc000 rwxp 00211000 08:02 543503 /usr/lib/libunaSA.so
    This output shows that only the libtoki2.so library is in the current cache:
        # ldconfig -p | grep -e una -e toki
        libtoki2.so (libc6) => /usr/lib/libtoki2.so
        libtoki.so.4.4.1 (libc6) => /usr/lib/libtoki.so.4.4.1
        libtoki.so.2 (libc6) => /usr/lib/libtoki.so.2
    I attached strace to the running process but it doesn't provide much output:
        # strace -p 2335
        Process 2335 attached - interrupt to quit
        futex(0xb7ef5bd8, FUTEX_WAIT, 2336, NULL
    Here's the output of ldd for each library:
        # ldd /usr/lib/libtoki2.so
        linux-gate.so.1 => (0x00a0a000)
        libdl.so.2 => /lib/libdl.so.2 (0x001bd000)
        libstdc++-libc6.2-2.so.3 => /usr/lib/libstdc++-libc6.2-2.so.3 (0x00f3f000)
        libm.so.6 => /lib/libm.so.6 (0x00b27000)
        libc.so.6 => /lib/libc.so.6 (0x0043d000)
        /lib/ld-linux.so.2 (0x00742000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00110000)
        # ldd /usr/lib/libunaSA.so
        linux-gate.so.1 => (0x00244000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x00baf000)
        libdl.so.2 => /lib/libdl.so.2 (0x007fa000)
        libstdc++-libc6.2-2.so.3 => /usr/lib/libstdc++-libc6.2-2.so.3 (0x009ce000)
        libm.so.6 => /lib/libm.so.6 (0x00c96000)
        libc.so.6 => /lib/libc.so.6 (0x004a2000)
        /lib/ld-linux.so.2 (0x00742000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00a9f000)
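
    The closest thing to a "pstree for libraries" is inspecting each library's DT_NEEDED entries: if libtoki2.so lists libunaSA.so as NEEDED, the process pulls it in indirectly via libtoki2 rather than linking it directly. A small sketch that shells out to readelf (Python 3 syntax; run it anywhere readelf is available, the paths come from the question):
        import re
        import subprocess

        def needed(lib_path):
            # Lines look like: 0x00000001 (NEEDED)  Shared library: [libm.so.6]
            out = subprocess.run(["readelf", "-d", lib_path],
                                 capture_output=True, text=True, check=True).stdout
            return re.findall(r"\(NEEDED\).*\[(.+?)\]", out)

        for lib in ["/usr/lib/libtoki2.so", "/usr/lib/libunaSA.so"]:
            print(lib, "->", needed(lib))
    Note that ldd won't show this relationship here because libunaSA.so is presumably loaded at runtime (dlopen) or by the main binary, which is exactly what the NEEDED list distinguishes.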

    Read the article

  • Need Corrected htaccess File

    - by Vince Kronlein
    I'm attempting to use a WordPress plugin called WP Fast Cache which creates static html files from all your posts, pages and categories. It creates the following directory structure inside wp-content:
        wp_fast_cache
            example.com
                pagename
                    index.html
                categoryname
                    postname
                        index.html
    Basically just a nested directory structure and a final index.html for each item. But the htaccess edits it makes are crazy:
        #start_wp_fast_cache - do not remove this comment
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html [L]
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>
        #end_wp_fast_cache
    No matter how I try to work this out, I get a 404 Not Found - and not the WordPress 404, just a janky Apache 404. I need to find the correct syntax to route all requests that don't exist (i.e. files or directories) to:
        wp-content/wp_fast_cache/hostname/request_uri/
    So for example:
        Page:     example.com/about-us/                    => wp-content/wp_fast_cache/example.com/about-us/index.html
        Post:     example.com/my-category/my-awesome-post/ => wp-content/wp_fast_cache/example.com/my-category/my-awesome-post/index.html
        Category: example.com/news/                        => wp-content/wp_fast_cache/example.com/news/index.html
    Any help is appreciated.
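
    A debugging sketch, mirroring the rewrite rules above only approximately: given a request URL, compute the cache file those rules would look for and check whether it actually exists on disk. A 404 usually means this computed path and the path the plugin really wrote don't line up (trailing slash, host name, or the x__query__x suffix). The example.com URLs and the /home/user/public_html layout are taken from the question; everything else is illustrative:
        import os
        from urllib.parse import urlsplit

        CACHE_ROOT = "/home/user/public_html/wp-content/wp_fast_cache"

        def expected_cache_file(url):
            parts = urlsplit(url)
            # First rule appends x__query__x<QUERY_STRING>index.html, second plain index.html.
            suffix = f"x__query__x{parts.query}index.html" if parts.query else "index.html"
            return f"{CACHE_ROOT}/{parts.netloc}{parts.path}{suffix}"

        for url in ["http://example.com/about-us/",
                    "http://example.com/news/",
                    "http://example.com/my-category/my-awesome-post/"]:
            target = expected_cache_file(url)
            print(target, "->", "exists" if os.path.exists(target) else "MISSING")
    If the files exist but Apache still 404s, the problem is more likely that the RewriteRule target is a filesystem path rather than a path relative to the document root.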

    Read the article

  • PHPMyAdmin: "General relation features: Disabled"

    - by Simón
    I've been looking around for something like this for a while, and I've found some tips on similar issues, but not exactly the same. I really don't know what to do. I downloaded and installed WAMP, and I have a MySQL and phpMyAdmin setup according to common indications that can be found everywhere (securing the MySQL root account, etc.). When I log into phpMyAdmin (either as root or as pma), I see the following message at the bottom of the page:
        The additional features for working with linked tables have been deactivated. To find out why click here.
    And when following the link, I get a page with the following:
        Server: localhost
        $cfg['Servers'][$i]['pmadb'] ... OK
        $cfg['Servers'][$i]['relation'] ... OK
        General relation features: Disabled
        $cfg['Servers'][$i]['table_info'] ... OK
        Display Features: Disabled
        $cfg['Servers'][$i]['table_coords'] ... OK
        $cfg['Servers'][$i]['pdf_pages'] ... OK
        Creation of PDFs: Disabled
        $cfg['Servers'][$i]['column_info'] ... OK
        Displaying Column Comments: Disabled
        Bookmarked SQL query: Disabled
        Browser transformation: Disabled
        $cfg['Servers'][$i]['history'] ... OK
        SQL history: Disabled
        $cfg['Servers'][$i]['designer_coords'] ... OK
        Designer: Disabled
    Can somebody please explain to me why, if all the settings are "OK", the features remain "Disabled"?
    Note: at first all the settings were "not OK"; I managed to add the settings to config.inc.php and then created the tables using scripts/create_tables.php. Of course I have already tried restarting the server and clearing the browser cache (several times, so I am sure the problem comes from elsewhere).

    Read the article

  • Intel NIC X540-T1 non-functional in Ubuntu Server 12.04

    - by Jeff Carr
    I have installed three Intel X540-T1s in servers running Ubuntu Server 12.04, but all are non-functional: no link lights, no packets sent or received, and no connection on IPv4 or IPv6 whether set up as DHCP or static. Also, dmesg doesn't detect cable connection or disconnection. I updated the default ixgbe driver to Intel's latest version (3.11.33) with no change. The ethernet controller is being reported as X540-AT2 (which might be a problem that I can't figure out how to fix), but the subsystem is X540-T1, so I believe that might be intended. Does anyone have any experience with this that could assist?
        ifconfig eth2
        eth2    Link encap:Ethernet  HWaddr a0:36:9f:14:5f:ea
                inet addr:192.168.101.1  Bcast:192.168.101.255  Mask:255.255.255.0
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        ethtool -i eth2
        driver: ixgbe
        version: 3.11.33
        firmware-version: 0x8000037c
        bus-info: 0000:08:00.0
        supports-statistics: yes
        supports-test: yes
        supports-eeprom-access: yes
        supports-register-dump: yes
        lspci -vvnns 08:00.0
        08:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10 Gigabit X540-AT2 [8086:1528] (rev 01)
                Subsystem: Intel Corporation Ethernet Converged Network Adapter X540-T1 [8086:0002]
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
                Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0, Cache Line Size: 32 bytes
                Interrupt: pin A routed to IRQ 16
                Region 0: Memory at e8000000 (64-bit, prefetchable) [size=2M]
                Region 4: Memory at e8200000 (64-bit, prefetchable) [size=16K]
                [virtual] Expansion ROM at e8280000 [disabled] [size=512K]
                Capabilities: <access denied>
                Kernel driver in use: ixgbe
                Kernel modules: ixgbe

    Read the article

  • FreeBSD ZFS RAID-Z2 performance issues

    - by Axel Gneiting
    I'm trying to build my own network attached storage based on FreeBSD+ZFS+standard components, but there are strange performance issues. The hardware specs are:
        AMD Athlon II X2 240e processor
        ASUS M4A78LT-M LE mainboard
        2GiB Kingston ECC DDR3 (two sticks)
        Intel Pro/1000 CT PCIe network adapter
        5x Western Digital Caviar Green 1.5TB
    I created a RAID-Z2 zpool from all disks and installed FreeBSD 8.1 on that zpool following the tutorial. The SATA controllers are running in AHCI mode. Output of zpool status:
          pool: zroot
         state: ONLINE
         scrub: none requested
        config:
            NAME                                            STATE   READ WRITE CKSUM
            zroot                                           ONLINE     0     0     0
              raidz2                                        ONLINE     0     0     0
                gptid/7ef815fc-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
                gptid/80344432-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
                gptid/81741ad9-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
                gptid/824af5cb-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
                gptid/82f98a65-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
    The problem is that write performance on the pool is very, very bad (<10 MB/s) and every application that is accessing the disk is unresponsive every few seconds when writing. It seems like writing is fine until the ZFS ARC cache is full, and then ZFS stalls the entire system I/O until it's finished writing that data. I'm also getting "kmem_map too small" kernel panics from kmem_malloc. I've already tried to put
        vm.kmem_size="1500M"
        vm.kmem_size_max="1500M"
    into /boot/loader.conf, but it doesn't help. Does anyone know what's going on here? Am I really not having enough memory for ZFS to handle this RAID-Z2?

    Read the article

  • BIND9 server types

    - by aGr
    I was configuring DNS on my server using BIND9. Everything seems to work, but I have a question regarding my config file. I've ended up with this configuration in /etc/bind/named.conf.local:
        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com";
            allow-transfer { 192.168.1.1; };
        };
        zone "1.168.192.in-addr.arpa" {
            type master;
            notify no;
            file "/etc/bind/db.192";
            allow-transfer { 192.168.1.1; };
        };
        forwarders {
            10.253.22.140;
            10.253.22.141;
        };
    I've read about the different types of DNS server, like primary master etc. The first two parts (the two zone statements) correspond to a primary (master) DNS server configuration: the first record for "classic" forward lookups, the second one for reverse lookups. The last part (forwarders) is the caching-server configuration and contains the ISP's DNS server IPs, so all names resolved through these servers will be cached. Simple question: am I right? Does my description make sense? Or can one server only be either a master or a caching server?
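
    A quick sketch to verify the dual role from the BIND host itself, assuming dig is installed: the server should answer authoritatively (aa flag set) for its own zone, and recursively via the forwarders (ra set, no aa) for outside names. The outside name is just an example:
        import subprocess

        def flags(name, server="127.0.0.1"):
            out = subprocess.run(["dig", f"@{server}", name, "+noall", "+comments"],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                if line.startswith(";; flags:"):
                    return line.strip()
            return "no flags line (query failed?)"

        print("local zone  :", flags("example.com"))
        print("outside name:", flags("www.debian.org"))
    Seeing "aa" only on the first query and "ra" on both is exactly the "authoritative for my zones, caching forwarder for everything else" setup described above, so one server can indeed play both roles.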

    Read the article

  • can't load IA 32-bit .dll on a AMD 64 bit platform

    - by user101425
    I have a Windows 2003 64-bit terminal server which we run a Java application from. The application has always worked, up until 2 days ago. No new updates have been installed on the server in that time frame. I have tried re-installing 64-bit Java but still get the following error:
        Unexpected exception: java.lang.reflect.InvocationTargetException
        java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at com.sun.javaws.Launcher.executeApplication(Unknown Source)
            at com.sun.javaws.Launcher.executeMainClass(Unknown Source)
            at com.sun.javaws.Launcher.doLaunchApp(Unknown Source)
            at com.sun.javaws.Launcher.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.lang.UnsatisfiedLinkError: C:\Documents and Settings\administrator\Application Data\Sun\Java\Deployment\cache\6.0\19\625835d3-5826d302-n\swt-win32-3116.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at org.eclipse.swt.internal.Library.loadLibrary(Library.java:100)
            at org.eclipse.swt.internal.win32.OS.<clinit>(OS.java:18)
            at org.eclipse.swt.graphics.Device.init(Device.java:563)
            at org.eclipse.swt.widgets.Display.init(Display.java:1784)
            at org.eclipse.swt.graphics.Device.<init>(Device.java:99)
            at org.eclipse.swt.widgets.Display.<init>(Display.java:363)
            at org.eclipse.swt.widgets.Display.<init>(Display.java:359)
            at com.ko.StartKO.main(StartKO.java:57)
            ... 9 more

    Read the article

  • Cannot Resolve Host Or Access Website Through Router

    - by Boris_yo
    This is weird. I am on Windows XP with an Edimax BR-6204Wg. I have 3 devices - 2 laptops and 1 smartphone. The 1st laptop and the smartphone are connected to the router through WiFi, and the 2nd laptop is connected to the router through LAN. Before the firmware upgrade I did not try to access the website, but after upgrading the firmware to the latest version (http://www.edimax.eu/en/support_detail.php?pd_id=11&pl1_id=3#02) I had problems resolving the host, pinging, tracerting and accessing the website. Sometimes ping and tracert work but I cannot access the website, and sometimes I can access the website but ping and tracert do not work. Weird? I downgraded to the previous version and no change. When I can no longer access the website through Internet Explorer, I can access it in Firefox. I tried deleting cookies and clearing the cache, and that seems to make no difference. Switching LAN ports did not make a difference. When I disconnect the router and connect the laptop through LAN to the internet modem, everything is normal. I tried resetting the router and resetting to factory default settings, and none of it helped. At the moment I can access the website on the laptop connected through LAN from both Firefox and Internet Explorer, but on my smartphone I can access the website only with Opera, not with the built-in browser or Skyfire.
    UPDATE: For a while I could only access it with Internet Explorer and not with other browsers on my PC. Minutes later I could access it with all browsers. But on the smartphone I could only access it with Opera and not with other browsers. I am confused. I also determined that sometimes I can access it and sometimes I can't. What is also weird is that when ping and tracert cannot resolve the host, I am still able to access the website.

    Read the article
