Search Results

Search found 17924 results on 717 pages for 'z order'.


  • Filesystems for webserver with SATA and Solid State disk

    - by Jorisslob
    We have just ordered a new webserver with a 120 GB solid state disk and a SATA disk. I am trying to plan ahead what sort of filesystem to use. This system will be running Linux, Apache/Tomcat to host Java services. The main service is a system where people can upload reasonably large files (on the order of 100 MB: images, image stacks and video), which people will be able to annotate and which will be sent to a database server when annotation is complete. Thus far, I plan to put most of the utility programs of the operating system on the SSD and put the large media files there. The SATA disks will hold the less volatile data like Apache, Tomcat and the servlets. For filesystems I have considered going for the stable EXT3 because I hear that it is best supported. The downside seems to be that it is not the ideal choice for large files. That is why I am leaning towards using XFS for the SSD and EXT3 for the SATA. My questions are: 1) Does this sound like a reasonable setup? 2) What filesystems would you recommend for the SSD and for the SATA? Thanks
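
    A minimal sketch of one way to lay this out, assuming the SSD shows up as /dev/sda and the SATA disk as /dev/sdb (device names and mount points are placeholders, not taken from the question):

        # XFS for the large media uploads on the SSD, ext3 for the OS/Apache/Tomcat on the SATA disk
        mkfs.xfs -L media /dev/sda2
        mkfs.ext3 -L system /dev/sdb1
        # Example /etc/fstab entries; noatime avoids a metadata write on every read, which is kind to an SSD
        # /dev/sda2  /srv/media  xfs   noatime,nodiratime  0 2
        # /dev/sdb1  /           ext3  defaults,noatime    0 1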

    Read the article

  • Centos CMake Does Not Install Using gcc 4.7.2

    - by Devin Dixon
    A similar problem has been reported here with no solution:https://www.centos.org/modules/newbb/print.php?form=1&topic_id=42696&forum=56&order=ASC&start=0 I've added and upgraded gcc to centos cd /etc/yum.repos.d wget http://people.centos.org/tru/devtools-1.1/devtools-1.1.repo yum --enablerepo=testing-1.1-devtools-6 install devtoolset-1.1-gcc devtoolset-1.1-gcc-c++ scl enable devtoolset-1.1 bash The result is this for my gcc [root@hhvm-build-centos cmake-2.8.11.1]# gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/opt/centos/devtoolset-1.1/root/usr --mandir=/opt/centos/devtoolset-1.1/root/usr/share/man --infodir=/opt/centos/devtoolset-1.1/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-ppl --with-cloog --with-mpc=/home/centos/rpm/BUILD/gcc-4.7.2-20121015/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC) And I tried to then install cmake through http://www.cmake.org/cmake/resources/software.html#latest But I keep running into this error: Linking CXX executable ../bin/ccmake /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: CMakeFiles/ccmake.dir/CursesDialog/cmCursesMainForm.cxx.o: undefined reference to symbol 'keypad' /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: note: 'keypad' is defined in DSO /lib64/libtinfo.so.5 so try adding it to the linker command line /lib64/libtinfo.so.5: could not read symbols: Invalid operation collect2: error: ld returned 1 exit status gmake[2]: *** [bin/ccmake] Error 1 gmake[1]: *** [Source/CMakeFiles/ccmake.dir/all] Error 2 gmake: *** [all] Error 2 The problem seems to come from the new gcc installed because it works with the default install. Is there a solution to this problem?
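
    One hedged guess, since the undefined symbol 'keypad' lives in libtinfo (part of ncurses): the devtoolset linker is not pulling that library in implicitly, so installing the ncurses headers and adding -ltinfo to the link explicitly often gets ccmake past this point. The package name and the fact that CMake's bootstrap honours LDFLAGS are assumptions to verify on your system:

        yum install ncurses-devel
        cd cmake-2.8.11.1
        LDFLAGS="-ltinfo" ./bootstrap && gmake
        # untested alternative: skip the curses UI entirely with  ./bootstrap -- -DBUILD_CursesDialog=OFF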

    Read the article

  • Unable to execute gs program: No such file or directory

    - by Imran
    I've set up CUPS + Avahi on my NAS box in order to enable AirPrint with my existing network printer. Printing a test page via CUPS and printing using lp works fine, and I am able to see my printer in the printer list on my iOS device. However, when sending a print job from my iOS device the printer status is set to paused and it doesn't print anything. When checking the error_log I found these lines, which I believe show the cause of the error: D [04/Sep/2012:03:20:25 +0100] [Job 11] Started filter gs (PID 7485) D [04/Sep/2012:03:20:25 +0100] [Job 11] Started filter pstops (PID 7486) D [04/Sep/2012:03:20:25 +0100] [Job 11] Set job-printer-state-message to "Unable to execute gs program: No such file or directory", current level=ERROR D [04/Sep/2012:03:20:25 +0100] [Job 11] PID 7485 (gs) stopped with status 1! D [04/Sep/2012:03:20:25 +0100] [Job 11] PID 7486 (pstops) stopped with status 1! D [04/Sep/2012:03:20:25 +0100] [Job 11] Backend returned status 1 (failed) D [04/Sep/2012:03:20:25 +0100] [Job 11] Printer stopped due to backend errors; please consult the error_log file for details. I have installed Ghostscript, so I'm not quite sure why it's saying it's unable to execute the program, unless there are configurations for GS that I haven't set yet. Any ideas?
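
    Since that filter failure usually just means the gs binary is not where the CUPS filters expect it, a quick check along these lines may narrow it down (the /opt path below is only an example of where a NAS package might have installed Ghostscript):

        which gs || echo "gs not on PATH"
        gs --version
        # If gs lives under a non-standard prefix, a symlink into /usr/bin is a crude workaround
        ln -s /opt/bin/gs /usr/bin/gs
        # Restart CUPS so the filters pick it up (init script name varies by distro)
        /etc/init.d/cups restart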

    Read the article

  • boot.ini Issue - Multi-boot System, Linux, XP and XP64 - Missing File in system32 Message

    - by nicorellius
    I have an interesting issue that has me stumped. Not that I'm a computer whiz or anything. I have a multi-boot system with two hard drives: one drive has CentOS and Windows XP 64-bit and the other drive has Windows XP 32-bit. CentOS grub boot loader works great, and I have it set to default to Windows. But this is the problem. My boot.ini file seems to be in order, yet it still gives an error if I choose the default OS (which, consequently, is XP32): Windows could not start because the following file is missing or corrupt: (Windows root) \system32\ntoskrnl.exe. Please re-install a copy of the above file. But if I choose the actual boot ID, i.e., toggle to the Windows XP Pro selection, it boots just fine. In the boot.ini file, the entry for XP 32 is the same: [boot loader] timeout=30 default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP Pro" /noexecute=optin /fastdetect /usepmtimer [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP Pro" /noexecute=optin /fastdetect /usepmtimer multi(0)disk(0)rdisk(1)partition(2)\WINDOWS="Windows XP Pro x64" /noexecute=optin /fastdetect /usepmtimer What am I missing?

    Read the article

  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window even at a resolution of 800x600 pixels. The connection speeds are 10MBit up at home and about 1MBit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time. Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate. The application to be displayed is a Qemu SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without (reduce complexity). The question I'm asking is: what are my options for reducing the data-rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps. Both machines are running Gentoo with the software in question being: workstation: X.Org X Server 1.10.4, OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e, QEMU emulator version 0.15.1 (qemu-kvm-0.15.1); laptop: X.Org X Server 1.12.2, OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j
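
    Since the window being viewed is QEMU's own SDL display, one option worth trying is to bypass X11 forwarding entirely and let QEMU export a VNC display, then tunnel that single port over the existing SSH link; VNC encodings degrade colour depth and update rate far more gracefully than raw X11. The port and display numbers below are just an example:

        # on the workstation: replace the SDL window with a local-only VNC display (:1 = TCP 5901)
        qemu-kvm -vnc 127.0.0.1:1 ...other-options...
        # on the laptop: tunnel the VNC port and point any viewer at localhost
        ssh -C -L 5901:127.0.0.1:5901 user@workstation
        vncviewer localhost:5901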

    Read the article

  • Access All VLANS over XenServer Interface

    - by Garrett
    For my current setup, I have a physical NIC on a XenServer machine that receives traffic tagged with various VLAN IDs. I have a virtual machine that is running Vyatta that needs to be able to access both tagged and untagged traffic in order to route traffic. Here's the problem: 1) If I bind the NIC in XenCenter to the VM (which has no VLAN ID associated with it), the VM cannot see any tagged traffic. I have verified this using tcpdump. However, the tagged traffic is flowing into the XenServer machine perfectly fine. 2) I have more than 7 VLANs, so adding each VLAN as an interface within XenCenter isn't an option. 3) Even though tcpdump shows no tagged traffic coming in the VMs NIC, I have tried adding VLAN interfaces within Vyatta. This also doesn't work. I have tried using both Linux bridge and openvswitch setups and neither seem to work. I am running XenServer 6.0.3 free and Vyatta VC6.3. Please help! I've run out of ideas. I've googled for hours and can't seem to find anything.

    Read the article

  • Arch Linux drops my connection on my school network

    - by Kravlin
    I'm running a Lenovo X61 which I carry around my college for getting on the internet at various points in the day. The network has always been finicky but recently it's gotten worse. I'll connect using iwconfig, get an IP from dhcpcd and log in using vpnc to their system. Sometimes I'll stay connected for hours but most of the time within 30 seconds my network traffic will drop to zero and I'll be unable to do anything. My computer still believes it's connected; however, to try again I need to put my wireless interface down, put it back up and try again. It's gotten so bad that I've got a window on my computer pinging Yahoo or Google constantly in order to know if I'm still able to get online. I know other people who have used Arch Linux that don't have the same problems, as well as people who use Ubuntu who haven't had any problems either. It seems like my computer is a special case. Does anyone have any suggestions on how to fix it? dmesg doesn't show anything out of the ordinary going on and I don't know where else to look for errors or other things to try.
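
    Not a fix for the underlying driver/AP issue, but as a stop-gap, a small watchdog like the sketch below (interface name, gateway IP and SSID are assumptions) can automate the down/up/dhcpcd dance you are already doing by hand:

        #!/bin/sh
        IF=wlan0; GW=192.168.1.1; SSID="campus"
        while true; do
            if ! ping -c 3 -W 2 "$GW" >/dev/null 2>&1; then
                ip link set "$IF" down; ip link set "$IF" up
                iwconfig "$IF" essid "$SSID"
                dhcpcd -k "$IF" 2>/dev/null; dhcpcd "$IF"
                # re-run vpnc here if the VPN session drops together with the link
            fi
            sleep 15
        done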

    Read the article

  • Problems migrating software RAID 5 to new server (linux)

    - by leleu
    I have a CentOS setup with sw RAID5 that holds my data. Well, the server died, so I bought another box to migrate my drives to. Only thing is, I cannot get the RAID array rebuilt (not even sure it needs rebuilding, might just need the /dev/md0 mapping created... but I don't even know how to determine what I need!) Some details: RAID5 software (used mdadm) 4x 250GB drives (2 are SATA, 2 are EIDE -- would this matter? It worked fine in the other box...) latest CentOS distro built using mdadm I've got a decent amount of experience with standard linux stuff, but the hardware level stuff runs me in circles. I've spent some time googling and elsewhere here on SF, so please be kind for my newbie questions :). My question is this: how can I diagnose the problem? For all I know, I'm using the wrong device blocks when I try to rebuild the array, but I can't find the command to display only the devices that have some physical attachment. Is there some simple way for me to run mdadm, having it scan over all my physical drives, and say "hey, drives 2,5,6,7 are a software array, want me to mount it?" I basically just took the drives from my old box and put it into my new one. They show up in the BIOS. What steps do I need to take in order to get the array up, running, and mounted? Thanks in advance!
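
    As a starting point for the diagnosis, mdadm can report which of the attached disks carry RAID superblocks and attempt a non-destructive assemble; something along these lines (the read-only mount point is an assumption) should tell you whether the array is intact before you consider any rebuild:

        mdadm --examine --scan                 # lists arrays found in the superblocks of attached disks
        mdadm --examine /dev/sd[a-d] /dev/hd[a-d] 2>/dev/null | grep -E 'dev|UUID|Raid Level|State'
        mdadm --assemble --scan                # try to assemble /dev/md0 from whatever it found
        cat /proc/mdstat
        mount -o ro /dev/md0 /mnt/raid         # mount read-only first, just in case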

    Read the article

  • CloneZilla PXE Boot Without NFS

    - by John
    I am trying to setup CloneZilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information that I have found online indicates that you need to setup NFS in order to PXE boot CloneZilla. I believe that I am pretty close in getting it to work, but am not sure where to go next. Listed below are the different PXE menu option configurations that I have used so far. LABEL Clonezilla Live MENU LABEL Clonezilla Live KERNEL utilities/clonezilla/vmlinuz APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" o$ I have also tried the following append lines, without success: APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs Each of them have resulted in a no go with the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.
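
    One thing that has worked for others (hedged: the URL layout below is an assumption) is to fetch the squashfs over HTTP instead of TFTP, since live-boot's fetch= handles HTTP and TFTP tends to choke on a file that size; that only requires a small web server on the PXE host, not NFS:

        LABEL Clonezilla Live
          MENU LABEL Clonezilla Live
          KERNEL utilities/clonezilla/vmlinuz
          APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset ocs_live_run="ocs-live-general" ocs_live_batch="no" fetch=http://10.130.155.23/clonezilla/filesystem.squashfs

    The filesystem.squashfs from the Clonezilla zip then has to be published at that URL by whatever HTTP server (or a minimal one) runs on 10.130.155.23.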

    Read the article

  • Reverse Proxies and AJAX

    - by osij2is
    A client of ours is using IBM/Tivoli WebSEAL, a reverse-proxy server for some of their internal users. Our web application (ASP.NET 2.0) is a fairly straightforward web/database application. Currently, our client users that are going through the WebSEAL proxy are having problems with a .NET 3rd party control. Users who are not going through the proxy have no issues. The 3rd party control is nothing more than an AJAX dynamic tree that on each click requests all the nodes for each leaf. Now our clients claim that once users click on a node in the control, the control itself freezes in such a way that they don't see anything populate. Users see the "Loading..." message appear but no new activity afterwards. They have to leave the page and go back to the original page in order to view the new nodes. I've never worked with a reverse proxy before, so I have googled quite a bit on the subject and even found an article on SF. IBM/Tivoli has mentioned this issue before but this is about all they mention at all. While the IBM doc is very helpful, all of our AJAX is from the 3rd party control. I've tried troubleshooting using Firebug, but not being behind the reverse proxy, I'm unable to truly replicate the problem. My question is: does anyone have experience with reverse proxies and issues with AJAX sites? How can I go about proving what the exact issue is? Currently we're negotiating remote access, so assume for the greater part that I will have access to a machine that's using the WebSEAL proxy. P.S. I realize this question might teeter on the StackOverFlow/ServerFault jurisdictional debate, but I'm trying to investigate from the systems perspective. I have no experience with reverse proxies (and I'm unclear on the benefits) and little with forwarding proxies.

    Read the article

  • SSL Certificate Request on Server 2003 DC CA: DNS Name not Available

    - by Beuy
    I am trying to submit a request for an SSL certificate on a Domain Controller in order to enable LDAP SSL, and having no end of problems. I am following the information provided at http://support.microsoft.com/default.aspx?scid=kb;en-us;321051 & http://adldap.sourceforge.net/wiki/doku.php?id=ldap_over_ssl Steps taken so far: Create Servername.inf with the following information ;----------------- request.inf ----------------- [Version] Signature="$Windows NT$ [NewRequest] Subject = "CN=servername.domain.loc" ; replace with the FQDN of the DC KeySpec = 1 KeyLength = 1024 ; Can be 1024, 2048, 4096, 8192, or 16384. ; Larger key sizes are more secure, but have ; a greater impact on performance. Exportable = TRUE MachineKeySet = TRUE SMIME = False PrivateKeyArchive = FALSE UserProtected = FALSE UseExistingKeySet = FALSE ProviderName = "Microsoft RSA SChannel Cryptographic Provider" ProviderType = 12 RequestType = PKCS10 KeyUsage = 0xa0 [EnhancedKeyUsageExtension] OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication ;----------------------------------------------- Create Certificate request by running: certreq -new Servername.inf Servername.req Attempt to submit Certificate request to CA by running: certreq -submit -attrib "CertificateTemplate: DomainController" request.req At which point I get the following error: The DNS name is unavailable and cannot be added to the Subject Alternate Name. 0x8009480f (-2146875377) Trouble shooting steps I have taken so far 1. Modify the Domain Controller Template to supply Subject Name in Request restart Certificate Service, include SAN in Request, same error. 2. Re-installed Certificate Services / IIS / Restarted machine countless times Any help resolving the issue would be greatly appreciated.
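
    For completeness, one documented workaround when the CA will not build the SAN itself is to let the CA accept SAN attributes in the request and pass the DC's FQDN explicitly at submit time. The commands below are a hedged sketch, and enabling EDITF_ATTRIBUTESUBJECTALTNAME2 has security implications worth reading up on before using it on a production CA:

        REM on the CA: accept SAN supplied as a request attribute, then restart Certificate Services
        certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
        net stop certsvc && net start certsvc

        REM resubmit, naming the SAN explicitly (\n separates the two attributes)
        certreq -submit -attrib "CertificateTemplate:DomainController\nSAN:dns=servername.domain.loc" Servername.req Servername.cer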

    Read the article

  • Reverse SSH Tunnel

    - by chris
    I am trying to forward web traffic from a remote server to my local machine in order to test out some API integration (Tropo, PayPal, etc). Basically, I'm trying to set up something similar to what tunnlr.com provides. I've initiated the ssh tunnel with the command $ ssh -nNT -R :7777:localhost:5000 user@server Then I can see that the server is now listening on port 7777 with user@server:$ netstat -ant | grep 7777 tcp 0 0 127.0.0.1:7777 0.0.0.0:* LISTEN tcp6 0 0 ::1:7777 :::* LISTEN user@server:$ curl localhost:7777 Hello from local machine So that works fine. The curl request is actually served from the local machine. Now, how do I enable server.com:8888 to be routed through that tunnel? I've tried using nginx like so: upstream tunnel { server 0.0.0.0:7777; } server { listen 8888; server_name server.com; location / { access_log /var/log/nginx/tunnel-access.log; error_log /var/log/nginx/tunnel-error.log; proxy_pass http://tunnel; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_redirect off; } } From the nginx error log I see: [error] 11389#0: *1 connect() failed (111: Connection refused) I've been looking at trying to use iptables, but haven't made any progress. iptables seems like a more elegant solution than running nginx just for tunneling. Any help is greatly appreciated. Thanks!
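
    An nginx-free variant that may be enough here: by default sshd binds -R forwards to the loopback only, which is why the listener shows up on 127.0.0.1/::1. If the server's sshd_config allows it, the forward can listen on the public interface directly (sketch below, assuming you can edit sshd_config and that 8888 is the port you want exposed):

        # /etc/ssh/sshd_config on the server, then restart sshd
        GatewayPorts clientspecified

        # from the local machine: bind the remote listener on all interfaces, port 8888
        ssh -nNT -R 0.0.0.0:8888:localhost:5000 user@server

    If you would rather keep nginx in front, pointing the upstream at 127.0.0.1:7777 instead of 0.0.0.0:7777 is also worth trying, since that is the address sshd is actually listening on.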

    Read the article

  • How to figure out how much RAM each prefork thread requires for maximum Wordpress performance on an EC2 small instance

    - by two7s_clash
    Just read Making WordPress Stable on EC2-Micro In the "Tuning Apache" section, I can't quite figure out how he comes up with his numbers for his prefork config. He explains how to get the numbers for an average process, which I get. But then: Or roughly 53MB per process...In this case, ten threads should be safe. This means that if we receive more than ten simultaneous requests, the other requests will be queued until a worker thread is available. In order to maximize performance, we will also configure the system to have this number of threads available all of the time. From 53MB per process, with 613MB of RAM, he somehow gets this config, which I don't get: <IfModule prefork.c> StartServers 10 MinSpareServers 10 MaxSpareServers 10 MaxClients 10 MaxRequestsPerChild 4000 </IfModule> How exactly does he get this from 53MB per process, with 613MB limit? Bonus question From the below, on a small instance (1.7 GB memory), what would good settings be? bitnami@ip-10-203-39-166:~$ ps xav |grep httpd 1411 ? Ss 0:00 2 0 114928 15436 0.8 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1415 ? S 0:06 10 0 125860 55900 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1426 ? S 0:08 19 0 127000 62996 3.5 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1446 ? S 0:05 48 0 131932 72792 4.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1513 ? S 0:05 7 0 125672 54840 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1516 ? S 0:02 2 0 125228 48680 2.7 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1517 ? S 0:06 2 0 127004 55796 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1518 ? S 0:03 1 0 127196 54208 3.0 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1531 ? R 0:04 0 0 127500 54236 3.0 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
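
    The arithmetic in the article is just (available RAM minus everything that is not Apache) divided by memory per child: roughly (613 MB - ~80 MB for the OS, MySQL, etc.) / 53 MB is about 10, hence ten workers kept permanently alive. Applying the same rule of thumb to the ps output above (children of ~48-73 MB, call it 60 MB, and reserving ~400 MB for the rest of the system; both numbers are assumptions) gives roughly (1700 - 400) / 60, i.e. about 21, so something like this hedged config for the small instance:

        <IfModule prefork.c>
            StartServers        10
            MinSpareServers     10
            MaxSpareServers     15
            MaxClients          20
            MaxRequestsPerChild 4000
        </IfModule>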

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model and so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are: Some standard packages and libraries such as compilers and databases need to be downloaded and installed. Some custom scientific models need to be downloaded and compiled from source as they are not commonly provided as packages. New users need to be created to manage the databases and run the models. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts. I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality except there are a couple usage cases that I am wondering about: During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source. We may have to deploy on machines that are isolated from the Internet- i.e. all configuration and set up files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus as it would allow the development team to setup test installations outside of VirtualBox.

    Read the article

  • Problem with MS DTC on SQL2008 win server 2k8 with linked server from sql2000 win server 2k

    - by user31648
    Hi, We have migrated our DB from SQL 2000 on Windows Server 2000 to SQL 2008 on Windows Server 2008. We have a linked server from SQL 2000 on Windows Server 2000. In our opinion the problem is with DTC, and we have applied a lot of settings that we found suggested as solutions, but the problem still exists. There is no error, warning or informational message either in the SQL log or in the Windows event viewer. The application hangs and in the end a timeout exception is shown. What we have done so far: enabled Network DTC Access with inbound and outbound and No Authentication Required on Win 2k8; opened RPC dynamic port allocation through the registry on 2k and 2k8; added the subkey TurnOffRpcSecurity in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC and enabled it on 2k and 2k8; added an exception for DTC in the firewall for all entities. What we have noticed is that when we restart the SQL service and make the first attempt at our transaction, the following is shown: "Attempting to initialize Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required." and after it: "Recovery of any in-doubt distributed transactions involving Microsoft Distributed Transaction Coordinator (MS DTC) has completed. This is an informational message only. No user action is required." Does anyone have any idea what else can be done in order to solve the problem? Thanks in advance. Regards, Snezana

    Read the article

  • How to boot Linux from a 16gb USB flash drive

    - by Chris Harris
    I'm trying to install Linux on a single partition of a USB flash drive that's larger than 4gb. The first place I went to is http://pendrivelinux.com. I can follow these instructions for installing Xubuntu 9.04 perfectly, which unfortunately break down when I try to scale it up beyond 4gb. There are several other tools to do this (unetbootin and usb-creator) which follow a very similar formula. I figured out that a big problem of mine was that all of these tools assume the USB drive is formatted in FAT32, which unfortunately cannot hold a single file larger than 4gb. This is unfortunate because I want to use just one partition, so that my persistance file, casper-rw, looks like one big partition to the OS once I've booted off of the USB drive. I then tried following a myriad of instructions involving formatting the drive as one large ext2 filesystem and using extlinux to create a single bootable ext2 file system. This doesn't work for me however, after about 20 attempts verifying and slightly tweaking the formula, I cannot seem to get a "good" bootable ext2 file system built. I'm not entirely sure what's going on, but it seems as though no matter how hard I try, I cannot get the ext2 file system to remain coherent after copying the Linux ISO contents over, copying the MBR, and executing extlinux to create the ext bootloader. Every time, after I follow these steps (in any order) and reboot, I get an unbootable USB drive. If I then mount the drive under Linux again, I see a mess of a file system (inodes have clearly been screwed up somewhere along the way). I suspected that the USB drive wasn't being fully flushed, so I tried using the "sync" and "unmount" commands before rebooting which didn't affect things at all. I guess I have several possible questions - but let's start with the obvious - is there something I'm missing to create a bootable ext2 USB flash drive that's large (e.g. 16gb)?
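
    For what it's worth, a condensed version of the ext2 + extlinux recipe that has worked elsewhere looks like the sketch below (replace /dev/sdX with the flash drive; the mbr.bin path varies between /usr/lib/syslinux and /usr/share/syslinux by distro). Two details that commonly bite: the partition needs its boot flag set, and older extlinux versions cannot read filesystems created with 256-byte inodes, hence the -I 128:

        mkfs.ext2 -I 128 -L liveusb /dev/sdX1
        parted /dev/sdX set 1 boot on
        mount /dev/sdX1 /mnt/usb
        rsync -a /path/to/extracted-iso/ /mnt/usb/
        mkdir -p /mnt/usb/syslinux                    # extlinux.conf / syslinux.cfg goes in here
        extlinux --install /mnt/usb/syslinux
        dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdX bs=440 count=1
        sync; umount /mnt/usb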

    Read the article

  • Apache not Forwarding Client x509 Certificate to Tomcat via mod_proxy

    - by hooknc
    Hi Everyone, I am having difficulties getting a client x509 certificate to be forwarded to Tomcat from Apache using mod_proxy. From observations and reading a few logs it does seem as though the client x509 certificate is being accepted by Apache. But, when Apache makes an SSL request to Tomcat (which has clientAuth="want"), it doesn't look like the client x509 certificate is passed during the ssl handshake. Is there a reasonable way to see what Apache is doing with the client x509 certificate during its handshake with Tomcat? Here is the environment I'm working with: Apache/2.2.3 Tomcat/6.0.29 Java/6.0_23 OpenSSL 0.9.8e Here is my Apache VirtualHost SSL config: <VirtualHost xxx.xxx.xxx.xxx:443> ServerName xxx ServerAlias xxx SSLEngine On SSLProxyEngine on ProxyRequests Off ProxyPreserveHost On ErrorLog logs/ssl_error_log TransferLog logs/ssl_access_log LogLevel debug SSLProtocol all -SSLv2 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW SSLCertificateFile /usr/local/certificates/xxx.crt SSLCertificateKeyFile /usr/local/certificates/xxx.key SSLCertificateChainFile /usr/local/certificates/xxx.crt SSLVerifyClient optional_no_ca SSLOptions +ExportCertData CustomLog logs/ssl_request_log \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" <Proxy *> AddDefaultCharset Off Order deny,allow Allow from all </Proxy> ProxyPass / https://xxx.xxx.xxx.xxx:8443/ ProxyPassReverse / https://xxx.xxx.xxx.xxx:8443/ </VirtualHost> Then here is my Tomcat SSL Connector: <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" address="xxx.xxx.xxx.xxx" maxThreads="150" scheme="https" secure="true" keystoreFile="/usr/local/certificates/xxx.jks" keypass="xxx_pwd" clientAuth="want" sslProtocol="TLSv1" proxyName="xxx.xxx.xxx.xxx" proxyPort="443" /> Could there possibly be issues with SSL Renegotiation? Could there be problems with the Truststore in our Tomcat instance? (We are using a non-standard Truststore that has partner organization CAs.) Is there better logging for what is happening internally with Apache for SSL? Like what is happening to the client cert or why it isn't forwarding the certificate when tomcats asks for one? Any reasonable assistance would be greatly appreciated. Thank you for your time.
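
    One thing worth knowing before digging further: with mod_proxy over plain HTTPS the client's TLS session terminates at Apache, so the backend handshake to Tomcat never contains the client certificate at all; it has to be handed over out-of-band. A common pattern (hedged: needs mod_headers on a recent enough 2.2, SSLOptions +ExportCertData as you already have, and something on the Tomcat side that reads the header) is:

        # inside the VirtualHost, after the SSL directives
        RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"

    The alternative is to proxy over AJP (mod_proxy_ajp / mod_jk) instead of https, since AJP forwards the client certificate attributes to Tomcat natively.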

    Read the article

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home, been doing just fine for ~5 months, but last week I suddenly started getting time-outs and slow downloads from the 'net. I first thought that this was my ISP acting up, then I thought it might be one of my 3 switches or some of my cabling. In due order I've tested everything above and found them all to be working as they should. The only factor remaining is my IPCop server. Facts: I've got a 15/15 Mbit line (fiber) and I get ~15 Mbit upload, but only 0.5 Mbit download with the IPCop box as router (ISP router set in bridge mode). If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download. The load on the IPCop box appears to be light and it used to handle this traffic just fine 2 weeks ago. The memory usage is ~60%, I tried to restart it and test again, the memory fell to ~50% then (5 months of uptime). I'm thinking that one of my nics are busted, but I'm sort of perplexed that this could be the outcome; slow download but full speed upload. Anybody ever seen that happening before? Could it just be one of the nics that needs to be replaced? Will try that as soon as I can get my hands on a couple of new ones.
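
    Asymmetric throughput like that (uploads fine, downloads crawling) is a classic signature of a duplex mismatch or a NIC starting to fail, so before swapping hardware it may be worth a quick look on the IPCop box itself (the interface name below is an assumption; use whichever NIC is your RED/WAN interface):

        ethtool eth1                                   # check negotiated Speed/Duplex on the WAN NIC
        ifconfig eth1 | grep -E 'errors|dropped|overruns|collisions'
        dmesg | grep -i eth                            # driver resets and link flaps show up here

    Rising error or collision counters, or a half-duplex link, would point at the NIC or cabling rather than IPCop's software.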

    Read the article

  • Mod_Proxy_AJP set up issues

    - by TripWired
    I'm trying to set up Tomcat behind Apache using mod_proxy_ajp. After tons of messing around with the configs I am stuck at a 403 page when trying to access tomcat. I had a 404 before but apparently something I changed along the way fixed that. I'm not sure which setting to change at this point. Could anyone look over the configs I have and see if anything is missing. httpd.conf <IfModule mod_proxy.c> ProxyRequests Off <Proxy *> Order deny,allow Deny from all Allow from localhost </Proxy> proxy_ajp.conf LoadModule proxy_ajp_module modules/mod_proxy_ajp.so # # When loaded, the mod_proxy_ajp module adds support for # proxying to an AJP/1.3 backend server (such as Tomcat). # To proxy to an AJP backend, use the "ajp://" URI scheme; # Tomcat is configured to listen on port 8009 for AJP requests # by default. # # # Uncomment the following lines to serve the ROOT webapp # under the /tomcat/ location, and the jsp-examples webapp # under the /examples/ location. # ProxyPass /tomcat ajp://127.0.0.1:8009/ ProxyPassReverse /tomcat ajp://127.0.0.1:8009/ ProxyPass /examples/ ajp://localhost:8009/jsp-examples/
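
    One likely culprit for the 403: in Apache 2.2 the <Proxy *> block also gates requests that go through ProxyPass, so "Allow from localhost" means only requests originating on the server itself get through. A hedged variant that opens it up (tighten to your LAN range as appropriate), plus a trailing slash on the ProxyPass source so /tomcat/foo maps cleanly onto the backend:

        <IfModule mod_proxy.c>
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
        </IfModule>

        ProxyPass        /tomcat/ ajp://127.0.0.1:8009/
        ProxyPassReverse /tomcat/ ajp://127.0.0.1:8009/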

    Read the article

  • Use icacls to make a directory read-only on Windows 7

    - by Dave G
    I'm attempting to test some filesystem exceptions in a Java based application. I need to find a way to create a directory that is located under %TMP% that is set to read-only. Essentially on UNIX/POSIX platforms, I can do a chmod -w and get this effect. Under Windows 7/NTFS this is of course a different story. I'm running into multiple issues on this. My user has "administrative" right (although this may not always be the case) and as such the directory is created with an ACL including: NT AUTHORITY\SYSTEM BUILTIN\Administrators <my current user> Is there a way using icacls to essentially get this directory into a state where it is read-only PERIOD, do my test, then restore the ACL for removal? EDIT With the information provided by @Ansgar Wiechers I was able to come up with a solution. I used the following: icacls dirname /deny %username%:(WD) In the page located here I found this in the remarks section: icacls preserves the canonical order of ACE entries as: * Explicit denials * Explicit grants * Inherited denials * Inherited grants By performing the above icalcs command, I was able to set the current user's ability to write or append files (WD) to the directory to deny. Then it was a question of returning it to a state post test: icacls dirname /reset /t /c Done

    Read the article

  • Ingress filtering in Linux traffic control: Redirect traffic to IFB device

    - by Dani Camps
    I have an openwrt router and I want to shape incoming traffic in order to classify all the traffic addressed to a certain IP address in my home network as low priority. For that purpose I want to redirect all traffic incoming to the eth1 interface, the one connected to the DSL modem, to an IFB device where I will do the shaping. These are the details of my system: Linux OpenWrt 2.6.32.27 #7 Fri Jul 15 02:43:34 CEST 2011 mips GNU/Linux Here is the script I am using where the last instruction is failing: # Variable definition ETH=eth1 IFB=ifb1 IP_LP="192.168.1.22/32" DL_RATE="900kbps" HP_RATE="890kbps" LP_RATE="10kbps" TC="tc" # Configuring the ifbX interface insmod ifb insmod sch_htb insmod sch_ingress ifconfig $IFB up # Adding the HTB scheduler to the ingress interface $TC qdisc add dev $IFB root handle 1: htb default 11 # Set the maximum bandwidth that each priority class can get, and the maximum borrowing they can do $TC class add dev $IFB parent 1:1 classid 1:10 htb rate $LP_RATE ceil $DL_RATE $TC class add dev $IFB parent 1:1 classid 1:11 htb rate $HP_RATE ceil $DL_RATE # Redirect all ingress traffic arriving at $ETH to $IFB $TC qdisc del dev $ETH ingress 2>/dev/null $TC qdisc add dev $ETH ingress $TC filter add dev $ETH parent ffff: protocol ip prio 1 u32 \ match u32 0 0 flowid 1:1 \ action mirred egress redirect dev $IFB The last instruction fails with: Action 4 device ifb1 ifindex 9 RTNETLINK answers: No such file or directory We have an error talking to the kernel Does anyone know what am I doing wrong ? Best Regards Daniel
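
    That RTNETLINK "No such file or directory" on the mirred action usually means the kernel piece behind the action or classifier is missing rather than the script being wrong; on OpenWrt those live in separate kmod packages (the package names below are the usual ones, verify against your build):

        insmod act_mirred 2>/dev/null || modprobe act_mirred
        insmod cls_u32    2>/dev/null || modprobe cls_u32
        lsmod | grep -E 'ifb|mirred|u32'
        # if the modules are not on the box at all: opkg install kmod-sched kmod-ifb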

    Read the article

  • Browsing Audiobooks on an iPod Nano

    - by Electrons_Ahoy
    The situation: I've got a stack of audiobooks in MP3 format in my iTunes library, and both an iPhone and an iPod Nano. After this question, I've changed the Media Kind for the audiobook MP3s from Music to Audiobook. This has been, overall, spectacular, as now I can resume where I was, they show up under Audiobooks, etc. On the iPhone, it's also super convenient, since the interface shows what used to be an album with multiple songs as a single book with multiple chapters, and going into "Audiobooks" presents me with a list of books, not tracks. The Nano, on the other hand, is a little strange. After changing the media type and re-syncing the iPod, the files in question are now listed under Audiobooks rather than Music, and the extra Audiobook features are present (resume playback and so on), but the Audiobooks menu just lists all the MP3 tracks on the iPod in alphabetical order, ignoring whichever book/album they belong to - and doesn't seem to let me browse them any other way. This is, clearly, a little sub-optimal. Did I screw something up? How do I get the Nano to treat the files in a similar way to the iPhone and iTunes - as books with chapters? Is there a step I missed somewhere? Do I need to reformat the iPod? Is this even possible? (Footnote: shameless bump, since this just scored me a tumbleweed.)

    Read the article

  • Performance variation

    - by Ree
    During my time spent working with multiple machines, I have noticed that performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of them you may have noticed yourself) that sometimes are completed in different time frames: POST; OS installation; hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats; software installation or other tasks (such as benchmarks) within a general purpose OS (Windows, Linux, etc). I can imagine this is caused by the fact that a machine is built with many components having to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS complexity and concurrently running multiple processes have some additional effect as well. However, I'm wondering if this hardware imperfection and overhead is indeed large enough to be humanly noticeable? Maybe there are other factors that are just as influential, or even more so? So, in short - why? To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine to machine performance.

    Read the article

  • Passenger error: No such file or directory - config/environment.rb

    - by JJD
    I installed Redmine on MacOSX Server 10.6.8 according to this installation description. So far everything works fine: When I start webrick the server serves the Redmine pages. The gems and redmine are installed under the user "redmine". After that I aimed configuring apache2 with passenger as described here. As suggested by the description I also installed the passenger-pane which stores its virtual host configuration files in /private/etc/apache2/passenger_pane_vhosts. This is what I came up with after a lot of manual try and error. At least, now I can reach a passenger error page. // redmine.vhost.conf <VirtualHost *:80> ServerName host ServerAlias localhost DocumentRoot "/Users/redmine/Sites/redmine" # RackEnv production # RackBaseURI / RailsEnv production RailsBaseURI / # PassengerUser www-data # PassengerGroup www-data <Directory "/Users/redmine/Sites/redmine"> Order allow,deny Allow from all </Directory> </VirtualHost> However, the passenger module still runs into the following errors. Error message: No such file or directory - config/environment.rb The /var/log/apache2/error_log of the web server stated the following. [warn] NameVirtualHost *:80 has no VirtualHosts [notice] Apache/2.2.21 (Unix) Phusion_Passenger/3.0.12 configured -- resuming normal operations [ pid=21824 thr=2151905620 file=utils.rb:176 time=2012-06-01 18:22:07.126 ]: *** Exception Errno::ENOENT in PhusionPassenger::ClassicRails::ApplicationSpawner (No such file or directory - config/environment.rb) (process 21824, thread #<Thread:0x0000010086f2a8>): I experimented with the user switch functionality of passenger as described in the documentation - as you can tell from my configuration file. Though, I was not successful.
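
    A hedged guess at the immediate error: Passenger derives the application root from DocumentRoot and expects DocumentRoot to be the Rails app's public/ directory, so with DocumentRoot set to the app root itself it goes looking for config/environment.rb one level too high. Something like this (paths taken from the question, the rest is a sketch):

        <VirtualHost *:80>
            ServerName host
            ServerAlias localhost
            DocumentRoot "/Users/redmine/Sites/redmine/public"
            RailsEnv production
            <Directory "/Users/redmine/Sites/redmine/public">
                Order allow,deny
                Allow from all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    If the gems live under the redmine user, uncommenting PassengerUser/PassengerGroup (set to that user) may also be needed so the spawner can read them.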

    Read the article
