Search Results

Search found 21053 results on 843 pages for 'out of process'.


  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:

    1. Host the .crx on a server and click "Specify a Custom App" in the management console.
    2. Create a Domain App by uploading a zip to the Chrome Web Store.
    3. Upload the extension from my developer account to the Chrome Web Store and publish to a single "trusted tester," or make it unlisted.

    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the .crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them!

    Option (2), having the school create a Domain App, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension would also require a similar process, so this doesn't seem ideal.

    I doubt that option (3) would work. If I published to the admin as a "trusted tester," I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity.

    Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a custom extension pushed out via the management console? Thanks so much!
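
    One possibility worth checking on option (1), offered only as a hedged guess: Chrome's force-install mechanism generally expects the URL given to the console to point at an autoupdate manifest rather than directly at the .crx itself. A sketch of such a manifest, assuming the standard Chrome update XML format (the app ID and URL below are placeholders):

        <?xml version='1.0' encoding='UTF-8'?>
        <gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
          <!-- appid is the 32-character extension ID; codebase points at the hosted .crx -->
          <app appid='aaaabbbbccccddddeeeeffffgggghhhh'>
            <updatecheck codebase='https://school-a.example.com/extension.crx'
                         version='1.0' />
          </app>
        </gupdate>

    If the console is currently being fed the .crx URL directly, serving a manifest like this and pointing the console at the XML instead may be the missing piece. It also gives a clean update path: bump the version, replace the .crx, and the new build pushes out automatically.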

    Read the article

  • Server 2008 print redirection is failing, but only on 16-bit apps

    - by ian
    I'm the main programmer for SoEasyAccounting, and we are installing to Server 2008 Standard Service Pack 1. We install to 2003 with no problems. It is important to understand that the print failure only happens in certain circumstances. Note: we use a standard Windows printer selection box to choose the printer.

    Terms used:

    - Superbase = a program language that uses ntvdm.exe (the Windows process hosting 16-bit apps)
    - Local printer = printing to a driver loaded onto the Server 2008 machine
    - Redirected printer = printing to an automatically established remote printer through an RDP connection

    Printing scenarios:

    1. Server 2008: print from Notepad to a redirected printer = works
    2. Server 2008: print from Superbase to a local printer = works
    3. Server 2008: print from Superbase to a redirected printer = fails
    4. Server 2003: print from Superbase to a redirected printer = works

    Results: the print job shows up in the driver's print queue as "Local Downlevel Document", but nothing prints, and Superbase reports that the "Print command failed". Event Viewer shows no issues related to the print failure.

    Things I have tried:

    i. Switching Easy Print on/off
    ii. Loading a copy of the redirected driver on the server

    Any help greatly appreciated. So far two days spent trying to resolve this, and there goes my weekend :( unless someone has an idea :)

    Read the article

  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now:

    - I keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones to focus on.
    - In iCal I write down events for each day, or for a concrete time slot (13:00 to 14:00). Each day I set up the tasks I want to accomplish and allocate hours to them.
    - I use Things (Cultured Code) to keep info about tasks and projects that are not very important and not yet time-allocated (GTD name = someday).
    - I use Mail on Mac and create folders, named after the different projects, for the mails I want to process.
    - I save the main info for each project in FreeMind maps.

    My system works well at the moment, but it is pretty complicated to use. I want to make it better, and I am looking for something with these requirements:

    - Must be 100% offline accessible.
    - Should use as few programs/resources as possible; ideally one program should be able to manage all my info.
    - I can use the GTD methodology mixed with priorities, and I can allocate each task, converted to an event, on my calendar.
    - I can have different daily/weekly etc. views on a calendar to see the "big picture".
    - Must run on Mac OS X Leopard.
    - Price does not matter; I will pay for this.

    So, according to your experience, can you recommend me something like this? Thanks

    Read the article

  • Windows ACL inheritance issues for FTP server and automated tools

    - by Martin Sall
    I have set up Cerberus FTP Server. By default, the Cerberus FTP service runs under the SYSTEM account. I also have some console applications which run as scheduled tasks; they run under a dedicated "Utilities" user account which has "Log on as batch job" permissions. These console applications take uploaded FTP files, process them, and then move them to a dedicated archive folder.

    The problem is that my console apps throw security exceptions when trying to access the uploaded files. I gave the "Utilities" account Full Control on the ftproot folder and checked the "Replace all child object permissions with inheritable permissions from this object" checkbox, but that affects only the current files. When new files are uploaded, they are again not accessible by the "Utilities" account.

    I tried another way and put the Cerberus FTP service under the "Utilities" account. Then I also needed to give the "Utilities" account permissions on the Cerberus data folder in ProgramData. Still no luck: after this change, Cerberus's internal SOAP web service stopped working (although everything else seems to work). I need that SOAP service to be available, so running Cerberus FTP under the "Utilities" account is not an option, unless I can find out what else I need to set up for that account to stop Cerberus from complaining.

    My guess is that Cerberus uploads files to some temporary folder, so those files get their permissions from that folder and keep the same permissions even after being moved to the ftproot. What would be the right solution here, granting the Cerberus FTP server and the "Utilities" account the minimal permissions needed to access the contents of the ftproot folder?
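
    The temporary-folder guess fits NTFS behavior: a move within the same volume keeps the file's original ACL instead of inheriting from the destination, which would explain exactly why the inheritable grant on ftproot never takes effect on uploaded files. A hedged sketch of the minimal fix, assuming you can locate the staging folder Cerberus uses (paths and the account name below are placeholders):

        :: grant Utilities Modify, inherited by new files (OI) and subfolders (CI),
        :: on both the FTP root and the staging folder Cerberus uploads into
        icacls "C:\ftproot" /grant Utilities:(OI)(CI)M /T
        icacls "C:\CerberusStaging" /grant Utilities:(OI)(CI)M /T

    With the staging folder covered, files created there carry an explicit ACE for "Utilities" and keep it when moved into ftproot, so the console apps can read them without running the FTP service itself under that account.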

    Read the article

  • Solaris MySQL Failure and Unable to Restart

    - by Iscariot
    Environment: Solaris 10. This MySQL server has been up and running for six months now. Today, all of a sudden, it crashed. Typing 'mysql' as a regular user gives the error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'

    Typing mysql as root says mysql: not found. The service tries to start mysqld; it stays up for 9-10 seconds and then restarts the process. Below is the application log, application-database-mysql_mysql-csk.log:

        [ May 30 22:37:52 Enabled. ]
        [ May 30 22:37:58 Rereading configuration. ]
        [ May 30 22:37:59 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:37:59 Method "start" exited with status 0 ]
        [ May 30 22:38:13 Stopping because all processes in service exited. ]
        [ May 30 22:38:13 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:13 Method "stop" exited with status 0 ]
        [ May 30 22:38:13 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:38:13 Method "start" exited with status 0 ]
        [ May 30 22:38:25 Stopping because all processes in service exited. ]
        [ May 30 22:38:25 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:25 Method "stop" exited with status 0 ]

    I am hoping someone might have run into this before and might know how to fix it.
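
    The SMF log only records the service exiting cleanly, so the actual crash reason should be in mysqld's own error log, not here. A hedged first step, assuming the mysqld_safe default of writing a <hostname>.err file in the datadir shown in the start method:

        # ask SMF which services are in trouble and why
        svcs -xv
        # look at the daemon's own error log in the datadir
        tail -50 /dbpool1/data/*.err

    Typical culprits for a 10-second crash loop after months of uptime are a full /dbpool1 filesystem or InnoDB failing recovery; the .err file will usually say which. (The mysql: not found as root is probably just PATH: /opt/coolstack/mysql/bin is likely not in root's PATH.)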

    Read the article

  • Desktop PC does not power up on power button

    - by hIpPy
    When I press the power button on my desktop, it does not power up completely. Before I press the power button, I see lights on the motherboard; everything is normal. On pressing the power button, the fans on the CPU, graphics card and motherboard spin a little for a second or two and then stop. There are no beeps during this process. It has been doing this for a while now, but it used to start up after some tries. Once it starts up I have no issues at all, such as random shutdowns, so it is not an issue with the OS.

    Update: I left the desktop off for a few days and it started. I'm just guessing here, but it seems as if the PSU (Antec TP2-550ATX) is dying and does not have enough power now. It's an old desktop, assembled in 2005, but I have maintained it well.

    Update: I always keep the desktop running and never shut it down. During updates or manual restarts, it powers up without issues. I wonder if this sheds light on the issue.

    Any idea how I can narrow down the issue, e.g. how to find out whether the PSU is dying? I'd really like to fix this. Please help. Thanks. Below is the complete configuration:

        DFI LAN-Party UT NF4 Ultra-D 6/23 {6.70}, Evercool EC-VC-RE 41/47C
        AMD Opteron 170 2.0GHz {1.3.2.16} 1.312V 36/41C, ThermalRight SI-120, Panaflo 120×38mm
        OCZ Platinum 2×1GB 200MHz 2.66V 3-3-2-7 1T
        XFX 7800GTX 256MB 475/1250MHz {91.31}, Zalman VF900 Cu led 41/56C
        WD Caviar 320GB 7200RPM 16MB SATA 3Gb/s
        Antec TP2-550ATX
        Antec P180
        WinXP sp3
        Logitech MX310, Razer Mantis Speed
        BenQ FP91G+ 19" LCD 8ms DVI
        Creative Audigy2 ZS {4.42}
        BenQ DW1640
        Logitech z-5300e 5.1 280W

        Legend: driver versions: {}  user settings: []  voltage: V  wattage: W  temperature: C (Celsius) min/max

    Read the article

  • Can't install MySQL 5.1 on a Windows machine because the last install left artifacts

    - by Zombies
    After uninstalling MySQL 5.1 (the 64-bit version) I cannot install the Win32 version! Apparently the devs felt it necessary to leave helpful artifacts behind? I have rebooted my machine, but no effect. Running this:

        C:\Users\User1>net start mysql
        The MySQL service is starting.
        The MySQL service could not be started.
        A system error has occurred.
        System error 1067 has occurred.
        The process terminated unexpectedly.

    And ran this:

        C:\Program Files (x86)\MySQL\MySQL Server 5.1\bin>mysqld --console
        100213 10:52:58 [Note] Plugin 'FEDERATED' is disabled.
        InnoDB: Error: log file .\ib_logfile0 is of different size 0 10485760 bytes
        InnoDB: than specified in the .cnf file 0 25165824 bytes!
        100213 10:52:59 [ERROR] Plugin 'InnoDB' init function returned error.
        100213 10:52:59 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100213 10:52:59 [ERROR] Unknown/unsupported table type: INNODB
        100213 10:52:59 [ERROR] Aborting
        100213 10:52:59 [Note] mysqld: Shutdown complete

    Update: for some reason it looks like it is installing the 32-bit DB into the old 64-bit directory... will look into this (the bin directory is going into the 32-bit Program Files directory).
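
    The InnoDB lines are the actual blocker: the fresh install found an ib_logfile0 of 10485760 bytes (10 MB) left over in the old data directory, while the new my.ini asks for 25165824 bytes (24 MB). A hedged fix, assuming the default data directory location for a 5.1 install on Vista/7 (adjust the path to wherever your data directory actually lives): set the stale InnoDB log files aside and let the server recreate them at the configured size.

        :: with the service stopped, rename the leftover InnoDB logs
        :: (keep the .bak files until the server is confirmed healthy)
        net stop mysql
        cd /d "C:\ProgramData\MySQL\MySQL Server 5.1\data"
        ren ib_logfile0 ib_logfile0.bak
        ren ib_logfile1 ib_logfile1.bak
        net start mysql

    Alternatively, setting innodb_log_file_size=10M in the new my.ini, to match the old files, should get the server past the same check.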

    Read the article

  • Connect to Nonencrypted Wireless Network Using Ubuntu Commands

    - by Tim
    I failed to connect to an open (i.e. non-encrypted) wireless network using Ubuntu command lines. Here is what I did:

        $ sudo /etc/init.d/NetworkManager stop
         * Stopping network connection manager NetworkManager        [ OK ]
        $ sudo /sbin/ifconfig wlan0 up
        $ sudo iwconfig wlan0 essid "Cavalier High-Speed 866-4-CAVTEL"
        $ sudo dhclient wlan0
        There is already a pid file /var/run/dhclient.pid with pid 10812
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        wmaster0: unknown hardware address type 801
        wmaster0: unknown hardware address type 801
        Listening on LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   Socket/fallback
        DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
        DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 8
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 21
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 6
        No DHCPOFFERS received.
        Trying recorded lease 192.168.1.67
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
        Trying recorded lease 192.168.1.45
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
        No working leases in persistent database - sleeping.
        $ sudo /sbin/iwconfig wlan0
        wlan0     IEEE 802.11bg  Mode:Managed  Frequency:2.422 GHz  Access Point: Not-Associated
                  Tx-Power=27 dBm
                  Retry min limit:7   RTS thr:off   Fragment thr=2352 B
                  Encryption key:off
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    I was wondering what the problem is and how I can do it right? Thanks and regards!
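
    A hedged reading of the output above: the final iwconfig line shows Access Point: Not-Associated, meaning the card never actually joined the network, so the DHCP broadcasts had no AP to go through and "No DHCPOFFERS received" follows from that. A sketch of an association-first sequence to try:

        # confirm the card can see the network, and copy the ESSID exactly as broadcast
        sudo iwlist wlan0 scan | grep -i essid
        # request association, then check that iwconfig now shows the AP's MAC address
        sudo iwconfig wlan0 essid "Cavalier High-Speed 866-4-CAVTEL"
        sudo iwconfig wlan0
        # only once associated is a DHCP request worth sending
        sudo dhclient wlan0

    If iwconfig still reports Not-Associated even though the scan shows the ESSID, the usual suspects are a driver that needs an explicit "iwconfig wlan0 ap <MAC>", MAC filtering on the router, or an ESSID that differs subtly from the one typed.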

    Read the article

  • Daisychain external USB drive to WD My Book World Edition (Blue rings)

    - by d03boy
    I recently bought a new My Book Essential 1TB (WDBACW0010HBK-NESN) to daisychain to my older My Book World Edition 500GB (blue rings), which runs one of the version 01.xx.xx firmwares. When I first connected the USB drive to the MBWE, it showed up in the System Summary section of the administration page without any issue, and I was able to set up a new share on the new drive. However, the administration website was very, very slow, becoming nearly unresponsive during the setup process. Once the share was set up I could access it, but again very slowly, now through Windows Explorer.

    I looked around the internet, and it seems this is caused by the USB drive being formatted with NTFS. I reformatted it (again as NTFS) just to double-check, and the same problem occurred. I then tried FAT32, but realized it would only support files of approximately 2GB, which is not acceptable for me.

    I decided to try a firmware upgrade on the MBWE, to version 02.00.19. The upgrade completed successfully, but now the MBWE does not display the USB drive in the System Summary as it did with the earlier firmware. The USB drive works perfectly fine when connected directly to my computer. Is there a way to solve this issue?

    Read the article

  • Linux: can't downgrade ATI graphics driver

    - by java.is.for.desktop
    I had ATI's old 10.9 driver, and it worked fine. Sadly, I decided to upgrade to the new 10.12. After that, HDMI stopped working. So I want to downgrade back to 10.9. I did everything by the book, but it fails.

    As root, I do:

        cd /usr/share/ati
        sh ./fglrx-uninstall.sh

    The uninstall process tells me that everything was uninstalled fine. After a reboot, I have the default open-source ATI driver that comes with X11 (or the kernel?), and the ATI control panel shortcut points to nowhere. So the driver seems uninstalled. Now I install the older (10.9) proprietary ATI driver. It also finishes successfully. But after a reboot, the ATI control panel tells me that I still have 10.12, and it seems to be true, because HDMI doesn't work.

    So, how can I completely purge the new non-working driver and downgrade to the old one?
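
    A hedged guess at the cause: fglrx-uninstall.sh removes the driver binaries but can leave generated configuration behind, so the control panel (and possibly X) keep reporting stale 10.12 state. A sketch of a fuller purge before reinstalling 10.9, assuming the driver came from ATI's own installer (back up xorg.conf first):

        # uninstall the driver, then clear leftover ATI configuration
        sh /usr/share/ati/fglrx-uninstall.sh
        rm -rf /etc/ati
        # after installing the 10.9 package, regenerate the X config for it
        aticonfig --initial -f

    If the distribution's package manager ever installed an fglrx package as well, remove that through the package manager too, so two copies of the driver aren't fighting over which one loads.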

    Read the article

  • Drupal 7: One-time user account

    - by Noob
    I'm going to create a survey in Drupal 7 with the Webform module, installed on a Debian system which may be adapted in every way. The users (personally known, approx. 120) doing the survey will walk into a room and complete it in browsers on different computers. After that, they'll leave the room and other persons will enter, complete the survey on the same computers, and so on. Each user may enter only one submission. The process needs to be anonymous, i.e. I mustn't have any idea of who made which submission.

    My current solution is to generate random one-time passwords and hand out one password per user (without noting who got which password). Within the survey there will be a password field where the one-time password is entered. The value is checked by Webform to be unique. I'll get the data via CSV or Excel and verify the passwords manually in Excel by comparing them to the list of valid passwords.

    The problem is: I don't like the idea of manually generating the password list, copying it to Excel and doing a manual check. That's fine for one-time use, but we're going to repeat the survey every once in a while. I'd rather generate one-time logins (like user0001/fdlkjewf, user0002/dfrefnnr, ...) for each survey, hand them out to the users, and let Drupal/Debian/whatever check whether a submission is valid or not.

    Do you have any idea how to batch-generate about 120 users with one-time passwords in Drupal 7 and verify that each user may submit the form only once? Do you even have a better idea how to accomplish the task within the intranet? Thank you for your help.
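
    If Drush is available on that Debian box, a hedged sketch of the batch-generation half (the user naming scheme, password length and throwaway mail domain are arbitrary choices here):

        #!/bin/sh
        # create user001..user120, each with a random one-time password, via drush
        : > logins.txt
        for i in $(seq -w 1 120); do
          pass=$(openssl rand -base64 9)
          drush user-create "user$i" --password="$pass" --mail="user$i@example.invalid"
          echo "user$i  $pass" >> logins.txt
        done

    The resulting logins.txt is the list to cut apart and hand out unrecorded. For the one-submission constraint, Webform's own per-form setting to limit each authenticated user to one submission should cover it, and deleting the generated accounts after each round keeps the survey runs separate.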

    Read the article

  • How can I backup entire installations of a program, instead of just manually backing up individual files?

    - by NoCatharsis
    It seems pretty straightforward to back up individual files, such as pictures, saved games, or settings files: just copy them straight over to your second HDD or to an online service like Dropbox. However, is there any way to back up entire installations of a program?

    For instance, my Firefox directory has a lot of personal customizations and add-ons. I don't want to go through each item and decide whether to back it up or let it go, so my next option is to copy out the entire directory for backup. But if I copy the entire directory back onto the HD after a format, it is not an integrated installation, and this seems like it could be troublesome. I would assume Windows could not detect the directory for uninstallation, or would not let you choose Firefox as your default browser, right? I'm no pro, but this sounds like a bad idea.

    So my question is whether there is a good way to preserve all the necessary files while also preserving the full installation process of an application. This is not specific to Firefox; I would like to know how to do this for any application. Thank you.

    Read the article

  • Problem Uninstalling Microsoft Internet Explorer

    - by Roger F. Gay
    On Windows Vista Home Premium (x64) I am trying to uninstall Microsoft Internet Explorer. The procedures explained all over the web involve going through the Control Panel to Programs and Features: if MSIE is listed there, uninstall it in the usual way; if it is not listed, click "Turn Windows features on or off" and deactivate it there. But Internet Explorer is not listed in either place.

    Background: a couple of months ago I initiated some process in MSIE that caused all web pages to no longer save login information or remain logged in when requested. As you can tell from the way I described that, I don't remember what it was and have no way to simply reverse it. I had a few problems with .NET Framework as well, so I've uninstalled all browsers except MSIE and uninstalled .NET Framework, then reinstalled .NET Framework and all the other browsers. I have not been able to uninstall MSIE.

    Have tried: installing over the existing installation, but auto-update must be keeping it nicely up to date; the attempt simply produced an information window telling me that my current version is more up to date than the version I tried to install.

    Read the article

  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project has a requirement of live data viewable to customers. The main points:

    1. Devices in the field send data to the server (devices are also like website users) after login.
    2. A background import process imports the uploaded data into Postgres.
    3. The web users of the system use this data and can send commands to the devices, which the devices read when they log in.
    4. There are also background analysis routines running on the data.

    All of the above is deployed on one Amazon EC2 machine. The project currently supports over 600 devices and 400 users, but as the number of devices increases, the performance of the server is going down. We want to extend this project so that it can support more and more devices.

    My initial thinking is that we will create one more server like the current one and divide the devices between the two servers. But again, we need a central user and device management point through the Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if possible?
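
    Before full clustering, a common first step on the Django side is read/write splitting: keep one Postgres primary for writes and add a streaming replica (built into Postgres from 9.0) for the read-heavy customer dashboards. A hedged sketch using Django's multi-database support and DATABASE_ROUTERS, standard since Django 1.2 (the alias names, hosts and credentials below are made up):

        # settings.py - two aliases: writes go to the primary, reads to a replica
        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': 'devicedata',
                'HOST': 'db-primary.internal',   # hypothetical host names
                'USER': 'app',
                'PASSWORD': 'secret',
            },
            'replica': {
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': 'devicedata',
                'HOST': 'db-replica.internal',
                'USER': 'app',
                'PASSWORD': 'secret',
            },
        }
        DATABASE_ROUTERS = ['myproject.routers.ReadReplicaRouter']

        # myproject/routers.py - route reads to the replica, writes to the primary
        class ReadReplicaRouter(object):
            def db_for_read(self, model, **hints):
                return 'replica'

            def db_for_write(self, model, **hints):
                return 'default'

            def allow_relation(self, obj1, obj2, **hints):
                return True

    Because every write still lands on the single primary, user and device management through the Django admin stays central, which addresses the main worry about splitting devices across front-end servers.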

    Read the article

  • Can't install xclip on Ubuntu 10.10

    - by wildster
    I'm trying to load an SSH key to GitHub from a new machine, and this command is not working:

        $ sudo apt-get install xclip
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package xclip is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package xclip has no installation candidate

    When I try:

        $ sudo aptitude install xclip
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        No candidate version found for xclip
        No candidate version found for xclip
        The following partially installed packages will be configured:
          synaptics-dkms
        0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B of archives. After unpacking 0B will be used.
        Writing extended state information... Done
        Setting up synaptics-dkms (1.1.1) ...
        Loading new synaptics-1.1.1 DKMS files...
        Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz.
        File does not exist.
        dpkg: error processing synaptics-dkms (--configure):
          subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
          synaptics-dkms
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        A package failed to install. Trying to recover:
        Setting up synaptics-dkms (1.1.1) ...
        Loading new synaptics-1.1.1 DKMS files...
        Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz.
        File does not exist.
        dpkg: error processing synaptics-dkms (--configure):
          subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
          synaptics-dkms
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done

    Any idea how I can install this? Mucho thanks in advance
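
    Read as a hedged guess, two separate problems are tangled together here: xclip lives in the universe component, which may not be enabled on this install, and the half-configured synaptics-dkms package will keep erroring on every apt operation until it is cleared. A sketch (the sed line assumes stock, commented-out universe entries in /etc/apt/sources.list; check the file by hand if in doubt):

        # make sure the universe component is enabled, then refresh the package lists
        sudo sed -i 's/^# *\(deb .*universe\)/\1/' /etc/apt/sources.list
        sudo apt-get update
        # clear the broken synaptics-dkms package that aborts every install
        sudo dpkg --remove --force-remove-reinstreq synaptics-dkms
        sudo apt-get -f install
        sudo apt-get install xclip

    As a workaround for the immediate goal, the SSH public key can also be opened in gedit and copied by hand; xclip only saves the copy-paste step.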

    Read the article

  • Outlook 2010 on WinXP runs once then refuses to run again until reboot

    - by msorens
    Since I installed Outlook 2010 on a new machine (WinXP Pro SP3) a couple of months back, I have had a quite annoying issue: if I close Outlook and then attempt to restart it, I get a small pop-up saying only "Cannot start Microsoft Outlook". I have found one workaround, but not a terribly practical one: reboot. If I reboot and then launch Outlook, it opens fine. Here is what I know:

    - Since I can run Outlook just fine after a reboot, I do not see that a system restore, an OS reinstall, or the like would help.
    - I tried "outlook.exe /resetnavpane" and "outlook.exe /safe", but those give the same error.
    - There are no entries in the event log.
    - There is no instance of Outlook in the process list once I close the program, so this does not seem to be an alias for "Outlook is already running".
    - As far as I have found, my situation is unique among reports of similar incidents: I have uncovered no other reports saying Outlook would run fine on the first launch, or that a reboot would again allow it to run.

    Suggestions?

    Read the article

  • What is the quickest and safest way to test new software and revert all changes, if needed?

    - by calbar
    I'm looking for Windows software that will allow me to quickly create a "checkpoint", do whatever I might need to do to my computer (install programs/drivers/updates, create/delete personal files, reboot the system multiple times, open questionable attachments), and then revert the entire system back to when the checkpoint was created. Essentially I want Windows Restore Points that save my personal files and partitions, too.

    It sounds like disk imaging might be the ticket, but creating images is much too slow and the restore process too involved... I'm hoping to sacrifice full disaster recovery for speed. Creating a checkpoint should be as close to one-click as possible, and rolling back should be a matter of selecting a restore point and rebooting. Ding!

    I'm familiar with Sandboxie, True Image Home "Try and Decide", Returnil, and a number of other "virtual system" apps that actively "catch" changes and allow you to commit or reject them. I'm not interested in these, for a number of reasons; I prefer the "cut and dry" restore-point approach.

    Finally, I'll note that I've just recently become aware of Comodo Time Machine. It sounds absolutely perfect; however, a quick skim through the user forums shows more than a few horror stories of corrupted, unbootable systems. Any positive personal experience with the software to suppress my superstitions, or suggestions for more established alternatives, would be greatly appreciated, since Comodo Time Machine seems relatively new. Thanks for your help!

    Read the article

  • KeePass lost password and/or corruption due to Dropbox/KeePassX

    - by GummiV
    I started using KeePass about a month ago to hold my passwords and online account info. Everything was stored in a single .kdb file, protected only with a password. I'm using Windows 7. Now KeePass can't open my .kdb file, giving the error "Invalid/wrong key".

    I'm fairly confident I have the right password. Although I might have mixed up a few letters, I've tried about two dozen different combinations to minimize that possibility; I can't rule it out, though. My guess is rather that the .kdb file got corrupted, either due to Dropbox syncing (though I'm only using it on one computer) or because I edited the file using KeePassX on Ubuntu (dual boot on the same computer, accessing a mounted Win7 NTFS partition), or possibly a combination of both.

    I have tried restoring older versions (even the original one) from Dropbox and trying all possible passwords, without any luck. (This does seem to rule out KeePassX as the culprit, since the oldest copies predate my editing the file from Ubuntu.) I have also tried opening the file with the "Repair KeePass Database file" option, which always gives "0xA Invalid/corrupt file structure" (the same error as for a wrong password).

    I was wondering if there is any way to salvage my hard-gathered data. I know that brute-force cracking is generally not feasible, but since I can remember probably more than half of the usernames/passwords, and since one of them comes up fairly often (my go-to password for trivial stuff), that might simplify the brute-force process to a doable time frame. The brute force could also incorporate the fact that I know the password's length and which characters it's made from (if we assume corruption, not a password blackout on my part). I could do some programming if there are any libraries or routines I could use.

    Other people seem to have had a similar problem:

    http://forums.dropbox.com/topic.php?id=6199
    http://forums.dropbox.com/topic.php?id=9139
    http://www.keepassx.org/forum/viewtopic.php?t=1967&f=1

    So hopefully this question will become a suitable resource for people searching the web. Feel free to tell me if you think this should rather be a community wiki.
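
    On the "mixed up a few letters" theory, the search space is tiny compared with a blind brute force: the candidates are just the remembered password plus transpositions and single-character substitutions. A hedged Python sketch that only generates the candidate wordlist; wiring it to something that actually attempts to open a KeePass 1.x .kdb is left out, since library support for the old format varies (the base password and alphabet here are placeholders):

        import itertools
        import string

        base = "MyRememberedPass"  # hypothetical: the master password as remembered

        def candidates(p):
            """Yield the remembered password plus small 'mixed-up letters' variants."""
            seen = {p}
            yield p
            # adjacent-character swaps
            for i in range(len(p) - 1):
                s = p[:i] + p[i + 1] + p[i] + p[i + 2:]
                if s not in seen:
                    seen.add(s)
                    yield s
            # single-character substitutions over a plausible alphabet
            for i, c in itertools.product(range(len(p)), string.ascii_letters + string.digits):
                s = p[:i] + c + p[i + 1:]
                if s not in seen:
                    seen.add(s)
                    yield s

        with open("wordlist.txt", "w") as f:
            for cand in candidates(base):
                f.write(cand + "\n")

    For a 16-character password this is on the order of a thousand candidates, so any scripted checker gets through the list in seconds. If none of them opens the file, that strengthens the corruption theory over the wrong-password one.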

    Read the article

  • Ubuntu 9.10 installer doesn't recognize the hard drive

    - by dan
    I downloaded Ubuntu 9.10 x86_64 and am trying to install it on a fairly modern system with a Gigabyte GA-MA770-UD3 motherboard. Ubuntu 9.04 installed fine, and still will when I stick that disc in, but 9.10 doesn't see my hard drive (a Western Digital 250GB). If I boot from the disc I can install GParted, and it does recognize the drive; but when I try to start the install process from the live disc, Ubuntu again doesn't recognize the hard drive. I checked /var/log/messages and see this:

        Nov 12 17:28:08 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
        Nov 12 17:28:08 ubuntu activate-dmraid: Enabling dmraid support
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
        Nov 12 17:28:08 ubuntu activate-dmraid: no raid sets and with names: "nvidia_ciiajheb-0"
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.

    I checked my BIOS: SATA is enabled and set to IDE mode, so there shouldn't be any software RAID. Nonetheless, I added nodmraid to the boot line and tried again. It still doesn't recognize the drive. I checked /var/log/messages again and now see this:

        Nov 12 17:49:38 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
        Nov 12 17:49:38 ubuntu activate-dmraid: Enabling dmraid support
        Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option
        Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option

    Any ideas on things to try? I've tried all of the various BIOS settings for SATA (IDE, RAID, etc.). Nothing seems to work.
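
    On a hedged reading, the "nvidia_ciiajheb-0" line is the clue: the drive carries leftover NVIDIA fakeRAID metadata from some earlier configuration, and the 9.10 installer's dmraid keeps claiming the disk even with the controller in IDE mode. One approach from the live CD is to erase just that metadata block; it lives outside the filesystems, but treat the step as destructive and have backups anyway:

        # list any RAID signatures dmraid can see on the disks
        sudo dmraid -r
        # erase the stale metadata from the disk it reports (confirm the device name first)
        sudo dmraid -rE /dev/sda

    With the signature gone, neither the installer's partitioner nor the nodmraid boot option has anything left to trip over.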

    Read the article

  • Configuring wsgi for a simple Python based site

    - by jbbarnes
    I have an Ubuntu 10.04 server that already has Apache and wsgi working. I also have a Python script that works just fine using the make_server command:

        if __name__ == '__main__':
            from wsgiref.simple_server import make_server
            srv = make_server('', 8080, display_status)
            srv.serve_forever()

    Now I would like to have the page always active without having to run the script manually. I looked at what Moin is doing and found these lines in apache2.conf:

        WSGIScriptAlias /wiki /usr/local/share/moin/moin.wsgi
        WSGIDaemonProcess moin user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup moin

    And moin.wsgi is as listed:

        import sys, os
        sys.path.insert(0, '/usr/local/share/moin')
        from MoinMoin.web.serving import make_application
        application = make_application(shared=True)

    QUESTION: Can I create a similar section in apache2.conf pointing to another wsgi file? Like this:

        WSGIScriptAlias /status /mypath/status.wsgi
        WSGIDaemonProcess status user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup status

    And if so, what is required to convert my simple_server script into a daemonized process? Most of the information I find about wsgi is related to using it with frameworks like Django. I haven't found a simple howto detailing how to make this work. Thanks.
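
    A hedged sketch of the missing piece: nothing needs daemonizing by hand, because WSGIDaemonProcess already runs the interpreter processes for you; the .wsgi file only has to expose the app under the module-level name "application". Since display_status is being passed to make_server, it is already a WSGI callable, so status.wsgi can be as small as this (the module name "status" is assumed; use whatever file actually defines display_status):

        import sys
        sys.path.insert(0, '/mypath')        # directory containing the script

        from status import display_status   # assumed module/callable names

        # mod_wsgi looks for a module-level callable named "application"
        application = display_status

    The if __name__ == '__main__' block can stay in the script for local testing: mod_wsgi imports the module rather than running it, so that block is simply skipped under Apache.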

    Read the article

  • 'Bug in Mailman version 2.1.12'

    - by davorg
    I'm working on setting up a server running Plesk 10.4.4 Update #13 on CentOS 6.2. I've configured Mailman, and now I want to set up some mailing lists. I've created a list in the Plesk control panel, but when I try to administer the new list (by visiting http://lists.[domain].com/mailman/admin/[listname]) I see the following error:

        Bug in Mailman version 2.1.12
        We're sorry, we hit a bug!
        Please inform the webmaster for this site of this problem. Printing of traceback and other
        system information has been explicitly inhibited, but the webmaster can find this information
        in the Mailman error logs.

    I see exactly the same error if I try to go to the list info page at http://lists.[domain].com/mailman/listinfo/[listname]. I would follow the instructions and look in the error logs, but I can't find them. I would expect to find a file at /var/log/mailman/error, but there's nothing there.

    My test list seems to work correctly otherwise: it sends all the expected email. It's just the web pages for the list that seem broken. Has anyone else seen this? Any suggestions for tracking down and fixing the problem?

    P.S. I think I've chosen the correct Stack Exchange site, but if this question would be better asked elsewhere, please let me know.

    Update: I got to the bottom of this, so I'm documenting the answer in case anyone else has the same problem. The fact that I couldn't find the error log was the clue. The problem was that the Mailman process didn't have permission to create an error log, and it seems that if Mailman can't create an error log, it responds to any web request with this error page. Creating an error log file (in /var/log/mailman/error) and giving it the correct permissions fixed the problem.
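
    For anyone landing here with the same symptom, a hedged sketch of the fix the update describes (the owner and mode shown are what a typical Mailman package expects; check what the rest of /var/log/mailman uses on a Plesk box and match it):

        # recreate the missing error log and make it writable by the Mailman processes
        touch /var/log/mailman/error
        chown mailman:mailman /var/log/mailman/error
        chmod 664 /var/log/mailman/error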

    Read the article

  • Issues with Nginx + Passenger production setup - loading time/request time delay

    - by Dani Cela
    I'm having a bit of an issue relating to request time. I have Nginx as a proxy server for a Ruby on Rails app running Passenger. I also have a PostgreSQL database server running on its own VM, separate from my nginx/application server.

    My issue is that when I try to access my products page, which does a lot of database queries, the request takes maybe 3-4 seconds. But the second I flood the web server with requests, I choke it, and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and their usage is not that high: each server has more than enough memory, and even CPU usage on the Rails server isn't more than 85% (that's high, admittedly, but it's not maxed out).

    Is my problem related to my nginx proxy server? I don't really know how to fully explain this, so if you have a question please ask it and I can clarify what I mean.

    EDIT: to see exactly what I mean relating to the database query, see http://207.245.4.215/products
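
    One thing worth ruling out before blaming the proxy, offered as a hedged guess: Passenger's application pool. With a small pool, a handful of simultaneous 3-4 second database-bound requests occupy every Rails process, and each further request queues behind them; that produces exactly this signature of 20-30 second responses under load while CPU and memory on both servers look healthy. A sketch of the nginx-side knobs (the directive names are Passenger's; the sizes are illustrative and bounded by available RAM):

        # nginx.conf, http block: allow more concurrent Rails processes
        passenger_max_pool_size 8;
        passenger_min_instances 4;

    The longer-term fix is making the products query itself cheaper (indexes, fragment caching), since every second it runs is a second an entire application process stays pinned.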

    Read the article

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need Apache to serve static files from a directory other than /public, and to give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

        <VirtualHost *:80>
          ServerName domain.com
          DocumentRoot /home/user/apps/testapp/public
          RewriteEngine On
          RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
          RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
          RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    This serves the Rails application fine but gives 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and change the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist).

    How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
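
    A hedged reading of why the static files 404: RewriteCond lines are ANDed by default, so these two conditions require the same path to be both a regular file (-f) and a directory (-d), which can never happen; the rule never fires, every request falls through to Rails, and Rails returns 404 for paths it doesn't know. The !-f/!-d experiment works precisely because two negated conditions can both be true at once. A sketch with [OR], using %{REQUEST_URI} for clarity (in VirtualHost context the rewrite runs before filename mapping, so this is equivalent):

        <VirtualHost *:80>
          ServerName domain.com
          DocumentRoot /home/user/apps/testapp/public
          RewriteEngine On
          # serve from public_html whenever a matching file OR directory exists there
          RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
          RewriteCond /home/user/public_html%{REQUEST_URI} -d
          RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    Depending on the Apache version and defaults, a <Directory /home/user/public_html> block granting access may also be needed, since the rewrite target lies outside the DocumentRoot.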

    Read the article

  • Preventing applications from performing run once tasks for multiple users

    - by JohnyV
    In our environment we have several applications installed that need to show a first-run prompt the first time they run, e.g. Media Player, Google Earth, etc. The problem is we have many users on many different computers, and the computers have Deep Freeze running on them, which removes the user's profile once the computer is restarted. So the next time a user logs in, they have to go through the whole run-once business again.

    I have managed to prevent the IE run-once using Group Policy, and the Office run-once using the Office Customization Tool. Is there a way to make this happen for other applications?

    On Windows XP we used to copy a profile from a user who had already run all the apps into the default profile, so that new users got that profile as a template. With Windows 7 the process of copying profiles is not as easy. Is there an easy way to copy profiles in Windows 7, or is there a better way (e.g. modifying the registry or app data) to prevent apps from performing an initial run? Thanks
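
    The supported Windows 7 replacement for the XP copy-to-Default-profile trick is sysprep's CopyProfile setting, which copies the built-in Administrator's configured profile over the Default profile during imaging. A hedged fragment of the answer file (the component shown is the standard Microsoft-Windows-Shell-Setup one; generate the complete unattend.xml with Windows SIM rather than typing attributes by hand):

        <settings pass="specialize">
          <component name="Microsoft-Windows-Shell-Setup"
                     processorArchitecture="amd64"
                     publicKeyToken="31bf3856ad364e35"
                     language="neutral" versionScope="nonSxS">
            <!-- copy the customized Administrator profile over the Default profile -->
            <CopyProfile>true</CopyProfile>
          </component>
        </settings>

    The workflow would be: log on as Administrator on a reference machine, run each application once to clear its first-run prompt, then run sysprep /generalize /oobe /unattend:unattend.xml and capture the image. Every new profile on the deployed machines then starts from the pre-run state, which survives Deep Freeze because it lives in Default rather than in the thawed user profiles.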

    Read the article

  • How to install the 64-bit version of MongoDB

    - by slownage
    How can I install the 64bit (x86_64) version of MongoDB? I've specified in the 10gen.repo the 64bit: baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64 But when I run: yum install mongo-10gen mongo-10gen-server It's the 32bit (see the i686) that it's set to be installed. Failed to set locale, defaulting to C Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.fdcservers.net * epel: mirror.steadfast.net * extras: mirror.fdcservers.net * rpmforge: mirror.rit.edu * updates: mirror.fdcservers.net 10gen | 951 B 00:00 Not using downloaded repomd.xml because it is older than what we have: Current : Tue Oct 30 15:55:02 2012 Downloaded: Tue Oct 30 15:54:51 2012 Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package mongo-10gen.i686 0:2.2.1-mongodb_1 will be installed ---> Package mongo-10gen-server.i686 0:2.2.1-mongodb_1 will be installed --> Finished Dependency Resolution Dependencies Resolved ====================================================================================================================================================== Package Arch Version Repository Size ====================================================================================================================================================== Installing: mongo-10gen i686 2.2.1-mongodb_1 10gen 42 M mongo-10gen-server i686 2.2.1-mongodb_1 10gen 6.5 M Transaction Summary ====================================================================================================================================================== Install 2 Package(s) Total download size: 48 M Installed size: 118 M I think I know why it want's to install the 32bit version: the first time I've made the 10gen.repo file I had in there the 32bit link specified, and installed the 32bit, which later I've deleted. Maybe something has been cached. Could someone help me out with this.

    Read the article
