Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • Hardening non-root standalone Linux Tomcat install

    - by NoozNooz42
    I want to know if you have any tips for strengthening the security of a non-root install of Tomcat in standalone mode, once Tomcat is already installed in a non-root account. I mention this because, for example, I'm not at all interested in the answers given here (both Java and Tomcat require root privileges there to be installed, and I have zero interest in running jsvc): http://serverfault.com/questions/43765

    So far, here's what I've done for my non-root standalone Tomcat 6 install:

    - download and install the JRE .bin provided by Oracle/Sun (no need to be root here; no need for a full JDK anymore either, since Jasper, Tomcat's JSP engine, now ships with its own compiler)
    - download and tar -xzf Tomcat 6 (no need to be root here)
    - set up transparent port forwarding (must be root here)

    Note that my distribution is Debian, and I have exactly zero interest in downloading Debian packages / backports / whatever, because, once again, I do NOT want to need to be root to install Java and Tomcat. The only moment I needed to be root was to configure the firewall to transparently forward ports 80 to 8080 and 443 to 8443.

    I then deleted all the default webapps but one:

        cd ~/apache-tomcat-6.0.26/webapps
        rm -rf docs
        rm -rf examples/
        rm -rf manager/
        rm -rf ROOT/

    What about the directory ~/apache-tomcat-6.0.26/webapps/host-manager: do I need it, or can I delete it?

    So, once I've installed Tomcat standalone in a non-root account (taking into account that I don't want to enter the root password anymore and that I don't plan to install the whole Apache shebang), what more can I do? Are there connectors I can disable, and how?
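    For reference, the transparent port-forwarding step described above boils down to two REDIRECT rules; a minimal sketch, assuming iptables and the default Tomcat connector ports 8080/8443:

    ```bash
    # Run once as root; redirects the privileged ports to Tomcat's unprivileged connectors.
    iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
    iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
    ```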

  • Join ActiveDirectory (Win 2k8R2) to OpenDirectory(Snow Leopard)

    - by Tom O'Connor
    The vast majority of questions about the interoperability of Active Directory and Open Directory involve getting Mac clients to see an AD and authenticate against it. What we'd like to do is the opposite: get a Windows 7 workstation to authenticate completely against Open Directory. So far we have tried:

    - Setting the OD server up as an NT4-style PDC, which doesn't work satisfactorily.
    - pGina with the LDAP backend, which allows authentication but has no support for authorization; as a result, if we mount an NFS share, the user has the rights to do anything they damn well please. Not ideal for security (totally bloody unacceptable, actually).
    - A Samba server (a newer version than on the Open Directory server) as an intermediary, so that it knows about the LDAP server on the OD box but runs Samba 4 instead of v3. That didn't work either: we could log in but couldn't mount, and when we could, we had the same rights as with pGina. If we right-click the mounted drive in Windows and look at the NFS UID, it returns -2, not the correct (mapped) UID.

    So the final plan I've got is to use an Active Directory inside a Windows 2008 R2 virtual machine. What I want to achieve is to have that Active Directory sync its user data from Open Directory (read-only would be fine). That way, we'd have the ability to connect Windows 7 clients to a "virtual domain" which would actually just pull its information from OD's LDAP. All the information I've found is about how to go the other way. Does anyone know how we can do this?

  • How to activate Win XP from Windows 7 compatibility mode on MacOS Parallels 5

    - by Ben Hammond
    I am running Parallels Desktop 5.0.9344 for Mac, on Mac OS X 10.6.3 (10D2094). I have bought a retail copy of Windows 7 Professional specifically because I need the XP compatibility. Windows 7 is installed and working; my problems are with the XP activation.

    Windows 7's 'Virtual PC' does not run under Parallels (a strange "Server execution failed" error, 0x80080005), so I used the Parallels Transporter to convert the "Windows XP Mode Base.vhd" file into a Parallels virtual machine. This copy of XP now starts normally; however, it reports itself as unregistered. There was a KEY.txt file in the same directory as the .vhd file; although this file contains a valid-looking activation key, it does not activate this instance of XP. I have also tried to enter the Win7 activation key; this does not work either. I have tried calling the two phone numbers; an automated system asked me to enter 56 digits through the telephone and then accused me of being a pirate.

    I believe it may be possible to install Win7 via Boot Camp, start WinXP under Virtual PC, activate it, and then import the activated .vhd into Parallels; but that seems a long way round, and is far from certain. What can I do to get WinXP running under Parallels Desktop on the Mac?

  • RHEL 5.3 Kickstart - How specify location of individual package in Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server, which is handy: it makes the config easy to test and modify without having to make a new ISO, and I plan to check it in to version control if I can get it working.

    Anyway, the RPM packages are located in two folders on the disc: Client and Workstation. The packages physically located under the Client folder install fine. The installer cannot find the ones under the Workstation folder, such as doxygen and subversion, and complains that the packages do not exist. Is there a way to specify the location of an individual package?

        # -----------------------------------------------------------------------------
        # P A C K A G E S
        # -----------------------------------------------------------------------------
        %packages
        @gnome-desktop
        @core
        @base
        @base-x
        @printing
        @development-tools
        emacs
        kexec-tools
        fipscheck
        xorg-x11-server-Xnest
        xorg-x11-server-Xvfb
        # Packages located in Workstation folder *** install cannot find any of these ??
        bison
        doxygen
        gcc-c++
        subversion
        zlib-devel
        freetype-devel
        libxml2-devel

    Thanks in advance, -Ed
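    One idea worth trying (a sketch only, not verified against this exact tree): anaconda's kickstart format accepts additional `repo` lines in the command section, so the Workstation folder can be declared as its own repository before %packages, provided that folder has its own repodata (createrepo). The URL below is a placeholder for wherever the Workstation directory is exposed on the internal web server:

    ```bash
    # In the kickstart file, above %packages (hypothetical URL):
    repo --name=Workstation --baseurl=http://buildserver.example.com/rhel53/Workstation/
    ```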

  • New XPC: No video, no ethernet link, but drive spins

    - by Mike Pennington
    I bought a Shuttle XPC SH67H3 with integrated video. I installed:

    - an Intel i5-2450P
    - 16 GB DDR3 RAM
    - a SATA hard drive from my old Linux server that is still bootable

    I have both power connectors plugged into the motherboard. I realize that the Intel i5-2450P doesn't have video capabilities; however, the drive spins like it's doing something useful. It seems like I should get an ethernet link light when I fire this up. I plan to run this headless anyway, so it would be really nice if I could figure out how to run it without a video card at all. I know the IP address and login info for the Linux install on the disk. I plugged in speakers, but get no BIOS beeps when I power it up. Shuttle's BIOS manual has nothing in it that indicates I should have problems with this configuration.

    My questions:

    - Is there a reason that the missing video card would block usage of the ethernet port?
    - Are there settings on the motherboard / BIOS I can change to get this working?

  • VMware Workstation reboot 32-bit host when starting 64-bit guest

    - by Powerman
    I'm trying to start a 64-bit guest (Mac OS X and Windows 7) on a 32-bit host (Hardened Gentoo Linux, kernel 2.6.28-hardened-r9) using VMware Workstation (6.5.3.185404 and 7.0.1.227600). If VT-x is disabled in the BIOS, VMware refuses to start the 64-bit guest (as expected). If VT-x is enabled in the BIOS, VMware starts the guest without complaining, but then, in about a second (I suppose as soon as the guest tries to switch into 64-bit mode), my host reboots. Actually, it's more like a reset: the normal reboot procedure is skipped and the BIOS POST starts immediately.

    My hardware is a Core 2 Duo E6600 on an ASUS P5B-Deluxe with the latest stable BIOS, 1101. I've power-cycled the system, then enabled Vanderpool in the BIOS. My CPU doesn't support Trusted Execution Technology, and there is no way to disable it in the BIOS. I've rebooted several times after that, sometimes with a power cycle, and made sure Vanderpool is enabled in the BIOS. I've also run the VMware-guest64check-5.5.0-18463 tool, and it reports "This host is capable of running a 64-bit guest operating system under this VMware product."

    About a year ago I tried disabling hardened in the kernel to make sure this isn't caused by PaX/GrSecurity, but that didn't help. I have not checked 32-bit guests with VT-x enabled yet, but without VT-x they work OK. ASUS provides "beta" BIOS updates, but according to their descriptions these updates don't fix this issue, so I'm not sure it's a good idea to try them. My best guess now is a motherboard/BIOS bug. Any ideas?
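    As a quick sanity check from the host side, the relevant CPU flags can be inspected directly; a small sketch (vmx indicates VT-x, lm indicates 64-bit capability), assuming a standard /proc layout:

    ```bash
    # Both counts should be non-zero on a Core 2 Duo E6600 with VT-x enabled.
    grep -c -E 'vmx|svm' /proc/cpuinfo
    grep -c -E '\blm\b' /proc/cpuinfo
    ```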

  • Strange ASP.NET Queue Performance Counters Behavior?

    - by LemurTech
    We have an ASP.NET 2.0 site running in classic mode. I am seeing very strange behavior in the performance counter values. Perhaps these are bugs (I've been all over Google trying to verify this, without much luck), or perhaps it is just my inexperience with monitoring these things.

    This PerfMon graph (http://imgur.com/Jv5io5J) represents a load test where I add up to 350 virtual users to the site, at a rate of about 1/sec, performing relatively simple page browsing. At the end of the test, I gradually taper off the number of users. This is a 4-CPU server. The machine.config processModel settings are at the defaults.

    The solid blue line is ASP.NET Apps v2.x\Requests Executing for the application in question. The profile makes perfect sense, with a quick ramp-up to 32 executing requests (minWorkerThreads x 4 CPUs), followed by a slower ramp-up to 48 ((maxWorkerThreads - minWorkerThreads) x 4 CPUs). The solid yellow line is ASP.NET v2.x\Requests Queued. Again, this makes sense: after the initial 32 request threads are activated, the queue begins to build as new thread initialization can't keep pace with incoming requests.

    But as executing requests reaches its highest possible value of 48, the counter for ASP.NET Apps v2.x\Requests Queued (solid green line) suddenly springs to life and keeps step with the yellow counter. As far as I can tell, and with no other apps running on the server, these two counters should have had the same values from the start. One other odd thing: the counter for ASP.NET v2.x\Request Wait Time (dotted yellow line) also does not spring to life until executing requests reaches 48. Shouldn't I be seeing values here from the moment ASP.NET v2.x\Requests Queued begins to build? And likewise, why would ASP.NET Apps v2.x\Request Execution Time (dotted blue) increase significantly only after that peak of 48 is reached? Shouldn't it ramp up gradually along with queued requests?

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has a six-drive configuration available with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256 GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10?

    This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant, and I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43 GB of writes per day and currently use about 300 GB of storage. The server acts as web server, database, and image store for approximately 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance.

    The fallback option is 2 x 480 GB SSDs in RAID 1 plus 2 x 1 TB HDDs in RAID 1. The motivation behind the all-SSD option is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it, and it would save us from having to worry about separate 'fast and small' and 'slow and large' disks.

  • How can I debug user mode driver failures in Windows 8

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally: Metro apps won't work, the system may or may not log in, and desktop apps may or may not be able to do things. When I remove the card and restart, all is fine. As soon as I put the card back in, the system starts misbehaving again. I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have; unfortunately I only have one 32 GB card, so I can't test with others.

    From examining the system event log I've determined this is happening due to a user mode driver failure. How can I best debug this issue from here? How can I figure out which driver this is related to? Will there be a Dr. Watson crash dump somewhere?

    Details:

        - System
          - Provider
            [ Name]  Microsoft-Windows-DriverFrameworks-UserMode
            [ Guid]  {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
            EventID 10110
            Version 1
            Level 1
            Task 64
            Opcode 0
            Keywords 0x2000000000000000
          - TimeCreated
            [ SystemTime]  2012-10-29T00:51:57.532718300Z
            EventRecordID 40417
            Correlation
          - Execution
            [ ProcessID]  1056
            [ ThreadID]  3796
            Channel System
            Computer thebrain
          - Security
            [ UserID]  S-1-5-18
        - UserData
          - UMDFHostProblem
            [ lifetime]  {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}
          - Problem
            [ code]  3
            [ detectedBy]  2
            ExitCode 3
          - Operation
            [ code]  259
            Message 72448
            Status 4294967295

    Edit 1: I tried using DebugView from Sysinternals (you can get it here: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx). The output it gave was not especially helpful. Then I tried connecting WinDbg to WUDFHost.exe (the process that seems to host user mode drivers) to see if it could catch the error. Get it here: http://msdn.microsoft.com/en-US/windows/hardware/hh852363 Instructions: http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx That didn't help much either: it didn't catch any exceptions as I'd hoped (which would at least have pointed me to the cause of the crash). Here's the stack of one of the threads:

  • Is the sysadmin/netadmin the defacto project planner at your organization?

    - by gft74
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for the deployment of developer applications.

    A typical scenario: my team receives a request for a new website/DB combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the requirements for the site (SSL? DB? growth projections? etc.). After I get all the information back, the head of development wants a well-developed document covering:

    - what servers it will live on
    - why those servers
    - what the timeline is for creating the resources
    - a step-by-step SOP for getting the application onto the server and all related resources created (DNS, firewall, load balancer, etc.)

    I may just be whining, but it feels like this is something better suited to our project management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but this still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

  • Re-configure Office 2007 installation unattended: Advertised components --> Local

    - by abstrask
    On our Citrix farm, I just found out that some sub-components are "Installed on 1st Use" (Advertised), which does not play well on terminal servers. Not only that, but you also get a rather non-descriptive error message when a document tries to use a component that is "Installed on 1st Use" (described in "Plan to deploy Office 2010 in a Remote Desktop Services environment"):

        Microsoft Office cannot run this add-in. An error occurred and this feature is
        no longer functioning correctly. Please contact your system administrator.

    I have ~50 Citrix servers where I need to change the installation state of all Advertised components to Local, so I created an XML file like this:

        <?xml version="1.0" encoding="utf-8"?>
        <Configuration Product="ProPlus">
          <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
          <Logging Type="standard" Path="C:\InstallLogs" Template="MS Office 2007 Install on 1st Use(*).log" />
          <Option Id="AccessWizards" State="Local" />
          <Option Id="DeveloperWizards" State="Local" />
          <Setting Id="Reboot" Value="NEVER" />
        </Configuration>

    I run it with a command like this (using the appropriate paths):

        "[..]\setup.exe" /config ProPlus /config "[..]\Install1stUse-to-Forced.xml"

    According to the log file, the syntax appears to be accepted and the config file parsed:

        Parsing command line.
        Config XML file specified: [..]\Install1stUse-to-Forced.xml
        Modify requested for product: PROPLUS
        Parsing config.xml at: [..]\Install1stUse-to-Forced.xml
        Preferred product specified in config.xml to be: PROPLUS

    But the "Final Option Tree" still reads:

        Final Option Tree:
          AlwaysInstalled:local
          Gimme_OnDemandData:local
          ProductFiles:local
          VSCommonPIAHidden:local
          dummy_MSCOMCTL_PIA:local
          dummy_Office_PIA:local
          ACCESSFiles:local
          ...
          AccessWizards:advertised
          DeveloperWizards:advertised
          ...

    And the components remain "Advertised". Just to see if the installation state is overridden in another XML file, I ran:

        findstr /l /s /i "AccessWizards" *.xml

    against both my installation source and "%ProgramFiles%\Common Files\Microsoft Shared\OFFICE12\Office Setup Controller", but just found DefaultState to be "Local". What am I doing wrong? Thanks!

  • Which database to use and system/db administration by layman [closed]

    - by blah
    So my friend and I got a brilliant ;) idea for a business. Since it is not predictable whether it will work out or not, we decided to keep costs as low as possible to start with, and in particular not to hire anyone. If it works out as expected, it will generate enough profit to hire professionals within a few months; until then we'll be doing everything ourselves. He's a business/finance major, and I'm a software developer, so obviously I have to take care of IT :) It will be a webapp, written in Python/Django. My questions regarding this project:

    1) What database should I choose? I'm experienced with Oracle, and have been working with SQL Server for a while, but both of them are too expensive (at least for now). Mine is developer experience; I've never done any DBA work. I'm looking for something free (as in beer). MySQL and PostgreSQL look to be the most popular in this sector, and I would appreciate any comments on which to choose; I'm open to other suggestions (it doesn't have to be MySQL or PostgreSQL). Here's what I know about the data: it will be almost all dates and numbers, with a little bit of text; searched mainly by dates; almost never updated, mostly inserted and browsed; from 30k to 300k new records/month.

    2) Servers. My idea is to rent two dedicated servers. During normal operation one would be the web server (Debian/Apache) and the other the DB server (Debian/?). My recovery plan is to install everything on both and, in case of trouble with one of the machines, just run everything on the other one. Does that even make sense? Any other tips appreciated. Thanks.
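    If PostgreSQL is the pick, the "run everything on the other box" recovery plan becomes much easier with a nightly dump shipped across; a minimal cron-job sketch (hostnames, database name, user and paths are made up, and it assumes password-less access via .pgpass or similar):

    ```bash
    #!/bin/sh
    # Nightly logical backup of the app database, copied to the second server (hypothetical names).
    pg_dump -U appuser -Fc appdb > /var/backups/appdb-$(date +%F).dump
    rsync -a /var/backups/ standby.example.com:/var/backups/
    ```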

  • Sporadic crash of master-slave MySQL replication process

    - by obarshay
    Hello, I was wondering if someone has experienced this and can perhaps provide some insight into the issue. We have a plain-vanilla MySQL master-slave replication setup. The tables are MyISAM and the master can get quite read/write active. We use the slave instance to perform full daily backups in order to avoid bringing down the master server. The backup process does the following:

        STOP SLAVE SQL_THREAD
        mysqlhotcopy all tables
        START SLAVE SQL_THREAD

    Every once in a while (once a month or so) the replication breaks with varying error messages indicating a corrupt query or log file. Here's one that happened last night:

        mysql> show slave status \G
        *************************** 1. row ***************************
        Slave_IO_State: Waiting for master to send event
        Master_Host: server8.propreports.com
        Master_User: nexus8
        Master_Port: 3306
        Connect_Retry: 60
        Master_Log_File: bin.000045
        Read_Master_Log_Pos: 581644327
        Relay_Log_File: relay.000086
        Relay_Log_Pos: 94131
        Relay_Master_Log_File: bin.000045
        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 1064
        Last_Error: Error 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '138070603'£' at line 1' on query. Default database: 'wtsdb'. Query: 'UPDATE fill SET clearing_fee='0.0E id='138070603'£'
        Skip_Counter: 0
        Exec_Master_Log_Pos: 4164743
        Relay_Log_Space: 577574251
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: NULL

    I follow this procedure to recover from the above error and resume replication:

        stop slave;
        change master to MASTER_LOG_POS = 4164743, MASTER_LOG_FILE = 'bin.000045';
        start slave;

    We have multiple servers set up this way and they all sporadically stop replicating with a similar error. Any advice on how to resolve this would be greatly appreciated.
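    For what it's worth, the backup step described above can be scripted so the SQL thread is only stopped for the duration of the copy; a rough sketch of that procedure (credentials, database name and paths are placeholders):

    ```bash
    #!/bin/sh
    # Pause replication apply, snapshot the MyISAM tables, resume (hypothetical names/paths).
    mysql -u root -pSECRET -e "STOP SLAVE SQL_THREAD;"
    mysqlhotcopy --user=root --password=SECRET wtsdb /backups/$(date +%F)
    mysql -u root -pSECRET -e "START SLAVE SQL_THREAD;"
    ```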

  • What does a status of "Backup" mean for Windows 7 local user profiles?

    - by Howiecamp
    Summary: upon logging on to Windows 7 RTM I get a message that my profile can't be loaded, and a temporary user profile is created. I logged off and back on as Administrator. The user profiles dialog shows my user profile with a Type of "Local" and a Status of "Backup" rather than "Local", which it should be. How can I change this to make my user profile accessible?

    The long story: my PC has a single hard drive partitioned into a C: and a D:. I'd moved my user profile directory (c:\Users) to d:\Users, removed c:\Users, and then used mklink.exe to create a directory symbolic link c:\Users -> d:\Users. It has worked like a charm since I did it.

    Today, I made a System Restore Point for drives C: and D:. Next, I dismounted D: and used the Disk Management tool to remove the "D:" drive letter from the volume. (My plan was to reboot and then redirect the symbolic link.) Upon reboot, I got the user profile error described above. Finally, I restored the System Restore Points that I'd created for both drives and rebooted again. Same issue.
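    In case it helps narrow things down: the "Backup" status usually corresponds to the profile's entry under the ProfileList registry key being flagged as a backup (a ".bak" suffix on the SID subkey and/or a non-zero State value). This is an assumption about the cause rather than a confirmed diagnosis, so inspect before changing anything; a sketch from an elevated command prompt:

    ```bat
    :: List profile entries; look for your SID, possibly with a ".bak" suffix (assumption).
    reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath
    :: If your SID key ends in .bak, the commonly cited fix is to rename it back and reset these values:
    :: reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>" /v State /t REG_DWORD /d 0 /f
    :: reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>" /v RefCount /t REG_DWORD /d 0 /f
    ```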

  • COMPAQ Tower No Signal to monitor

    - by Lancelot
    Hi there, I have had some trouble with this. I received a Compaq tower from a friend: a Compaq Presario SR1224NX, onboard VGA, Windows XP SP2. It was given to me on the condition that I give her the files back. My plan was to turn it into an Ubuntu server.

    It booted up with no problems, even with the Ubuntu live disc. After a normal shutdown (not unplugging the power cord and not doing a hard shutdown with the power button), it would not restart even after SEVERAL attempts. I noticed the light next to the power supply would flash very rapidly. I researched and found out it was one of two things: a dead power supply, or the cables to the motherboard and to the disks etc. I checked the cables and they were fine, so I purchased a new power supply (this one has 400 watts; the original had 250) and installed it.

    The tower now boots. It booted into the live disc and everything. But after a normal shutdown, it now restarts without sending any signal to my monitor. I have tried several monitors which I assure you work perfectly, but not with this tower (I recall it did show video after replacing the power supply). The monitors are Acers. This is different from most "no signal" problems since I am not using an external video card; this is onboard VGA. Thanks!

  • Unable to connect to CopSSH when running Windows service, works when running sshd directly

    - by Joe Enos
    I've been using CopSSH (which uses OpenSSH and Cygwin, so I don't know which of the three is the problem) as my SSH server application at home on Windows 7 Ultimate 32-bit. I have used it for about a year with no real problems, other than it sometimes taking 2 or 3 connection attempts to get through, but it has always worked within a few attempts.

    A few days ago, it just stopped working. The Windows service is still running, and I've rebooted, restarted the service, etc. with no change. On the client (using PuTTY on Windows), I get the message "Software caused connection abort". On the server, my event viewer registers the following:

        fatal: Write failed: Socket operation on non-socket

    I finally got it working, but only by executing sshd.exe directly from the command line on the server. No special flags or options, just straight execution, and then when I connect remotely, it goes through. I do have firewall and anti-virus software, which appears to be configured properly, and the fact that things work when running sshd.exe directly also indicates that the firewall is fine. I thought the service and the executable did exactly the same thing, but apparently there's some difference.

    Does anyone have any ideas on where I should look for the problem? If I can't find anything, I suppose I can write a Windows service or scheduled task that fires off sshd.exe directly and ensures that it stays running, but that's kind of a last resort, since it's just wrapping around something that should already work. I appreciate your help.
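    If it does come to the workaround mentioned at the end, a scheduled task that starts sshd.exe at boot is only one line; a sketch, where the install path is a guess and should be adjusted to the actual CopSSH location:

    ```bat
    :: Start sshd.exe at boot under the SYSTEM account (path is hypothetical).
    schtasks /create /tn "CopSSH sshd" /tr "\"C:\Program Files\ICW\bin\sshd.exe\"" /sc onstart /ru SYSTEM
    ```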

  • Advanced DNS - Redirecting emails to new web host

    - by Martin
    I am not too sure if this question belongs here, but I will surely find out soon enough. I have two web hosts (not sure why it has been set up this way, but it has). I do not want the original web host to handle email, because the data allowance we get from them is 500 MB, which is already used up by hosting the website. The second web host has an unlimited data plan and was set up so we could use it for the email accounts.

    Now the problem: I have reset the Advanced DNS Zone records on both accounts and I am not sure what they were before. (Silly me, I should have taken a backup of how it was set up beforehand, I know.) Emails were working before and going to the second host's server; now they are going to the first host, which has no email addresses set up, so all emails are bouncing saying that the address does not exist.

        Host 1 IP: 192.185.96.110
        Host 2 IP: 27.54.88.66

    So far I have changed the Advanced DNS Zone record on Host 1 with the following:

        A record: mail.australisinstitute.qld.edu.au -> 27.54.88.66

    I have not made any changes on Host 2, and both hosts have the default MX records. If I need to provide any more information I can, but I just hope someone can decipher what I have said haha. Cheers in advance!
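    For context: mail delivery follows the MX records, so if mail should land on Host 2, the zone normally needs an MX record pointing at a name that resolves to 27.54.88.66; the A record alone only affects clients that connect to mail.australisinstitute.qld.edu.au directly. A quick sketch for checking what the world currently sees (any machine with dig will do):

    ```bash
    # What do the public nameservers say right now?
    dig +short MX australisinstitute.qld.edu.au
    dig +short A mail.australisinstitute.qld.edu.au
    ```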

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5, starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large.

    Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array, and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like RAID-Z1 would be a good fit, but evidently I would only be able to add additional RAID5 (RAID-Z1) sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion.

    So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: will ZFS be able to correct, or at least detect, corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup is welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
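    For what it's worth, the usual summary of this layout is: a single-vdev ZFS pool on top of md will detect corruption (every block is checksummed) but can only repair it if ZFS has redundancy of its own, e.g. copies=2 on the datasets that matter. A rough sketch of that arrangement, with example device and pool names:

    ```bash
    # md RAID5 underneath (example devices), ZFS on top as a single vdev.
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    zpool create archive /dev/md0
    # Keep two copies of the irreplaceable datasets so ZFS can self-heal them.
    zfs create archive/photos
    zfs set copies=2 archive/photos
    ```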

  • fedora apache/nginx pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far... it's been confusing. I'm using EC2 with Fedora 8. Everything is working so far, i.e. I have Pylons/Python et al. installed, and after creating a test app and running paster serve I can access the default page via my domain name.

    As the Pylons docs explain, and as I understand it, the built-in paster serve server is not suited for a production environment. What I am not clear on is what to do next. It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtual hosts (which nginx says it can accommodate). However, I am totally unclear on how the big picture is supposed to work:

    - In order to serve an app, does paster serve need to be running?
    - Does nginx/Apache then basically just act as a proxy that shuttles connections to the paster server?
    - How do I start it so it doesn't terminate after closing the SSH connection?
    - If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or if this is not the right way, how do I differentiate between apps?
    - I am more familiar with MySQL, but willing to consider PostgreSQL if it's a better fit. Is it?
    - Is virtualenv a prerequisite to running multiple apps on the same machine?

    Thanks in advance for any tips.
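    On the "does paster need to keep running" point: in this kind of setup the Pylons app runs in its own paster (or other WSGI) server process and nginx/Apache just proxies requests to it, so the paster process has to survive the SSH logout; each app then gets its own port in its own .ini and the web server's virtual hosts proxy to those ports. paster serve can daemonize itself; a sketch, assuming a production.ini exists for the app:

    ```bash
    # Run the app in the background, surviving the SSH session.
    paster serve --daemon --pid-file=myapp.pid --log-file=myapp.log production.ini
    # Later, to stop it:
    paster serve --stop-daemon --pid-file=myapp.pid production.ini
    ```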

  • Backup with Mercurial and Robocopy?

    - by Andrew Neely
    The Problem

    We would like to back up our critical files from several network shares to a removable hard drive. We want to automate the backup so we don't have to remember to run it, and it needs to finish overnight. Furthermore, we want to be able to preserve multiple versions of each file so we can back out of our users' mistakes more easily.

    Background Information

    I work in a large Windows-based enterprise with a centralized IT section that is responsible for all backups. Their backups are geared towards disaster recovery rather than user error, and any non-disaster recovery requires upper-level management approval. Several times we have noticed that our backups had failed, and we weren't notified. I do not have administrative rights to the server or to my desktop. We are trying to back up some 198,000 files spanning about 240 gigabytes. These files rarely change. Our backup drive is one terabyte.

    My Proposed Solution

    What I would like to do is write a batch file using Robocopy with the /MIR option, along with Mercurial SCM to store all versions of each file. I would do an hg add followed by an hg commit before each execution of Robocopy to save the current state, and then make a mirrored copy of the file structure. The problem is that /MIR will delete every folder not present in the source, and Mercurial stores the repository in a .hg folder in the destination folder.

    Does anybody know how I could either convince Mercurial to store the .hg folder elsewhere, or convince Robocopy not to delete it from the destination? I'm trying to avoid writing a custom program to do the copying.
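    One possibility worth testing: Robocopy's /XD switch excludes a directory name on both sides, and in my understanding that also keeps /MIR from purging it from the destination (an assumption to verify on a small test tree first). A batch sketch of the idea, with made-up share names and paths, assuming the repository was created once with hg init in the backup folder:

    ```bat
    @echo off
    rem Snapshot the previous mirror state, then re-mirror; .hg is excluded so /MIR should leave it alone (verify!).
    cd /d E:\backup
    hg addremove
    hg commit -m "pre-mirror snapshot %date% %time%"
    robocopy \\fileserver\critical E:\backup /MIR /R:2 /W:5 /XD .hg /LOG:E:\backup-log.txt
    ```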

  • Password Cracking Windows Accounts

    - by Kevin
    At work we have laptops with encrypted hard drives. Most developers here (and on occasion I have been guilty of it too) leave their laptops in hibernate mode when they take them home at night. Obviously, Windows (i.e. a program running in the background on its behalf) must have a way to decrypt the data on the drive, or it wouldn't be able to access it. That being said, I have always thought that leaving a Windows machine in hibernate mode in a non-secure place (not locked up at work) is a security threat, because someone could take the machine, leave it running, crack the Windows accounts and use them to get at the encrypted data and steal the information.

    When I got to thinking about how I would go about breaking into the Windows system without restarting it, I couldn't figure out whether it is possible. I know it is possible to write a program to crack Windows passwords once you have access to the appropriate file(s). But is it possible to execute such a program from a locked Windows system? I don't know of a way to do it, but I am not a Windows expert. If so, is there a way to prevent it?

    I don't want to expose security vulnerabilities about how to do it, so I would ask that nobody post the necessary steps in detail, but if someone could say something like "Yes, it's possible; the USB drive allows arbitrary execution," that would be great!

    EDIT: The idea behind the encryption is that you can't reboot the system, because once you do, the disk encryption requires a login before Windows can start. With the machine in hibernate, the system's owner has already bypassed the encryption for the attacker, leaving Windows as the only line of defense protecting the data.

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server and the other performs double duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes which are irrelevant to my question.

    What I'd like to accomplish is to consolidate the two physical machines into one dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit), and a third, which is the only current VM (already on VirtualBox), as the thin-client host.

    My question comes down to performance and/or recommendations for the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage or disadvantage that would have compared to using shared folders from the VM host, basically just having the whole drive served as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single file which is 1.5 TB (a disk image).

    To add understanding of context, but not to get additional advice: I want to virtualize these machines because I intend to regularly use VirtualBox's snapshot capabilities for the system disks of the VMs (which will be virtual drives), and I have some physical space/power needs to address (as I mentioned, this is at home).
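    For the raw-disk option specifically, VirtualBox handles this through a small vmdk descriptor that points at the physical device; a sketch, where the device and file names are examples only and the user running the VM needs read/write access to the raw device:

    ```bash
    # Create a raw-disk descriptor for one of the data drives (example names).
    VBoxManage internalcommands createrawvmdk -filename /vmstore/data1.vmdk -rawdisk /dev/sdb
    # data1.vmdk can then be attached to the file-server VM like any other disk.
    ```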

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80 GB). I tried booting from the Install Disc to repair the filesystem, but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal from the Install Disc or from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error, and I got failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least.

    DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems ("speed reduced by disk malfunction" count was over 2000) while trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way.

    Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However, Disk Utility, the command-line tools and DiskWarrior reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If that is the case, I'll probably just replace the whole computer.

    Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive into another machine or popping another hard drive inside the Mini.
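    One more data point that can help separate the drive from the controller: the "Verified" flag is only the overall health bit, so the individual SMART attributes and a self-test are worth checking from the Ubuntu CD. A sketch, assuming smartmontools is available and the disk shows up as /dev/sda; a high UDMA_CRC_Error_Count tends to point at the cable/controller side rather than the platters:

    ```bash
    # Full attribute dump: look at Reallocated_Sector_Ct, Current_Pending_Sector, UDMA_CRC_Error_Count.
    smartctl -a /dev/sda
    # Kick off a long self-test, then re-check the log once it finishes.
    smartctl -t long /dev/sda
    smartctl -l selftest /dev/sda
    ```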

  • SharePoint Records Center Submitted E-mail Records not picked up

    - by Kenneth Verburg
    We have set up a new SharePoint 2007 site with a Records Repository. We're using Exchange 2007 Managed Folders to route e-mails to this repository based on the 'label' attached to the e-mail, as set in the Exchange 2007 journaling options. E-mails added to a Managed Folder get sent to SharePoint and end up in the "Submitted E-mail Records" list of the Records Repository. That's according to plan, but the e-mails are not routed on to the respective document library as defined by the label. Instead, an error appears in the event viewer for every e-mail listed in the Submitted E-mail Records list, on every interval of the records repository schedule (set to every two minutes for testing purposes): "Value cannot be null, parameter name: g".

    Sending a document from the SharePoint site itself to the Records Repository via the Send To... link works fine, but e-mails get stuck in the list. We have set up Document Libraries in the Repository both with and without content types (with names matching the label and the Record Routing rule). Any ideas what could be wrong?

    This is in the event log; every two minutes the following error appears in the Application Log:

        Source:      Office SharePoint Server
        Category:    Records Center
        Type:        Error
        Event ID:    4975
        User:        N/A
        Computer:    SPS2007
        Description: Value cannot be null.
                     Parameter name: g

    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
