Search Results

Search found 15099 results on 604 pages for 'runtime environment'.


  • How to set up Python with Lighttpd and FastCGI (like PHP)

    - by johndir
    Running Lighttpd on Linux, I would like to be able to execute Python scripts just the way I execute PHP scripts. The goal is to be able to execute arbitrary script files stored in the WWW directory, e.g. http://www.example.com/*.py. I would not like to spawn a new Python instance (interpreter) for every request (as is done in regular CGI, if I'm not mistaken), which is why I'm using FastCGI. Following Lighttpd's documentation, the FastCGI part of my config file is shown below. The problem is that it always runs the /usr/local/bin/python-fcgi script for every *.py file, regardless of the content of that file: requesting http://www.example.com/script.py outputs "python-fcgi: test" no matter what script.py actually contains. I'm not interested in using any framework, but simply in executing individual [web] scripts. How can I make it act like PHP, executing any script in the WWW directory by requesting its path?

    /etc/lighttpd/conf.d/fastcgi.conf:

        server.modules += ( "mod_fastcgi" )
        index-file.names += ( "index.php" )
        fastcgi.server = (
            ".php" => (
                "localhost" => (
                    "bin-path" => "/usr/bin/php-cgi",
                    "socket" => "/var/run/lighttpd/php-fastcgi.sock",
                    "max-procs" => 4,               # default value
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "1", # default value
                    ),
                    "broken-scriptfilename" => "enable"
                )
            ),
            ".py" => (
                "python-fcgi" => (
                    "socket" => "/var/run/lighttpd/fastcgi.python.socket",
                    "bin-path" => "/usr/local/bin/python-fcgi",
                    "check-local" => "disable",
                    "max-procs" => 1,
                )
            )
        )

    /usr/local/bin/python-fcgi:

        #!/usr/bin/python2
        def myapp(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['python-fcgi: test\n']

        if __name__ == '__main__':
            from flup.server.fcgi import WSGIServer
            WSGIServer(myapp).run()
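    A minimal dispatcher sketch along those lines (not from the question): lighttpd's mod_fastcgi passes the path of the requested file in SCRIPT_FILENAME, so a single long-running FastCGI process can load and run whichever script was asked for. The assumption that each served script exposes a WSGI callable named "application" is mine, not part of the original setup.

        #!/usr/bin/python2
        # Hypothetical dispatcher: serve whichever *.py file lighttpd routed here,
        # instead of always returning the same hard-coded response.
        import imp
        import os

        from flup.server.fcgi import WSGIServer

        def dispatch(environ, start_response):
            script = environ.get('SCRIPT_FILENAME', '')   # set by mod_fastcgi
            if not os.path.isfile(script):
                start_response('404 Not Found', [('Content-Type', 'text/plain')])
                return ['no such script\n']
            # Re-imported on every request; cache the module if that gets too slow.
            module = imp.load_source('handler', script)
            return module.application(environ, start_response)

        if __name__ == '__main__':
            WSGIServer(dispatch).run()

    Each script in the WWW directory would then only need to define application(environ, start_response) rather than starting its own FastCGI server.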


  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production.

    The motivation: we want to present the information in these databases in a web interface, clearly showing which database each row originated from. We want to be able to get this data from one single source (for several reasons, one of them being pagination, which gets tricky if you use multiple sources).

    The problem: how do we collect data from multiple databases, store it in a central location and clearly mark the origin of each row?

    We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for the origin. If possible, we would like to avoid having to make changes in the production environment. Since we can't use MySQL's replication (multiple masters replicating to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this, or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin?

    The idea of a centralized database isn't set in stone. If there is a solution that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.
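    One low-impact option is a periodic pull from each production database into the central one, tagging each row with its source as it is copied. A rough sketch of that idea (the table names, columns and credentials here are invented for illustration; it assumes the MySQLdb driver and an auto-increment id to key the incremental pull on):

        # Hypothetical aggregation job: copy new rows from every source DB into a
        # central table that carries an extra `origin` column.
        import MySQLdb

        SOURCES = {
            'site_a': dict(host='db-a.example.com', user='reader', passwd='secret', db='app'),
            'site_b': dict(host='db-b.example.com', user='reader', passwd='secret', db='app'),
        }

        central = MySQLdb.connect(host='central.example.com', user='agg',
                                  passwd='secret', db='central')

        def last_seen_id(origin):
            # Highest source id already copied for this origin.
            cur = central.cursor()
            cur.execute("SELECT COALESCE(MAX(source_id), 0) FROM orders_central "
                        "WHERE origin = %s", (origin,))
            return cur.fetchone()[0]

        for origin, params in SOURCES.items():
            src = MySQLdb.connect(**params)
            cur = src.cursor()
            cur.execute("SELECT id, customer, total FROM orders WHERE id > %s",
                        (last_seen_id(origin),))
            rows = cur.fetchall()
            if rows:
                ins = central.cursor()
                ins.executemany("INSERT INTO orders_central (origin, source_id, customer, total) "
                                "VALUES (%s, %s, %s, %s)",
                                [(origin, r[0], r[1], r[2]) for r in rows])
                central.commit()
            src.close()

    This keeps the production schemas untouched; the trade-off is that the central copy lags by the polling interval, and updates/deletes need their own handling.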


  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with Microsoft's Services for Unix toolset. I was wondering if anyone has any experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl. Really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find -- the meat-and-potatoes command line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU Components (and maybe the Remote Connectivity components), will it run nicely from a shared location?

    To head off some questions: yes, I've looked at Cygwin. Unfortunately its performance in our build and test environment is poor. It runs considerably slower than MKS and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. Yes, I'm looking at the MinGW offering in parallel to this.


  • Setting up Windows network on Xen

    - by samyboy
    I'm trying to install a Windows XP server in a Xen environment. The OS boots fine; unfortunately I can't figure out how to set up the network settings. Dom0 is a Debian Lenny currently hosting around 10 Linux virtual servers. Windows tells me I have a "limited connection": it can't get any DHCP response, nor access other hosts in the network.

    Here is the Xen guest's config file:

        kernel = '/usr/lib/xen-3.2-1/boot/hvmloader'
        builder = 'hvm'
        memory = '1024'
        device_model='/usr/lib/xen-3.2-1/bin/qemu-dm'
        acpi=1
        apic=1
        pae=1
        vcpus=1
        name = 'winexchange'

        # Disks
        disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w',
                 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ]

        # Networking
        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0' ]

        # video
        stdvga=0
        serial='pty'
        ne2000=0

        # Behaviour
        boot='c'
        sdl=0

        # VNC
        vfb = [ 'type=vnc' ]
        vnc=1
        vncdisplay=1
        vncunused=1
        usbdevice='tablet'

    Server config (/etc/xen/xend-config.sxp):

        (network-script network-bridge)
        (network-script network-dummy)
        (vif-script vif-bridge)
        (dom0-min-mem 512)
        (dom0-cpus 0)
        (vnc-listen '0.0.0.0')

    Since I use Debian I had to create a link from /etc/xen/qemu-ifup to /etc/xen/scripts/qemu-ifup. What did I do wrong? Please tell me if you want some more info (logs, etc.).


  • Experiences in Upgrading from Exchange 2003 to Exchange 2010

    - by gWaldo
    I'm currently running an Exchange 2003 SP2 cluster on a Server 2003 AD forest (in native 2003 mode), and we are beginning to plan the upgrade to Server 2008 AD and Exchange 2010. We have two main sites, one middle-sized office, and a couple of smaller sites which have DCs (which may be RODCs after the upgrade). Currently all of our Exchange cluster is in my main site, but we are considering using the new datastore paradigm for load balancing/failover at the other large site; this is not set in stone, though.

    Right now we are in the information-gathering and planning phases. I am looking for input on any gotchas experienced while performing either upgrade, but especially the Exchange upgrade. Gotchas? What surprised you? What wasn't documented? What said one thing but was misleading (confusing either in content or severity)? What is great or horrible about the new system? What worked well? What worked poorly? If you were to do it over again...?

    (I know that this isn't so much a question that can be definitively answered, but I'm happy to reward insight and useful resources -- not the Microsoft documentation, but blog posts are welcome -- with upvotes.)

    UPDATE: A couple of items of note:

    - We are not currently using OWA (currently only the admins), but it may become more of a consideration with iOS devices.
    - We do have a small number of Blackberries in the environment (< 10%).
    - In addition to the standard Exchange connectors, we have a third-party connector for Captaris RightFax integration.


  • Facing the error "Could not open a connection to your authentication agent." when trying to add an SSH key

    - by Kaustubh P
    I use Ubuntu Server 10.04. Running

        ssh-add /foo/cert.pem

    gave the following output:

        Could not open a connection to your authentication agent.

    These are my running processes (ps -aux | grep ssh):

        Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
        root      1523  0.0  0.0  49260  632 ?      Ss   Dec25   0:00 /usr/sbin/sshd
        root     10023  0.0  0.3 141304 6012 ?      Ss   12:58   0:00 sshd: padmin [priv]
        padmin   10117  0.0  0.1 141304 2400 ?      S    12:58   0:00 sshd: padmin@pts/1
        padmin   11867  0.0  0.0   7628  964 pts/1  S+   13:06   0:00 grep --color=auto ssh
        root     31041  0.0  0.3 141264 5884 ?      Ss   11:24   0:00 sshd: padmin [priv]
        padmin   31138  0.0  0.1 141264 2312 ?      S    11:25   0:00 sshd: padmin@pts/0
        root     31382  0.0  0.3 139240 5844 ?      Ss   11:26   0:00 sshd: padmin [priv]
        padmin   31475  0.0  0.1 139372 2488 ?      S    11:27   0:00 sshd: padmin@notty
        padmin   31476  0.0  0.0  12468  964 ?      Ss   11:27   0:00 /usr/lib/openssh/sftp-server

    These are my environment variables ($ env | grep SSH):

        SSH_CLIENT=192.168.1.13 42626 22
        SSH_TTY=/dev/pts/1
        SSH_CONNECTION=192.168.1.13 42626 192.168.1.2 22

    What is wrong? Why can't I add any identities? Thanks.
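    As an aside, ssh-add talks to the agent through the Unix socket named in the SSH_AUTH_SOCK environment variable, and the env output above doesn't show that variable at all. A tiny diagnostic sketch (mine, not part of the question) to check whether the session can actually reach an agent socket:

        # Is there an SSH agent socket this session can reach?
        import os
        import socket

        sock_path = os.environ.get('SSH_AUTH_SOCK')
        if not sock_path:
            print('SSH_AUTH_SOCK is not set - ssh-add has no agent to talk to')
        else:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                print('agent socket %s is reachable' % sock_path)
            except socket.error as exc:
                print('SSH_AUTH_SOCK points at %s but connecting failed: %s' % (sock_path, exc))
            finally:
                s.close()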


  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:

        library(mgcv)
        library(ROCR)
        library(AUC)
        data1 = read.table("d:\\2005.txt", header=T)
        GAM <- gam(tuna ~ s(chla) + s(sst) + s(ssha), family=binomial, data=data1)
        gampred <- predict(GAM, type="response")
        rp <- prediction(gampred, data1$tuna)
        auc <- performance(rp, "auc")@y.values[[1]]
        auc
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)

    But when I run the script, I get:

        > rp <- prediction(gampred, data1$tuna)
        Error in prediction(gampred, data1$tuna) : Format of predictions is invalid.
        > auc <- performance( rp, "auc")@y.values[[1]]
        Error in performance(rp, "auc") : object 'rp' not found
        > auc
        function (x, min = 0, max = 1)
        {
            if (any(class(x) == "roc")) {
                if (min != 0 || max != 1) {
                    x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
                    x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:length(x$fpr)) {
                    ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i - 1]) * (x$tpr[i] + x$tpr[i - 1])
                }
            }
            else if (any(class(x) %in% c("accuracy", "sensitivity", "specificity"))) {
                if (min != 0 || max != 1) {
                    x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max]
                    x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:(length(x$cutoffs))) {
                    ans <- ans + 0.5 * abs(x$cutoffs[i - 1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i - 1])
                }
            }
            return(as.numeric(ans))
        }
        <bytecode: 0x03012f10>
        <environment: namespace:AUC>
        > roc <- performance( rp, "tpr", "fpr")
        Error in performance(rp, "tpr", "fpr") : object 'rp' not found
        > plot( roc )
        Error in levels(labels) : argument "labels" is missing, with no default

    (The function body printed after typing `auc` is the auc() function from the AUC package, shown because my own auc object was never created.) Can anybody help me solve this problem? Thank you in advance.


  • Disable the use of Internet Explorer through policies when called from HTML Help

    - by Stephane
    Hello, I have a locked-down environment where users are prohibited from doing, well, basically anything except running the specific programs we specify. We just switched a program from using the venerable WinHelp format to HTML Help (CHM), but that seems to have an unwanted and rather dangerous side effect: when a user clicks on a hyperlink inside the HTML help, a new Internet Explorer window is opened and the user is free to browse and do terrible things to my server (well, not that much, but still...). I have checked the session in this case and the IE window is actually hosted within the help engine: there is no iexplore.exe process running in the user's session (and there cannot be: it's explicitly prohibited).

    We have disabled all help right now until we find a solution. I'm working with the help team to have all external URLs removed from the help file, but that is going to be a long and error-prone task. Meanwhile, I've checked all the group policy options, but I have to say I was unable to find anything that would prevent a standalone IE window hosted in a random process from running. I don't want to disable WinHTTP or the IE rendering engine or anything of the sort, but I need to prevent all members of a specific AD user group from ever having an IE window displayed to them.

    The servers are running Windows 2003 and Citrix MetaFrame 4.5. Thanks in advance.


  • Why does MOSS sometimes delete an existing user from a site?

    - by Jesse
    I'm experiencing an issue with a MOSS installation. I am using the Site Settings permissions to add an Active Directory account as a valid user of a site. This entails validating that the user account name is correct via the 'Check Names' button, then giving the account 'Contribute' permissions. Once this is done the user appears on the 'All People' page. This works fine and the user is able to access the site.

    At some point in the future (sometimes several days later) the user account is somehow removed as a valid user of the site. This site resides in a test environment, so access is pretty well controlled, which has allowed us to rule out someone else going in and removing the user manually. This appears to be something the system itself is doing, and we have no idea why. We can manually add the user back, but then it will eventually get removed again later.

    I have an admittedly limited understanding of SharePoint permissions, but I believe that SharePoint stores valid users in a SQL database, and I would assume that when dealing with Active Directory accounts it stores the user name and probably the SID. It appears that for some reason this record is later getting deleted out of the database, as the users will suddenly disappear from the "All People" page and start getting "Access Denied: You are not authorized..." messages when trying to access the site. Has anyone seen this behavior before?


  • Strange behaviour when creating/deleting subdomains

    - by Saif Bechan
    This may be a DNS cache issue on my local machine, but I am not sure. This is what happens. I have a domain that does not use wildcard subdomains, so subdomains have to be created explicitly. If, without creating the subdomain, I point my browser to test.domain.com, I get a "server not found" page. When I then create the subdomain, I keep getting the same problem. On the other hand, when I create the subdomain first, without ever visiting the page, I get the normal page; but then, when I delete the subdomain, it never goes away.

    Can this be a DNS cache issue? I am working in a shared environment; maybe the router has a cache, but I doubt that. Can this have something to do with my setup? I have tried using Google DNS, but this gives me the same results. I have also tried some tools that clear my local DNS cache (some add-ons for Firefox).

    Does anyone have any ideas what the problem can be? Are there any tests I can do to see if there is some kind of cache between me and the server?
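    One way to separate browser/OS caching from what the name servers are actually returning is to query a specific resolver directly and compare it with the system default. A rough sketch of that check (it assumes the third-party dnspython package is installed, and the subdomain name is just the example from the question):

        # Compare what the local resolver and a public resolver return for the
        # subdomain; differences usually point at caching/negative caching.
        import dns.resolver  # third-party: dnspython

        NAME = 'test.domain.com'   # hypothetical subdomain from the question

        for label, nameserver in [('system default', None), ('Google DNS', '8.8.8.8')]:
            resolver = dns.resolver.Resolver()
            if nameserver:
                resolver.nameservers = [nameserver]
            try:
                answer = resolver.query(NAME, 'A')
                print('%s: %s' % (label, ', '.join(str(r) for r in answer)))
            except dns.resolver.NXDOMAIN:
                print('%s: NXDOMAIN (no such name)' % label)
            except dns.resolver.NoAnswer:
                print('%s: no A record in the answer' % label)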


  • Free tiered storage automation in linux?

    - by NginUS
    I have a couple of virtualized file servers running in QEMU/KVM on ProxmoxVE. The physical host has 4 storage tiers with significant performance variances. They're attached both locally and via NFS. These will be provided to the file server(s) as local disks, abstracted into pools, and handling multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers.

    There's a similar post on the site here: "Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage)", in which the accepted answer was a suggestion to abandon a Linux solution for NexentaStor. I like the idea of running NexentaStor. It almost fits the bill: NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle. I don't know if ZFS pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment.

    Then I saw a commercial solution called SmartMove (http://www.enigmadata.com/smartmove.html), and it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a Linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.


  • SSH client not showing prompt after successful login

    - by user431949
    I'm having problems with my SSH client on Ubuntu 10.10. When I switch on my computer, open a terminal and execute the command ssh user@host, it gives me a password prompt; after I enter the right password, I get a prompt to execute my commands on the remote computer. Now the problem is, after a little while (probably around 10 minutes), the terminal window stops accepting commands (no matter what I type, nothing shows). Once this happens, I close the terminal window and try to start all over again by opening another terminal window. But this time around, after entering the right password, I don't get a welcome message or prompt; the cursor just keeps blinking on a new line.

    I ran the ssh command with the -v parameter and the message I get after a successful login is:

        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_GB.utf8

    Still the cursor keeps blinking on a new line without a prompt. However, the PuTTY SSH client works perfectly on the same machine. Thank you very much for your time. Your help would be greatly appreciated.


  • bind9 named.conf zones size limit

    - by mox601
    I am trying to set up a test environment on my local machine, and I am trying to start a DNS daemon that loads the configuration from a named.conf.custom file. As long as that file is only 3-4 zones long, the bind9 daemon loads fine, but when I use the config file I need (about 10000 lines long), bind can't start up and in the syslog I find this message:

        starting BIND 9.7.0-P1 -u bind
        Jun 14 17:06:06 cibionte-pc named[9785]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        Jun 14 17:06:06 cibionte-pc named[9785]: adjusted limit on open files from 1024 to 1048576
        Jun 14 17:06:06 cibionte-pc named[9785]: found 1 CPU, using 1 worker thread
        Jun 14 17:06:06 cibionte-pc named[9785]: using up to 4096 sockets
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration from '/etc/bind/named.conf'
        Jun 14 17:06:06 cibionte-pc named[9785]: /etc/bind/named.conf.saferinternet:1: unknown option 'zone'
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration: failure
        Jun 14 17:06:06 cibionte-pc named[9785]: exiting (due to fatal error)

    Are there any limits on the config file size bind9 is allowed to load?
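    For what it's worth, the quoted error ("unknown option 'zone'" at line 1 of named.conf.saferinternet) reads like a parse error in the included file rather than a size limit being hit. A small wrapper sketch around BIND's own syntax checker can narrow that down before the daemon is restarted (the path is the one from the log; named-checkconf ships with BIND):

        # Run named-checkconf on the configuration and report the result.
        # named-checkconf prints problems and exits non-zero on syntax errors.
        import subprocess

        CONF = '/etc/bind/named.conf'

        status = subprocess.call(['named-checkconf', CONF])
        if status == 0:
            print('%s parses cleanly' % CONF)
        else:
            print('named-checkconf exited with status %d; see the messages above' % status)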


  • Windows 7, network shares, and authentication via local group instead of local user

    - by Donovan
    I have been doing some troubleshooting of my home network lately and have come to an odd conclusion that I was hoping to get some clarification on. I'm used to managing share permissions in a domain environment via groups instead of individual user accounts. I have a box at home running Windows 7 Ultimate and I decided to share some directories on that machine. I set it up to disallow guest access and require specifically granted permissions (password mode?). Anyway, after a whole bunch of time I figured out that even though the shares I created were allowed via a local group, I could not access them until I gave specific allowance to the intended user. I just didn't think I would have to do that.

    So here is the breakdown. The network is a Windows workgroup, not a homegroup or NT domain:

    - PC_1 - Win 7 Ultimate - sharing in classic mode - user BOB - groups: Admins
    - PC_2 - Win 7 Starter - client - user BOB - groups: admins
    - PC_3 - Win XP Pro - client - user BOB - groups: admins

    The share on PC_1 granted permission to only the local group Administrators, and local user BOB on PC_1 was a member of Administrators. Both PC_2 and PC_3 could not browse the intended share on PC_1 because they were denied access. Also, no challenge was presented; they were simply denied. After adding BOB specifically to the intended share, everything works just fine.

    Remember, it's not an NT domain, just a workgroup. But still, shouldn't I be able to manage share permissions via groups instead of individual user accounts? D.


  • How do I correctly SSH port forward using LiveReload on Redhat?

    - by program247365
    Referencing this page: http://feedback.livereload.com/knowledgebase/articles/86280-if-you-edit-files-directly-on-your-server

    It says you can remotely port-forward the LiveReload-specific port of 35729 using this command:

        ssh -L 35729:127.0.0.1:35729 mylogin@myremoteserverIP

    When I run it with the -v option, I get:

        debug1: Local connections to LOCALHOST:35729 forwarded to remote address 127.0.0.1:35729
        debug1: Local forwarding listening on ::1 port 35729.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 35729.
        debug1: channel 1: new [port listener]
        debug1: channel 2: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: client_input_channel_req: channel 2 rtype [email protected] reply 1
        debug1: Connection to port 35729 forwarding to 127.0.0.1 port 35729 requested.
        debug1: channel 3: new [direct-tcpip]
        channel 3: open failed: connect failed: Connection refused
        debug1: channel 3: free: direct-tcpip: listening port 35729 for 127.0.0.1 port 35729, connect from 127.0.0.1 port 63673, nchannels 4

    I thought editing my /etc/services with this line would work, but it doesn't:

        livereload      35729/tcp       # livereload usage with guard-livereload

    Every time I attempt to connect with the browser extension, I believe it's getting blocked by my server. What am I missing here? Do I need to edit /etc/services for this to work?
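    One observation on the debug output above: "channel 3: open failed: connect failed: Connection refused" means the tunnel itself was set up, but nothing on the remote machine accepted the connection on 127.0.0.1:35729 (the /etc/services entry is only a name-to-number mapping and shouldn't matter). A quick check to run on the server, as a sketch:

        # Is anything listening on the LiveReload port on this machine?
        import socket

        HOST, PORT = '127.0.0.1', 35729

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(2)
        try:
            s.connect((HOST, PORT))
            print('something is listening on %s:%d' % (HOST, PORT))
        except socket.error as exc:
            print('nothing reachable on %s:%d (%s) - is guard-livereload running there?' % (HOST, PORT, exc))
        finally:
            s.close()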


  • Cache-control for permanent 301 redirects nginx

    - by gansbrest
    I was wondering if there is a way to control the lifetime of redirects in Nginx. We would like to cache 301 redirects in the CDN for a specific amount of time, say 20 minutes, and the CDN is controlled by the standard caching headers. By default there is no Cache-Control or Expires directive with an Nginx redirect, which could cause the redirect to be cached for a really long time. With a specific redirect lifetime the system would have a chance to correct itself, knowing that even "permanent" redirects change from time to time.

    The other thing is that those redirects are included from the server block, which according to the Nginx specification should be evaluated before locations. I tried adding

        add_header Cache-Control "max-age=1200, public";

    to the bottom of the redirects file, but the problem is that Cache-Control gets added twice: the first comes, say, from the backend script, and the other one is added by the add_header directive.

    In Apache there is the environment-variable trick to control headers for rewrites:

        RewriteRule /taxonomy/term/(\d+)/feed /taxonomy/term/$1 [R=301,E=expire:1]
        Header always set Cache-Control "store, max-age=1200" env=expire

    But I'm not sure how to accomplish this in Nginx.
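    Whatever approach ends up working, it helps to be able to see exactly which caching headers a given 301 carries without the browser or CDN in the way. A small checker sketch (the URL is hypothetical, and it uses Python 2's httplib, which does not follow redirects on its own):

        # Fetch a URL without following the redirect and print its caching headers.
        import httplib
        import urlparse

        URL = 'http://example.com/taxonomy/term/42/feed'   # hypothetical redirecting URL

        parts = urlparse.urlsplit(URL)
        conn = httplib.HTTPConnection(parts.hostname, parts.port or 80)
        conn.request('GET', parts.path or '/')
        resp = conn.getresponse()
        print('%s %s' % (resp.status, resp.reason))
        for header in ('Location', 'Cache-Control', 'Expires'):
            print('%s: %s' % (header, resp.getheader(header)))
        conn.close()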


  • open_basedir problems with APC and Symfony2

    - by Stephen Orr
    I'm currently setting up a shared staging environment for one of our applications, written in PHP 5.3 and using the Symfony2 framework. If I only host a single instance of the application per server, everything works as it should. However, if I then deploy additional instances of the application (which may or may not share the exact same code, depending on client customisations), I get errors like this:

        [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Warning: require(/var/www/vhosts/application1/httpdocs/vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php): failed to open stream: Operation not permitted in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193
        [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Fatal error: require(): Failed opening required '/var/www/vhosts/application1/httpdocs/app/../vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193

    Basically, the second site is trying to require files from the first site, but due to open_basedir restrictions it can't do that. I'm not willing to disable open_basedir, as that only masks the problem instead of solving it, and it creates a dependency between applications that should not be present. I initially believed this was related to a Symfony2 error, but I've now tracked it down to an issue with APC; disabling APC also resolves the error, but I'm concerned about the performance impact of doing so.

    Does anyone have any suggestions on what I might be able to do?


  • Home Server: cpu virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server: a sort of private cloud where I manage the storage space independently of the VM one. This question focuses on VM (or compute instance) management and what would best suit my needs. (I have another question related to the storage management.)

    My use cases are:

    - A backup server: rsync and other services running.
    - A personal cloud server: a kind of self-owned Dropbox system, à la ownCloud.
    - A media server: streaming videos and displaying photos.

    Here are my environment and wishes:

    - Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology).
    - OS types: only Linux (perhaps a *BSD VM in the future). Linux distributions do not matter; I'm familiar with RHEL, Fedora, SUSE and Ubuntu, but any other recommendation will be fine.
    - 2-3 VMs foreseen: backup server, ownCloud server and media server (optional). Those are only servers, so no graphical console is needed (I don't need VirtualBox).
    - By VM I mean a virtualised environment like KVM, Xen, etc., or a compute instance like with OpenStack.
    - Storage should be "virtualised/cloudified" (see my other question).
    - VMs should be able to be migrated to another server in the future if performance cannot be fulfilled anymore by the current server.
    - It does not matter if installation of such a setup is complicated, as long as the management tools allow for easy maintenance.
    - I don't have Windows at home, so the solution should be Linux friendly and would ideally be web based, but native apps are OK too.
    - The system should be easy to enhance by adding a new server and migrating some of the VMs to it. So it's really a kind of private cloud on which I could run some Linux OSes.
    - I would prefer free (libre, as in free speech) and open source tools, but it does not have to be free as in free beer.

    So Xen, KVM, VirtualBox or OpenStack? What would you recommend?


  • What are the biggest, best CPUs that support Physical Address Extension?

    - by Giffyguy
    I'm looking for a CPU that will support PAE and fit into an LGA775 socket. This combination of technologies is very much preferred for my current server hardware/software setup. My priorities, in order of highest to lowest:

    - PAE & LGA775
    - At least 1066 MHz FSB
    - Largest CPU cache possible
    - Multiple cores if possible
    - HyperThreading if possible

    Most other factors are of little to no consequence. I'm finding it very difficult to figure out what my options are. Intel doesn't have much useful information on PAE (since x64 is so dominant), and Wikipedia simply says that "PAE is provided by Intel Pentium Pro (and above) CPUs - including all later Pentium-series processors except the 400 MHz bus versions of the Pentium M."

    All of Intel's listed Pentium CPUs support Intel 64, which makes me seriously doubt they will support PAE with a 32-bit OS. And Wikipedia's claim is so vague, I have no idea if they mean up to and including the x64 Prescott CPUs. PAE is supposed to be an aspect of the x86 architecture, and I believe it is no longer supported in an x64 environment. Please correct me if I am wrong.
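    As an aside, if there's already a Linux box around with a candidate chip in it, the CPU's own flags settle the PAE question directly; a quick sketch for reading them (Linux-only, since it relies on /proc/cpuinfo):

        # Report the PAE and 64-bit (lm) CPU flags from /proc/cpuinfo.
        with open('/proc/cpuinfo') as f:
            for line in f:
                if line.startswith('flags'):
                    flags = line.split(':', 1)[1].split()
                    print('PAE supported: %s' % ('pae' in flags))
                    print('64-bit (long mode): %s' % ('lm' in flags))
                    break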


  • How Can I Make Apache Stop Serving ALL Unknown File Types (like .php~)?

    - by user223304
    I am coming from IIS and moving to Apache, and I recently found out that Apache by default serves up files with an unknown file extension as plain text. This can be an issue if a user uses certain programs that back up .php files as .php~. The .php~ file then becomes completely readable simply by navigating to it in a browser. To make matters worse, these .php~ files are often considered 'hidden' in the Linux environment, so some users may not even know they exist. Bots have been created around this fact that scour the internet looking for popular backup file names and extracting potentially sensitive info from them.

    I already know how to stop serving up .php~ files or any specific file extension. I also know not to use any editors that save backup files like this. My question is: how can I stop this default Apache behavior of serving up ANY non-MIME file type at all? I just don't like this behavior and would like to stop it. I don't want it serving up .aspx~, .html~, .bob, .carl, files with no extension, or anything else that is not a real MIME type.

    I know that I can probably go and use a directive to first deny access to all file types and then add the ones I want to serve out one by one, but I'm wondering if there's an easier/quicker way. Thanks for any help.
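    Whichever Apache directives end up doing the blocking, it can also be worth sweeping the docroot for backup files that are already exposed. A small audit sketch (the docroot path and the extension whitelist are assumptions, not taken from the question):

        # Walk the docroot and list files that look like editor backups or have
        # no recognised extension at all.
        import os

        DOCROOT = '/var/www/html'               # assumed docroot; adjust to taste
        KNOWN = {'.php', '.html', '.css', '.js', '.png', '.jpg', '.gif', '.ico', '.txt'}

        for dirpath, dirnames, filenames in os.walk(DOCROOT):
            for name in filenames:
                ext = os.path.splitext(name)[1].lower()
                if name.endswith('~') or ext not in KNOWN:
                    print(os.path.join(dirpath, name))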


  • Sun Grid Engine (SGE) / limiting simultaneous array job sub-tasks

    - by wfaulk
    I am installing a Sun Grid Engine environment and I have a scheduler limit that I can't quite figure out how to implement. My users will create array jobs that have hundreds of sub-tasks. I would like to be able to limit those jobs to only running a set number of tasks at the same time, independent of other jobs. Like I might have one array job that I want to run 20 tasks at a time, and another I want to run 50 tasks at a time, and yet another that I'm fine running without limit. It seems like this ought to be doable, but I can't figure it out. There's a max_aj_instances configuration option, but that appears to apply globally to all array jobs. I can't see any way to use consumable resources, as I'd need a "complex attribute" that is per-job, and that feature doesn't seem to exist. It didn't look like resource quotas would work, but now I'm not so sure of that. It says "A resource quota set defines a maximum resource quota for a particular job request", but it's unclear if an array job's sub-tasks' resource requests will be aggregated for the purposes of the resource quota. I'm going to play with this, but hopefully someone already knows outright.


  • Microsoft Application Request Routing with Windows Authentication

    - by theplatz
    I'm running into a problem trying to get Windows Authentication working in an environment that uses Microsoft Application Request Routing, and I was hoping someone might be able to help. The problem is that only some requests are authenticated, while others fail with 401 errors. I have followed the "Special Case of Running IIS 7.0 in a Web Farm" instructions found at http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx to no avail.

    My current server setup looks like the following:

    ARR:
    - Two servers set up with IIS shared configuration, using IIS 7.5 on Windows 2008 R2
    - Anonymous authentication turned on for the Default Web Site

    Web farm:
    - Two servers running IIS 7.5 on Windows 2008 R2
    - Three web sites set up using port binding to differentiate between virtual hosts. Ports being used are 8000, 8001, and 8002
    - Application pools for Windows Authentication all use a common domain account
    - SPNs added to the domain account for http/<virtualhost-name>:<port-number> and http/<virtualhost-name>.<fully-qualified-domain>:<port-number>

    The IIS logs show the following when authentication is working/failing. If I understand correctly, all requests should show DOMAIN\User_Name:

        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/stylesheets/techweb.landing.css - 8002 DOMAIN\User_Name ARR-HOST-1-IP-ADDRESS 200 0 0 62
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-left.gif - 8002 DOMAIN\User_Name ARR-HOST-IP-ADDRESS 200 0 0 31
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/application-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 3221225581 15
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/building.gif - 8002 DOMAIN\User_Name ARR-HOST-2-IP-ADDRESS 200 0 0 218

    Does anyone know what might cause this problem and how I can resolve it?


  • Network Load Balancing, intermittent port problem on Windows Server 2008

    - by Jimmy Chandra
    I'm trying to troubleshoot an intermittent problem on a Windows Server 2008 NLB cluster; I think it might be related to an NLB issue. We are using Windows Network Load Balancing to balance the load for our multi-server SharePoint front ends. Say Web Front End 1's IP is 192.168.1.100 and Web Front End 2's IP is 192.168.1.101; the NLB is set up to load-balance both WFE servers on any incoming traffic to the IP 192.168.1.200.

    Sometimes we get an intermittent issue where, when we try to access the SharePoint site using 192.168.1.200:8080 (say the site is set up to run on port 8080) from a remote client, it displays "page not found". Pinging 192.168.1.200 gives responses, but trying to telnet to 192.168.1.200:8080 just won't connect. However, browsing the SharePoint site directly on the individual WFEs (192.168.1.100 and 192.168.1.101) shows no problem whatsoever. My guess (we didn't get a chance to try it yet, but I think it should work) is that connecting remotely to an individual server would respond just fine, while any attempt to connect using the virtual IP (192.168.1.200) fails miserably. The funny thing is, after a while it returns back to normal.

    Has anyone had a similar experience with this type of problem while implementing NLB? We are doing this in a virtual environment.


  • how to edit source files and commit the changes to the new website?

    - by ajsie
    I've got Ubuntu installed with LAMP. I'm using WebDAV to upload/download files to/from the Ubuntu web server after I have edited the PHP source files in NetBeans. However, I wonder what best practice is for editing source files and committing these changes to the new website. Because if we are 2-3 developers, I guess we have to use SVN, but I have never used it before, so I wonder how it works. Should I install it and then select /var/www (Apache's web root) as the repository folder? Then when I check in, will all the changes apply immediately?

    Could someone please explain the following steps: how to download the source files, edit them, upload them and see the new changes on the website? I have only worked with a local Apache before, and it was only me. Now there will be some more programmers, so I have to set up a decent, central environment for this, and I have to know how NetBeans, SVN, WebDAV and Apache all work together. Thanks!


  • Virtual Machine Network Architecture, Isolating Public and Private Networks

    - by Mark
    I'm looking for some insight into best practices for network traffic isolation within a virtual environment, specifically under VMware ESXi. Currently I have (in testing) one hardware server running ESXi, but I expect to expand this to multiple pieces of hardware. The current setup is as follows.

    One pfSense VM: this VM accepts all outside (WAN/internet) traffic and performs firewall/port forwarding/NAT functionality. I have multiple public IP addresses sent to this VM that are used for access to individual servers (via per-incoming-IP port forwarding rules). This VM is attached to the private (virtual) network that all other VMs are on. It also manages a VPN link into the private network with some access restrictions. This isn't the perimeter firewall, but rather the firewall for this virtual pool only.

    I have three VMs that communicate with each other, as well as having some public access requirements:

    - One LAMP server running an eCommerce site, publicly accessible from the internet
    - One accounting server, accessed via Windows Server 2008 RDS services for remote access by users
    - One inventory/warehouse management server, with a VPN to client terminals in warehouses

    These servers constantly talk with each other for data synchronization. Currently all the servers are on the same subnet/virtual network and connect to the internet through the pfSense VM. The pfSense firewall uses port forwarding and NAT to allow outside access to the servers for services, and to allow the servers access to the internet.

    My main question is this: is there a security benefit to adding a second virtual network adapter to each server and controlling traffic such that all server-to-server communication is on one separate virtual network, while any access to the outside world is routed through the other network adapter, through the firewall, and on to the internet? This is the type of architecture I would use if these were all physical servers, but I'm unsure if the networks being virtual changes the way I should approach locking down this system. Thank you for any thoughts or direction to any appropriate literature.

