Search Results

Search found 918 results on 37 pages for 'derek van cuyk'.

Page 12/37 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • VNC unattended Server (No user Interaction)

    - by Louis van Tonder
    I worked on a proof of concept a while ago in which I managed to get VNC going in full "unattended" mode, i.e. the VNC server dials in to the viewer, which is running in listening mode. It is the same concept as UltraVNC's SingleClick, but without the user interaction. I can't seem to locate my source files for this concept, although I have found the shortcut that worked on the viewer side to listen:

        "C:\Program Files\UltraVNC\vncviewer.exe" -listen 5007 /noauto /256colors

    I cannot, however, remember or locate my demo of what the server side does, or how to configure it. If I remember correctly, the server was also started with command-line parameters that "dialed" in to a remote IP/port the viewer is listening on. Any ideas? Thanks
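
    For reference, a sketch of what the server side of such a reverse connection usually looks like with UltraVNC, assuming the stock winvnc.exe and a viewer listening on port 5007 (the address 192.0.2.10 is a placeholder):

        winvnc.exe -run -connect 192.0.2.10::5007

    The -connect host::port switch makes the UltraVNC server dial out to a listening viewer; -run starts it as a normal application rather than as a service.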

    Read the article

  • sbs-server with 2 nics and 2 connections to the internet with different providers not working as it

    - by erik-van-gorp
    We have the following configuration: an SBS 2003 server in a domain (mydomain.com) with two network cards, each connected to a different network (provider) with its own gateway: one for web, and one for mail and clients. (We do this because the bandwidth we get from each provider is too small to handle all the mail (+spam) traffic and web services, so we took two providers.) DNS is as follows:

        www.mydomain.com   1.2.3.4
        mail.mydomain.com  5.6.7.8

    NIC 1 (192.168.1.3) is connected to the internet through a firewall at 192.168.1.1, with WAN address 1.2.3.4. NIC 2 (10.0.0.3) is connected to the internet through a firewall at 10.0.0.1, with WAN address 5.6.7.8. Each NIC has its default gateway set to its corresponding router, and the metrics are set equal. (I know this isn't a supported configuration, but it works, more or less.) In this configuration I can use RDP on both WAN addresses, and telnet to port 25 works on both as well. The issue is that for a few weeks now we have been getting regular disconnections and website hiccups (timeouts), several per hour. If we set one router to a higher metric, that route no longer works. In short, I want the mail to route through NIC 2 and the web through NIC 1. Is there a better configuration (without installing a second mail server)?
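
    For reference, a sketch of how the two competing default routes would be pinned by hand on Windows 2003 (gateways as in the question; the metric values are arbitrary):

        route print                                      REM inspect both default routes and their metrics
        route -p add 0.0.0.0 mask 0.0.0.0 192.168.1.1 metric 10
        route -p add 0.0.0.0 mask 0.0.0.0 10.0.0.1 metric 20

    Note that destination-based routes like these cannot steer mail out one NIC and web out the other, since both services talk to arbitrary destinations; that separation has to happen per port, typically as policy-based routing on the firewalls rather than in the Windows routing table.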

    Read the article

  • Multiple Rails apps on same subdomain?

    - by Derek
    I recently decided to try out Rails. When working with PHP, I simply had all of my PHP projects in the same directory; for example, I might have http://ubuntu/app1, http://ubuntu/app2, etc. I created a subdomain for Rails (http://ruby.ubuntu), installed Rails and Passenger, and everything is working. However, I may be wrong, but it looks like I can only have one Rails app per subdomain? My VirtualHost is as follows:

        <VirtualHost *:80>
            ServerName ruby.ubuntu
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/ruby/blog/public
            <Directory /var/www/ruby/blog/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                RailsEnv development
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All of my PHP and miscellaneous files are stored in /var/www/main. I want to be able to store all of my Rails apps in /var/www/ruby. I tried changing DocumentRoot to /var/www/ruby, but it's not as simple as that: when I browse to a Rails app's Welcome Aboard page and click "About my application's environment," I get a 404 page, whereas when DocumentRoot is set to the app's public directory, I get the expected result. I don't want to have to create a new subdomain every time I create a new project. Is there any way I can store all of my apps in /var/www/ruby and have http://ruby.ubuntu give access to all of them? That way, if I want to create a new app, all I have to do is run rails new app, with no Apache .htaccess or VirtualHost configuration required.
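
    Passenger can serve several Rails apps from a single VirtualHost as sub-URIs. A minimal sketch, assuming Passenger's documented RailsBaseURI directive and a symlink from the DocumentRoot to each app's public directory (the app names here are placeholders):

        # ln -s /var/www/ruby/blog/public /var/www/main/blog
        # ln -s /var/www/ruby/wiki/public /var/www/main/wiki
        <VirtualHost *:80>
            ServerName ruby.ubuntu
            DocumentRoot /var/www/main
            RailsBaseURI /blog    # served at http://ruby.ubuntu/blog
            RailsBaseURI /wiki    # served at http://ruby.ubuntu/wiki
        </VirtualHost>

    Each new app still needs one symlink and one RailsBaseURI line, but no new subdomain or separate VirtualHost.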

    Read the article

  • Adobe software does not save to network share

    - by Bart van Heukelom
    I'm running Windows 7 inside VirtualBox on a Linux host. I have shared my Linux filesystem so it's accessible in Windows under \\vboxsvr\sharename, and I've mounted this share as S:. For most software it works fine. Adobe software like Photoshop has problems with it, though: I can read from S: just fine, but if I try to save something it gives me the message "There are no more files". How can I make it able to write to the share?

    Read the article

  • Processing files from a Content Distribution Network problem

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions, closer to your users. However, I've noticed that a few websites, when a page is requested from their server, grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their own server, and then send them to the user requesting the page. This doesn't make much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset management concept?

    Read the article

  • How to figure out why sites on my server aren't loading

    - by Derek
    I seem to randomly receive "page cannot load, cannot connect to server" errors for sites on one of my servers. When this happens, it seems to only affect certain IPs or IP ranges at a time. I say this because while I'll get the error from my home laptop, I'll be able to access the site fine from my work computer or from an offsite VPS. DNS records should already be fully propagated, as these records were updated months ago. I have no idea how to diagnose what's going on. Is there a tool in cPanel, or outside on the web, that can help me figure out what's happening?
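
    When the failure is reproducible from an affected network, a few standard client-side checks can narrow down whether the problem is DNS, routing, or the web server itself. A sketch, with example.com standing in for the real site:

        dig example.com +short                        # does this network resolve the right IP?
        mtr --report example.com                      # does the route die somewhere along the way?
        curl -sv http://example.com/ -o /dev/null     # does the TCP/HTTP layer answer?

    If dig returns the correct address everywhere but curl only times out from certain networks, that points toward routing or a server-side firewall; cPanel servers commonly run cPHulk or a firewall such as CSF that can block whole IP ranges.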

    Read the article

  • Force ID of user created by apt-get

    - by Bart van Heukelom
    Context: I'm automatically installing postgresql-9.1 on an Ubuntu server with apt-get. This creates the required postgres user. The Postgres data is on an external volume that survives reinstalls. This data is obviously owned by the postgres user. The problem I'm having is that the ownership is not recorded under the name postgres, but under the UID that postgres had at creation time. When the server is reinstalled, postgres sometimes gets a different UID, and no longer owns the data directory, and thus does not work. Question: Can I force the UID of the user postgres created by apt-get to something fixed? Or is there another way to solve my problem? (As you may have deduced, this is on Amazon EC2 with the data on an EBS volume)
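
    One approach is to create the user with a fixed UID before apt-get runs; the Debian/Ubuntu maintainer scripts reuse an existing postgres user instead of creating a new one. A sketch, assuming UID/GID 499 is free on every rebuild (run from the instance's provisioning script):

        # create postgres with a fixed, known UID before the package does it
        addgroup --system --gid 499 postgres
        adduser --system --uid 499 --gid 499 --home /var/lib/postgresql postgres
        apt-get install -y postgresql-9.1

    The blunter alternative is a chown -R postgres:postgres of the data directory after each reinstall, which works but has to touch every file on the EBS volume.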

    Read the article

  • Create network shares via command line with specific permissions

    - by Derek
    This is sort of a two-pronged question. I am developing an application that will need to be able to create network shares in Windows Server 2003 via the command line. So, firstly, how do I create shares in Windows via the command line? I tried researching it, and all I was able to find is that I should be using net, but other than that, there isn't much documentation. Also, in this share there will be a few directories with the names of users on the domain, and I would like for the directories to not be readable or writable by anyone else. For example, say I have two directories: jsmith and jdoe. I would like the user jsmith to write and read from the directory jsmith, but not the directory called jdoe, and vice versa.
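
    A sketch of both halves, assuming the user folders live under D:\Shares\Users, the accounts are in DOMAIN, and your Server 2003 build's net share supports the /GRANT switch:

        REM create the share with share-level permissions
        net share Users=D:\Shares\Users /GRANT:Everyone,FULL

        REM lock each folder down to its owner (plus Administrators) via NTFS ACLs
        cacls D:\Shares\Users\jsmith /T /G DOMAIN\jsmith:F Administrators:F
        cacls D:\Shares\Users\jdoe /T /G DOMAIN\jdoe:F Administrators:F

    cacls with /G (and without /E) replaces the ACL outright, which is what makes each folder unreadable to everyone not listed; the effective access is the intersection of the share and NTFS permissions.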

    Read the article

  • PHP processes run one at a time, always taking 100% of one core

    - by Derek Kurth
    We have seven websites written in PHP running on a Windows 2008 server with IIS 7.5. They are all very slow right now. When I look in Task Manager, I see around 10 php-cgi.exe processes, and they are all taking 0% of the CPU, except one, which is taking 25%. It's a quad-core server, so it's taking 100% of one core. If I watch for a few seconds, the process taking 25% will go to 0%, and a different php-cgi.exe process will jump to 25%. So all the php-cgi.exe processes are just lined up, waiting on a single core, and each process uses 100% of the processor when it can. Each of the 7 sites is in its own application pool in IIS, and we're using FastCGI. The PHP version is 5.3. Any ideas? Thanks!
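
    One place to start is to check what FastCGI settings IIS is actually applying to php-cgi.exe, since requests queueing behind one busy worker at a time is what a serialized process pool looks like. A sketch using appcmd, which ships with IIS 7.5:

        %windir%\system32\inetsrv\appcmd list config -section:system.webServer/fastCgi

    In the output, the maxInstances attribute on the PHP application controls how many php-cgi.exe workers may serve requests concurrently; if it is 1, everything queues behind a single process. (A starting point, not a confirmed diagnosis.)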

    Read the article

  • How can I get Windows to apply its settings?

    - by Jouke van der Maas
    I have a computer with a major problem; it gives a blue screen when the login screen loads. I've been using this guide to troubleshoot the issue, and now I've run into a problem. I have determined the issue is not bad memory or a bad hard drive. According to the guide, this means the problem is in the OS. I've tried to follow the steps, but Windows (Vista SP1) somehow doesn't remember any changes. On every reboot, the computer is in exactly the same state it was in before. Any changes to system settings or files won't be recorded. As this means I can't check what is causing the problem, I can't fix my PC. Is there a way to find out what's causing this? Is it just a mode Windows goes into to protect itself, or is it some other problem? Anything to help troubleshoot will be of great help here. PS. I'm kind of new to this site. If I messed up, please tell me in the comments.

    Read the article

  • Determine from where "sh" is being run under the apache www-data user using ps or netstat

    - by Eugene van der Merwe
    I am working with a compromised Ubuntu 8.04 Plesk 9.5.4 server. It seems that a script on the server is continuously doing reverse lookups on random IPs on the Internet. I first spotted it by using top, and then noticed flashes of this coming up continuously:

        sh -c host -W 1 '198.204.241.10'

    I wrote this script to interrogate ps every second and see how frequently it happens:

        #!/bin/bash
        while :
        do
            ps -ef | egrep -i "sh -c host"
            sleep 1
        done

    The result is that it runs often, every few seconds:

        www-data 17762  8332 1 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
        www-data 17772  8332 1 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
        www-data 17879 17869 0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
        www-data 17879 17869 1 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
        www-data 17879 17869 0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
        root     18031 17756 0 10:07 pts/2 00:00:00 egrep -i sh -c host
        www-data 18078 16704 0 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
        www-data 18125 17996 0 10:07 ?     00:00:00 sh -c host -W 1 '91.124.51.65'
        root     18131 17756 0 10:07 pts/2 00:00:00 egrep -i sh -c host
        www-data 18137 17869 0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
        www-data 18137 17869 1 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'

    My theory is that if I can see who is launching the sh process, or from where it is launched, I can isolate the problem further. Can somebody please guide me in using netstat or ps to identify where sh is being run from? I may get suggestions that the OS and the Plesk are out of date, but please bear in mind that there are some very concrete reasons why this server is running legacy software. This question is aimed at advanced Linux systems administrators who have in-depth experience with security compromises and with using netstat and ps to get to the bottom of them.
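
    The third field in that ps -ef output is the PPID, so the parents (8332, 17869, 16704, 17996) are already visible. A sketch of chasing one of them down, assuming a Linux /proc filesystem:

        pid=$(pgrep -f "sh -c host" | head -1)     # catch one of the short-lived shells
        ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
        ps -fp "$ppid"                             # which process spawned it?
        ls -l /proc/$ppid/cwd /proc/$ppid/exe      # where does that parent live?
        lsof -p "$ppid" | head -20                 # what files/sockets does it hold?

    With www-data as the owner on a Plesk box, the parent is very often an Apache worker executing an injected PHP script, in which case the cwd symlink tends to land straight in the compromised vhost's document root.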

    Read the article

  • Delete registry key or value via a CMD script?

    - by Derek
    How do I edit an already-in-production .cmd script file in order to have the script delete a certain registry key in the Windows registry? Firstly, is this even possible, and secondly (if it's not), could I create a .reg file and execute that file from within the .cmd file? From within the .cmd script, this is not working:

        del "[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\CurrentVersion\SampleKey]"

    This method hasn't worked for me either:

        cmd "\\networkdrive\regfiles\deleteSampleKey.reg"

    with the .reg file containing:

        Windows Registry Editor Version 5.00

        [-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
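
    For what it's worth as a sketch: cmd has a built-in tool for this, reg.exe, which avoids the .reg file entirely, and .reg files are run with regedit rather than cmd. Using the sample key from the question:

        REM delete the key directly; /f suppresses the confirmation prompt
        reg delete "HKLM\SOFTWARE\Microsoft\CurrentVersion\SampleKey" /f

        REM or import a .reg file silently from the script
        regedit /s "\\networkdrive\regfiles\deleteSampleKey.reg"

    Both reg delete and regedit /s exist at least as far back as Windows XP/Server 2003.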

    Read the article

  • Multiple subnets behind SonicWall TZ 180

    - by Derek
    We have a SonicWall TZ 180 that acts as a VPN endpoint. Right now it has one WAN IP address and a /24 assigned to the LAN interface. Our mail cluster administrator asked if it was possible to add a second private class C behind the VPN. This second subnet would be available to the other network, and we would then use address objects and ACLs to limit access. Is this possible? I read up on PortShield, but I don't know if that's what we would need to use, because we're pushing all data out of one physical port into a Cisco switch that already has VLANs set up. Addendum: It appears that PortShield will do what I want, with one limitation: it requires a direct 1:1 relationship of PortShield to physical port. This would limit us to 4 PortShields on one TZ 180. Is there a better solution than this?

    Read the article

  • Possible to redirect from HTTPS to HTTP behind load-balancer?

    - by Derek Hunziker
    I have a basic ASP.NET application that sits behind an F5 load balancer. Incoming SSL requests (over HTTPS) terminate at the load balancer, and all internal communication between the load balancer and my application servers is unsecured (over HTTP). When an unsecured request comes in, my app is able to use Response.Redirect("https://...") to redirect to a secure URL with no problems. However, the other direction appears to be impossible: I cannot redirect from HTTPS to HTTP using Response.Redirect() from my application. The URL remains HTTPS for the client and does not change. Could the F5 be preventing the redirect from ever reaching the client? Is there any special configuration necessary to let this happen?
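
    One F5-side thing worth checking: the HTTP profile on LTM has a Redirect Rewrite setting which, when enabled, rewrites Location headers on the way out so that HTTP redirects become HTTPS again, producing exactly this symptom. On the application side, a sketch of building the downgrade redirect with an explicit absolute URL (assuming the site answers plain HTTP on port 80):

        // build an absolute http:// URL for the current page and redirect to it
        var builder = new UriBuilder(Request.Url)
        {
            Scheme = Uri.UriSchemeHttp,
            Port   = 80
        };
        Response.Redirect(builder.Uri.AbsoluteUri, true);

    If a packet capture between the F5 and the app server shows the Location header leaving the app as http:// but the browser receives https://, the rewrite is happening on the F5.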

    Read the article

  • How can I use mod_rewrite to redirect multiple specific URLs containing multiple query strings?

    - by Derek
    Hi there folks, we recently migrated a site from a custom CMS to Drupal. In an effort to preserve some links that our users bookmarked (we have about 120 redirects), we would like to forward the original URLs to new URLs. I have been searching for a couple of days, but can't seem to find anything similar to what I need. We have existing URLs that contain one or more query-string parameters, for example:

        /article.php?issue_id=12&article_id=275

    and we would like to forward to the new location:

        http://foobar.edu/content/super-happy-fun-article

    I started with:

        RewriteEngine On
        RewriteRule ^/article\.php?issue_id=12&article_id=275$ http://foobar.edu/content/super-happy-fun-article [R=301,L]

    This, however, does not work. A simple RewriteRule works:

        RewriteRule ^test\.php$ index.php

    It is unclear to me how I need to use %{QUERY_STRING} with multiple parameters. Basically it's 120 simple redirects that each go from one existing URL to a new one. I don't need ranges like [0-9], because there is no sequential order to the existing URLs. Perhaps I can do what I need with RewriteMap and a simple text file that contains lines like this:

        index.php?issue_id=12&articleType_section=0&articleType_id=65 http://foobar.edu/category/fall-2008

    If anyone has any idea on using mod_rewrite to accomplish this, or if there is a better or simpler mod, I am open to that as well. Thanks!
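
    The reason the first rule fails is that RewriteRule never sees the query string; it matches only the path, so the issue_id=... part has to be matched with a RewriteCond against %{QUERY_STRING}. A sketch of one of the 120 redirects done that way (the trailing ? on the target strips the old query string from the redirect):

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^issue_id=12&article_id=275$
        RewriteRule ^article\.php$ http://foobar.edu/content/super-happy-fun-article? [R=301,L]

    At 120 entries this gets verbose, which is where the RewriteMap idea fits: a txt: map keyed on the query string can collapse it to one rule, though RewriteMap itself must live in server or virtual-host config, not in .htaccess.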

    Read the article

  • Clean out a large MediaWiki text table

    - by Bart van Heukelom
    I just discovered that an old MediaWiki of mine was infested with spam, and the database table named "text" (which contains the page content) is 3GB in size. I've deleted all the spam pages manually, but: the table is still the same size, and I wonder how it got to 3GB anyway; there wasn't that much spam (about a hundred medium-sized pages). How can I get rid of this mess? If you want to inspect the wiki, it's over here. The database is MySQL 5.0.75.
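
    Deleting pages in MediaWiki only archives their revisions, and MySQL does not shrink a table's file on DELETE, so both halves need an explicit step. A sketch using MediaWiki's stock maintenance scripts plus an OPTIMIZE, assuming the wiki's database is called wikidb:

        php maintenance/deleteArchivedRevisions.php --delete   # drop revisions of deleted pages
        php maintenance/purgeOldText.php --purge               # remove text rows nothing references
        mysql -u root -p wikidb -e "OPTIMIZE TABLE text;"      # rebuild the table, reclaim the space

    OPTIMIZE TABLE on a 3GB table can take a while and locks the table, so it is best run during a quiet period.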

    Read the article

  • WordPress on Apache is redirecting all HTTPS to HTTP

    - by Krist van Besien
    I have a problem with a WordPress site on a server I admin; I don't know anything about WordPress itself, however. The problem is that we want the site to be accessed over HTTPS, but somehow all requests to https:// URLs are answered by the server with a 302 redirecting to http. The WordPress site itself is configured to use HTTPS, and we see that in the generated pages all the links are https links. In the Apache config there are no rewrite rules and no redirects; nevertheless, any request to an https:// URL is answered with a redirect to the equivalent http URL. I really would like to know where these redirects are coming from and what is generating them. I've increased the log level on the web server to DEBUG, but did not get any info there. I tried to enable debug logging in WordPress per the recipe I found here: http://codex.wordpress.org/Debugging_in_WordPress But I did not get a debug.log file in the directory where one should appear. I'm really at a loss here, and need to fix this urgently. Any hints as to where to start looking? Apache is 2.2.14 on Ubuntu. There are several other virtual hosts on this server using PHP and HTTPS without any problem. Edit: I created a small info.php script and dropped it in the web server's root. Calling it yields the output of the script; no redirect is generated. This suggests that it's not the web server but WordPress that is doing it. A second thing I noticed is that the redirect comes with several cookies, one of which has "httponly" set. Could that be it?
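
    A WordPress-generated scheme redirect usually means the configured site URL doesn't match the request. A sketch of the usual wp-config.php checks, with example.org standing in for the real site:

        // force the configured URLs to https; these constants override the database values
        define('WP_HOME',    'https://example.org');
        define('WP_SITEURL', 'https://example.org');

        // the standard debug switches from the Codex page above
        define('WP_DEBUG', true);
        define('WP_DEBUG_LOG', true);     // should create wp-content/debug.log
        define('WP_DEBUG_DISPLAY', false);

    If SSL terminates on a proxy in front of Apache, WordPress may also not realize the request was HTTPS; in that setup the common fix is to set $_SERVER['HTTPS'] = 'on' in wp-config.php when the X-Forwarded-Proto header says https.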

    Read the article

  • Change User's password in Subversion

    - by Derek
    We use CollabNet Subversion here in our office. We can change user passwords via the console (remote access to the server), but the console is only accessible with the root password. Is there an existing web interface that users can use to change their own passwords?

    Read the article

  • If I double my RAM on an x86 processor, does that double the RAM I can use for each individual process?

    - by Derek Reitz
    I don't understand how 32-bit OSes use RAM on a per-process basis. I've read that the maximum RAM an x86 processor running a 32-bit OS can use is 2^32 = 4GB; but that's just for one process, right? 3DS Max keeps crashing, and it typically never uses more than 2GB of RAM before it crashes. If I increase my RAM from 4GB to 8GB, would that double how much RAM I can use for each individual process, or would it actually cause no change in my performance? Also, would increasing my VRAM and getting a better graphics card increase the extent to which individual programs can perform? Lastly, is there any way to upgrade an x86 processor to be able to run a 64-bit OS? I feel like it would be ridiculous to sell modern processors that are capped at 4GB of RAM. Thanks. Quad-Core Intel i7 Q 720 @ 1.6GHz

    Read the article

  • How to set up multiple users on a dev server with git and GitHub

    - by Derek Organ
    I'm working on a LAMP application. We have two servers (Debian), Live and Dev. I constantly work on Dev to add new features and fix bugs, and when I'm happy that all works well I scp the relevant code to the Live system. The database (MySQL) is local to each machine. This is a pretty basic setup, and I want to improve the workflow a bit. I use git and GitHub for version control; admittedly, I've only really used one branch. There can be three different developers who work on the code at different times. We all use the same Linux username to connect to the dev server and edit the code directly when needed; I usually then commit and push the code to GitHub at the end of the day. One thing to bear in mind is that it isn't easy to run this code on a local machine, as there are many Apache and subdomain configurations that wouldn't work locally, so it is important to work on the dev server, not locally. I need to create a new process, because we need to have a main trunk now and a branch with a big code re-write. What is the best way to do this? Should I create different Unix logins for each developer and set up different working areas on the dev server for their changes? For example:

        /var/www/mysite_derek
        /var/www/mysite_paul
        /var/www/mysite_mike

    My thinking is that they can do a pull from the main branch, then create their own branch and merge it back in. I'm not sure how this will work, though, with git locally and with GitHub; will I need to create different GitHub user accounts as well? I'd like to do this the 'right' way and future-proof it for having lots of potential developers, but I also don't want to over-complicate it. A simple and elegant solution is preferred. Any recommendations or suggestions?
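
    A sketch of the per-developer flow this enables, assuming one shared GitHub repo and per-developer checkouts under /var/www (the repo URL and names are placeholders; separate GitHub accounts are only needed for per-person attribution and access control):

        # one-time, per developer
        git clone git@github.com:example/mysite.git /var/www/mysite_derek
        cd /var/www/mysite_derek
        git config user.name  "Derek"
        git config user.email "derek@example.com"

        # per feature, or for the big re-write
        git checkout -b rewrite           # branch off the main trunk
        # ...edit, test against this checkout's vhost...
        git commit -am "rework templating"
        git push -u origin rewrite        # share the branch
        git checkout master && git pull   # keep the trunk current
        git merge rewrite                 # fold the re-write back in when ready

    Each /var/www/mysite_<name> directory would need its own Apache vhost or subdomain so the checkouts can be tested independently.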

    Read the article

  • Sendmail : ignore local delivery

    - by Derek Organ
    I have an Ubuntu web server with Sendmail as my MTA. Currently, when I email outside my web server's domain, e.g. from example.com to something like Gmail or any other address outside the example.com domain, it works perfectly. I don't want my sendmail daemon to recognize example.com as a local address; I want it to send to example.com the same way any other email is sent. There will never be a case where I will use the local users on the web server to collect these emails for example.com. So how can I disable local delivery?
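
    Sendmail treats as local any name listed in its local host class, which on Ubuntu is fed from /etc/mail/local-host-names. A sketch of taking the domain out of local delivery:

        # remove the domain from the list of locally delivered names
        sed -i '/^example\.com$/d' /etc/mail/local-host-names

        # rebuild the sendmail config and restart
        cd /etc/mail && make
        /etc/init.d/sendmail restart

    After this, mail to user@example.com is resolved through MX records like any outside domain, so the domain's MX must point somewhere other than this box.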

    Read the article

  • Private subnet for VM server host-only network

    - by Derek Pressnall
    At my current job, we distribute a product based on a Linux server with multiple VMs defined (using KVM / libvirt). We are planning to expose limited ports to the customer's network, and use iptables to direct inbound traffic to the appropriate internal VM. My question: is there a class of private subnets that I can use for the internal host-only network that is least likely to conflict with a client IP subnet? Specifically, if I choose a /24 out of any of the RFC-1918 defined private subnets (such as 192.168.x.x), there is a chance of conflicting with a customer-used range. I noticed that several current VM implementations default to 192.168.122.x -- is this due to an RFC that I'm not familiar with, and therefore this is a safe range to use (that most network admins would avoid)? Or did the various VM vendors just pick that range randomly? I guess I'm looking for an IP range that is more private than the existing private (RFC1918) addresses. The only other thought I had was to use one of the "Test Net" IP ranges reserved for documentation purposes (RFC 5737). Note, that I'm not worried about a customer's network blocking these IPs, as this is only internal to our server (packets get NATted before leaving the box). However this does seem more unorthodox than just sticking with the default 192.168.122.x/24 subnet.

    Read the article

  • Podcast Show Notes: Evolving Enterprise Architecture

    - by Bob Rhubart
    The latest series of ArchBeat podcast programs grew out of another virtual meet-up, held on March 11. As with previous meet-ups, I sent out a general invitation to the roster of previous ArchBeat panelists to join me on Skype to talk about whatever topic came up. For this event, Oracle ACE Directors Mike van Alst and Jordan Braunstein showed up, along with Oracle product manager Jeff Davies. The result was an impressive and wide-ranging discussion on the evolution of Enterprise Architecture, the role of technology in EA, the impact of social computing, and the challenge of having three generations of IT people at work in the enterprise, each with a different perspective on technology. Mike, Jordan, and Jeff talked for more than an hour, and the conversation was so good that slicing and dicing it to meet the time constraints for these podcasts has been a challenge. The first two segments of the conversation are now available. Listen to Part 1. Listen to Part 2. Part 3 will go live next week, and an unprecedented fourth segment will follow. These guys have strong opinions, and while there is common ground, they don't always agree. But isn't that what a community is all about? I suspect that you'll have questions and comments after listening, so I encourage you to reach out to Mike, Jordan, and Jeff via the following links: Mike van Alst: Blog | Twitter | LinkedIn | Business | Oracle Mix | Oracle ACE Profile. Jordan Braunstein: Blog | Twitter | LinkedIn | Business | Oracle Mix | Oracle ACE Profile. Jeff Davies: Homepage | Blog | LinkedIn | Oracle Mix. (Also check out Jeff's book: The Definitive Guide to SOA: Oracle Service Bus.) Coming soon: ArchBeat's microphones were there for the panel discussions at the recent Oracle Technology Network Architect Days in Dallas and Anaheim. Excerpts from those conversations will be available soon. Stay tuned.

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >