Search Results

Search found 1454 results on 59 pages for 'martin anderson'.

Page 6/59

  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services across multiple virtual guests. Those guests are supposed to:

    1. communicate with each other (for instance, a virtual web server has to access a virtual database server);
    2. be reachable from the dedicated server (for instance, SSH access); and
    3. access the Internet via the dedicated server (for instance, to download security updates).

    Currently, this is achieved by a host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest: virtual adapter eth0 is attached to vboxnet0 (to achieve (1) and (2)), and virtual adapter eth1 is attached to VirtualBox's NAT (to achieve (3)). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (bound there to vboxnet0); this allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server).

    This setup with two virtual adapters frequently leads to problems and at least complicates many things. For instance, the dedicated server runs OpenVPN, which allows access to the virtual machines via the Internet; furthermore, there is Shorewall, which controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE...

    Therefore, I would prefer to have only one single virtual adapter on each guest, used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air", i.e. without a complement on the dedicated server? How would I have to set up the routing on the host server?

    Please note that the host / dedicated server has only one network adapter (eth0), which is connected to the provider's network.

    Regards, Martin
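
    One commonly suggested direction for this (a sketch only, not from the original question; addresses and the Debian-style configuration are assumptions) is a routed setup: create a host bridge with no physical member port, attach each guest's single adapter to it, and let the host route and NAT between that bridge and eth0.

        # /etc/network/interfaces on the host (assumes bridge-utils is installed)
        auto br0
        iface br0 inet static
            address 192.168.100.1      # hypothetical gateway address for the guests
            netmask 255.255.255.0
            bridge_ports none          # no physical interface is enslaved
            bridge_stp off
            bridge_fd 0

        # enable forwarding and NAT out of the provider-facing interface
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE

    Guests would then use 192.168.100.1 as their default gateway, so nothing has to "dangle in the air" inside the VMs themselves, and eth0 is never bridged.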

    Read the article

  • Mandatory Parameters in Request object (WCF)

    - by Martin Moser
    Hi, I'm currently writing a WCF service. One of its methods gets a request object and returns a response object. In the request there are a couple of value-type members. Is there a way to declare members as mandatory, declaratively? I'm in an early stage of development and I don't want to start with versioning now. In addition, I don't want a method signature with 25 parameters, which is why I created the request object. The problem I have is that, because of the value types, I can never be sure whether the consumer of the service intended to send the default value or simply left it unset out of laziness; on the consumer side you don't easily notice that you probably missed that property. So I would like to have something that forces the caller of the service to provide a value and, if not, ideally gives a compile-time error. Any ideas? tia, Martin
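
    A hedged sketch of the usual declarative approach (not from the original post): DataMemberAttribute has an IsRequired property, and making value-type members nullable distinguishes "never set" from "set to the default". That yields a deserialization-time failure rather than the compile-time error asked for, which the contract alone cannot provide. The type and member names below are hypothetical.

        using System;
        using System.Runtime.Serialization;

        [DataContract]
        public class CreateOrderRequest
        {
            // Deserialization fails if this element is missing from the message.
            [DataMember(IsRequired = true)]
            public int? Quantity { get; set; }   // nullable: null means "not provided",
                                                 // which is distinguishable from 0

            [DataMember(IsRequired = true)]
            public string CustomerId { get; set; }
        }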

    Read the article

  • VBScript - copy files modified in last 24 hours

    - by Martin North
    Hi, I'm trying to copy files from a directory where the last modified date is within 24 hours of the current date. I'm using a wildcard in the file path, as it changes every day. This is what I have:

        option explicit
        dim fileSystem, folder, file
        dim path
        path = "d:\x\logs"
        Set fileSystem = CreateObject("Scripting.FileSystemObject")
        Set folder = fileSystem.GetFolder(path)
        for each file in folder.Files
            If DateDiff("d", file.DateLastModified, Now) < 1 Then
                filesystem.CopyFile "d:\x\logs\apache_access_log-*", "d:\completed logs\"
                WScript.Echo file.Name & " last modified at " & file.DateLastModified
            end if
        next

    Unfortunately this seems to be copying all files, and not just the recently modified ones. Can anyone point me in the right direction? Many thanks, Martin.
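
    The wildcard in CopyFile is the likely culprit: each time any one file passes the date test, that statement copies every file matching the pattern, regardless of its age. A minimal sketch of a fix (same paths as above; it uses an hour-based comparison so "within 24 hours" does not depend on calendar-day boundaries the way DateDiff("d", ...) does):

        option explicit
        dim fso, folder, file
        Set fso = CreateObject("Scripting.FileSystemObject")
        Set folder = fso.GetFolder("d:\x\logs")
        for each file in folder.Files
            ' copy only this one file, and only if it changed in the last 24 hours
            If DateDiff("h", file.DateLastModified, Now) < 24 Then
                fso.CopyFile file.Path, "d:\completed logs\"
                WScript.Echo file.Name & " last modified at " & file.DateLastModified
            End If
        next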

    Read the article

  • Simplest way to use a DataGridView with LINQ to SQL

    - by Martín Marconcini
    Hi, I have never used data grids and such, but today I came across a simple problem and decided to "databind" stuff to finish faster. However, I've found that it doesn't work as I was expecting. I thought that doing something as simple as:

        var q = from cust in dc.Customers
                where cust.FirstName == someString
                select cust;
        var list = new BindingList<Customer>(q.ToList());
        return list;

    and then using that list as DataGridView1.DataSource was all that I needed. However, no matter how much I google, I can't find a decent example of how to populate (for add/edit/modify) the results of a single-table query into a DataGridView1. Most samples talk about ASP.NET, which I lack; this is WinForms. Any ideas? I've come across other posts and GetNewBindingList, but that doesn't seem to change much. What am I missing (must be obvious)? Thanks in advance. Martin.
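
    A common WinForms pattern (a sketch using the names from the question; where the Save step lives is an assumption) is to put a BindingSource between the list and the grid, and push edits back through the DataContext:

        // assumes dc is a LINQ to SQL DataContext and Customer one of its entities
        var q = from cust in dc.Customers
                where cust.FirstName == someString
                select cust;

        var bindingSource = new BindingSource();
        bindingSource.DataSource = new BindingList<Customer>(q.ToList());
        DataGridView1.DataSource = bindingSource;

        // Edits to existing rows are tracked by the DataContext; calling this
        // (e.g. from a Save button) writes them back to the database:
        dc.SubmitChanges();

        // Note: rows *added* through the grid are not registered with the
        // DataContext automatically; they need dc.Customers.InsertOnSubmit(...).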

    Read the article

  • Which data structure for a list of objects + DataGridView

    - by Martin
    Hi, I have to develop code that stores a list of objects, for example:

        101, value 11, value 12, value 13 ...
        102, value 21, value 22, value 23 ...
        103, value 31, value 32, value 33 ...
        104, value 41, value 42, value 43 ...

    The difficulty is that the first column is an identifier, and the whole table should always be sorted by it. Easy access to each element is required. Additionally, the list should be easy to update and to extend by adding elements at the end as well as at the front, while remaining sorted by the first column. Finally, I would like to be able to display the values of the above in a DataGridView. What is most important is the performance of the implementation, as rows will be updated many times per second, and the DataGridView should display all changes immediately. I was thinking about creating a class for the values and then a Dictionary, but I encountered a problem with displaying the values in the grid view. What would be the most optimal way of implementing this? Thanks in advance, Martin
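
    One possible shape for this (a sketch under the constraints above; all names are hypothetical): keep the rows in a BindingList<T> behind a BindingSource so the DataGridView stays bound, and implement INotifyPropertyChanged on the row class so in-place updates (many per second) repaint without rebinding. Sorted inserts can be done with a binary search on the identifier.

        using System.ComponentModel;

        public class Row : INotifyPropertyChanged
        {
            public int Id { get; set; }          // the identifier column

            private double _value1;
            public double Value1
            {
                get { return _value1; }
                set { _value1 = value; OnPropertyChanged("Value1"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var h = PropertyChanged;
                if (h != null) h(this, new PropertyChangedEventArgs(name));
            }
        }

        // In the form: BindingList<T> relays each row's PropertyChanged to the
        // grid as a ListChanged event, so cell updates show immediately.
        var rows = new BindingList<Row>();
        var source = new BindingSource { DataSource = rows };
        dataGridView1.DataSource = source;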

    Read the article

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Google searches and days of trying, I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My web servers are two single-NIC lighttpd machines and one dual-NIC lighttpd machine. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements:

    - All 3 web servers running lighttpd should be balanced. Two options for load balancing: best would be that if server1 is busy, server2 takes over; if server2 is busy, server3 takes over, etc. Or round-robin style, evenly distributed load, e.g. server1 takes the first call, server2 the second, etc.
    - All requests should be treated the same way (no URL rewriting or so on).
    - The Host header sent by the client has to be passed through to every server as the HTTP Host header, whether it says "server1", "server1.company.internal", or "10.211.1.1".

    My approach:

        acl all src all
        acl manager proto cache_object
        http_port 80 accel defaultsite=server1.company.internal vhost

        # reverse proxy entries
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
        cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
        cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
        cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2

        # declaration of names of the squid host
        acl registered_name_hostdomain dstdomain server1.company.internal
        acl registered_name_host dstdomain server1
        # ip of squid host
        acl registered_name_ip dstdomain 10.211.2.1

        # access: allow requests addressed to the squid host by any of its names
        http_access allow registered_name_hostdomain
        http_access allow registered_name_host
        http_access allow registered_name_ip
        http_access deny all

        cache_peer_access server1_nic1 allow registered_name_hostdomain
        cache_peer_access server1_nic1 allow registered_name_host
        cache_peer_access server1_nic1 allow registered_name_ip
        cache_peer_access server2_nic1 allow registered_name_hostdomain
        cache_peer_access server2_nic1 allow registered_name_host
        cache_peer_access server2_nic1 allow registered_name_ip
        cache_peer_access server3_nic1 allow registered_name_hostdomain
        cache_peer_access server3_nic1 allow registered_name_host
        cache_peer_access server3_nic1 allow registered_name_ip
        cache_peer_access server3_nic2 allow registered_name_hostdomain
        cache_peer_access server3_nic2 allow registered_name_host
        cache_peer_access server3_nic2 allow registered_name_ip
        cache_peer_access server1_nic1 deny all
        cache_peer_access server2_nic1 deny all
        cache_peer_access server3_nic1 deny all
        cache_peer_access server3_nic2 deny all

        never_direct allow all

    Problems:

    - The load balancer does not balance to anything other than the first server. Only if the first server is killed in some way will the second take over. I have seen the others working at some point, but definitely not the intended load balancing described above.
    - If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend web server, and this always depends on the defaultsite= parameter, probably because the Host header on the request to Squid is not set and is replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.

    Questions:

    - Does cache_peer_access influence the round-robin?
    - Is there a better workaround to pass the Host header to the backend web servers?
    - Which parameters increase the speed of load balancing, or does anyone have a better approach?

    -Martin

    Read the article

  • Windows Installer using USB drive for temp purposes

    - by Douglas Anderson
    When installing apps that are built around Windows Installer, it would appear that it often uses my external USB hard disk (when it's connected) as the temp location while it expands and installs the application (it creates a folder off the root with a GUID name). Is there any way to change this so it always defaults to a specific drive? This appears to be the case on Windows Vista and 7; I'm not sure about previous releases.

    EDIT: Current environment variables look like this:

        TEMP=C:\Users\<me>\AppData\Local\Temp
        TMP=C:\Users\<me>\AppData\Local\Temp

    EDIT: I have a funny suspicion that it's using the drive with the largest available free space.
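
    One thing worth trying (a sketch; whether Windows Installer honors it is uncertain, since the free-space suspicion above is plausible and often reported for MSI's temporary extraction) is pinning the machine-wide temp location, which elevated installers and services see instead of the per-user variables:

        :: run from an elevated prompt; /M sets the system (machine) variable
        setx TEMP "C:\Windows\Temp" /M
        setx TMP  "C:\Windows\Temp" /M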

    Read the article

  • Exim - send certain "local" user mail to SMTP

    - by Ryan Anderson
    I'm using Exim 3 and would like to know how to send mail for some local addresses to the SMTP server instead of having Exim handle them as a local user. They are local addresses in the sense that they have the same domain as listed in 'local_domains' in exim.conf. I tried using the "require_files" option on the localuser director in exim.conf, but with no luck. Any help appreciated. Thanks, Ryan

    Read the article

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since its earliest days, but I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is that I can deploy an app and not worry about managing it; I am assuming Heroku ensures all OS security updates are applied in a timely manner, so I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process: security updates won't automatically be applied to the instances. There seem to be two areas of concern:

    1. New AMI releases. As new AMI releases ship, we presumably want to run the latest (and presumably most secure) one. But my research seems to indicate that you need to manually launch a new setup to see the latest AMI version, and then create a new environment to use it. Is there a better, automated way of rotating your instances onto new AMI releases?

    2. Security updates between releases. Packages will receive security updates between AMI releases, and we want to apply those as well. My research indicates people install commands that occasionally run a yum update. But since new instances are created and destroyed based on usage, new instances would not always have the updates (i.e. in the window between instance creation and the first yum update), so occasionally you will have unpatched instances, and you will have instances constantly patching themselves until a new AMI release is applied. My other concern is that these security updates haven't gone through Amazon's own review (as the AMI releases do), and applying them automatically might break my app. I know Dreamhost once had a 12-hour outage because they applied Debian updates completely automatically, without any review. I want to make sure the same thing doesn't happen to me.

    So my question is: does Amazon provide a way to offer a fully managed PaaS like Heroku? Or is AWS Elastic Beanstalk really just an install script, after which you are on your own (apart from the monitoring and deployment tools they provide)?
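
    For the between-releases concern, Beanstalk's .ebextensions mechanism can bake patching into provisioning, so every newly created instance updates itself at launch instead of waiting for a scheduled yum run. A sketch (the file name is arbitrary; assumes an Amazon Linux AMI where yum's security plugin is available):

        # .ebextensions/99-security-updates.config, shipped inside the app bundle
        commands:
          01_apply_security_updates:
            command: yum -y update --security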

    Read the article

  • Running Safari from the command line adds current directory to the URL

    - by Charles Anderson
    I am trying to run the Safari browser (on Mac OS 10.4) from the command line, as follows:

        /Applications/Safari.app/Contents/MacOS/Safari http://localhost/dev/myfile.html

    However, Safari starts up and tries to access

        file:///Users/charlesanderson/scripts/http://localhost/dev/myfile.html

    /Users/charlesanderson/scripts happens to be my current directory. Can someone explain why Safari does this? Firefox behaves much better!
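
    The binary treats its argument as a file path relative to the current directory rather than as a URL. The usual way to hand a URL to a Mac application from the shell is open(1), which routes it through LaunchServices:

        # resolves the URL properly instead of treating it as a file path
        open -a Safari http://localhost/dev/myfile.html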

    Read the article

  • What configuration changes can I make to speed up extremely slow Windows VMs in ESXi 4.0?

    - by Shawn Anderson
    I've recently moved from VMware Server to ESXi 4.0, running on a Dell T310. My VMs have been restored, but they are running dog slow compared to VMware Server. I loaded ESXi 4.0 using only default values. What are some areas where I can tweak the performance? Even logging onto the VMs can be extremely sluggish; trying to install software on any of them is a new experience in pain.

        Dell PowerEdge T310
        Xeon X3460 2.80 GHz
        32 GB RAM
        1 HD (2 TB)

    I have 16 VMs on this server, but only six or so will be running during my testing. I keep an eye on the Resource Allocation and Performance tabs for the host, and I never see CPU or RAM getting anywhere close to pegged. The Events tab does show some notices for video RAM issues and some hints on Windows activation issues, but nothing that would point to the sort of sluggishness I'm experiencing.

        1 Windows Server 2008 R2 (64-bit) - 4 GB RAM
        1 Windows 7 (32-bit) - 2 GB RAM
        1 Vista (32-bit) - 1 GB RAM
        3 XP (32-bit) - 1 GB RAM

    Over to you! Thanks - Shawn

    Read the article

  • How well will ntpd work when the latency is highly variable?

    - by JP Anderson
    I have an application where we are using some non-standard networking equipment (which cannot be changed) that goes into a dormant state between traffic bursts. The network latency is very high for the first packet, since that packet essentially wakes the system: it waits for the link to reconnect and then makes the first round trip. Subsequent messages (provided they are within the next minute or so) are much faster, but still high-latency. A typical set of pings looks like 2500 ms, 900 ms, 880 ms, 885 ms, 900 ms, 890 ms, etc. Given that NTP uses several round trips before computing the offset, how well can I expect ntpd to work over this kind of link? Will the initially slow first round trip be ignored, based on the much different (and faster) following messages to/from the NTP server? Thanks and regards.
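
    Relevant background: ntpd's clock filter keeps the lowest-delay sample of the last eight exchanges per server, so the slow wake-up round trip should normally be discarded in favor of the ~900 ms samples. A configuration sketch (the server name is hypothetical) that keeps the poll rate high enough to hold the link awake and gathers several samples per poll:

        # /etc/ntp.conf (sketch)
        # iburst: a burst of packets at startup; burst: 8 packets at every poll;
        # minpoll/maxpoll 6 pins the poll interval at 64 seconds
        server ntp.example.com iburst burst minpoll 6 maxpoll 6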

    Read the article

  • nginx - Redirect specific page paths to https while keeping everything else on http (in a single server call)?

    - by Kris Anderson
    From what I've gathered so far, it's clear that if statements in nginx should be avoided at all costs. Most of the examples I've found regarding specific page redirects involve multiple server blocks. But isn't that a bit wasteful? I'm not sure, but I would think multiple server blocks to accomplish this would be somewhat slower than a single one under heavy load. My current server block is this:

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;
            # other code
        }

    What I want to do is redirect certain http requests to https. For example, I want /login/ and /my-account/ to always be forced to use SSL; /help/, though, should be served over the default http. Is there a way to accomplish this within a single server block? Or is there no downside to using two server blocks to get this working? nginx seems to be under pretty active development, and a lot of the older guides I followed date from when you couldn't listen for requests on ports 80 and 443 within the same server block. Now that nginx supports that (I'm running 1.2.4), I'm wondering if there's a "best practice" way of handling this today. Any help would be greatly appreciated.

    EDIT: I did find this guide: http://redant.com.au/blog/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ and I updated my code as follows:

        map $uri $my_preferred_proto {
            default "http";
            ~^/#/user/login "https";
        }

        server {
            listen 10.0.0.60:80;   ## listen for ipv4; this line is default and implied
            listen 10.0.0.60:443 default ssl;

            if ($my_preferred_proto = "none") {
                set $my_preferred_proto $scheme;
            }
            if ($my_preferred_proto != $scheme) {
                return 301 $my_preferred_proto://mysite.com$request_uri;
            }
        }

    It's not working, though. When I change the default to https, everything is redirected to SSL, so it does somewhat work. But the redirect of /#/user/login is not redirecting to HTTPS. Any ideas? Also, is this a good way to go about this?
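
    A likely reason the /#/user/login rule never fires (an observation plus a sketch, not from the original post): everything after # is a URL fragment, which browsers never send to the server, so $uri cannot match it; only real paths such as /login/ can be tested server-side. Under that assumption, a single-server variant keyed on scheme plus path could look like:

        map $scheme$uri $need_https {
            default 0;
            ~^http/login/ 1;          # plain-http request for /login/...
            ~^http/my-account/ 1;     # ...or /my-account/ gets redirected
        }

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;

            if ($need_https) {
                return 301 https://mysite.com$request_uri;
            }
            # other code
        }

    This keeps the if down to a bare return, which is one of the forms the "if is evil" advice considers safe.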

    Read the article

  • How to create a read-only root Linux that can be mounted as writeable for persistent changes?

    - by Mr Anderson
    I'd like a read-only file system that runs almost entirely in RAM, but where the compact flash or hard drive can be mounted writeable to make persistent changes. How do I do this on Linux? I've looked at several tutorials, but none really explain how to create such a system with the option of mounting the storage device and making persistent changes. I've looked at this so far: http://chschneider.eu/linux/thin_client/ I also looked on the old Gentoo wiki, but the article was very specific to Gentoo. I'll be using a Debian-based Linux, but it would be nice if someone could explain how to do this with fairly generic instructions that would work on any distro. Thanks.
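
    A generic sketch of the usual shape of such a system (device name, filesystem, and mount points are assumptions): root is mounted read-only in /etc/fstab, the directories that must stay writeable live on tmpfs in RAM, and persistent changes are made by temporarily remounting root read-write:

        # /etc/fstab (sketch)
        /dev/sda1   /          ext4    ro,noatime   0 1
        tmpfs       /tmp       tmpfs   defaults     0 0
        tmpfs       /var/log   tmpfs   defaults     0 0

        # making a persistent change later:
        mount -o remount,rw /
        # ... edit files ...
        mount -o remount,ro /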

    Read the article

  • Green System Administrator looking for helpful tips

    - by Joshua Anderson
    I have just been promoted to Systems Administrator for our product. We are designing an application that communicates with the cloud (Amazon EC2). I will be in charge of maintaining all instances and their underlying components. So far this involves a set of load-balanced service instances that connect to a central DB in a multi-tenant design. I'm interested in what other sysadmins have found to be invaluable tools or practices. Any resources provided will be greatly appreciated.

    Read the article

  • Can I cancel a resize operation in GParted without causing data loss?

    - by Anderson Green
    I'm currently waiting for GParted to finish resizing a partition, but the progress bar is currently at 0, and it's been taking much longer than usual (perhaps an hour). Is it safe to cancel the resize operation? I don't want to wait days for the resize operation to complete, but I don't want to lose all of my files either. (Is there any way that I can simply pause the resize operation, attempt to recover files, and then resume the resize operation?) (An update: the operation has finally completed, and my files are still intact!)

    Read the article

  • How to troubleshoot Hyper-V VSS writer causing backup failure on Server 2008 R2

    - by Tim Anderson
    I have a Windows Server 2008 R2 machine running Hyper-V. Backups using Windows Server Backup fail with the error:

        The backup operation that started at '2011-01-02T10:37:01.230000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.

    I have traced this to a problem with the Hyper-V VSS writer. vssadmin list writers reports:

        Writer name: 'Microsoft Hyper-V VSS Writer'
           Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
           Writer Instance Id: {fcf0dd79-d282-4465-88ae-7b6857e055c2}
           State: [8] Failed
           Last error: Inconsistent shadow copy

    However, I can't get any further. A few relevant facts:

    - I get the error even if all the VMs are shut down.
    - If I disable the Hyper-V VSS writer by stopping the Hyper-V Management Service, backup completes OK.
    - There are no errors in the Hyper-V-VMMS application log.
    - I tried to set tracing for VSS but can't get any output for some reason; I set the correct registry entries, but no trace log is generated.

    Tim
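
    A first diagnostic step that is often suggested for a failed Hyper-V writer (the commands are standard; whether they clear this particular failure is an assumption): restart the Hyper-V Virtual Machine Management service, which re-registers the writer, then confirm its state before rerunning the backup:

        net stop vmms
        net start vmms
        vssadmin list writers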

    Read the article

  • Set up a new domain controller over a temporary VPN, but now Windows delays startup?

    - by Kris Anderson
    I'm migrating servers from colo locations to Amazon's VPC EC2 instances. If anyone hasn't worked with Amazon VPC before, the VPN is a pain in the arse! Anyway, I set up a new server that acts as the domain controller for our Amazon VPC. In order to migrate all the user accounts from our existing domain controllers, I manually connected to our colo VPN from the new Amazon EC2 machine. I was able to join the domain, and the new Amazon server became another domain controller on our network. So far so good.

    The problem I'm having is that when booting the EC2 domain controller (which is no longer connected to the VPN, so it can't communicate with the existing controllers), it takes a good 6-8 minutes before I can remote into the server (instead of the 1-2 minutes it should take). Also, during this time most of the services we run (like IIS) give 404 errors, until the 6-8 minutes have passed. It's almost as if the domain controller is attempting to reach the other domain controllers first, and after 6-8 minutes it falls back to the one on the local machine. I don't think that's quite what's happening, though, because Server 2008 R2 doesn't have primary and backup domain controllers; they're all equal as far as Windows is concerned. For my network adapter I have only one DNS server listed, 127.0.0.1, so it should be looking up the local domain controller and not the other domain controllers it connected to while the VPN was enabled.

    In the server logs I'm seeing these warnings pop up during a reboot:

        The winlogon notification subscriber is taking long time to handle the notification event (CreateSession).
        The winlogon notification subscriber took 409 second(s) to handle the notification event (CreateSession).

    Any ideas what's happening here? I would try removing the existing domain controllers from the new Amazon EC2 machine, but I still need to connect over VPN a few times to migrate some data between the servers, and I don't want that change replicated back to the other domain controllers in our colo locations.

    Read the article

  • How to backup/restore OSX Parental Controls before/after complete reimage?

    - by Jim Anderson
    We typically "nuke and pave" users Mac OSX laptops if they have software issue. Prior to doing so, we backup the primary (non-admin) user's home folder. Our standard image has four accounts: Admin (uber admin user); Parent (admin account for the parents of students); Loaner (so our standard image will also work for our loaner laptop pool); Student (this is the primary, non-admin user of the laptop) Our standard image has only minimal Parental controls on the Loaner and Student accounts. Some parents choose to tighten the parental controls. We never know when parents have made changes to parental controls, or what those changes are. Once we have reimaged the machine with our standard image (minimal parental controls) we would like to be able to restore any custom parental controls parents may have placed on their student's account. Any help in this would be appreciated. Thanks.

    Read the article

  • MSSQL Auditing Recommendations

    - by Josh Anderson
    As an aspiring DBA, I have recently been assigned the task of implementing tracking of all data changes in the database for a piece of software we are developing. After playing with Microsoft's change data capture methods, I'm looking into some other solutions. We are planning to distribute our product as a hosted solution, and unlimited installations would be desired for maximum scalability. I've looked at IBM's Guardium as well as DB Audit by SoftTree. I'm curious whether anyone has solutions they have used in the past, or any suggestions or methods to achieve complete, and of course cost-effective, auditing of data changes.
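
    For reference, the built-in change data capture mentioned above is enabled per database and then per table (a sketch with a hypothetical table name; CDC requires SQL Server 2008 Enterprise or Developer edition, which matters for a hosted, many-installation product):

        -- enable change data capture on the current database, then on one table
        EXEC sys.sp_cdc_enable_db;
        EXEC sys.sp_cdc_enable_table
            @source_schema = N'dbo',
            @source_name   = N'Orders',   -- hypothetical table
            @role_name     = NULL;        -- no gating role for reading changes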

    Read the article

  • What is the harm in giving developers read access to application server application event logs?

    - by Jim Anderson
    I am a developer working on an ASP.NET application. The application writes logging messages to the Windows event log, a custom application log just for this application. However, I do not have any access to the testing or staging web/application servers. I thought an admin could just give me read access to this event log to help in debugging problems (currently a service that works in dev is not working in the test environment, and I have no idea why), but that is against my client's policy (I'm a consultant). I feel silly repeatedly asking an admin to look at the event log for me. What is the harm in giving developers read access to application server application event logs? Is there a different method of application logging that sysadmins prefer programmers use? Surely admins don't want to be fetching logging messages for developers all the time.
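
    For context on what would actually be granted (a sketch; the log name and SID are hypothetical): on Server 2008-era systems, each event log channel carries an SDDL security descriptor, and read access (permission bit 0x1) can be granted per log rather than machine-wide:

        :: show the current channelAccess SDDL of the custom log
        wevtutil gl MyAppLog

        :: grant read (0x1) to one group's SID, appended to the existing ACL
        wevtutil sl MyAppLog /ca:O:BAG:SYD:(A;;0x1;;;S-1-5-21-...-1234)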

    Read the article

  • Fax in Small Business Server 2003 fails to hang up

    - by Tim Anderson
    We have a problem with fax in SBS 2003: external modem, a Courier V.Everything, which is a recommended model. It receives a fax OK, but then sometimes (quite often) fails to hang up. Attempts to send faxes thereafter get an engaged tone. The only fix when this happens is to reset the fax modem; even restarting the server is not enough. We've tried a different model of modem, with the same problem. Any ideas? Tim

    Read the article

  • Accessing multiple local HDs or RAID with ESXi 4.0

    - by Shawn Anderson
    How do I get additional HDs recognized and used by ESXi 4.0? When I purchased my system I had two 2 TB HDs, but when I installed ESXi it only recognized one of them. I'm happy to get whatever number of drives I need (I have a four-bay SATA chassis in my Dell T310). What are some options? RAID? If so, is it supported? I guess I would need hardware RAID instead of software, since ESXi is so small. The VMware forums (where I've lived for the last two days) are a charlie foxtrot of outdated and conflicting info. I want to utilize my T310, with 32 GB RAM and a 2.8 GHz quad core, to run many lab Windows VMs. I don't need production-level availability, but I do want decent performance, even though it's in a lab environment. A huge thanks to Jim B., Zypher, Helvick, and Jeff Hengesbach, who posted answers to my earlier predecessor question on why ESXi was so sluggish.

    Read the article
