Search Results

Search found 44742 results on 1790 pages for 'create'.


  • Cloudmin KVM DNS hostnames not working

    - by dannymcc
    I have got a new server which has Cloudmin installed. It's working well and I can create and manage VMs as expected. The server came with a /29 subnet and I requested an additional /29 subnet to allow for more virtual machines. I didn't want to replace the existing /29 subnet with a /28 because that would have caused disruption with my existing VMs. To make life easier I decided to configure a domain name for the Cloudmin host server to allow for automatic hostname setup whenever I create a new virtual machine. I have a domain name (example.com) and I have created NS and A records as follows: NS kvm.example.com 123.123.123.123 A kvm.example.com 123.123.123.123 In the above example the IP address is that of the host server, I also have two /29 subnets routed to the server. Now, I've added the two subnets to the Cloudmin administration panel as follows: I've tried to hide as little information as possible without giving all of the server details away! If I ping kvm.example.com I get a response from 123.123.123.123, if I ping the newly created virtual machine (example.kvm.example.com) it fails, and if I ping the IP address that's been assigned to the new virtual machine (from the second subnet) it fails. Am I missing anything vital? Does this look (from what little information I can show) like it's set up correctly? Any help/pointers would be appreciated. For reference, the Cloudmin documentation I am using as a guide is http://www.virtualmin.com/documentation/cloudmin/gettingstarted
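
    For reference, a minimal sketch of what the delegation might look like in the parent example.com zone (BIND-style syntax; the kvm label, the guest name example.kvm and the address 123.123.123.123 are the placeholders used above):

        ; in the example.com zone - delegate kvm.example.com to the Cloudmin host
        kvm     IN A    123.123.123.123
        kvm     IN NS   kvm.example.com.

        ; verify from an outside resolver, then directly against the Cloudmin host
        dig +trace example.kvm.example.com
        dig @123.123.123.123 example.kvm.example.com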

    Read the article

  • User cannot access a system DSN on Windows Server 2008

    - by Ra Osolage
    We run our SQL Server services using a low-privileged domain account. That account is NOT a local admin on the OS. The only access I give the account is assigned during the SQL install: full control over its mount points, with everything else granted by the SQL Server 2005/2008 installer. I need to create a linked server in SQL Server 2008 to an ODBC data source. So I remoted into the computer using my domain account, which is part of a group that DOES have local admin privs to the OS. I created a system DSN and configured it to connect to another SQL Server. The DSN works perfectly when I test it. However, when I try to create the linked server, I get an error. It appears to me that the DSN is invisible to the domain account that SQL Server is running as. It seems that this problem is only happening to me on Windows 2008 servers. Does anybody know whether there's anything that you need to do after creating a DSN to make it visible for other users to access?
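
    For reference, a minimal sketch of creating a linked server over a system DSN with the MSDASQL provider (the linked server name ODBC_LINK, DSN name MyDsn and remote credentials are hypothetical). The provider loads the DSN inside the SQL Server process, so it has to be a system DSN that the service account can see:

        -- run in a query window on the SQL Server 2008 instance
        EXEC master.dbo.sp_addlinkedserver
             @server     = N'ODBC_LINK',   -- hypothetical linked server name
             @srvproduct = N'',
             @provider   = N'MSDASQL',     -- generic ODBC provider
             @datasrc    = N'MyDsn';       -- the system DSN created on the server

        EXEC master.dbo.sp_addlinkedsrvlogin
             @rmtsrvname  = N'ODBC_LINK',
             @useself     = N'FALSE',
             @rmtuser     = N'remote_user',      -- placeholder credentials
             @rmtpassword = N'remote_password';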

    Read the article

  • Unable to back up SQL Server databases using a maintenance plan

    - by Diogo Lopes
    I am trying to create a maintenance plan that will run automatically and back up my SQL Server 2005 databases automatically. I create a new maintenance plan and add a "Back Up Database Task", select all User databases, and choose a path to back up to. IMAGE in http://www.freeimagehosting.net/uploads/16be7dce43.jpg [new user limitation] When I save and try to execute this plan, I get the following error message: =================================== Execution failed. See the maintenance plan and SQL Server Agent job history logs for details. =================================== Job 'Backup.Subplan_1' failed. (SqlManagerUI) I've checked the maintenance plan log, the agent log, and just about every log file I can find and there are no entries at all to help me figure out why this is failing. If I right-click on a specific database and select "Back Up", the task succeeds. I tried changing the plan to back up just that one database and it still failed. I've tried running the plan with both Windows authentication and SQL Server authentication with the sa account. I also tried specifically granting the SQL Server Agent user account full privileges on the backup folder, but it still failed. Thanks for any suggestions!
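
    One way to narrow this down is to run the same backup as plain T-SQL, first in a query window and then as a SQL Server Agent job step, to see whether the failure follows the Agent account; a minimal sketch (database name and path are placeholders):

        BACKUP DATABASE [MyUserDb]
        TO DISK = N'D:\Backups\MyUserDb.bak'
        WITH INIT, STATS = 10;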

    Read the article

  • SSL Certificate Request on Server 2003 DC CA: DNS Name not Available

    - by Beuy
    I am trying to submit a request for an SSL certificate on a Domain Controller in order to enable LDAP SSL, and am having no end of problems. I am following the information provided at http://support.microsoft.com/default.aspx?scid=kb;en-us;321051 & http://adldap.sourceforge.net/wiki/doku.php?id=ldap_over_ssl Steps taken so far: Create Servername.inf with the following information ;----------------- request.inf ----------------- [Version] Signature="$Windows NT$" [NewRequest] Subject = "CN=servername.domain.loc" ; replace with the FQDN of the DC KeySpec = 1 KeyLength = 1024 ; Can be 1024, 2048, 4096, 8192, or 16384. ; Larger key sizes are more secure, but have ; a greater impact on performance. Exportable = TRUE MachineKeySet = TRUE SMIME = False PrivateKeyArchive = FALSE UserProtected = FALSE UseExistingKeySet = FALSE ProviderName = "Microsoft RSA SChannel Cryptographic Provider" ProviderType = 12 RequestType = PKCS10 KeyUsage = 0xa0 [EnhancedKeyUsageExtension] OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication ;----------------------------------------------- Create Certificate request by running: certreq -new Servername.inf Servername.req Attempt to submit Certificate request to CA by running: certreq -submit -attrib "CertificateTemplate: DomainController" request.req At which point I get the following error: The DNS name is unavailable and cannot be added to the Subject Alternate Name. 0x8009480f (-2146875377) Troubleshooting steps I have taken so far: 1. Modified the Domain Controller template to supply the Subject Name in the request, restarted Certificate Services, and included the SAN in the request; same error. 2. Re-installed Certificate Services / IIS and restarted the machine countless times. Any help resolving the issue would be greatly appreciated.
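
    A commonly suggested workaround for the 0x8009480f error (hedged: this assumes a Microsoft enterprise CA, and allowing requester-supplied SANs has security implications worth weighing first) is to let the CA accept a SAN passed as a request attribute and resubmit with the DNS name spelled out explicitly:

        :: on the CA - allow SAN attributes in requests
        certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
        net stop certsvc && net start certsvc

        :: resubmit with an explicit SAN (the FQDN is the placeholder from the .inf above)
        certreq -submit -attrib "CertificateTemplate:DomainController\nSAN:dns=servername.domain.loc" request.req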

    Read the article

  • Error when reloading supervisord: unix:///tmp/supervisor.sock no such file

    - by Yarin
    I'm running supervisord on my CentOS 6 box like so, /usr/bin/supervisord -c /etc/supervisord.conf and when I launch supervisorctl all process statuses are fine, but if I try to reload using supervisorctl I get "unix:///tmp/supervisor.sock no such file". I'm using the same config file I've used successfully on other boxes, and I'm running everything as root. I can't understand what the problem is... Config file: ; Sample supervisor config file. [unix_http_server] file=/tmp/supervisor.sock ; (the path to the socket file) ;chmod=0700 ; socket file mode (default 0700) ;chown=nobody:nogroup ; socket file uid:gid owner ;username=user ; (default is no username (open server)) ;password=123 ; (default is no password (open server)) ;[inet_http_server] ; inet (TCP) server disabled by default ;port=127.0.0.1:9001 ; (ip_address:port specifier, *:port for all iface) ;username=user ; (default is no username (open server)) ;password=123 ; (default is no password (open server)) [supervisord] logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log) logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB) logfile_backups=10 ; (num of main logfile rotation backups;default 10) loglevel=info ; (log level;default info; others: debug,warn,trace) pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid) nodaemon=false ; (start in foreground if true;default false) minfds=1024 ; (min. avail startup file descriptors;default 1024) minprocs=200 ; (min. avail process descriptors;default 200) ;umask=022 ; (process file creation umask;default 022) ;user=chrism ; (default is current user, required if root) ;identifier=supervisor ; (supervisord identifier, default is 'supervisor') ;directory=/tmp ; (default is not to cd during start) ;nocleanup=true ; (don't clean up tempfiles at start;default false) ;childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP) ;environment=KEY=value ; (key value pairs to add to environment) ;strip_ansi=false ; (strip ansi escape codes in logs; def. false) ; the below section must remain in the config file for RPC ; (supervisorctl/web interface) to work, additional interfaces may be ; added by defining them in separate rpcinterface: sections [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket ;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket ;username=chris ; should be same as http_username if set ;password=123 ; should be same as http_password if set ;prompt=mysupervisor ; cmd line prompt (default "supervisor") ;history_file=~/.sc_history ; use readline history if available ; The below sample program section shows all possible program subsection values, ; create one or more 'real' program: sections to be able to control them under ; supervisor. 
;[program:foo] ;command=/bin/cat [program:embed_scheduler] command=/opt/web-apps/mywebsite/custom_process.py process_name=%(program_name)s_%(process_num)d numprocs=3 ;[program:theprogramname] ;command=/bin/cat ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=999 ; the relative start priority (default 999) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups=10 ; # of stderr logfile backups (default 10) ;stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions (def no adds) ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample eventlistener section shows all possible ; eventlistener subsection values, create one or more 'real' ; eventlistener: sections to be able to handle event notifications ; sent by supervisor. ;[eventlistener:theeventlistenername] ;command=/bin/eventlistener ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;events=EVENT ; event notif. types to subscribe to (req'd) ;buffer_size=10 ; event buffer queue size (default 10) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=-1 ; the relative start priority (default -1) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 
1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups ; # of stderr logfile backups (default 10) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample group section shows all possible group values, ; create one or more 'real' group: sections to create "heterogeneous" ; process groups. ;[group:thegroupname] ;programs=progname1,progname2 ; each refers to 'x' in [program:x] definitions ;priority=999 ; the relative start priority (default 999) ; The [include] section can just contain the "files" setting. This ; setting can list multiple files (separated by whitespace or ; newlines). It can also contain wildcards. The filenames are ; interpreted as relative to this file. Included files *cannot* ; include files themselves. ;[include] ;files = relative/directory/*.ini
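
    Since both [unix_http_server] and [supervisorctl] point at /tmp here, one thing worth checking (a guess, not a confirmed diagnosis) is whether the socket file still exists when supervisorctl runs (CentOS's tmpwatch can clean files out of /tmp) and whether supervisorctl is reading this same config file:

        supervisorctl -c /etc/supervisord.conf status   # force the same config file
        ls -l /tmp/supervisor.sock /tmp/supervisord.pid # is the socket still there?

        # hypothetical change: keep the socket somewhere tmpwatch will not touch
        # [unix_http_server]
        # file=/var/run/supervisor.sock
        # [supervisorctl]
        # serverurl=unix:///var/run/supervisor.sock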

    Read the article

  • ZFS setup question

    - by Staale
    Currently I have a Linux storage box and server with 4x750 GB hard drives in RAID-5 with ext3. I have ordered 3x1.5 TB disks to upgrade this. Here is my planned upgrade: Backup: Format the 1.5 TB disks. Copy all data from the RAID-5 disks to the 1.5 TB disks. Destroy the RAID-5 array. New setup: Create a VirtualBox system and install Nexenta (OpenSolaris + Ubuntu) on it. Create a ZFS pool with raidz1 on the 4 750 GB disks. Copy from the 1.5 TB disks to the VirtualBox ZFS pool. Format the 1.5 TB disks. Replace 3 of the 750 GB disks with 1.5 TB disks. Reuse the 750 GB disks elsewhere. The reason I wish to use one 750 GB disk is that I can't grow the disk count in a raidz array, and this gives me the option of replacing that disk later for an extra 750 GB of storage. Would the ZFS performance be good running through VirtualBox? Or will the performance overhead be too large? Will I get 1.5 TB + 1.5 TB + 750 GB of storage on the raidz? Or just 750 GB x 3 until all disks are 1.5 TB?
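
    For reference, a rough sketch of the pool commands involved (device names are hypothetical Solaris-style names; whether the pool grows automatically after the last replace depends on the zpool version and autoexpand support in that Nexenta build):

        # create the raidz1 pool from the four 750 GB disks
        zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

        # later, swap 750 GB members for 1.5 TB disks one at a time, waiting for each resilver
        zpool replace tank c1t1d0 c1t4d0
        zpool status tank

        # on builds that support it, let the pool use the larger disks once all members are replaced
        zpool set autoexpand=on tank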

    Read the article

  • Unable to connect to Xend with virt-manager

    - by Majid Azimi
    I have installed Debian 6.0.1a. I have installed all the Xen stuff, including the Xen kernel, libvirtd, ... but when I want to connect to xend, virt-manager shows me this: Verify that: A Xen host kernel was booted The Xen service has been started details: Unable to open connection to hypervisor URI 'xen:///': unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied Traceback (most recent call last): File "/usr/share/virt-manager/virtManager/connection.py", line 971, in _try_open None], flags) File "/usr/lib/python2.6/dist-packages/libvirt.py", line 111, in openAuth if ret is None:raise libvirtError('virConnectOpenAuth() failed') libvirtError: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied Here is the uname output: Linux debian 2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux and also xend and libvirtd are running: root@debian:/home/mazimi# /etc/init.d/libvirt-bin status Checking status of libvirt management daemon: libvirtd running. root@debian:/home/mazimi# /etc/init.d/xend start Starting Xen daemons: xenstored xenconsoled xend. Permissions for libvirt-sock: root@debian:/home/mazimi# ls -alih /var/run/libvirt/ total 12K 671017 drwxr-xr-x 3 root root 4.0K Apr 15 13:54 . 654083 drwxr-xr-x 18 root root 4.0K Apr 15 13:54 .. 670901 srwxrwx--- 1 root libvirt 0 Apr 15 13:54 libvirt-sock 670928 srwxrwxrwx 1 root libvirt 0 Apr 15 13:54 libvirt-sock-ro 670870 drwxr-xr-x 2 root root 4.0K Apr 15 02:34 qemu and also we have a group named libvirt in /etc/group. When running libvirtd in verbose mode it behaves kind of strange: root@debian:/var/log/libvirt# /usr/sbin/libvirtd --verbose 17:26:55.841: warning : qemudStartup:1832 : Unable to create cgroup for driver: No such device or address 17:26:56.128: warning : lxcStartup:1900 : Unable to create cgroup for driver: No such device or address and waits infinitely.
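
    The "Permission denied" on /var/run/libvirt/libvirt-sock usually just means the user running virt-manager is neither root nor in the libvirt group that owns the read-write socket; a hedged sketch (the username mazimi is taken from the shell prompts above):

        # add the desktop user to the socket's group, then log out and back in
        usermod -a -G libvirt mazimi

        # or test the connection as root to rule socket permissions in or out
        virsh -c xen:/// list --all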

    Read the article

  • ESXi 4.0 - cannot copy files

    - by Peter
    I am unable to copy files or make directories on my installation of VMware ESXi 4.0. I have done so in the past (copied an iso onto a datastore). But something has changed and I have no idea what. I cannot copy using the datastore browser (get a dialog saying "Expected a PUT_FILE_DONE message. Got SESSION_COMPLETE"). I cannot create a directory through the datastore browser (get a dialog saying "Cannot complete file creation operation"). When I ssh to the ESXi server I cannot create files or folders under /vmfs/volumes. But I can manipulate files elsewhere (including /vmfs). Here are the permissions for the directories (I am logged in as root). ~ # ls -lh /vmfs/volumes/ drwxr-xr-t 1 root root 1.2k Sep 3 12:19 4a76f260-36b7eb85-c3b3-0024e8314929 drwxr-xr-x 1 root root 8 Jan 1 1970 4a76f261-d6190a9e-3b89-0024e8314929 drwxr-xr-t 1 root root 1.4k Sep 22 10:38 4a76f262-4ac21f0a-6bc1-0024e8314929 l--------- 0 root root 1.9k Jan 1 1970 Hypervisor1 - c42ce27f-eb8d7f70-7f70-0e7a85e8edc4 l--------- 0 root root 1.9k Jan 1 1970 Hypervisor2 - bbf1477b-4aec1d8c-caa5-5e8720bebd85 l--------- 0 root root 1.9k Jan 1 1970 Hypervisor3 - efd8efe3-03bc1cbf-15e0-080efd9e7379 drwxr-xr-x 1 root root 8 Jan 1 1970 bbf1477b-4aec1d8c-caa5-5e8720bebd85 drwxr-xr-x 1 root root 8 Jan 1 1970 c42ce27f-eb8d7f70-7f70-0e7a85e8edc4 l--------- 0 root root 1.9k Jan 1 1970 datastore1 - 4a76f260-36b7eb85-c3b3-0024e8314929 l--------- 0 root root 1.9k Jan 1 1970 datastore2 - 4a76f262-4ac21f0a-6bc1-0024e8314929 drwxr-xr-x 1 root root 8 Jan 1 1970 efd8efe3-03bc1cbf-15e0-080efd9e7379 ~ # touch /vmfs/foo.txt ~ # touch /vmfs/volumes/foo.txt touch: /vmfs/volumes/foo.txt: Operation not permitted I've googled and found nothing helpful. Does anyone out there have an idea as to what is going on? Thanks in advance. Pete.

    Read the article

  • Scheduled tasks fail to start unless I'm logged in to the server

    - by Chuck
    Tasks need to open a CMD window and pass net use commands, then do a DIR command, piping the output to a file on the server. Logged in as either me (Sysadmin) or one of the system accounts, the task will only run if I'm physically logged into the server. "Run as batch file" is set in the security properties for both users (me and the service account), security is granted to all directories, etc. It almost acts as if a scheduled task, since it is not physically connected to a display, can't create a CMD window and pass the window ID so the command can be sent; I'm guessing. Does anyone know of a document that explains how the server handles initiation of a window if done via a scheduled task and no attached user is associated with the task? If I log onto the box and run the scheduled tasks they run fine; otherwise they produce no errors or event log entries and just show that they ran successfully and set the next run time. I have tried with the "run if logged in" checkbox both on and off and it makes no difference. Other tasks work fine, except that they are acting on local drives with no display writing or updating taking place, so I'm guessing the system either can't instantiate a window if no display is connected to a logged-on user, or it can't establish a point if it is trying to create a virtual screen. You'd think it is just creating a memory map and then mapping it to a device to display, but that doesn't seem to be the case, and I can find no documentation on how the system handles a scheduled task and how to invoke a fake or virtual screen that it could write to so it appears that a user was connected. Thanks. This is driving me nuts and I've tried everything I can think of as well as our network boys' ideas and nothing seems to work.
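
    For comparison, a minimal sketch of creating such a task from the command line with a stored password, so it is allowed to run without an interactive session (task name, share, path and account are placeholders):

        schtasks /Create /TN "NightlyDirList" ^
          /TR "cmd /c net use \\fileserver\share /user:DOMAIN\svc_acct password && dir \\fileserver\share > C:\logs\dirlist.txt" ^
          /SC DAILY /ST 02:00 /RU DOMAIN\svc_acct /RP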

    Read the article

  • Preseeding Ubuntu partman recipe using LVM and RAID

    - by Swav
    I'm trying to preseed an Ubuntu 12.04 server installation and have created a recipe that builds RAID 1 on 2 drives and then partitions it using LVM. Unfortunately partman complains when creating the LVM volumes, saying there are no partitions in the recipe that could be used with LVM (on the console it complains about an unusable recipe). The layout I'm after is RAID 1 on sdb and sdc (installing from a USB stick so it takes sda) and then using LVM to create boot, root and swap. The odd thing is that if I change the mount point of boot_lv to home the recipe works fine (apart from mounting in the wrong place), but when mounting at /boot it fails. I know I could use a separate /boot primary partition, but can anybody tell me why it fails? Recipe and relevant options below. ## Partitioning using RAID d-i partman-auto/disk string /dev/sdb /dev/sdc d-i partman-auto/method string raid d-i partman-lvm/device_remove_lvm boolean true d-i partman-md/device_remove_md boolean true #d-i partman-lvm/confirm boolean true d-i partman-auto-lvm/new_vg_name string main_vg d-i partman-auto/expert_recipe string \ multiraid :: \ 100 512 -1 raid \ $lvmignore{ } \ $primary{ } \ method{ raid } \ . \ 256 512 256 ext3 \ $defaultignore{ } \ $lvmok{ } \ method{ format } \ format{ } \ use_filesystem{ } \ filesystem{ ext3 } \ mountpoint{ /boot } \ lv_name{ boot_lv } \ . \ 2000 5000 -1 ext4 \ $defaultignore{ } \ $lvmok{ } \ method{ format } \ format{ } \ use_filesystem{ } \ filesystem{ ext4 } \ mountpoint{ / } \ lv_name{ root_lv } \ . \ 64 512 300% linux-swap \ $defaultignore{ } \ $lvmok{ } \ method{ swap } \ format{ } \ lv_name{ swap_lv } \ . d-i partman-auto-raid/recipe string \ 1 2 0 lvm - \ /dev/sdb1#/dev/sdc1 \ . d-i mdadm/boot_degraded boolean true #d-i partman-md/confirm boolean true #d-i partman-partitioning/confirm_write_new_label boolean true #d-i partman/choose_partition select Finish partitioning and write changes to disk #d-i partman/confirm boolean true #d-i partman-md/confirm_nooverwrite boolean true #d-i partman/confirm_nooverwrite boolean true EDIT: After a bit of googling I found the below snippet of code from partman-auto-lvm, but I still don't understand why they would prevent that setup if it's possible to do manually and booting from a /boot partition on LVM is possible. # Make sure a boot partition isn't marked as lvmok if echo "$scheme" | grep lvmok | grep -q "[[:space:]]/boot[[:space:]]"; then bail_out unusable_recipe fi

    Read the article

  • How to install/upgrade to Windows 8.1 RTM without a Microsoft account

    - by abstrask
    When I installed Windows 8, I deliberately chose not to use a Microsoft account to sign in. I like to keep things separate, and just log on with a traditional local account. Any apps that require me to sign in with my Live account will have to prompt me to sign in. Now, I just updated to 8.1, but towards the end of the setup process, I was asked to sign in with a Microsoft account or create one. Unlike when installing Windows 8, there didn't seem to be any option to skip that step, or otherwise close the sign-in prompt and continue to my updated Windows installation. At least not that I could find. This is particularly annoying when setting up computers for friends and family, whom I support. They may not have, or have any interest in getting, a Microsoft account and I'm reluctant to use my own. I realize I can disconnect my Microsoft account after the fact, but is there really no way to install, or upgrade to, Windows 8.1, without being forced to create a Microsoft account? If there is, how does one go about that?

    Read the article

  • XAMPP - Apache service stops running after a few seconds

    - by Fábio Antunes
    Hello, I have this big problem with my XAMPP server: for some reason the Apache service stops running a few seconds after it has been started, and I have no idea what the problem is; the error logs don't say much about it. [Fri May 07 01:09:32 2010] [notice] Digest: generating secret for digest authentication ... [Fri May 07 01:09:32 2010] [notice] Digest: done [Fri May 07 01:09:33 2010] [notice] Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations [Fri May 07 01:09:33 2010] [notice] Server built: Nov 11 2009 14:29:03 [Fri May 07 01:09:33 2010] [crit] (22)Invalid argument: Parent: Failed to create the child process. [Fri May 07 01:09:33 2010] [crit] (OS 6)O identificador é inválido. : master_main: create child process failed. Exiting. [Fri May 07 01:09:33 2010] [notice] Parent: Forcing termination of child process 36 identificador é inválido (pt_PT) = identifier is invalid. Note: no other application is using the Apache port. I have made some changes to the httpd.conf file, but it has worked well for a long time. Added some virtual hosts. Enabled Xdebug. Has this happened to anyone who could tell me what the problem is? Thanks for your time.
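
    When the service dies this quickly it can help to start Apache from a console instead of as a service, so startup errors land on screen rather than only in the log; a hedged sketch assuming a default XAMPP path:

        cd C:\xampp\apache\bin
        REM syntax-check httpd.conf and any included vhost files
        httpd.exe -t
        REM run Apache in the foreground and watch for the child-process error
        httpd.exe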

    Read the article

  • Configure IPv6 routing

    - by godlark
    I've got IPv6 addresses from SIXXS. My host is connected to the SIXXS network over an AICCU tunnel ("sixxs" interface). My host address is 2001:::2, the host on the other end has address 2001:::1. On my host IPv6 is fully accessible. I have a problem configuring the IPv6 network on the VMs. I use VirtualBox, and the VM (Ubuntu) uses tap1 (on the host bridged by br0): #!/bin/sh PATH=/sbin:/usr/bin:/bin:/usr/bin:/usr/sbin # create a tap tunctl -t tap1 ip link set up dev tap1 # create the bridge brctl addbr br0 brctl addif br0 tap1 # set the IP address and routing ip link set up dev br0 ip -6 route del 2001:6a0:200:172::/64 dev sixxs ip -6 route add 2001:6a0:200:172::1 dev sixxs ip -6 addr add 2001:6a0:200:172::2/64 dev br0 ip -6 route add 2001:6a0:200:172::2/64 dev br0 Host: routing table: 2001:6a0:200:172::1 dev sixxs metric 1024 2001:6a0:200:172::/64 dev br0 proto kernel metric 256 2001:6a0:200:172::/64 dev br0 metric 1024 2000::/3 dev sixxs metric 1024 fe80::/64 dev eth0 proto kernel metric 256 fe80::/64 dev sixxs proto kernel metric 256 fe80::/64 dev br0 proto kernel metric 256 fe80::/64 dev tap1 proto kernel metric 256 default via 2001:6a0:200:172::1 dev sixxs metric 1024 Guest: interface eth1 (it is connected to tap1): auto eth1 iface eth1 inet6 static address 2001:6a0:200:172::3 netmask 64 gateway 2001:6a0:200:172::2 Guest: routing table 2001:6a0:200:172::/64 dev eth1 proto kernel metric 256 fe80::/64 dev eth0 proto kernel metric 256 fe80::/64 dev eth1 proto kernel metric 256 default via 2001:6a0:200:172::2 dev eth1 metric 1024 The guest can ping the host, the host can ping the guest, and the host can ping 2001:6a0:200:172::1, but the guest cannot ping 2001:6a0:200:172::1. When the guest tries to ping, I can capture its packets on the host (with tcpdump), but the host doesn't forward them to 2001:6a0:200:172::1. What have I missed in the configuration?
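
    One detail the bridge script above does not show is packet forwarding; for the host to route between the sixxs tunnel and br0 it has to forward IPv6, so a hedged check (sysctl names as in mainline Linux):

        # 0 means the host will not route for the guest
        sysctl net.ipv6.conf.all.forwarding

        # enable it (add to /etc/sysctl.conf to make it persistent)
        sysctl -w net.ipv6.conf.all.forwarding=1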

    Read the article

  • How can I map a Windows group login to the dbo schema in a database?

    - by Christian Hayter
    I have a database for which I want to restrict access to 3 named individuals. I thought I could do the following: Create a local Windows group on the database server and add the named individuals to it. Create a Windows login in SQL Server mapped to the local Windows group. Map the login to the "dbo" schema in the database, so that the users can access all objects without having to qualify them with the schema name. When I try to do step 3, I get the following error: Msg 15353, Level 16, State 1, Line 1 An entity of type database cannot be owned by a role, a group, an approle, or by principals mapped to certificates or asymmetric keys. I have tried to do this via the IDE, the sp_changedbowner sproc, and the ALTER AUTHORIZATION command, and I get the same error each time. After searching MSDN and Google, I find that this restriction is by design. Great, that's useful. Can anyone tell me: Why this restriction exists? It seems very arbitrary. More importantly, can I accomplish my requirement some other way? Other info that might be pertinent: The server is fully up to date with service packs and hotfixes. All objects in the database are owned by the "dbo" schema, and it's not feasible to change that. The database is running in compatibility level 80, and it's not feasible to change that to 90 yet. I am free to make any other changes (within reason, depending on what they are).
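
    As a possible alternative (a sketch, not a confirmed equivalent of dbo ownership): instead of making the group own the database, map the group login to a database user and put it in the db_owner role. Name resolution falls back to the dbo schema, so unqualified object names keep working for these users; the group name below is hypothetical:

        USE MyDatabase;
        CREATE USER [SERVERNAME\DbAdmins] FOR LOGIN [SERVERNAME\DbAdmins];
        EXEC sp_addrolemember 'db_owner', 'SERVERNAME\DbAdmins';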

    Read the article

  • How do I add a VMware ESXi Host to Microsoft Virtual Machine Manager?

    - by user63250
    I am trying to manage virtual machines running on a VMware ESXi host using Microsoft System Center Virtual Machine Manager. I was able to add the ESXi machine using the "Add VMware VirtualCenter server" option, but can't access any of the VMs on the datastore associated with this ESXi server. The datastore of the ESXi box is showing up with the correct name, but it won't let me see any of the VMs that have already been created; I get "There are no virtual machines on this host." Because I couldn't get any of the existing virtual machines to show up, I tried creating some new ones. When using VMM to connect to ESXi and create new VMs, I get the following error messages in the "rating explanation" section: The virtualization software on the selected host does not support virtual hard disks on an IDE bus. and The virtualization software on the host XXXXXX does not support the creation of dynamic virtual hard disk. Any ideas on why I can't manage existing machines and why I can't create new ones? The existing machines were created in vSphere. I should note that the ESXi server and the server running SCVMM are both on the same domain. I should also note that although the ESXi box has been added as a VirtualCenter server, when I try to add it through the "Add Host" option, I get an error message saying "Virtual Machine Manager cannot complete the VirtualCenter action on server EXSi because of the following error: The operation is not supported on the object."

    Read the article

  • PPTP Client setup, Fedora 17

    - by Suarez Romina
    I am trying to connect to hidemyass.com VPN services via PPTP, but I am having issues understanding why it isn't working, since I don't get a warning or fatal error and my IP remains the same. This is how I create the connection: [root@lasvegas-nv-datacenter ~]# pptpsetup --create TUNNELNAME --server 199.58.165.20 --username MYUSERNAME --password MYPASSWORD --encrypt --start And this is the output: Using interface ppp0 Connect: ppp0 <-- /dev/pts/1 CHAP authentication succeeded MPPE 128-bit stateless compression enabled local IP address 10.200.21.14 remote IP address 10.200.20.1 After that, I check the log and this is what I get: [root@lasvegas-nv-datacenter ~]# tail -f /var/log/messages Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 1 'Start-Control-Connection-Request' Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:754]: Received Start Control Connection Reply Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:788]: Client connection established. Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 7 'Outgoing-Call-Request' Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:873]: Received Outgoing Call Reply. Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:912]: Outgoing call established (call ID 0, peer's call ID 20096). Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: CHAP authentication succeeded Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: MPPE 128-bit stateless compression enabled Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: local IP address 10.200.21.14 Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: remote IP address 10.200.20.1 Can someone help me? Basically, I need to connect to the VPN and have my IP changed after the connection. I read a lot of guides but still cannot understand why I don't get a connection.
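
    Since the tunnel comes up but the public IP does not change, the missing piece may simply be routing: pppd does not replace the default route unless told to, so traffic keeps using the original uplink. A hedged sketch of pushing everything through the tunnel (ppp0 is from the output above; the gateway is a placeholder):

        # keep the VPN server reachable via the normal uplink
        ip route add 199.58.165.20 via YOUR_CURRENT_GATEWAY

        # send everything else through the PPTP interface without touching the existing default route
        ip route add 0.0.0.0/1 dev ppp0
        ip route add 128.0.0.0/1 dev ppp0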

    Read the article

  • Endian Destination NAT

    - by Ben Swinburne
    I have installed Endian Community Firewall 2.3 and am clearly misunderstanding/doing something wrong with it. I'm trying to create some destination NAT rules to allow incoming connections to various services within the network. Router - RED I/F - x.x.x.x Router - GREEN I/F - 192.168.11.253 ECF - RED I/F - 192.168.11.254/24 ECF - GREEN I/F - 192.168.12.254/24 Target server - 192.168.12.1 Please ignore the haphazard choice of subnets and addresses - I'm trying to quickly plop Endian into an existing network before a complete rework in 6-12 months, so this is just for now. Everything works except destination NAT, so outgoing connections are fine, the routes between the two subnets are OK, etc. I want to create various incoming NATs, but let's take, for the sake of argument, SMTP port 25 from the Internet to the target server 192.168.12.1. I've tried almost every combination of options in the Destination NAT section to achieve this and clearly am doing something wrong. I suspect my confusion must be somewhere in the Access From and/or Target section. The rest seems OK: Filter Policy = Allow Service = SMTP Protocol = TCP Port = 25 Translate to type = IP DNAT Policy = NAT Insert IP = 192.168.12.1 Port Range = 25 Enabled = Checked Position = First I can't work out what I'm doing wrong, or am I doing it right and it's just not working!? Any help would be greatly appreciated.

    Read the article

  • MySQL socket connections working, but not port connections

    - by Neil
    I installed MySQL Community 5.1.45 on my Snow Leopard 10.6, using the pkg from their site. I had previously installed a MySQL binary from entropy.ch. In the previous installation, the connections were working fine before I upgraded to Snow Leopard. In Snow Leopard, both installations are problematic. Using an app called Sequel Pro, if I connect with the socket option, it connects properly. However, a standard connection with the same credentials doesn't work. From what I've understood, socket connections happen on the machine itself between processes, whereas normal connections occur over the network/ports, in this case a loopback to my machine, since the server and client are both on the same machine. My new CakePHP installation isn't able to connect to the db with the root credentials I provided. Btw, I've been starting the MySQL server using the Preference Pane. When I tried running mysqld from terminal, it gave me: 100323 1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test 100323 1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test mysqld: Can't change dir to '/usr/local/mysql-5.1.45-osx10.6-x86_64/data/' (Errcode: 13) 100323 1:54:37 [ERROR] Aborting 100323 1:54:37 [Note] mysqld: Shutdown complete mbp is the name of my machine. How do I fix this so that my webserver can connect to the mysql server?
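
    Two quick checks that fit these symptoms (a sketch; the path is from the error above, and the _mysql user/group name is an assumption about the official OS X package): the Errcode 13 suggests the data directory is not writable by the account mysqld runs as when you start it by hand, and the failing port connections suggest TCP networking may be disabled or bound elsewhere in my.cnf:

        # is anything listening on the MySQL port at all?
        netstat -an | grep 3306

        # try an explicit TCP (not socket) connection
        mysql -h 127.0.0.1 -P 3306 -u root -p

        # fix data directory ownership if you launch mysqld yourself
        sudo chown -R _mysql:_mysql /usr/local/mysql-5.1.45-osx10.6-x86_64/data

        # in my.cnf, make sure networking is not disabled (illustrative lines)
        # skip-networking          <- should be absent or commented out
        # bind-address=127.0.0.1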

    Read the article

  • cPanel DNS Only / rDNS questions

    - by Clear.Cache
    I started getting IPs from ARIN directly, instead of the data center I'm colocated at. Now I have to start applying rdns myself for my clients upon request, instead of having the NOC at the DC do this. That is obvious, since I am in full control over the IP delegation and therefore have nameserver authority. The question is, how do I "create" ptr / rdns records for my clients? My current server uses Cpanel / WHM with ns1/ns2.mycompany.com I also applied those as dns nameservers in the ARIN IP's whois record. How do I create rdns for my clients? Should I install Cpanel DNS Only on a entirely separate server and use this method instead? http://layer1.cpanel.net/ If so, how can I seamlessly transition over the dns records to that new dns server, retaining my ns1/ns2.mycompany.com and their ns1 and ns2 IP addresses? Even more important: I have to change the ns1/ns2 IPs to the new ones I retrieve from ARIN. How can this be done, avoiding downtime during the dns transition? On a side note, would it be easier to just install Cpanel DNS Only on a dedicated server and just use dns1.mycompany.com and dns2.mycompany.com with their own dedicated ns1/ns2 IPs from ARIN - and utilize this dns server for customers who request rdns? Would this be a more viable solution than using our current ns1/ns2.mycompany.com Nameservers? Is Cpanel DNS Only a standalone software that does not require Cpanel/WHM on another server? Is it possible to have redundant dns servers setup using this software solely, ns1 on one server and ns2 on another? Thanks.
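
    For the rDNS part specifically, once ARIN delegates the in-addr.arpa zone for your block to ns1/ns2.mycompany.com, the PTR records are ordinary records in a reverse zone on whichever BIND/cPanel DNS server is authoritative; a sketch using a documentation block (203.0.113.0/24 and the client hostnames are placeholders):

        ; zone "113.0.203.in-addr.arpa" served by ns1/ns2.mycompany.com
        $TTL 86400
        @    IN SOA ns1.mycompany.com. hostmaster.mycompany.com. ( 2012010101 3600 900 604800 86400 )
             IN NS  ns1.mycompany.com.
             IN NS  ns2.mycompany.com.
        10   IN PTR mail.clientdomain.com.
        11   IN PTR web.clientdomain.com.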

    Read the article

  • Cisco SG200 VLAN issue in ESXi VSA cluster

    - by George
    I have three Cisco SG200-26 switches, and I also have two ESXi hosts that I have connected as shown in the below "best practice" map by VMware: http://communities.vmware.com/servlet/JiveServlet/previewBody/17393-102-1-22458/VSA_networking_map.pdf Even though I created the VLANs in the SG200 and I set the two VLANs (508 and 608) as allowed for these untagged ports (where my ESX NICs are connected), I cannot ping from host 1 to host 2 when configuring the NICs to use VLAN 608. Am I missing something? My IPs are all in the 192.168.x.x range, and the only reason I need the VLANs is to isolate the VSA back-end traffic internally; only the two hosts will be using the VLANs. So I think I do not have to create virtual interfaces on my router in that case; is my understanding correct? I'm also sending my switch config screenshot below... all 3 switches have the latest firmware (it seems these were originally Linksys and got rebranded as Cisco after the acquisition) http://img31.imageshack.us/img31/2503/switch.gif Any ideas on what to change on the Cisco SG200 to make this work would be appreciated! The second VLAN (608) only needs two IPs: 192.168.0.1 and 192.168.0.2 The first VLAN (508) will have about 15 IPs for ESXi Management and the VSA cluster service, I could use either 192.168.1.xx or 10.0.1.xx The rest of my network (about 50 clients) is in the 192.168.1.xx range VMware also states that the VLAN protocol on the physical switch must be 802.1Q, not ISL; does anyone know which of the two my SG200-26 uses? In addition to that, the only requirement from VSA is that my two hosts: -Are in the same subnet. -Have static IP addresses set. -Have the same Default Gateway configured. If I need inter-VLAN routing for this, I suppose I have to create virtual interfaces on my SonicWall, assign an IP for each VLAN, and then set routes between them? Thank you for your time!

    Read the article

  • Netscaler - Involving Response Body Replacement with GeoIP Implementation

    - by MrGoodbyte
    I have a NetScaler (10000) NS9.3: Build 52.3.cl There is a document that contains a short tutorial about GeoIP database implementation at http://support.citrix.com/article/CTX130701 I have successfully added the location database by following this tutorial, but instead of dropping connections I need to replace a string in the response body. To do that, I created a rewrite action with these parameters: * Name: ns_country_replacement_action * Type: REPLACE_ALL * Target Expression: HTTP.RES.BODY(50000) * Replacement Text: HTTP.RES.HEADER("international") * Pattern: \{\/\*COUNTRY_NAME\*\/\} To insert a header called "international", I tried to create another rewrite action with these parameters (I'll bind this one's policy first): * Name: ns_country_injection_action * Type: INSERT_HTTP_HEADER * Header Name: international * Header Value: CLIENT.IP.SRC.MATCHES_LOCATION("*.US.*.*.*.*").NOT When I click on the create button, it says: Compound expression syntax error, [.*.*").NOT^, 50] I'm not sure, but I think one of two things might cause this error: The expression I use for MATCHES_LOCATION is wrong, but I use the same expression as in the tutorial. MATCHES_LOCATION().NOT returns BOOLEAN but the field expects STRING. How can I get this to work? Am I using the right tools to accomplish what I need to do? Thank you. -Umut

    Read the article

  • VSFTP Users and Directories

    - by Mathew
    I'm stuck. I've been working all day on trying to figure out what I'm doing wrong and I've hit wall after wall. What I'm trying to do: Set up FTP in such a way that certain users have access only to their directory, but higher-level users have access to all directories. What I've Googled so far: I started with this, but that didn't do what I needed it to. I then used this, but once I created one user, it wouldn't let me create another one. Finally, I decided to follow this, but it wouldn't let me even create one user. I'm using Ubuntu 10. I can log in to FTP as the root user and it takes me to the home directory. If I try to log in using the user I created in the tutorial it says: Status: Connection established, waiting for welcome message... Response: 220 (vsFTPd 2.2.2) Command: USER mathew Response: 331 Please specify the password. Command: PASS **** Response: 530 Login incorrect. Error: Critical error Error: Could not connect to server
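
    For the "certain users locked to their directory, higher-level users see everything" requirement, vsftpd's chroot list is the usual mechanism; a hedged sketch of the relevant /etc/vsftpd.conf lines (the list filename is just the conventional one):

        # /etc/vsftpd.conf (excerpt)
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES                      # lock local users into their home directory...
        chroot_list_enable=YES
        chroot_list_file=/etc/vsftpd.chroot_list   # ...except the users listed here (your admins)

    The 530 error on login is often unrelated to these options: the vsftpd PAM configuration on Ubuntu typically rejects local users whose shell is not listed in /etc/shells, which is worth checking for the new account.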

    Read the article

  • BitLocker with Windows DPAPI Encryption Key Management

    - by bigmac
    We need to enforce encryption at rest on an iSCSI LUN that is accessible from within a Hyper-V virtual machine. We have implemented a working solution using BitLocker on Windows Server 2012 in a Hyper-V virtual machine which has iSCSI access to a LUN on our SAN. We were able to successfully do this by using the "floppy disk key storage" hack as defined in THIS POST. However, this method seems "hokey" to me. In my continued research, I found out that the Amazon Corporate IT team published a WHITEPAPER that outlined exactly what I was looking for in a more elegant solution, without the "floppy disk hack". On page 7 of this white paper, they state that they implemented Windows DPAPI Encryption Key Management to securely manage their BitLocker keys. This is exactly what I am looking to do, but they stated that they had to write a script to do this, yet they don't provide the script or even any pointers on how to create one. Does anyone have details on how to create a "script in conjunction with a service and a key-store file protected by the server’s machine account DPAPI key" (as they state in the whitepaper) to manage and auto-unlock BitLocker volumes? Any advice is appreciated.
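
    The whitepaper does not ship the script, but the building blocks are public: the .NET ProtectedData class exposes DPAPI, and DataProtectionScope.LocalMachine corresponds to the machine-account key the paper mentions. A very rough PowerShell sketch of the idea (the path, drive letter and recovery password are placeholders, and this is not a hardened implementation):

        Add-Type -AssemblyName System.Security

        # one-time: protect the volume's recovery password with the machine-scope DPAPI key
        $rp   = [Text.Encoding]::UTF8.GetBytes('111111-222222-333333-444444-555555-666666-777777-888888')
        $blob = [Security.Cryptography.ProtectedData]::Protect($rp, $null, 'LocalMachine')
        [IO.File]::WriteAllBytes('C:\keystore\datavol.bin', $blob)

        # at service start: unprotect the blob and hand the password to manage-bde
        $blob  = [IO.File]::ReadAllBytes('C:\keystore\datavol.bin')
        $clear = [Text.Encoding]::UTF8.GetString([Security.Cryptography.ProtectedData]::Unprotect($blob, $null, 'LocalMachine'))
        manage-bde -unlock E: -RecoveryPassword $clear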

    Read the article

  • Windows Server 2003 R2 SP2 GPO Conditional Terminal Services Client Redirection

    - by caleban
    We have a lot of mobile/home users with different client-side printers attached. Most of these users don't need to print on the client side, and we don't want all of these users' Terminal Services sessions trying to map their client-side printers, and we don't want all of these drivers on the Terminal Server. What is the best way to set up around 90 users to have no client-side printer redirection and 10 users to have client-side printer redirection (to the printers attached to their home computers)? Do I need to create two separate OUs in AD, one for redirection and one for no redirection, and create two different policies, one for each OU? One GPO with "Client/Server data redirection > Do not allow client printer redirection" disabled and one with it enabled? Is it preferable instead to change each user's AD User Properties > Environment > Client devices > "Connect client printers at logon" setting? Is there any way for me to direct "ALL HP Printers" to a single HP Universal Printer Driver, "ALL Canon Printers" to a single Canon Universal Printer Driver, etc., without specifying hundreds of unique printer names in the printsub.inf file? Thanks in advance.

    Read the article

  • How do I effectively use WinSCP on my GoDaddy Dedicated Hosting

    - by Scott
    After being told that Virtual Private Servers would not fit the scope of my project, I have timidly entered the world of dedicated hosting. Unfortunately, this is forcing me to learn the basics of being a Linux server admin. GoDaddy has a master account for the server. When you use SSH, they want you to use "su" to switch to the root user. Thus far, I have been able to do everything I have needed via the command line as this root user. However, now I need to upload files to my server. I'm used to using WinSCP to upload files. I can use my general server account to view the files, but when I try to drag or create files it says that I cannot because I do not have permission to do so. I have researched the WinSCP documentation and it seems that this "su" function is beyond the scope of the program. How am I to grant myself access to upload these files using SSH? Should I create a user with the proper permissions? I'm happy to do this, but thus far I have not been able to make sense of what I have found online. I'm going to try to move forward, but any help and/or insight is appreciated.
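
    One simple approach (a sketch; the web root path and account name are assumptions, so check yours) is to give the account you log in with over SFTP ownership of, or group write access to, the directories you upload to, so plain WinSCP/SFTP works without root:

        su -
        # hand the web root to your SFTP login account (path is hypothetical)
        chown -R youruser:youruser /var/www/html

        # or keep root ownership and grant a shared group write access instead
        # groupadd web && usermod -a -G web youruser
        # chgrp -R web /var/www/html && chmod -R g+w /var/www/html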

    Read the article
