Search Results

Search found 133312 results on 5333 pages for 'using'.


  • Hyper-V Server 2012 Remote Management in a Workgroup

    - by Chris Kolenko
    I'm trying to remotely manage Hyper-V Server 2012 from a Windows 8 PC; both client and server are in a workgroup. I've spent about 3-4 hours trying to get this working, with no luck so far, trying the following:

    - Creating a new administrator on the server with the same details as the client, i.e. the same username and password.
    - Adding an entry to my hosts file pointing the server name at the remote IP.
    - Trying HVRemote.
    - Disabling both firewalls.

    The error that I'm getting is "RPC Service Unavailable". How can I accomplish what I'm trying to do?

    Update: Some of the operations in Hyper-V Manager work, e.g. Virtual Switch works and I can open the New VM wizard. I run into an error when creating a new virtual hard disk, though. I've tried creating a VM without a hard disk, which works; using the new hard disk wizard does not work either. I still cannot see any virtual machines: "RPC server unavailable. Unable to establish communication between 'ServerName' and 'ClientName'".


  • Jetty - 401 Unauthorized when using basic authentication

    - by JP.
    I am running Solr on Jetty in Ubuntu (a Bitnami VM, if that helps) and am trying to lock down access to both the admin pages and the update/delete/etc. pages using basic authentication. When I attempt to connect to the admin console via a web browser I am prompted for a username and password, but the credentials I use simply do not work. For test purposes I am using foo:bar as the credentials, but I receive a '401 Unauthorized' response. I see the following in my request log:

        127.0.0.1 - - [10/Nov/2013:05:35:46 +0000] "GET /solr/ HTTP/1.1" 401 1376

    Am I doing something wrong, and/or is there anything obviously incorrect with the below configuration? Any help is greatly appreciated.

    Jetty.xml:

        <Call name="addBean">
          <Arg>
            <New class="org.eclipse.jetty.security.HashLoginService">
              <Set name="name">solr</Set>
              <Set name="config"><SystemProperty name="jetty.home" default="."/>/etc/realm.properties</Set>
              <Set name="refreshInterval">5</Set>
            </New>
          </Arg>
        </Call>

    /etc/realm.properties:

        foo: bar, solr_admin

    webdefault.xml:

        <security-constraint>
          <web-resource-collection>
            <url-pattern>/</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>solr_admin</role-name>
          </auth-constraint>
        </security-constraint>
        <login-config>
          <auth-method>BASIC</auth-method>
          <realm-name>solr</realm-name>
        </login-config>
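
    A quick way to isolate whether the realm or the security constraint is at fault (my addition, not from the question) is to hit the same URL from the shell. This sketch assumes Solr's usual port 8983, so substitute whatever port Jetty actually listens on:

        # Send the credentials with basic auth; -v shows the WWW-Authenticate exchange
        curl -v -u foo:bar http://localhost:8983/solr/

        # Repeat without credentials: a 401 both times points at the realm,
        # while a 200 here means the constraint isn't covering the URL at all
        curl -v http://localhost:8983/solr/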


  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey
        /user/bin/ec2-describe-instances | \
          perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
            and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
            and print "$1 $dns\n"' | \
          perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
          xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
            rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using Security Groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
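
    Not part of the original question, but one way to narrow this down is to tap the pipeline between stages and confirm each step actually emits records. Note that the script reads /user/bin rather than /usr/bin, and the stray $ before /etc/cron.route53 looks suspect, so verify both:

        # Confirm the binary path the script should be using
        which ec2-describe-instances
        ec2-describe-instances | head

        # Capture what reaches xargs; an empty file means the perl filters
        # matched nothing and cli53 was never invoked at all
        ec2-describe-instances \
          | perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ and print "$1 $2\n"' \
          | tee /tmp/route53-debug.txt

    On the port question: the Route53 API is an HTTPS service, so what matters is outbound 443 from wherever the script runs, not inbound 53 on the instances.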


  • Preseeding Ubuntu partman recipe using LVM and RAID

    - by Swav
    I'm trying to preseed an Ubuntu 12.04 server installation and created a recipe that would create RAID 1 on two drives and then partition that using LVM. Unfortunately partman complains when creating the LVM volumes, saying there are no partitions in the recipe that could be used with LVM (on the console it complains about an unusable recipe). The layout I'm after is RAID 1 on sdb and sdc (I'm installing from a USB stick, so it takes sda) and then LVM on top for boot, root and swap. The odd thing is that if I change the mount point of boot_lv to /home the recipe works fine (apart from mounting in the wrong place), but when mounting at /boot it fails. I know I could use a separate primary partition for /boot, but can anybody tell me why this fails? Recipe and relevant options below.

        ## Partitioning using RAID
        d-i partman-auto/disk string /dev/sdb /dev/sdc
        d-i partman-auto/method string raid
        d-i partman-lvm/device_remove_lvm boolean true
        d-i partman-md/device_remove_md boolean true
        #d-i partman-lvm/confirm boolean true
        d-i partman-auto-lvm/new_vg_name string main_vg
        d-i partman-auto/expert_recipe string \
            multiraid :: \
                100 512 -1 raid \
                    $lvmignore{ } \
                    $primary{ } \
                    method{ raid } \
                . \
                256 512 256 ext3 \
                    $defaultignore{ } \
                    $lvmok{ } \
                    method{ format } \
                    format{ } \
                    use_filesystem{ } \
                    filesystem{ ext3 } \
                    mountpoint{ /boot } \
                    lv_name{ boot_lv } \
                . \
                2000 5000 -1 ext4 \
                    $defaultignore{ } \
                    $lvmok{ } \
                    method{ format } \
                    format{ } \
                    use_filesystem{ } \
                    filesystem{ ext4 } \
                    mountpoint{ / } \
                    lv_name{ root_lv } \
                . \
                64 512 300% linux-swap \
                    $defaultignore{ } \
                    $lvmok{ } \
                    method{ swap } \
                    format{ } \
                    lv_name{ swap_lv } \
                .
        d-i partman-auto-raid/recipe string \
            1 2 0 lvm - \
            /dev/sdb1#/dev/sdc1 \
            .
        d-i mdadm/boot_degraded boolean true
        #d-i partman-md/confirm boolean true
        #d-i partman-partitioning/confirm_write_new_label boolean true
        #d-i partman/choose_partition select Finish partitioning and write changes to disk
        #d-i partman/confirm boolean true
        #d-i partman-md/confirm_nooverwrite boolean true
        #d-i partman/confirm_nooverwrite boolean true

    EDIT: After a bit of googling I found the below snippet of code in partman-auto-lvm, but I still don't understand why they would prevent that setup, if it's possible to do manually and booting from a boot partition on LVM is possible.

        # Make sure a boot partition isn't marked as lvmok
        if echo "$scheme" | grep lvmok | grep -q "[[:space:]]/boot[[:space:]]"; then
            bail_out unusable_recipe
        fi
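
    For reference, the workaround mentioned above (a /boot kept outside LVM, directly on RAID 1) would look roughly like the sketch below. This is my illustration rather than a tested recipe; the sizes and the second md device numbering are assumptions:

        # Two RAID partitions per disk: a small one for /boot, the rest for LVM
        # (drop the boot_lv stanza from expert_recipe, keep root_lv/swap_lv)
        d-i partman-auto/expert_recipe string \
            multiraid :: \
                256 512 256 raid \
                    $lvmignore{ } $primary{ } \
                    method{ raid } \
                . \
                100 512 -1 raid \
                    $lvmignore{ } $primary{ } \
                    method{ raid } \
                .
        # md0 becomes /boot directly; md1 carries the LVM volume group
        d-i partman-auto-raid/recipe string \
            1 2 0 ext3 /boot \
            /dev/sdb1#/dev/sdc1 \
            . \
            1 2 0 lvm - \
            /dev/sdb2#/dev/sdc2 \
            .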


  • Weird mouse/keyboard freezups when using PowerPoint 2007 with IBM/Lenovo docking station

    - by DanM
    I'm not sure what part of my system is responsible for this, but when using PowerPoint I have problems when trying to resize drawing objects. I'll be dragging the handle and suddenly the object will deselect, and whatever is behind the object will get selected and start moving around. The next thing I know, the keyboard won't type anymore, and the only way to fix it is to unplug the USB and plug it back in. In case it's hardware-related: I'm using an IBM ThinkPad T60p in a docking station, my keyboard is a Microsoft Natural Keyboard Pro, and my OS is Windows XP SP3. I've never noticed this happening in anything besides PowerPoint, and I don't know anyone else who has this problem (even people with similar setups). Any ideas what it could be?

    Edit: Well, it looks like I only get the problem if I plug the mouse into my docking station's USB. If I plug directly into the laptop's USB, everything works fine. And, again, this problem is only with PowerPoint; I tried playing with some drawing objects in Word and had no issue no matter where my mouse was plugged in. I should also mention I tried a different mouse (a standard Microsoft corded mouse instead of my Logitech trackball), but that made no difference, so I don't think it's anything specific to the trackball or the trackball's driver. I tried searching Google but came up empty, so I'm guessing this problem is something unique to my setup. If you have any thoughts or ideas to try, I'd love to hear them.


  • Traffic Shaping using tc

    - by Simon
    Hi guys, I have a 1.5 Mbit/s link that I want to share with 150 users. My setup is the following: a Linux box with 3 NICs:

    - eth0 - public IP
    - eth1 - subnet A - 50 users (static IPs)
    - eth2 - subnet B - 100 users (via DHCP)

    I am using Squid as a transparent proxy on port 3128, and a DHCP server using ports 67 and 68. Below is the script I was creating, but I think packets are not going to the right queues:

        #!/bin/bash
        DEV=eth0
        RATE_MAIN=2048kbit
        CEIL_MAIN=2048kbit
        BURST=1b
        CBURST=1b
        RATE_DEFAULT=1024kbit
        CEIL_DEFAULT=$CEIL_MAIN
        PRIO_DEFAULT=3
        RATE_P2P=1024Kbit
        CEIL_P2P=$CEIL_MAIN
        PRIO_P2P=4
        RATE_IND=32kbit
        CEIL_IND=$CEIL_DEFAULT

        tc qdisc del dev $DEV root
        tc qdisc add dev $DEV root handle 1: htb default 30
        tc class add dev $DEV parent 1: classid 1:1 htb rate $RATE_MAIN ceil $CEIL_MAIN
        tc class add dev $DEV parent 1:1 classid 1:10 htb rate $RATE_DEFAULT ceil $CEIL_MAIN burst $BURST cburst $CBURST prio $PRIO_WEB

        ## some other sub class for p2p other traffic
        tc class add dev $DEV parent 1:1 classid 1:20 htb rate $RATE_P2P ceil $CEIL_P2P burst $BURST cburst $CBURST prio $PRIO_P2P

        $IPS_NET1=50
        $IPS_NET2=100
        let $IPS=$IPS_NET1+$IPS_NET2
        for ((i=1; i<= $IPS; i++))
        do
            let CLASSID=($i+100)
            let HANDLE=($i+100)
            tc class add dev $DEV parent 1:10 classid 1:$CLASSID htb rate $RATE_IND ceil $CEIL_IND
            tc qdisc add dev $DEV parent 1:$CLASSID handle $HANDLE: sfq perturb 10
        done

        ## Generate IP addresses ##
        IP_ADDRESSES=""
        # Subnet A
        BASE_IP=10.10.10.
        for ((i=2; i<=$IPS_NET1+1; i++))
        do
            TEMP="$BASE_IP$i"
            IP=ADDRESSES="$IP_ADDRESSES $TEMP"
        done
        # Subnet B
        BASE_IP=192.168.0.
        for ((i=2; i<=$IPS_NET2+1; i++))
        do
            TEMP="$BASE_IP$i"
            IP_ADDRESSES="$IP_ADDRESSES $TEMP"
        done

        ## FILTERS ##
        j=1
        U32="tc filter add dev $DEV protocol ip parent 1:0 prio $PRIO_DEFAULT u32"
        for NET in $IP_ADDRESSES; do
            let CLASSID=($j+100)
            $U32_DEFAULT match ip src $NET/32 flowid 1:$CLASSID
            $U32_DEFAULT match ip dst $NET/32 flowid 1:$CLASSID
            let j=j+1
        done

    Can you guys help me figure out what's wrong with it? Basically I want my classes to be 1:1 (1.5 Mbit), 1:10 (1024 kbit) and 1:20 (1024 kbit), with 200 IPs each at 32 kbit.
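
    Not from the original post: a few of the pasted lines are not valid shell at all, which may just be transcription damage, but each one would break the per-IP classes before any tc logic runs. A corrected sketch of those fragments only:

        # Assignments must not carry a leading $ (the originals would error out)
        IPS_NET1=50
        IPS_NET2=100
        let IPS=$IPS_NET1+$IPS_NET2

        # One consistently named list variable (the script once writes IP=ADDRESSES)
        IP_ADDRESSES="$IP_ADDRESSES $TEMP"

        # The filter loop must call the variable that was actually defined
        # ($U32, not the unset $U32_DEFAULT); note also that class 1:10 uses
        # $PRIO_WEB, which is never set anywhere in the script
        $U32 match ip src $NET/32 flowid 1:$CLASSID
        $U32 match ip dst $NET/32 flowid 1:$CLASSID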


  • Problems Installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install Trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output:

        root@myserver:~# apt-get install trac
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        Since you only requested a single operation it is extremely likely that
        the package is simply not installable and a bug report against that
        package should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          trac: Depends: python-setuptools (> 0.5) but it is not installable
                Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed
                Depends: python-subversion but it is not installable
                Depends: libjs-jquery but it is not installable
                Recommends: python-pygments (= 0.6) but it is not installable or
                            enscript but it is not installable
                Recommends: python-tz but it is not installable
        E: Broken packages

    I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related extensions, but that produced a very similar output. I have the main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. Hope someone can help; googling failed to solve the issue or find a solution! Thanks, Ben
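
    A sketch of the usual first checks when apt calls a package "not installable" (my addition, and it assumes sources.list really does list universe for jaunty):

        # Stale package lists are the most common cause of this message
        sudo apt-get update

        # Check whether apt can see the blocking packages at all, and from where
        apt-cache policy python-setuptools python-subversion libjs-jquery

        # Dry-run the install (-s simulates) to see the full resolver output
        sudo apt-get -s install trac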


  • Recommended setting for using Apache mod_mono with a different user

    - by Korrupzion
    Hello, I'm setting up an ASP.NET script on my Linux machine using mod_mono. The script spawns processes of a binary that belongs to another user, but the process is spawned by www-data because Apache runs as that user, and I need to spawn the process as the user that owns the file. I tried the setuid bit but it has no effect. I discovered that if I kill mod-mono-server2.exe and run it as the user that I need, everything works right, but I want to know the proper way to do this, because after a while Apache runs mod-mono-server2.exe as www-data again. The Mono project webpage says:

        How can I run mod-mono-server as a different user?
        Due to apache's design, there is no straightforward way to start processes
        from inside of an apache child as a specific user. Apache's SuExec wrapper
        is targeting CGI and is useless for modules. Mod_mono provides the
        MonoStartXSP option. You can set it to "False" and start mod-mono-server
        manually as the specific user. Some tinkering with the Unix socket's
        permissions might be necessary, unless MonoListenPort is used, which turns
        on TCP between mod_mono and mod-mono-server. Another (very risky) way: use
        a setuid 'root' wrapper for the mono executable, inspired by the sources
        of Apache's SuExec.

    I want to know how to use the setuid wrapper, because I tried adding the setuid bit to the 'mono' binary and changing the owner to the user that I want, but that made Mono crash. Or maybe there is a way to keep mod-mono-server2.exe running separately from Apache without being closed (anyone have a script?).

    My environment:

    - Debian Lenny 2.6.26-2-amd64
    - Mono 1.9.1, mod_mono from the Debian repository
    - Dedicated server (root access and stuff), using Apache vhosts
    - I use Mono for only that script

    Thanks!
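
    Along the lines of the quoted advice (MonoStartXSP off, backend started by hand), a rough wrapper is sketched below. This is my illustration, not a tested setup: the user, socket path and application mapping are placeholders, and the mod-mono-server2 flags should be checked against its --help on Lenny before relying on them.

        #!/bin/bash
        # Start the mod_mono backend as the file owner instead of www-data
        APPUSER=scriptowner                  # placeholder user
        SOCKET=/tmp/mod_mono_server2         # must match mod_mono's socket path
        su - "$APPUSER" -c "mod-mono-server2 \
            --filename $SOCKET \
            --applications /:/var/www/myapp \
            --nonstop" &

    Apache (with MonoStartXSP False) then talks to that socket. If it can't open it, that is the "tinkering with the Unix socket's permissions" the quote warns about; switching to MonoListenPort/TCP avoids the permission issue entirely.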


  • Not Able To Connect to Windows Server 2008 R2 using FileZilla Externally

    - by obautista
    I configured the FTP service/role on my Windows Server 2008 R2 machine. I am able to connect from the inside, but not from the outside. On the inside I tested using a cmd prompt and IE FTP; on the outside, I am testing with FileZilla and IE FTP. From the outside, IE FTP prompts me to enter my username/password, but nothing happens; the page eventually times out and I get "Internet Explorer cannot display page". Using FileZilla, I get the messages below. Note that FileZilla resolves the domain name and authenticates. I did not configure FTP Firewall Support on the FTP site; I am not sure if I need to do this. I set up basic authentication, non-SSL, not allowing anonymous. I tested with Windows Firewall turned off and on (I added a Windows Firewall rule for port 21). On my network firewall (Cisco), I added a rule to forward port 21 traffic to the FTP server.

        Status:   Resolving address of ftp.technologyblends.com
        Status:   Connecting to 75.149.66.201:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER *
        Response: 331 Password required for .
        Command:  PASS ***
        Response: 230 User logged in.
        Command:  SYST
        Response: 215 Windows_NT
        Command:  FEAT
        Response: 211-Extended features supported:
        Response:  LANG EN*
        Response:  UTF8
        Response:  AUTH TLS;TLS-C;SSL;TLS-P;
        Response:  PBSZ
        Response:  PROT C;P;
        Response:  CCC
        Response:  HOST
        Response:  SIZE
        Response:  MDTM
        Response:  REST STREAM
        Response: 211 END
        Command:  OPTS UTF8 ON
        Response: 200 OPTS UTF8 command successful - UTF8 encoding now ON.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I.
        Command:  PASV
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing


  • How to access programs in one PC using another PC

    - by darkstar13
    Hi, I was recently given an old PC for my remote access at work. The box has Windows XP installed, 400+ MB of RAM, and all USB devices disabled. I access my work applications using VPN/Citrix. Basically, it's sooooo slow. Plus it's bulky and will just occupy space, so I am now hoping to find a way to integrate this work PC with my home PC. I tried putting its hard drive into my home PC's case and setting the drive as slave. However, when I booted my PC from this hard drive, I got stuck at the screen where Windows prompts me to select how to boot (e.g. Safe Mode, Safe Mode with Command Prompt, Last Known Good Configuration, etc.); whatever option I select, I am back at this menu after a reboot. I am thinking maybe I can clone the drive, mount the cloned drive, and access the system as a virtual machine, but I don't know if that will work. I would like to know if there's something I can do so I can work at home using my home PC, where I can access my work programs to connect to VPN/Citrix. My home PC's OS is Windows 7 Ultimate x64.


  • Using GitOAuthPlugin for Jenkins - not working as expected

    - by Blundell
    I need some clarity and maybe a fix. I'm using this plugin to authorise who views our Jenkins CI server: https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin

    As I understand it, anyone who is auth'd to view one of our GitHub projects can also log in to our Jenkins box. This works. I thought it would also allow the person logging in to only view the projects that they have GitHub permission on. For instance, with three projects on GitHub (A, B, C) and three builds on Jenkins:

    - User 1 has Git access to all 3 projects (A, B, C).
    - User 2 has Git access to only 1 project (A).

    When logging into Jenkins, User 1 can see all 3 projects (this works) and User 2 should only see project A. The problem is that User 2 can also see all 3 projects, when they should only see 1! Have I got this correct, and if so, is this a bug?

    I have the settings set in Jenkins' configuration under Github Authorization Settings. Here we have some admin users, one organization, and none of the 4 checkboxes ticked. (User 2 is not an admin and is not part of the org.) The plugin is open sourced here: https://github.com/mocleiri/github-oauth-plugin

    I was trying to get Jenkins to print me the logs from the plugin, but I also failed at viewing these (to see if there was an issue). I followed these instructions: https://wiki.jenkins-ci.org/display/JENKINS/Logging

    It's the same concept as outlined below, but using GitHub rather than manually selecting users: https://wiki.jenkins-ci.org/display/JENKINS/2012/01/03/Allow+access+to+specific+projects+for+Users%28Assigning+security+for+projects+in+Jenkins%29

    Have I got this right or wrong? Is it possible to auth a Jenkins user to only see one project?


  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following:

    1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
    2. Perform a TSM-based incremental backup
    3. Dismount the LUN

    This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?


  • Error applying iptables rules using iptables-restore

    - by John Franic
    Hi, I'm using Ubuntu 9.04 on a VPS. I'm getting an error if I apply an iptables rule. Here is what I have done:

    1. Saved the existing rules:

        iptables-save > /etc/iptables.up.rules

    2. Created iptables.test.rules and added some rules to it:

        nano /etc/iptables.test.rules

    These are the rules I added:

        *filter

        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT

        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allows all outbound traffic
        # You can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allows SSH connections
        #
        # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    After editing, when I try to apply the rules with

        iptables-restore < /etc/iptables.test.rules

    I get the following error:

        iptables-restore: line 42 failed

    Line 42 is COMMIT, and if I comment that out I get:

        iptables-restore: COMMIT expected at line 43

    I'm not sure what the problem is; it is expecting COMMIT, but if COMMIT is there it gives an error. Could it be due to the fact I'm using a VPS? My provider uses OpenVZ for virtualization.
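
    Not from the original question: a failure reported only at the COMMIT line usually means some rule needs a kernel feature that isn't available, which is common in OpenVZ containers, where the host must enable netfilter modules for the guest. Feeding the suspect matches in one at a time makes the unsupported one fail loudly on its own:

        # Each of these exercises one extension the ruleset depends on
        # (state, limit, LOG); whichever errors here is what COMMIT chokes on
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "probe: "

        # Remove the probes afterwards
        iptables -F INPUT

    If one of them fails, it's a host-side OpenVZ restriction, and only the provider can enable the corresponding module.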


  • Take snapshot of drawing using FingerPaint

    - by Rashmi.B
    I am using MyView for drawing content on a canvas, based on the FingerPaint API demo app. I want to capture whatever I have drawn on the canvas and save it to the SD card, but when I use View v1 = myview.getRootView(), it returns only a blank canvas and not the content. Following is my code; let me know what I need to change:

        v1 = myview.getRootView();
        System.out.println("v1 value = " + v1);
        v1.buildDrawingCache(true);
        v1.measure(MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED),
                   MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED));
        //v1.layout(0, 0, v1.getMeasuredWidth(), v1.getMeasuredHeight());
        v1.layout(0, 0, 100, 100);
        //Bitmap b = Bitmap.createBitmap(v1.getDrawingCache());
        myview.mBitmap = Bitmap.createBitmap(v1.getDrawingCache());
        System.out.println("BITMAP VALue = " + myview.mBitmap);
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        //b.compress(Bitmap.CompressFormat.JPEG, 40, bytes);
        File f = new File(Environment.getExternalStorageDirectory()
                + File.separator + "rashmitest.jpg");
        try {
            f.createNewFile();
            FileOutputStream fo = new FileOutputStream(f);
            fo.write(bytes.toByteArray());
        } catch (Exception e) {
            e.printStackTrace();
        }
        v1.setDrawingCacheEnabled(false);

    myview is an object of class MyView, which extends View.


  • Google Chrome does not launch after some months without using it in Windows XP/Seven

    - by kokbira
    Well, on the computers I use with Windows XP or Seven I have more than one browser installed, generally Internet Explorer 8, Firefox 4, Opera 11 and Google Chrome. I usually use Firefox, but I want to use Google Chrome sometimes because I have a lot of addons and web apps on it. The issue is: when I try to launch Chrome after some months without using it, it does not start. Using Process Explorer or Task Manager, I can see that no Google processes are running at all. If I then reinstall it, it works again. But if I do not use it for some months, it again will not launch... Is it an update problem? Must I use Chrome every day, or is there another way to avoid this issue?

    PS: I installed the latest English and Portuguese versions (how do I get the version numbers when it does not launch?), not at the same time, and it still does not launch...

    PS2: There is a running Google Update process that is launched at startup:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
        Name: Google Update
        Type: REG_SZ
        Data: "C:\Users\Ubirajara\AppData\Local\Google\Update\GoogleUpdate.exe" /c


  • Route propagation using OSPF in a network

    - by liv2hak
    I am using Juniper J-series routers to emulate a small telco and a VPN customer. The internal routing will be configured with OSPF and MPLS, including a default and a backup path; RSVP for distributing labels within the telco; OSPF for distributing routes from the customer edge (CE) routers to the VRFs in the adjacent PEs; and finally iBGP for distributing customer routes between VRFs in different PEs. The addressing scheme for the network is as follows:

        UOW-TAU
          ge-0/0/0  192.168.3.1
        TAU-PE1
          ge-0/0/0  10.0.1.0
          ge-0/0/1  10.0.2.0
          ge-0/0/2  192.168.3.2
        TAU-P1
          ge-0/0/0  172.16.1.0
          ge-0/0/1  172.16.3.1
          ge-0/0/2  10.0.2.2
        HAM-P1
          ge-0/0/0  172.16.3.2
          ge-0/0/1  172.16.2.1
          ge-0/0/3  10.0.3.2
        ACK-P1
          ge-0/0/0  172.16.1.2
          ge-0/0/2  172.16.2.2
          ge-0/0/3  10.0.1.2
        HAM-PE1
          ge-0/0/0  10.0.3.1
          ge-0/0/2  192.168.4.2
        UOW-HAM
          ge-0/0/0  192.168.4.1

    I have also set up a loopback address on each node. I want to set up OSPF so that the path to each internal subnet and each router's loopback address is propagated to all PE and P nodes. I also want to select a single area for the PE and P nodes, and on each node I should add each interface that should be propagated. How do I accomplish this? With my understanding, the procedure is: set up OSPF on UOW-TAU's ge-0/0/0 and ge-0/0/1 interfaces and on UOW-HAM's ge-0/0/0 and ge-0/0/1 interfaces, and call this Area 100. Once I have done this, I should be able to reach each node from the others using ping and traceroute. Is this explanation correct? Any help is highly appreciated.


  • Update all the servers through one virtual server using a storage area network virtual machine

    - by Mr.Calm
    I'm using Ubuntu and Oracle's VirtualBox, with the following script to start nginx inside a VM, placed in ~/init.d inside the VM:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          Testinit
        # Required-Start:
        # Required-Stop:
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO
        # RETVAL=0;
        start() {
            CurrentTime=$(date +%d/%m/%Y"-"%I:%M:%S)
            ./usr/local/nginx/sbin/nginx
            echo "Current Time:"$CurrentTime>>/home/server/Desktop/NginxLogs.txt
            echo "!Starting nginx!" >>/home/server/Desktop/NginxLogs.txt

    Like this I want to write an auto-setup script (e.g. setup.sh) and place it in all the virtual machines on my system, for example 8 VirtualBox VMs, each with nginx installed. The problem is that when I want to change something in setup.sh, I have to go to each and every VM, or reach each VM over SSH from my main machine. I am thinking of writing another script (e.g. update.sh) that is given the path of a file saved and recently edited on the main machine (e.g. DummySetup.sh); as soon as I run that script, all the setup.sh files saved on the virtual machines should be updated, i.e. have their contents replaced with DummySetup.sh's contents. Hope this is possible. Help would be appreciated; thank you.
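
    A plain SSH push is the simplest way I know to get this effect (my sketch, not from the question; the guest names, login and paths are placeholders, and key-based SSH to each VM is assumed):

        #!/bin/bash
        # update.sh: copy the master DummySetup.sh over each VM's setup.sh
        MASTER=/home/server/DummySetup.sh
        GUESTS="vm1 vm2 vm3 vm4 vm5 vm6 vm7 vm8"   # placeholder host names/IPs
        for g in $GUESTS; do
            scp "$MASTER" "root@$g:~/init.d/setup.sh" \
                && echo "updated $g" \
                || echo "FAILED on $g" >&2
        done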


  • Problems sending email using .Net's SmtpClient

    - by Jason Haley
    I've been looking through questions on Stack Overflow and Server Fault but haven't found the same problem mentioned, though that may be because I just don't know enough about how email works to understand that some of the questions are really the same as mine. Here's my situation:

    I have a web application that uses .NET's SmtpClient to send email. The configuration of the SmtpClient uses an SMTP server, username and password. The SmtpClient code executes on a server that has an IP address not in the domain the SMTP server is in. In most cases the emails go through without a problem, but not to AOL (and maybe others, but that is the one we know for sure right now). When I look at the headers in the message that was kicked back from AOL, it has one less line than the successful messages hotmail gets.

    AOL bad message:

        Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Mon, 18 Jun 2012 09:48:24 -0500
        MIME-Version: 1.0
        From: "[email protected]" <[email protected]>
        ...

    Good hotmail message:

        Received: from mail.domainofsmtp.com ([###.###.###.###]) by subdomainsof.hotmail.com with Microsoft SMTPSVC(6.0.3790.4900); Thu, 21 Jun 2012 09:29:13 -0700
        Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Thu, 21 Jun 2012 11:29:03 -0500
        MIME-Version: 1.0
        From: "[email protected]" <[email protected]>
        ...

    Notice the hotmail message headers have an additional line. I'm confused as to why the web server's name and IP address are even in the headers, since I thought I was using the SmtpClient to go through the SMTP server (hence the need for the username and password of a valid mailbox). I've read about SPF, DKIM and Sender ID, but at this point I'm not sure whether I would need to do something with the web server (and its IP/domain) or with the domain the SMTP is coming from. Has anyone had to do anything similar before? Am I using the SMTP server as a relay? Any help on how to describe what I'm doing would also help.
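
    Since SPF comes up above: an SPF policy is just a TXT record published on the envelope-from domain, and the usual shape (a sketch with placeholders, not your actual record) is:

        domainofsmtp.com.  IN  TXT  "v=spf1 mx ip4:##.###.###.### ~all"

    where the ip4: entry is the address receivers actually see handing them the mail. Which host that is (the web server or the MailEnable relay) is exactly what the Received: lines above tell you.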


  • using pf for packet filtering and ipfw's dummynet for bandwidth limiting at the same time

    - by krdx
    I would like to ask if it's fine to use pf for all packet filtering (including using ALTQ for traffic shaping) and ipfw's dummynet for bandwidth-limiting certain IPs or subnets at the same time. I am using FreeBSD 10 and I couldn't find a definitive answer to this. Googling returns such results as:

    - It works
    - It doesn't work
    - It might work, but it's not stable and not recommended
    - It can work as long as you load the kernel modules in the right order
    - It used to work, but with recent FreeBSD versions it doesn't
    - You can make it work provided you use a patch from pfSense

    Then there's a mention that this patch might have been merged back into FreeBSD, but I can't find it. One certain thing is that pfSense uses both firewalls simultaneously, so the question is: is it possible with stock FreeBSD 10 (and where do I obtain the patch if it's still necessary)? For reference, here's a sample of what I have for now and how I load things.

    /etc/rc.conf:

        ifconfig_vtnet0="inet 80.224.45.100 netmask 255.255.255.0 -rxcsum -txcsum"
        ifconfig_vtnet1="inet 10.20.20.1 netmask 255.255.255.0 -rxcsum -txcsum"
        defaultrouter="80.224.45.1"
        gateway_enable="YES"
        firewall_enable="YES"
        firewall_script="/etc/ipfw.rules"
        pf_enable="YES"
        pf_rules="/etc/pf.conf"

    /etc/pf.conf:

        WAN1="vtnet0"
        LAN1="vtnet1"
        set skip on lo0
        set block-policy return
        scrub on $WAN1 all fragment reassemble
        scrub on $LAN1 all fragment reassemble
        altq on $WAN1 hfsc bandwidth 30Mb queue { q_ssh, q_default }
        queue q_ssh bandwidth 10% priority 2 hfsc (upperlimit 99%)
        queue q_default bandwidth 90% priority 1 hfsc (default upperlimit 99%)
        nat on $WAN1 from $LAN1:network to any -> ($WAN1)
        block in all
        block out all
        antispoof quick for $WAN1
        antispoof quick for $LAN1
        pass in on $WAN1 inet proto icmp from any to $WAN1 keep state
        pass in on $WAN1 proto tcp from any to $WAN1 port www
        pass in on $WAN1 proto tcp from any to $WAN1 port ssh
        pass out quick on $WAN1 proto tcp from $WAN1 to any port ssh queue q_ssh keep state
        pass out on $WAN1 keep state
        pass in on $LAN1 from $LAN1:network to any keep state

    /etc/ipfw.rules:

        ipfw -q -f flush
        ipfw -q add 65534 allow all from any to any
        ipfw -q pipe 1 config bw 2048KBit/s
        ipfw -q pipe 2 config bw 2048KBit/s
        ipfw -q add pipe 1 ip from any to 10.20.20.4 via vtnet1 out
        ipfw -q add pipe 2 ip from 10.20.20.4 to any via vtnet1 in
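
    On the module-order theory from the list above: the load order can be pinned at boot in /boot/loader.conf rather than left to the rc scripts. These loader variables are stock FreeBSD, though I can't confirm this alone settles the pf-plus-dummynet question:

        # /boot/loader.conf
        pf_load="YES"
        ipfw_load="YES"
        dummynet_load="YES"
        # keep ipfw wide open so it only shapes and pf does all the filtering
        net.inet.ip.fw.default_to_accept="1"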


  • 401 Using Multiple Authentication Methods in IE 10 only

    - by jon3laze
    I am not sure if this is more of a coding issue or a server setup issue, so I've posted it on Stack Overflow and here...

    On our production site we've run into an issue that is specific to Internet Explorer 10. I am using jQuery to do an ajax POST to a web service on the same domain, and in IE10 I am getting a 401 response; IE9 works perfectly fine. I should mention that we have mirrored code in another area of our site and it works perfectly fine in IE10. The only difference between the two areas is that one is under a subdomain and the other is at the root level: www.my1stdomain.com vs. portal.my2nddomain.com. The directory structure on the server for these is:

        \my1stdomain\webservice\name\service.aspx
        \portal\webservice\name\service.aspx

    Inside of the \portal\ and \my1stdomain\ folders I have a page that does an ajax call; both pages are identical:

        $.ajax({
            type: 'POST',
            url: '/webservice/name/service.aspx/function',
            cache: false,
            contentType: 'application/json; charset=utf-8',
            dataType: 'json',
            data: '{ "json": "data" }',
            success: function() { },
            error: function() { }
        });

    I've verified permissions are the same on both folders on the server side. I've applied a workaround fix of placing <meta http-equiv="X-UA-Compatible" value="IE=9"> to force compatibility view (putting IE into compatibility mode fixes the issue). This seems to be working in IE10 on Windows 7; however, IE10 on Windows 8 still sees the same issue. These pages are classic ASP with the headers that are being included; there are no other meta tags in use. The doctype is specified as

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//" "http://www.w3.org/TR/html4/loose.dtd">

    on the portal page and

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

    on the main domain.

    UPDATE 1: I used Microsoft Network Monitor 3.4 on the server to capture the request, with the following filter to capture the 401:

        Property.HttpStatusCode.StringToNumber == 401

    This was the response:

        Http: Response, HTTP/1.1, Status: Unauthorized, URL: /webservice/name/service.aspx/function Using Multiple Authetication Methods, see frame details
          ProtocolVersion: HTTP/1.1
          StatusCode: 401, Unauthorized
          Reason: Unauthorized
        - ContentType: application/json; charset=utf-8
          - MediaType: application/json; charset=utf-8
            MainType: application/json
            charset: utf-8
          Server: Microsoft-IIS/7.0
          jsonerror: true
        - WWWAuthenticate: Negotiate
          - Authenticate: Negotiate
            AuthenticateData: Negotiate
        - WWWAuthenticate: NTLM
          - Authenticate: NTLM
            AuthenticateData: NTLM
          XPoweredBy: ASP.NET
          Date: Mon, 04 Mar 2013 21:13:39 GMT
          ContentLength: 105
          HeaderEnd: CRLF
        - payload: HttpContentType = application/json; charset=utf-8
          HTTPPayloadLine: {"Message":"Authentication failed.","StackTrace":null,"ExceptionType":"System.InvalidOperationException"}

    The thing here that really stands out is:

        Unauthorized, URL: /webservice/name/service.aspx/function Using Multiple Authentication Methods

    With this I'm still confused as to why it only happens in IE10, if it's a permission/authentication issue. What was added in 10, or where should I be looking for the root cause of this?

    UPDATE 2: Here are the headers from the client machine, captured in Fiddler (server information removed).

    Main:

        SESSION STATE: Done.
        Request Entity Size: 64 bytes.
        Response Entity Size: 9 bytes.

        == FLAGS ==================
        BitFlags: [ServerPipeReused] 0x10
        X-EGRESSPORT: 44537
        X-RESPONSEBODYTRANSFERLENGTH: 9
        X-CLIENTPORT: 44770
        UI-COLOR: Green
        X-CLIENTIP: 127.0.0.1
        UI-OLDCOLOR: WindowText
        UI-BOLD: user-marked
        X-SERVERSOCKET: REUSE ServerPipe#46
        X-HOSTIP: ***.***.***.***
        X-PROCESSINFO: iexplore:2644

        == TIMING INFO ============
        ClientConnected:     14:43:08.488
        ClientBeginRequest:  14:43:08.488
        GotRequestHeaders:   14:43:08.488
        ClientDoneRequest:   14:43:08.488
        Determine Gateway:   0ms
        DNS Lookup:          0ms
        TCP/IP Connect:      0ms
        HTTPS Handshake:     0ms
        ServerConnected:     14:40:28.943
        FiddlerBeginRequest: 14:43:08.488
        ServerGotRequest:    14:43:08.488
        ServerBeginResponse: 14:43:08.592
        GotResponseHeaders:  14:43:08.592
        ServerDoneResponse:  14:43:08.592
        ClientBeginResponse: 14:43:08.592
        ClientDoneResponse:  14:43:08.592
        Overall Elapsed:     0:00:00.104

        The response was buffered before delivery to the client.

        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]

    Portal:

        SESSION STATE: Done.
        Request Entity Size: 64 bytes.
        Response Entity Size: 105 bytes.

        == FLAGS ==================
        BitFlags: [ClientPipeReused, ServerPipeReused] 0x18
        X-EGRESSPORT: 44444
        X-RESPONSEBODYTRANSFERLENGTH: 105
        X-CLIENTPORT: 44439
        X-CLIENTIP: 127.0.0.1
        X-SERVERSOCKET: REUSE ServerPipe#7
        X-HOSTIP: ***.***.***.***
        X-PROCESSINFO: iexplore:7132

        == TIMING INFO ============
        ClientConnected:     14:37:59.651
        ClientBeginRequest:  14:38:01.397
        GotRequestHeaders:   14:38:01.397
        ClientDoneRequest:   14:38:01.397
        Determine Gateway:   0ms
        DNS Lookup:          0ms
        TCP/IP Connect:      0ms
        HTTPS Handshake:     0ms
        ServerConnected:     14:37:57.880
        FiddlerBeginRequest: 14:38:01.397
        ServerGotRequest:    14:38:01.397
        ServerBeginResponse: 14:38:01.464
        GotResponseHeaders:  14:38:01.464
        ServerDoneResponse:  14:38:01.464
        ClientBeginResponse: 14:38:01.464
        ClientDoneResponse:  14:38:01.464
        Overall Elapsed:     0:00:00.067

        The response was buffered before delivery to the client.

        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]


  • Accidentally deleting all OSX users using dscl

    - by gutch
    OK, so I just did something really stupid and deleted all the user accounts on an OS X 10.6.6 machine by running this:

        sudo dscl . -delete /users

    What I actually wanted to do was delete a single, troublesome account using a command like this:

        sudo dscl . -delete /users/localadmin

    ...but I absent-mindedly pressed return too early and deleted the lot. I've tried using -list and can confirm that I have indeed wiped all the accounts. The machine is currently running fine, but I'm sure that once I log out or reboot it will be completely broken. I don't mind that I've deleted the normal user accounts (there was only one I wanted anyway), but it's surely going to be a big problem that system accounts like _installer and _jabber and _lda and _windowserver etc. are gone. So my question is: how can I restore the standard set of system accounts? Do I have to reinstall OS X from scratch? Or can I either undelete those system accounts, or run some command to recreate them?
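
    One recovery path short of a reinstall (my sketch, not from the question; try it before logging out, since the current session may be the only one that can still sudo) relies on the fact that Snow Leopard keeps each local account as a plist on disk:

        # Local Directory Services accounts live here on 10.6
        ls /var/db/dslocal/nodes/Default/users/

        # Copy the system-account plists (the _*.plist files) from an identical
        # 10.6.6 machine (needs Remote Login enabled on the donor), then
        # restart Directory Services so it rereads the node
        sudo rsync -av 'healthy-mac:/var/db/dslocal/nodes/Default/users/_*.plist' \
            /var/db/dslocal/nodes/Default/users/
        sudo killall DirectoryService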


  • Wireless router blocking some sites while using ethernet is fine

    - by Micke
    I'm using Windows 7 and my router is a wireless Apple AirPort Express that is approximately two years old. Suddenly I can't access some sites (for example www.sthlm.friskissvettis.se or www.vegetarian-shoes.co.uk, some streamed TV shows on svtplay.se, and a number of other random sites) when connecting to the internet through this router. It worked fine until recently, and I'm fairly sure this problem emerged when my ISP upgraded from 10/10 Mbit to 100/10 Mbit speed. Most other sites, like Facebook and Google, work fine. When using my network cable to connect to the internet, everything works and I can access these sites.

    The firmware is current and I've tried resetting the router to factory defaults. I've tried different browsers, and I can't ping the "blocked" sites either. tracert www.sthlm.friskissvettis.se starts with 10.0.0.1 and continues through a number of long addresses until it says timeout; the last working address before the timeout was sth-tcy-ipcore01-ge-0-2-0.neq.dgcsystems.net [83.241.252.13], if it matters. tracert www.vegetarian-shoes.co.uk also eventually gives me a timeout. When the network cable is plugged in, I still get a timeout on tracert www.sthlm.friskissvettis.se, even though I can access the site in Chrome. Weird. www.vegetarian-shoes.co.uk doesn't give me a tracert timeout when the cable is plugged in, and I can access the site as usual.

    I've tried changing DNS servers to OpenDNS, to no avail. I've also tried pinging these two sites with a lower MTU packet size (with this method: http://www.richard-slater.co.uk/archives/2009/10/23/change-your-mtu-under-vista-or-windows-7/), but I still can't access them... I don't know what to do anymore... any suggestions???


  • Google Chrome - Issues with download dialogs using BinaryWrite

    - by Mila
    Hello, I have an empty ASP.NET page with code-behind to support downloading of PDF files. The page is called from a link-like web control whose NavigateUrl is set to this page. In short, I am using the following for streaming:

        Response.Buffer = false;
        Response.ClearHeaders();
        Response.ContentType = "application/x-pdf";
        Response.AddHeader("Content-Disposition", "attachment; filename=\"MyPDFFile.pdf\"");
        byte[] binary = (dataReaderRemote[DataPDFFieldName]) as byte[];
        // dataReaderRemote[DataPDFFieldName] has previously retrieved data
        if (binary != null)
        {
            MemoryStream memoryStream = new MemoryStream(binary);
            int sizeToWrite = CHUNKSIZE; // CHUNKSIZE = 1024
            for (int i = 0; i < binary.GetUpperBound(0) - 1; i = i + CHUNKSIZE)
            {
                if (!Response.IsClientConnected) return;
                if (i + CHUNKSIZE >= binary.Length)
                    sizeToWrite = binary.Length - i;
                byte[] chunk = new byte[sizeToWrite];
                memoryStream.Read(chunk, 0, sizeToWrite);
                Response.BinaryWrite(chunk);
                Response.Flush();
            }
        }
        Response.Close();

    IE as well as Firefox bring up the download prompt window asking whether you wish to open or save the file, while the user remains on the same page containing the link. However, Google Chrome opens a new blank tab and downloads the file automatically. Is there any way to prevent Chrome from opening the extra blank (and therefore useless) tab? I am using Google Chrome version 5.0.375.55 (official build 47796) on Windows XP. Thanks in advance! Mila


  • Using AddEncoding x-gzip .gz without actual files

    - by STATUS_ACCESS_DENIED
    With Apache (2.2 and later), how can I achieve the following? I want to transparently compress output using gzip encoding (not plain deflate) when a certain file is requested with its name plus the extension .gz, where the .gz version doesn't physically exist on disk. So let's say I have a file named /path/foo.bar and no file foo.bar.gz in the folder the URI /path maps to; how can I get Apache to serve the contents of /path/foo.bar, but with AddEncoding x-gzip ... applied to the (non-existent) file?

    The rewrite part appears to be easy, but the problem is how to apply the encoding to a non-existent item. The other way around also seems simple, as long as the client supports the encoding. Is the only solution really a script that does this on the fly? I'm aware of mod_deflate and mod_gzip, and they are not what I'm looking for, at least not alone; in particular, I need an actual GZIP file and not just a deflated stream. I was thinking of using mod_ext_filter, but couldn't bridge the gap between rewriting the name of the (non-existent) file.gz to file on one side and the LocationMatch on the other. Here's what I have:

        RewriteRule ^(.*?\.ext)\.gz$ $1 [L]

        ExtFilterDefine gzip mode=output cmd="/bin/gzip"

        <LocationMatch "/my-files/special-path/.*?\.ext\.gz">
            AddType application/octet-stream .ext.gz
            SetOutputFilter gzip
            Header set Content-Encoding gzip
        </LocationMatch>

    Note that the Content-Encoding header isn't really needed by the clients in this case. They expect to see actual GZIP files, but I want to do this on the fly without caching (this is a test scenario).


  • Can't connect to MS SQL Server database using SSMS

    - by Charles
    I have a database online with GoDaddy (who uses SQL Server 2005). They provide basic management tools, but tell you that for more advanced tools you can connect directly using SSMS. I followed their instructions to ensure my online database will accept remote connections, and can apparently log in using SSMS with success (after giving my hostname and access data). However, now from within SSMS, when attempting to expand the "Databases" folder tree, I get the following error:

        Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)
        An exception occurred while executing a Transact-SQL statement or batch.
        (Microsoft.SqlServer.ConnectionInfo)
        The server principal "cmitchell" is not able to access the database "3pointdb"
        under the current security context. (Microsoft SQL Server, Error: 916)

    The irony is that 3pointdb isn't my database. It is just another in a long list of databases that show up when I access my GoDaddy backend. From SSMS, I selected the default database to be the name of my database, which it did locate when I browsed. Still the same error message: it is trying to connect to a database that isn't mine. :( GoDaddy support, after a bit of testing, said the problem isn't on their end; it's on mine.

