Search Results

Search found 11587 results on 464 pages for 'pseudo random numbers'.


  • Understanding Exchange User Monitor (ExMon) Output

    - by SturdyErde
    I recently downloaded and ran ExMon while trying to troubleshoot Outlook connectivity problems due to high CPU usage on Exchange Server 2010 SP2 UR8. The tool provides a wealth of data, but I have not yet figured out how to make good use of it. My first question is why the Exchange server itself shows up as a high-use MAPI client in the ExMon data. Among the users' client versions I see build numbers listed for Outlook 2013, 2010, and yes, even 2007 clients. I also see build number 14.2.387.0, which represents Exchange Server 2010 SP2 Update Rollup 8 (+/- some other patch that makes it not quite match the UR8 number). Many user rows list only "::1" and/or the short hostname of my Exchange server in the 'Client IP Addresses' column; some other columns include the end user's actual IP address and the Exchange server's IP address. ExMon shows that it is actually the Exchange server itself that accounts for the highest percentage of CPU used for MAPI calls. I had expected to see one IP address and one version number for each user reported by ExMon. Instead, most records show multiple version numbers (Exchange version and Outlook version) and multiple IPs (Exchange IP and client IP). Can anyone explain the reason for this to me, please?

    Read the article

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
    I'm querying some signal level values measured in dBm, and the SNMP host on the remote device reports the values as negative, i.e. -90 dBm. However, check_snmp seems to be incapable of dealing with negative numbers as part of its threshold values. If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result: root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 -w 10:,~:-85 -c 15:,~:-80 -vvvv /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0 DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25 DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97 Processing line 1 oidname: DEVICE-MIB::AverageReceiveSNR.0 response: = INTEGER: 25 Processing line 2 oidname: DEVICE-MIB::CurrentNoiseFloor.0 response: = INTEGER: -97 SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97 If I run it with a single OID, it gives me an error that the format is incorrect: root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv Range format incorrect And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct; however, it will never generate a notification when out of range: root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0 DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97 Processing line 1 oidname: DEVICE-MIB::CurrentNoiseFloor.0 response: = INTEGER: -97 SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97 What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's between -85 and -80 dBm, and an OK when it's -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?
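
    One way around the range parsing is to push the negative-threshold logic into a small wrapper plugin and let Nagios run that instead of check_snmp. A minimal sketch, assuming net-snmp's snmpget is available and a "public" community string (the real community is not shown above); the thresholds implement the -80/-85 dBm behaviour asked for:

        #!/bin/bash
        # check_noise_floor.sh - hedged sketch of a tiny Nagios plugin for negative dBm thresholds
        HOST=192.168.1.100
        COMMUNITY=public                       # assumption: community string not shown in the question
        OID=DEVICE-MIB::CurrentNoiseFloor.0
        WARN=-85                               # WARNING at -85 dBm or higher
        CRIT=-80                               # CRITICAL at -80 dBm or higher

        VALUE=$(snmpget -v 1 -c "$COMMUNITY" -Oqv "$HOST" "$OID") || { echo "UNKNOWN - snmpget failed"; exit 3; }

        if   [ "$VALUE" -ge "$CRIT" ]; then echo "CRITICAL - noise floor ${VALUE} dBm|noise=${VALUE}"; exit 2
        elif [ "$VALUE" -ge "$WARN" ]; then echo "WARNING - noise floor ${VALUE} dBm|noise=${VALUE}"; exit 1
        else echo "OK - noise floor ${VALUE} dBm|noise=${VALUE}"; exit 0
        fi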

    Read the article

  • Laptop Acer Travelmate 4050 takes over 10 Mins to POST

    - by Belliez
    Hi, I am a computer tech and have received a laptop for repair. I noticed when I turned it on that the laptop would not do anything for a minute or two (the fan would run up and stop, the power LED would shine, and there would be some CD-ROM activity, then nothing). It would sit there with a black screen. Suddenly, after a random number of minutes (anywhere between 1 and 20!), the Acer BIOS screen would display and POST would happen before booting into Windows XP. It has frozen in XP at various times, which pointed towards a CPU fault and overheating. The fan was on its last legs and sounded like a car engine, so I replaced it. Same issues. I next replaced the CPU like for like. Same problems. I also applied new thermal paste between the CPU and heatsink; when running, the fan kicks in occasionally (not as often as I thought it would), and I left it playing MP3s and online radio and updating to Service Pack 3 and it wouldn't freeze. Shutting down is OK; cold starting is not, as it waits again before showing the BIOS screen. The hard disk was also making a screaming noise (SMART test and chkdsk passed), but I replaced this as well. The laptop powers up with and without the battery, so I don't think it's a battery issue. I'm running out of ideas and wondered if anyone had any advice. Thanks

    Read the article

  • virtual disk image - file or partition

    - by tylerl
    I'm looking at the differences between using a file versus a partition to store a virtual disk image for VM use. The common wisdom is that partition-based images are faster than file-based images because of decreased overhead. That makes sense, but I've never seen any actual numbers. My own testing bears out a different result. When I benchmark a direct-to-partition virtual disk, then format that same partition with ext4, create a virtual disk image stored on that ext4 filesystem, and benchmark that, I see no speedup at all for the direct-to-partition virtual disk. Instead, on some systems the file-based image is even faster (possibly due to host OS caching or something like that). This test was repeated many times on many systems, with fairly consistent results. So, setting aside the performance justification, is it still considered better to use a partition rather than a virtual disk image? Is there some other reason why direct partition access is better than image files? Or is there some reason to go the other way around? Perhaps an advantage in one of the virtual disk file formats that you don't get with raw partition images?
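
    For anyone repeating the comparison, a minimal sketch of the kind of benchmark described: write directly to a spare partition, then to an image file on that same partition once it carries ext4. The device name and mount point are assumptions, and oflag=direct keeps the host page cache from masking the difference:

        # WARNING: the first dd overwrites /dev/sdb1 - use a scratch partition only
        dd if=/dev/zero of=/dev/sdb1 bs=1M count=4096 oflag=direct               # raw partition

        mkfs.ext4 /dev/sdb1                                                      # same partition, now with a filesystem
        mkdir -p /mnt/imgtest && mount /dev/sdb1 /mnt/imgtest
        dd if=/dev/zero of=/mnt/imgtest/disk.img bs=1M count=4096 oflag=direct   # file-backed image on that filesystem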

    Read the article

  • Ubuntu 12.04 - HP ProLiant DL380 G4 - Load Maxes Out / Unresponsive

    - by Brian S
    Trying to post this question on here; I've posted it on the Ubuntu forums as well with no replies. Recently I upgraded an HP ProLiant DL380 G4 server from Ubuntu 10.04 Server to Ubuntu 12.04 Server. Since doing so, the server will, at random times, climb to a load of 400+ and then become completely unresponsive. I use an SNMP graphing program (Cacti), and the load steadily increases by about 10 every five minutes until it gets over 400 and the graphing stops. The graphs may not be accurate, but CPU usage averages about 3% before this happens; right when the load starts increasing, it jumps to about 25% for 15 minutes and then drops dramatically to less than 1% (about 0.3%) until the graphing stops. I'm not able to open an SSH connection to the server to do anything. I've checked /var/log/syslog and all logging stops at that time as well, with nothing else in there. The odd thing is that the server still responds to DNS queries for the zones it is authoritative for during this time, and at normal speed. I'm just not sure what the next step would be to find out what is going on and how this issue can be corrected. The server cannot stay on Ubuntu 10.04 Server and needs to stay upgraded.
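
    As a first step in that direction, something along these lines can be left running to capture load, memory and process snapshots to local disk every minute, so there is evidence to inspect after the box stops responding; a hedged sketch, with the log path being an assumption:

        # run under nohup or screen; review /var/log/load-snapshots.log after the next hang
        while true; do
            {
                date
                uptime
                vmstat 1 5
                top -b -n 1 | head -40
            } >> /var/log/load-snapshots.log 2>&1
            sleep 60
        done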

    Read the article

  • FTP Server on Centos 5.8 - Transfer fails randomly

    - by Diego
    Hi, I have ProFTPD running on a brand new CentOS 5.8 server with Plesk, and its behaviour is inconsistent at best. I tried to transfer a directory from my PC, and every time I get a transfer failure on a random file. It's never the same one that fails; it just fails. Sometimes it's a .gif, sometimes it's a .css, sometimes it's a JPG. Of several hundred files, a dozen always fail for no apparent reason. The error that I get is the following: COMMAND:> [27/11/2012 11:43:52] STOR main_border.gif [27/11/2012 11:43:53] 500 Invalid command: try being more creative ERROR:> [27/11/2012 11:43:53] Syntax error: command unrecognized. The above is just an example; the "command unrecognized" occurs with LIST and other commands as well. Here's the ProFTPD configuration, just in case: ServerName "ProFTPD" #ServerType standalone ServerType inetd DefaultServer on <Global> DefaultRoot ~ psacln AllowOverwrite on </Global> DefaultTransferMode binary UseFtpUsers on TimesGMT off SetEnv TZ :/etc/localtime Port 21 Umask 022 MaxInstances 30 ScoreboardFile /var/run/proftpd/scoreboard TransferLog /usr/local/psa/var/log/xferlog #Change default group for new files and directories in vhosts dir to psacln <Directory /var/www/vhosts> GroupOwner psacln </Directory> # Enable PAM authentication AuthPAM on AuthPAMConfig proftpd IdentLookups off UseReverseDNS off AuthGroupFile /etc/group Include /etc/proftpd.include Note: the file /etc/proftpd.include is blank. The above is the default configuration set by Plesk 11. I don't know much about why it is that way; my knowledge of Linux system administration is very basic, and my knowledge of ProFTPD is zero. Thanks in advance for the help. Update: The issue occurs with both CuteFTP and FileZilla. Update: Replaced ProFTPD with PureFTPd; the issue persists. Sometimes I get "command unrecognized", sometimes "failed to establish data connection". I'm starting to think that it could be a network issue, but I have zero knowledge of networking.
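
    To take the GUI clients out of the equation, the failure can be reproduced from a plain command-line client in a loop and counted; a minimal sketch, where the hostname, credentials and remote path are placeholders:

        # upload the same file 200 times and report how many transfers fail
        for i in $(seq 1 200); do
            curl -sS -T main_border.gif "ftp://ftp.example.com/httpdocs/test_$i.gif" \
                 --user ftpuser:secret || echo "upload $i failed"
        done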

    Read the article

  • Representing server state with a metric

    - by Sal
    I'm using Microsoft's Performance Monitor to dump logs of RAM, CPU, network, and disk usage from multiple servers. I'd like to get a single metric that captures the state of a given variable reasonably well. For instance, disk usage is pretty stable, so if I take a single reading that says I have 50% remaining disk space, that reading will give me an accurate measure for the day. (The servers aren't doing heavy IO writing.) However, the tricky part is monitoring CPU and network usage. The logs currently dump the % CPU usage every ten seconds. If I take a straight average of the numbers, it may not represent reality, as % CPU will be much lower during the night than during the day. (We host websites that sell appliance items.) I'd like to take an average over a span during peak hours (about 5 hours in the day) and present a daily peak-hour metric. Of course, there will most likely be some readings that come in overly spiked (if multiple users pinged the server at once) or that are of no use (a momentary idle state). Is there a standard distribution/test industries use in this situation?
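
    One common approach is to report the peak-hour average together with a high percentile (for example the 95th), which keeps momentary spikes and idle readings from dominating. A minimal sketch, assuming the Performance Monitor log has been exported to a simple two-column CSV of "HH:MM:SS,cpu_percent" (the real export format may differ):

        # samples between 10:00 and 15:00, sorted for percentile lookup
        awk -F, '$1 >= "10:00:00" && $1 < "15:00:00" { print $2 }' cpu.csv | sort -n > /tmp/peak.txt
        n=$(wc -l < /tmp/peak.txt)
        [ "$n" -gt 0 ] || { echo "no samples in window"; exit 1; }
        avg=$(awk '{ s += $1 } END { printf "%.1f", s / NR }' /tmp/peak.txt)
        p95=$(sed -n "$(( (n * 95 + 99) / 100 ))p" /tmp/peak.txt)   # ceiling of the 95th-percentile rank
        echo "peak-hour average: ${avg}%  95th percentile: ${p95}%"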

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method/formula, etc., that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar, this isn't quite the same as Storing a million images in the filesystem; I'm looking for a way to make the theories outlined there more generic. Assumptions: I have 'some' initial number of files. This number would be arbitrary but large - say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor. Put another way: as time progresses this store will grow, and I want the best balance between current performance and performance as my needs increase - say, if I double or triple my storage. I need to be able to address both current needs and projected future growth, and to plan ahead without sacrificing too much of current performance. What I've come up with: I'm already thinking about using a hash split every so many characters to split things out across multiple directories, keeping the trees even - very similar to what is outlined in the comments of the question above (see the sketch below). It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would differ based on what I've outlined and the initial scale. As far as I can figure, there isn't a one-size-fits-all solution here, and it would be horrendously time-intensive to work something out experimentally.
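
    To make the hash-split idea concrete, a minimal sketch of the layout being described: a fixed-depth directory path derived from a content hash, which spreads files evenly and makes duplicates collide on the same path. The store root and the 2+2 character split are assumptions, chosen only to illustrate:

        store=/data/store
        f=photo123.jpg
        h=$(md5sum "$f" | cut -c1-32)        # content hash, e.g. 9e107d9d372bb6826bd81d3542a419d6
        dir="$store/${h:0:2}/${h:2:2}"       # two levels of 256 buckets each, e.g. /data/store/9e/10
        mkdir -p "$dir"
        mv "$f" "$dir/$h.jpg"                # naming by hash also de-duplicates identical content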

    Read the article

  • Suggestions on the best home server rack cabinet

    - by allentown
    I have a lot of gear in a colocation facility right now, and some of it is going to come home with me. I do not know anything about the "rack mount" side of the industry; I lease a rack and I put my stuff in it. I have a few 1U boxes, a few 2U boxes, a few 4U boxes, and a 1U switch. One box is a new Xserve, which means it is deep. I think I can get by with around 12U to 18U. I want to keep it as small as possible, since I do not have a lot of spare space at home. I will not be able to bolt it to the wall or floor, so it should not be tall. Ideally it would more or less just be a box that sits on the floor but gives me the ability to mount things nicely, do tidy cable management, etc. Are the "post" style racks junk? I like the open space and the lack of depth restrictions of something like this: http://www.rackmountsolutions.net/images/products/Martin-relay-rack.jpg However, that thing is way too tall, and probably way too expensive. I am looking to spend around $300.00 or less - more if I have to, though I would prefer not to. These look near perfect: (see comment for this link; the system will not let me post a second URL) but I am worried the Xserve will not fit in them. If anyone has any good links, or website recommendations from good past experience, I would appreciate it. I am also half-considering that I may be able to build something with random scraps of stuff from Home Depot.

    Read the article

  • Simulated NAT Traversal on Virtual Box

    - by Sumit Arora
    I have installed VirtualBox (with two virtual adapters, NAT type): host Ubuntu 10.10, guest openSUSE 11.4. Objective: trying to simulate all four types of NAT as defined here: https://wiki.asterisk.org/wiki/display/TOP/NAT+Traversal+Testing Simulating the various kinds of NATs can be done using Linux iptables. In these examples, eth0 is the private network and eth1 is the public network. Full-cone iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination Restricted cone iptables -t nat POSTROUTING -o eth1 -p tcp -j SNAT --to-source iptables -t nat POSTROUTING -o eth1 -p udp -j SNAT --to-source iptables -t nat PREROUTING -i eth1 -p tcp -j DNAT --to-destination iptables -t nat PREROUTING -i eth1 -p udp -j DNAT --to-destination iptables -A INPUT -i eth1 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -i eth1 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -i eth1 -p tcp -m state --state NEW -j DROP iptables -A INPUT -i eth1 -p udp -m state --state NEW -j DROP Port-restricted cone iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source Symmetric echo "1" > /proc/sys/net/ipv4/ip_forward iptables --flush iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE --random iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT What I did: the openSUSE guest has two virtual adapters, eth0 and eth1 -- eth1 with address 10.0.3.15 (and eth1:1 as 10.0.3.16), eth0 with address 10.0.2.15. I am now running stund (http://sourceforge.net/projects/stun/) as client and server: Server eKimchi@linux-6j9k:~/sw/stun/stund ./server -v -h 10.0.3.15 -a 10.0.3.16 Client eKimchi@linux-6j9k:~/sw/stun/stund ./client -v 10.0.3.15 -i 10.0.2.15 In all four cases it gives the same results: test I = 1 test II = 1 test III = 1 test I(2) = 1 is nat = 0 mapped IP same = 1 hairpin = 1 preserver port = 1 Primary: Open Return value is 0x000001 Q1: Has anyone done this before? It should behave like a NAT as per the description, but in no case is it working as a NAT. Q2: How is NAT implemented in home routers (usually port-restricted)? Aren't those also just pre-configured iptables rules on a tuned Linux?
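
    For what it's worth, a hedged sketch of the full-cone pair with the missing addresses filled in from this particular setup, taking eth1/10.0.3.15 as the public side and eth0/10.0.2.15 as the private host per the wiki's convention (swap the interfaces if the guest is wired the other way around):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 10.0.3.15
        iptables -t nat -A PREROUTING  -i eth1 -j DNAT --to-destination 10.0.2.15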

    Read the article

  • Java VM problem in OpenVZ

    - by Ginnun
    Hi, I bought a VPS for hosting my Java needs, but I can't run Java on it. Everything about Java is correctly installed, but when I try to run java ("java -version", for example) I get this error: Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. I don't think this is a Java-specific problem; it's out of memory for sure. I contacted the VPS admin, but he says everything is fine: "you have 2 GB RAM, expandable to 4 GB!" I did a bit of searching on the subject; here is my BEANS file (numbers converted to human-readable form using a script). By the way, do JVM heap allocations count against kmemsize or privvmpages? How much RAM does that configuration allow me to allocate with the JVM for a single process? resource held maxheld barrier limit failcnt kmemsize 2.25 mb 2.35 mb 13.71 mb 14.10 mb 0 lockedpages 0 0 1024.00 kb 1024.00 kb 0 privvmpages 20.54 mb 21.33 mb 256.00 mb 272.00 mb 156 shmpages 5.00 mb 5.00 mb 84.00 mb 84.00 mb 0 numproc 13 14 240 240 0 physpages 9.36 mb 9.45 mb 0 MAX_ULONG 0 vmguarpages 0 0 132.00 mb MAX_ULONG 0 oomguarpages 9.36 mb 9.45 mb MAX_ULONG MAX_ULONG 0 numtcpsock 3 3 360 360 0 numflock 3 3 188 206 0 numpty 2 2 16 16 0 numsiginfo 0 1 256 256 0 tcpsndbuf 69.17 kb 69.17 kb 1.64 mb 2.58 mb 0 tcprcvbuf 48.00 kb 48.00 kb 1.64 mb 2.58 mb 0 othersockbuf 6.80 kb 6.80 kb 1.07 mb 2.00 mb 0 dgramrcvbuf 0.00 kb 0.00 kb 256.00 kb 256.00 kb 0 numothersock 9 10 360 360 0 dcachesize 0.00 kb 0.00 kb 3.25 mb 3.46 mb 0 numfile 704 746 9312 9312 0 numiptent 10 10 128 128 0 Thanks in advance!
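
    A point worth checking: the JVM picks its default maximum heap from the physical RAM it believes it sees, which can easily exceed the 256 MB privvmpages barrier shown above, so the initial reservation fails. Capping the heap and thread stacks explicitly is a hedged first test; the sizes below are only illustrative:

        # should print the Java version if the privvmpages barrier was the problem
        java -Xms16m -Xmx128m -Xss256k -version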

    Read the article

  • What to look for in a reliable backup hard disk?

    - by Senthil
    I want to buy an internal hard disk and use a docking station along with it for backing up important data. The size will be around 500GB to 1TB. I have a budget and several models fit into it. So far, they only seem to vary in size, speed and brand. These are the only things I can compare from the specs. I guess asking for which brand is best is completely subjective so I won't do that. I want my disk to have long life and be reliable. Doesn't matter if it is somewhat slow. Size: Should I go for the one with highest size within my budget? Will higher density cause problems? Or should I go for a moderately sized one? Does the number of platters have an impact? Speed: I do not want high performance. I want it to be reliable and last long. I am definitely not going to choose the expensive 10,000 rpm ones. Should I go for 5400 or 7200? Do these numbers affect longevity and reliability? Are there any other technical and objective factors that I should look for?

    Read the article

  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday, side by side with the existing MX7 install, to start testing. The install was smooth: it detected MX7, adjusted CF9's port numbers to avoid conflicts, etc. Testing started well: MX7 over regular HTTP and SSL still worked, and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, Firefox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this: Secure Connection Failed An error occurred during a connection to localhost:9101. Peer reports it experienced an internal error. (Error code: ssl_error_internal_error_alert) I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml: <service class="jrun.servlet.http.SSLService" name="SSLService"> <attribute name="enabled">true</attribute> <attribute name="interface">*</attribute> <attribute name="port">9101</attribute> <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute> <attribute name="keyStorePassword">*deleted*</attribute> <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute> <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute> <attribute name="deactivated">false</attribute> <attribute name="bindAddress">*</attribute> <attribute name="clientAuth">false</attribute> </service> Does anyone here know of any issues with setting up SSL and CF9? Has anyone had success with it? Dave
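
    One hedged guess worth ruling out: keytool's default key algorithm is DSA, and an SSL handshake against a DSA-only key can end in this kind of internal error with some clients. Regenerating the keystore with an explicit RSA key, using the same path and password as the keyStore/keyStorePassword attributes above, is a cheap test; the alias, validity and distinguished name here are assumptions:

        # replace JRUN_ROOT and the store password with the real values referenced in jrun.xml
        keytool -genkey -alias ssl -keyalg RSA -keysize 1024 -validity 365 \
                -keystore "$JRUN_ROOT/lib/mykey" -storepass changeit \
                -dname "CN=localhost, OU=Dev, O=Example, C=US"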

    Read the article

  • 10Gbe sfp+ Cross Over Cable required? Is there such a thing?

    - by dc-patos
    To preface, this is my first experience with 10GbE networking, and I have encountered an issue for which research does not seem to document a solution... I have two servers (an older DL580 G5 and a DL380 G5), each with an HP NC522SFP 10GbE dual SFP+ port adapter. I have purchased copper "passive" direct connect adapter cables (which look like twinax), and they seem to work well when I connect them to the SFP+ ports on my Dell 5524 switch. However, if I directly connect the two servers with the same cable, the link doesn't come up. I am running WS2012 Standard on each server. My intention is to use one of these servers as a home-brew SAN, and I would like to enable multiple 10GbE paths for iSCSI traffic. My question(s): Can I connect the two adapters directly to each other, as I would with slower generations of Ethernet? If I can, do I require a crossover cable, or some other type of SFP+ cable, to do this? My 10GbE SFP+ switch ports are at a premium, but server-to-server connections are doable in small numbers for me, and I would really like the multiple paths this would give me. Is there a simple solution?

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes it as easy as possible to write and test all code on my local machine and sync it with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now, when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server." To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot "c:/wamp/www/ladybug" ServerName ladybug ErrorLog "logs/your_own-error.log" CustomLog "logs/your_own-access.log" common </VirtualHost> <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot "c:/wamp/www" ServerName localhost ErrorLog "logs/localhost-error.log" CustomLog "logs/localhost-access.log" common </VirtualHost> Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
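
    A quick way to see which vhost actually answers for each hostname is to send requests with an explicit Host header; if the DynDNS name matches neither ServerName, Apache falls back to the first vhost in the file (ladybug here), whose directory permissions may then explain the 403. A small sketch, with url.dyndns.info standing in for the real DynDNS hostname:

        curl -s -o /dev/null -w "%{http_code}\n" -H "Host: url.dyndns.info" http://localhost/
        curl -s -o /dev/null -w "%{http_code}\n" -H "Host: ladybug"         http://localhost/
        curl -s -o /dev/null -w "%{http_code}\n" -H "Host: localhost"       http://localhost/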

    Read the article

  • Reliable applicance for routing IT emergency calls (SIP and ISDN)

    - by chiborg
    We have a fairly big IT installation and our IT staff needs to be reachable 24/7. At the moment we have the following setup for "emergency" calls to our IT staff on our main Asterisk box: an incoming emergency number (connected via SIP trunk, with a BRI card in case the SIP trunk goes down). When the number is called during office hours, all the SIP phones of the IT staff ring simultaneously. When the number is called outside office hours, a list of mobile phone numbers is called, one after another, until someone picks up. The list can be changed by the IT staff via a command-line script. The setup works well, but the Asterisk box is heavily used in a call center and has experienced some outages and misconfigurations, each of which brought down the IT emergency number. So we'd like to put the IT emergency call functionality on a separate device. This does not need to be a big server; it does not even need to be Asterisk. It has only one purpose and should do it reliably, and it should be very low-maintenance. Any suggestions for hardware and software?

    Read the article

  • "Error: Unknown error" when trying to start virtual machine from VMware server

    - by slhck
    Problem: We are running VMware Server 2.0.0 build-116503 on an Ubuntu 10.04 LTS server. There is a virtual machine installed, running Lotus Domino on Windows Server 2003. Ever since a sudden power failure last week, the virtual machine won't start up properly. When I run the command: vmrun -T server -h https://127.0.0.1:8333/sdk -u root -p jk2x2208 start "[standard] lotus/test.vmx" … after 30 seconds it displays: Error: Unknown error That's about everything I get. I know the command is right, since that's what we've used all along. This happened last Saturday after a scheduled backup shutdown, and somehow I was able to start it again. This week it happened again, and I can't get it back up. Occasionally I also get: Error: Cannot connect to the virtual machine When I get this and then run the start command, it seemingly works. Why is this so random? Which configuration could have been messed up? What I've tried / other info: I already shut down VMware itself with /etc/init.d/vmware stop. This works. I tried to start VMware again with /etc/init.d/vmware start. It complains that it's "not configured", which is why I had to rm /etc/vmware/not_configured and then try to start again. There have been no software updates on the machine, and no configuration changes.
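
    One hedged thing to check after a power failure is stale .lck lock directories left next to the .vmx and .vmdk files; VMware Server refuses to start a VM it believes is still in use. The datastore path below is an assumption for the "[standard]" datastore on Linux; confirm it in VI Web Access, and only move anything aside while the VM is definitely powered off:

        find "/var/lib/vmware/Virtual Machines/lotus" -maxdepth 2 -name '*.lck' -print
        # if any show up and the VM is off, move them out of the way and retry the vmrun start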

    Read the article

  • Allow incoming connections on Windows Server 2008 R2

    - by Richard-MX
    Good day, people. First, I'm new to Windows Server. I've always used the Linux/Apache combo, but my client has an AWS EC2 Windows Server 2008 R2 instance and he wants everything in there. I'm working with IIS and PHP enabled as FastCGI and everything is working, but I can't see the websites stored on it from the internet. The public DNS that AWS gave us for that instance is: http://ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com/ But if I copy and paste that address, I get nothing: no IIS logo or anything like that. My common sense tells me that the firewall could be blocking access. Can anyone tell me where to enable some rules to get this working? I don't want to start enabling rules at random and make the system insecure. If you need any additional info, ask and I will provide it. Thanks in advance. UPDATE: Amazon EC2 displays this: Public DNS: ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com Private DNS: ip-XX-XXX-XX-252.us-west-2.compute.internal Private IPs: XX.XXX.XX.25 In my test micro instance, I just use the Public DNS address (the one that starts with "ec2") and it works like a charm (of course, the micro instance has its own Public DNS; I'm not assuming the same address for both instances). However, for the large instance, I tried to do the same and set up everything as in the micro instance, but if I use the Public DNS, it doesn't load anything. I'm suspicious of the Windows Firewall, but the HTTP-related stuff is enabled. What should I do to get access to the large instance? I don't want to set up the domain yet; I want access from an Amazon URL. 2ND EDIT: All fixed. Charles pointed out that the Security Groups might not be properly set up for the instance. He was right: I just added the HTTP service to the rules and everything works now.
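
    For reference, the equivalent of that console change with the current AWS CLI looks roughly like the following; the security group ID is a placeholder for whichever group is attached to the large instance:

        aws ec2 authorize-security-group-ingress \
            --group-id sg-0123456789abcdef0 \
            --protocol tcp --port 80 --cidr 0.0.0.0/0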

    Read the article

  • Problem restoring from tar backup: why are there /dev/disk/by-id/ symlinks and how can I avoid them?

    - by SK.
    Hello, I'm trying to make a bare-bones backup system with the most basic tools available on openSUSE 11.3 (in this case: bash, fdisk, tar & grub legacy). Here's the workflow for my scripts: backup.sh (run from an external system, e.g. a LiveCD): make an fdisk script ($fscript) from fdisk -l's output [works], mount the partitions from the system's fstab [works], tar the crucial stuff into file.tgz [works]. restore.sh (run from an external system, e.g. a LiveCD): run fdisk $dest < $fscript to restore partitioning [works], format and mount partitions from the system's fstab [fails], extract from file.tgz [works when mounting manually], restore grub [fails]. I have recently noticed that openSUSE (though I'm sure it has nothing to do with the distro) uses different names in /etc/fstab and /boot/grub/menu.lst; more precisely, the partition name is, for example, "/dev/disk/by-id/numbers-brandname-morenumbers-part2" instead of "/dev/sda2" -- but it is basically a simple symlink. My questions about this: what is the point of such symlinks, especially if we're restoring on a different disk? Is there a way to cleanly prevent the creation of those symlinks and use the "true" /dev/sdx everywhere instead? If the previous answer is no, do you know a way to replace those symlinks on the fly in a text file? I tried this script, but it only works if the line starts with the symlink path (the case for fstab, not menu.lst): ### search and replace /dev/disk/by-id/... to /dev/sdx while read oldVolume rest; do # get first element, ignore rest of line if [[ "$oldVolume" =~ ^/dev/disk/by-id/.*(-part[0-9]*$)? ]]; then newVolume=$(readlink $oldVolume) # replace pointer by pointee, returns "../../sdx" echo /dev/${newVolume##*/} $rest >> TMP # format to "/dev/sdx", write line else echo $oldVolume $rest >> TMP # nothing to do fi done < $file mv -f TMP $file # save changes I've had trouble finding a solution to this on Google, so I was hoping some of the members here could help me. Thank you.
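
    On the third question, a hedged alternative to the script above that rewrites by-id paths wherever they appear on a line (so it also covers menu.lst): collect the unique symlinks, resolve each one, and substitute in place. $file is whichever of the two files is being processed, and this assumes the disks are attached so the by-id links resolve:

        file=/mnt/target/boot/grub/menu.lst
        grep -o '/dev/disk/by-id/[^ ]*' "$file" | sort -u | while read -r link; do
            dev=$(readlink -f "$link")                        # e.g. /dev/sda2
            [ -n "$dev" ] && sed -i "s|$link|$dev|g" "$file"  # replace every occurrence on every line
        done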

    Read the article

  • How to choose the most optimal RAID settings on PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k 150GB Cheetah SAS drives in them. They are going to be VM hosts: CentOS running ESXi with Windows Server 2k8 guests. Some guests will be hosting IIS servers, others MSSQL servers. I am trying to set the RAID virtual disk settings and can't decide which is more optimal in this situation. Read Policy: of Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead, the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (let's say 30 GB from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Then again, Adaptive Read-Ahead might be the better compromise, but I don't know much about this option; how does it compare in performance to the others? Write Policy: write-back caching or write-through caching; the default is write-back caching. Write-through caching is safer than write-back caching, but at a performance expense. My thinking here is that in the event of power loss, for example, it seems more likely in my head (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so I should favour write-through? I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.

    Read the article

  • My Boot order changed. Why?

    - by Chris
    I have a laptop running Windows XP SP3 with one internal hard drive partitioned into C: (system) and D: (storage), and I have an external hard drive, F: (external drive). Yesterday the machine was running fine. Today I came back to it and saw that it was just showing a blinking cursor. I checked through the BIOS and the hard drive checked out fine. I CTRL-ALT-DELETED the machine a few times, but I was never able to boot back into the operating system. I threw in a live CD and found out that the boot order of the drives had changed. The external drive is now C:, the system partition is D:, and the storage partition is E:. Does anyone have any idea how or why this would have occurred? Automatic system updates are turned off, so there should have been no automatic reboot of the system overnight, and the anti-virus running on the machine had found no infections before this occurred. Edit: When I was looking through the BIOS of the machine, I did see that the boot order had changed. But the same question remains: what would have caused this to happen? I can't believe that a random reboot happened and totally changed my hard drive setup.

    Read the article

  • Disable the use of Internet Explorer through policies when called from HTML Help

    - by Stephane
    Hello, I have a locked-down environment where users are prohibited from doing, well, basically anything except running the specific programs we specify. We just switched a program from using the venerable "WinHELP" help format to HTML Help (CHM), but that seems to have an unwanted and rather dangerous side effect: when a user clicks a hyperlink inside the HTML Help, a new Internet Explorer window is opened and the user is free to browse and do terrible things to my server (well, not that much, but still...). I have checked the session in this case and the IE window is actually hosted within the help engine: there is no iexplore.exe process running in the user session (and there cannot be: it's explicitly prohibited). We have disabled all help for now, until we find a solution. I'm working with the help team to have all external URLs removed from the help file, but that is going to be a long and error-prone task. Meanwhile, I've checked all the group policy options, but I have to say I was unable to find anything that would prevent a standalone IE window hosted in a random process from running. I don't want to disable WinHTTP or the IE rendering engine or anything of the sort, but I need to prevent all users who are members of a specific AD group from ever having an IE window displayed to them. The servers are running Windows 2003 and Citrix MetaFrame 4.5. Thanks in advance

    Read the article

  • Dell 2970 - HP 1/8 G2 autoloader keeps falling off LSI 2032 SCSI chain

    - by middaparka
    I've a somewhat irritating problem with a Dell 2970 that has an HP 1/8 G2 autoloader (the Ultrium LTO 2 model) attached to the Dell/LSI 2032 non-RAID SCSI card. In essence, sometimes the autoloader/drive completely fails to appear on the SCSI chain (i.e. there's neither a media changer nor a tape drive present in Device Manager), and sometimes it appears but then subsequently disappears at a seemingly random (yet always inconvenient) time, resulting in backup failures. On most occasions there are simply no errors logged in the system event log, but I did manage to capture a series of LSI_SCSI event ID 11 ("The driver detected a controller error on \Device\RaidPort0") errors followed by an event ID 129 ("Reset to device, \Device\RaidPort0, was issued") error during testing. I've tried two different cables, both with the same effect: sometimes the autoloader appears (for a while), sometimes it's completely absent. There's only one terminator I've been able to try, but as I've since successfully tested the autoloader on multiple occasions (albeit via an Adaptec U160 card on a different machine), my gut feeling is that the issue doesn't lie with the terminator, or indeed the autoloader itself. As such, I'm just wondering if anyone has any ideas. It's most likely not relevant, but this is all under Windows SBS 2008, running Backup Exec 12.5 SBS edition (the Dell version), both fully patched. Additionally, the autoloader is running the latest firmware. It's been a while since I've dealt with anything SCSI, so all suggestions will be gratefully received.

    Read the article

  • how to back up data from a machine that keeps hanging

    - by Amit Phatarphekar
    Hello - I have a storage server running OpenSolaris, but lately it's been acting up: it hangs at random times with some SCSI/ATA-related error messages. I've tried to fix it without any progress, so I'm giving up now. The machine keeps hanging every 30 minutes or 1 hour, sometimes only after 4 hours; it's very unpredictable. So I've decided to just reformat the storage server and start from scratch. Maybe I'll not use Solaris and will install something else, since the errors are related to Solaris running on ATA HDDs or something. Question: before I reformat it, I want to back up some of the important data on it. For example, it has a VM with 200 GB of disk files, a whole bunch of ISOs stored on it, etc. I'm using a simple scp to copy the files over to a different machine. My issue is that, because the machine hangs, sometimes my file copy is incomplete and I have to start all over again. Let's say I'm trying to copy a 200 GB file that takes about 4 hours; if the machine hangs before the whole file is copied over, I have to recopy the file from scratch. Is there a solution for copying the files such that, if the machine hangs or the network goes down, the copy can resume from where it left off? For example, if 50 GB of a 200 GB file was copied and the machine hung, next time it would just continue copying the rest instead of starting all over again. Thanks Amit
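
    rsync can do this: with --partial it keeps whatever portion of a file has already arrived, so re-running the same command after a hang resumes the big files rather than starting them from zero. A minimal sketch, assuming rsync is available on the OpenSolaris box and with host and paths as placeholders:

        # re-run (or loop) the same command after every hang until it completes cleanly
        until rsync -av --partial --progress --timeout=120 \
              amit@storage-server:/tank/vms/ /backup/vms/; do
            echo "transfer interrupted, retrying in 60s"; sleep 60
        done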

    Read the article

  • Bandwidth Suggestion

    - by Campo
    I have been asked to analyze the bandwidth usage of a company and make a recommendation for upgrading their Internet connection(s). Here is the layout 3 DLS lines so it is 3x(6 Down, 1 Up Each) into a load balancer out to the office's network. 30 VOIP phones run on a T1 (1.5 Down, 1.5 Up) The users at the company are heavily uploading. It is my suspicion that the issue in slowdown is being cause by multiple people uploading and others not being able to get requests out for even simple http requests. My initial idea is to get them a fiber line with a 10 down and 10 up. What do others think on this plan? Will that be enough to host their network traffic? What do I do about the VOIP line afterward? The fiber is expensive and I know the T1 does a great job for their VOIP so I do not want to suggest a DSL line because I know it may not be sufficient. I would also like to save them some money if I can. Maybe even get a faster fiber line and forgo the T1. Though I know their load balance/switch can only handle 20MB/S throughput. Looking for some confirmation/suggestions on my plan. I am planning on going in to get some real diagnostic numbers. Any suggestions on software to use for that? Preferably Windows software.

    Read the article
