Search Results

Search found 3393 results on 136 pages for 'friend of george'.

Page 18/136 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Active Directory Time Synchronisation - Time-Service Event ID 50

    - by George
    I have an Active Directory domain with two DCs. The first DC in the forest/domain is Server 2012, the second is 2008 R2. The first DC holds the PDC Emulator role. I sporadically receive a warning from the Time-Service source, event ID 50: The time service detected a time difference of greater than %1 milliseconds for %2 seconds. The time difference might be caused by synchronization with low-accuracy time sources or by suboptimal network conditions. The time service is no longer synchronized and cannot provide the time to other clients or update the system clock. When a valid time stamp is received from a time service provider, the time service will correct itself. Time sync in the domain is configured with the second DC set to synchronise using the /syncfromflags:DOMHIER flag. The first DC is configured to sync time using /syncfromflags:MANUAL /reliable:YES, from a peer list consisting of a number of UK-based stratum 2 servers, such as ntp2d.mcc.ac.uk. I'm confused why I receive this event warning. It implies that my PDC emulator cannot synchronise time with a supposedly reliable external time source, and it quotes a time difference of 5 seconds for 900 seconds. It's worth also mentioning that I used to use a UK pool from ntp.org, but I would receive the warning much more often. Since switching to a number of UK-based academic time servers, it seems to be more reliable. Can someone with more experience shed some light on this - perhaps it is purely transient? Should I disregard the warning? Is my configuration sound?
    EDIT: I should add that the DCs are virtual, and installed on two separate VMware ESXi/vSphere physical hosts. I can also confirm that, as per MDMarra's comment and best practice, VMware timesync is disabled, since c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe timesync status returns Disabled.
    EDIT 2: Some strange new issue has cropped up, and I've noticed a pattern. Originally, the event ID 50 warnings would occur at about 12:30pm each day. This is interesting since our Veeam backup happens at 12 midday. Since I made the changes discussed here, I now receive an event ID 51 instead of 50. The new warning says that the time sample received from peer server.ac.uk differs from the local time by -40 seconds (or approximately 40 seconds). This has happened two days in a row. Now I'm even more confused. Obviously the time never updates until I manually intervene. The issue seems to be related to virtualisation and Veeam; something may be occurring when Veeam is backing up the PDCe. Any suggestions?
    UPDATE & SUMMARY: msemack's excellent list of resources below (the accepted answer) provided enough information to correctly configure the time service in the domain. This should be the first port of call for anyone looking to verify their configuration. The final "40 second jump" issue I have resolved (there are no more warnings) by adjusting the VMware time sync settings as noted in the Veeam knowledge base article here: http://www.veeam.com/kb1202. In any case, whether a future reader uses ESXi and Veeam or not, the resources here are an excellent source of information on the time sync topic and msemack's answer is particularly invaluable.
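    For reference, a minimal w32tm sketch of how a topology like the one described above is typically configured and verified from an elevated prompt (the peer name is taken from the question; substitute your own NTP servers) - this is illustrative, not necessarily what the poster ran:
        rem On the PDC emulator: sync from an explicit external peer list
        w32tm /config /manualpeerlist:"ntp2d.mcc.ac.uk,0x8" /syncfromflags:manual /reliable:yes /update
        rem On all other DCs and members: follow the domain hierarchy
        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        rem Verify the source and offset, and spot-check the external peer directly
        w32tm /query /status
        w32tm /stripchart /computer:ntp2d.mcc.ac.uk /samples:5 /dataonly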

    Read the article

  • One domain hiding two servers

    - by George DSeas
    For our SaaS web app we have two identical servers in two geographically separated data centers. FOO_1 is the production server and does real-time (MySQL master-slave) replication to its backup FOO_2. We want our users to always go to THEFOO.COM, which somehow points to the production server. So even if FOO_1 dies, we can just switch THEFOO.COM to redirect to FOO_2 so the failure is transparent. This switch can be manual or automatic, but without failback (if FOO_1 somehow becomes available again). Is there a way to do this with DNS? I am getting stuck with ANAME and CNAME configuration. We don't use sub-domains, just straight domains. If not, what are other options? Does it make sense to just have a web server at LOVELY_FOO.COM and redirect all traffic? I also looked at load balancers but didn't see a solution for across data centers/network providers.
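    One workable pattern, sketched under the assumption that THEFOO.COM is the zone apex: a CNAME cannot be used at the apex, so keep a plain A record pointing at FOO_1 with a short TTL (say 60 seconds) and repoint it to FOO_2 on failure, either by hand or via a DNS provider's health-check/failover feature. The commands below only verify the record and its TTL; 8.8.8.8 is just an arbitrary outside resolver:
        dig +noall +answer thefoo.com A
        dig +noall +answer thefoo.com A @8.8.8.8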

    Read the article

  • How to get Windows 7 to automatically connect to an ad-hoc network?

    - by George Edison
    I have two machines - one is running Ubuntu 12.04 64-bit and the other is running Windows 7 Starter Edition (32-bit). The Ubuntu machine is connected to the Internet via the eth0 interface. That machine also has a wireless network interface (wlan0) that is currently functioning as an ad-hoc network. I can connect to the ad-hoc network just fine with the Windows machine but each time I wish to do so, I must manually initiate the connection and enter the password. Is there some way to instruct Windows to automatically connect to this network (an option I have for standard wireless networks but not ad-hoc networks)?
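    Windows 7 has no built-in auto-connect option for ad-hoc networks, so a workaround people use is a scheduled task that runs netsh at logon. A rough sketch, assuming the ad-hoc profile has been saved under the hypothetical name MyAdhocNet (if Windows discards the profile after disconnecting, export it once with netsh wlan export profile and re-add it with netsh wlan add profile):
        netsh wlan show profiles
        netsh wlan connect name="MyAdhocNet"
        schtasks /create /tn "ConnectAdhoc" /tr "netsh wlan connect name=MyAdhocNet" /sc onlogon /rl highest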

    Read the article

  • iPhone image asset recommended resolution/dpi/format

    - by Matthew
    I'm learning iPhone development and a friend will be doing the graphics/animation. I'll be using cocos2d most likely (if that matters). My friend wants to get started on the graphics, and I don't know what image resolution or dpi or formats are recommended. This probably depends on if something is a background vs. a small character. Also, I know I read something about using @2x in image file names to support high res iphone screens. Does cocos2d prefer a different way? Or is this not something to worry about at this point? What should I know before they start working on the graphics?

    Read the article

  • How do I check free disk space on a web hosting server?

    - by Abu
    I have got some work from my friend: updating his website. His website was originally made by another person who used to maintain everything, and that developer has given my friend only the FTP username and password. My friend has asked me to update his website, but the problem is that I don't know how to access things for this particular web hosting account, like checking the available free space, accessing the email accounts, etc. I asked him about the website control panel, but he says he doesn't know about it. Is there any other site/client program/control panel that I can use to manage that website? Can anyone help me out?

    Read the article

  • Unable to start auditd

    - by George Reith
    I am on CentOS 5.8 Final. I recently installed auditd via yum install audit; however, I am unable to start it. I edited the configuration file to give a verbose output of the error it receives in starting up, and this is the output:
        # service auditd start
        Starting auditd: Config file /etc/audit/auditd.conf opened for parsing
        log_file_parser called with: /var/log/audit/audit.log
        log_format_parser called with: RAW
        log_group_parser called with: root
        priority_boost_parser called with: 4
        flush_parser called with: INCREMENTAL
        freq_parser called with: 20
        num_logs_parser called with: 4
        qos_parser called with: lossy
        dispatch_parser called with: /sbin/audispd
        name_format_parser called with: NONE
        max_log_size_parser called with: 5
        max_log_size_action_parser called with: ROTATE
        space_left_parser called with: 75
        space_action_parser called with: SYSLOG
        action_mail_acct_parser called with: root
        admin_space_left_parser called with: 50
        admin_space_left_action_parser called with: SUSPEND
        disk_full_action_parser called with: SUSPEND
        disk_error_action_parser called with: SUSPEND
        tcp_listen_queue_parser called with: 5
        tcp_max_per_addr_parser called with: 1
        tcp_client_max_idle_parser called with: 0
        enable_krb5_parser called with: no
        GSSAPI support is not enabled, ignoring value at line 30
        krb5_principal_parser called with: auditd
        GSSAPI support is not enabled, ignoring value at line 31
        Started dispatcher: /sbin/audispd pid: 3097
        type=DAEMON_START msg=audit(1339336882.187:9205): auditd start, ver=1.8 format=raw kernel=2.6.32-042stab056.8 auid=4294967295 pid=3095 res=success
        config_manager init complete
        Error setting audit daemon pid (Connection refused)
        type=DAEMON_ABORT msg=audit(1339336882.189:9206): auditd error halt, auid=4294967295 pid=3095 res=failed
        Unable to set audit pid, exiting
        The audit daemon is exiting.
        Error setting audit daemon pid (Connection refused)
        [FAILED]
    The only information I can find online is that this may be due to SELinux; however, SELinux is giving me problems of its own. No matter what I do it appears to be disabled (I want to enable it). The configuration is set to enforcing and the server has been rebooted many a time, yet sestatus still returns SELinux status: disabled. Can anyone shine some light on this problem?
    EDIT: I don't know if it is related, but I noticed the following messages appearing in my /var/log/messages:
        Jun 10 16:25:22 s1 iscsid: iSCSI logger with pid=2056 started!
        Jun 10 16:25:22 s1 iscsid: Missing or Invalid version from /sys/module/scsi_transport_iscsi/version. Make sure a up to date scsi_transport_iscsi module is loaded and a up todate version of iscsid is running. Exiting...
    I tried to start the iSCSI daemon myself (I have not a clue what it does; I am a Linux newbie) and I get the following error:
        Starting iSCSI daemon: FATAL: Could not load /lib/modules/2.6.32-042stab056.8/modules.dep: No such file or directory
        FATAL: Could not load /lib/modules/2.6.32-042stab056.8/modules.dep: No such file or directory
        FATAL: Could not load /lib/modules/2.6.32-042stab056.8/modules.dep: No such file or directory
        FATAL: Could not load /lib/modules/2.6.32-042stab056.8/modules.dep: No such file or directory
        FATAL: Could not load /lib/modules/2.6.32-042stab056.8/modules.dep: No such file or directory
        [FAILED]
    If I go to /lib/modules/ I notice the directory exists but is completely empty.
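    A few checks worth running before blaming SELinux: the 042stab kernel string suggests an OpenVZ/Virtuozzo container, and inside a container /lib/modules is normally empty (the host kernel supplies everything), SELinux cannot be enabled from inside the container, and the kernel audit netlink interface is frequently unavailable to containers, which would match the "Connection refused" when auditd tries to register its PID. This is a diagnostic sketch only:
        uname -r                                          # 042stab* kernels are OpenVZ/Virtuozzo builds
        ls /proc/vz /proc/user_beancounters 2>/dev/null   # present inside OpenVZ containers
        auditctl -s                                       # asks the kernel for audit status; fails if audit is unavailable
        sestatus                                          # SELinux state is dictated by the host kernel in a container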

    Read the article

  • How do I import a Targa sequence as a composition in After Effects CS4?

    - by George Profenza
    I am a complete n00b with After Effects, but I want to achieve something basic. I have a sequence of TARGA (.tga) files named alphanumerically (name_0000, where 0000 is the frame). I want to import those as a composition; what I'm after is having each .tga file sitting in the timeline in sequence. I tried File > Import > File, selected the first file, then selected the last while holding Shift, and ticked Sequence, but I cannot select Composition from that dialog, only Footage, and I don't know why. Hints?

    Read the article

  • Slow Citrix connection related to mapped network drives

    - by George
    I have this weird issue with Citrix being slow and maybe users just being a little dramatic, but I am curious as to why that happens. Let me give you a little bit of a background. Citrix is running off of Windows 2003 server, TSprofiles and file server were located on the same server, until recently. We have moved our file server over to a new server with tons of space. We have Citrix on one server, TSprofiles on another and file server on third. We are using logon scripts to map home drives, shared drive and etc. Now, up until we made the file server move, the logon process took several seconds and most users couldn't even notice logon script being executed as they logon. Now, it takes upwards of several minutes and users can see logon script being executed at a slow pace, one line at a time. The only new variable in this whole scenario is the new file server. All the servers are physically located in the same location and on the same subnet. So, I guess my question is, if anyone can explain why a sudden sluggishness? And any tools I can use to troubleshoot the issue? Thanks!
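    A quick way to narrow this down is to time each mapping from the Citrix server itself; if one net use line stalls, the problem is usually name resolution or connectivity to the new file server rather than Citrix. A diagnostic sketch with hypothetical server and share names:
        @echo %time%
        net use H: \\newfileserver\home$\%username% /persistent:no
        @echo %time%
        nslookup newfileserver
        ping -n 4 newfileserver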

    Read the article

  • Bypass IIS Basic Authentication for localhost

    - by George
    I'd like to have a website authenticated with basic auth, but then also allow the website to access itself locally. That is, I want to allow unauthenticated access only from localhost. In IIS I have only basic authentication enabled (not worrying about SSL for now), and I have the correct file system permissions such that outside users can login successfully and view the website. I have tried setting IIS_IUSR as owner of the directory, and added IUSR with modify permissions, however I'm still getting a 401 error when the website tries to access itself. Anyone have any idea how to get this to work?
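    IIS has no "skip Basic authentication for 127.0.0.1" switch, so one workaround is a second site (or binding) that listens only on the loopback address with anonymous authentication enabled, while the public site keeps Basic auth. The site name, port and path below are hypothetical, and appcmd lives in %windir%\system32\inetsrv:
        appcmd add site /name:"MySiteLocal" /bindings:http/127.0.0.1:8080: /physicalPath:"C:\inetpub\wwwroot\mysite"
        appcmd set config "MySiteLocal" -section:system.webServer/security/authentication/anonymousAuthentication /enabled:true /commit:apphost
        appcmd set config "MySiteLocal" -section:system.webServer/security/authentication/basicAuthentication /enabled:false /commit:apphost
    The website would then call itself at http://127.0.0.1:8080/ instead of its public name.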

    Read the article

  • Mac OS X software always orders files alphabetically rather than by type

    - by george
    I have noticed many Mac applications sort files alphabetically rather than by type. A good example would be Coda by panic.com: the files in the file menu are organized alphabetically. I requested that they add the feature to organize files by type, and they've said that it's a Finder thing. So I looked at other applications to see if they were organizing by type. I noticed Dreamweaver CS4 had this same problem, and now Dreamweaver CS5 does as well. There has to be something in the Mac that does this and that I can modify. I played with Spotlight and it now displays its files by type (thinking that's what I could do), but it didn't take effect in other applications. What library are these applications using to display a file menu for their files? Here is an example: the file menu layout of Coda by panic.com (I couldn't post another link because it wouldn't let me). Can you see how everything is organised alphabetically rather than by folder? I just want the file menu to show all folders first, then all the files. 1) http://www.iaddesign.com/coda.png There must be a way to modify the Mac to let me do this.

    Read the article

  • Amanda Todd – What Parents Can Learn From Her Story

    - by D'Arcy Lussier
    Amanda Todd was a bullied teenager who committed suicide this week. Her story has become headline news due in part to the YouTube video she posted telling her story. The story is heartbreaking for so many reasons, but I wanted to talk about what we as parents can learn from this. Being the dad to two girls, one that's 10, I'm very aware of the dangers that the internet holds. When I saw her story, one thing jumped out at me – unmonitored internet access at an early age. My daughter (then 9) came home from a friend's place once and asked if she could be in a YouTube video with her friend. Apparently this friend was allowed to do whatever she wanted on the internet, including posting goofy videos. This set off warning bells and we ensured our daughter realized the dangers and that she was not to ever post videos of herself. In looking at Amanda's story, the access to unmonitored internet time along with just being a young girl and being flattered by an online predator were the key events that ultimately led to her suicide. Yes, the reaction of her classmates and "friends" was horrible as well; I'm not diluting that. But our youth don't fully understand yet that what they do on the internet today will follow them potentially forever, and the people they meet online aren't necessarily who they claim to be. So what can we as parents learn from Amanda's story?
    Parents Shouldn't Feel Bad About Being Internet Police
    Our job as parents is in part to protect our kids and keep them safe, even if they don't like our measures. This includes monitoring, supervising, and restricting their internet activities. In our house we have a family computer in the living room where the kids can watch videos and surf the web. It's in plain view of everyone, so you can't hide what you're looking at. If our daughter goes to a friend's place, we ask about what they did and what they played. If the computer comes up, we ask about what they did on it. Luckily our daughter is very up front and honest in telling us things, so we have very open discussions.
    Parents Need to Be Honest About the Dangers of the Internet
    I'm sure every generation says that "kids grow up so fast these days", but in our case the internet really does push our kids to be exposed to things they otherwise wouldn't experience. One wrong word in a Google search, a click of a link in a spam email, or just general curiosity can expose a child to things they aren't ready for or should never be exposed to (and I'm not just talking about adult material – have you seen some of the graphic pictures from war zones posted on news sites recently?). Our stance as parents has been to be open about discussing the dangers with our kids before they encounter any content – be proactive instead of reactionary. Part of this is alerting them to the monsters that lurk on the internet as well. As kids explore the world wide web, they're eventually going to encounter some chat room or some Facebook friend invite or other personal connection with someone. More than ever kids need to be educated on the dangers of engaging with people online and sharing personal information. You can think of it as an evolved discussion that our parents had with us about using the phone: "Don't say 'I'm home alone', don't say when mom or dad get home, don't tell them any information, etc."
    Parents Need to Talk Self Worth at Home
    Katie makes the point better than I ever could (one bad word towards the end): our children need to understand their value beyond what the latest issue of TigerBeat says, or the media who continues flaunting physical attributes over intelligence and character, or a society that puts focus on status and wealth. They also have to realize that just because someone pays you a compliment, that doesn't mean you should ignore personal boundaries and limits. What does this have to do with the internet? Well, in days past if you wanted to be social you had to go out somewhere. Now you can video chat with any number of people from the comfort of wherever your laptop happens to be – and not just text but full HD video with sound! While innocent children head online in the hopes of meeting cool people, predators with bad intentions are heading online too. As much as we try to monitor their online activity and be honest about the dangers of the internet, the human side of our kids isn't something we can control. But we can try to influence them to see themselves as not needing to search out the acceptance of complete strangers online. Way easier said than done, but ensuring self-worth is something discussed, encouraged, and celebrated is a step in the right direction.
    Parental Wake Up Call
    This post is not a critique of Amanda's parents. The reality is that cyber bullying/abuse is happening every day, and there are millions of parents that have no clue it's happening to their children. Amanda's story is a wake up call that our children's online activities may be putting them in danger. My heart goes out to the parents of this girl. As a father of daughters, I can't imagine what I would do if I found my daughter having to hide in a ditch to avoid a mob, or call 911 to report my daughter had attempted suicide by drinking bleach, or deal with a child turning to drugs/alcohol/cutting to cope. It would be horrendous if we as parents didn't re-evaluate our family internet policies in light of this event. And in the end, Amanda's video was meant to bring attention to her plight and encourage others going through the same thing. We may not be kids, but we can still honour her memory by helping safeguard our children.

    Read the article

  • VMware Workstation downloads as a txt file?

    - by George Mauer
    I just went to the vmware website because I want to try workstation over virtualbox. I signed up for a workstation trial and clicked download on the 64bit linux version. What downloaded is a 320 megabyte txt file, VMware-Workstation-Full-8.0.2-591240.x86_64.txt. What gives? Is anyone familiar with this pattern of delivering software? How do I run it? Here is the beginning of that file:
        #!/usr/bin/env bash
        #
        # VMware Installer Launcher
        #
        # This is the executable stub to check if the VMware Installer Service
        # is installed and if so, launch it. If it is not installed, the
        # attached payload is extracted, the VMIS is installed, and the VMIS
        # is launched to install the bundle as normal.
        # Architecture this bundle was built for (x86 or x64)
        ARCH=x64
        if [ -z "$BASH" ]; then
            # $- expands to the current options so things like -x get passed through
            if [ ! -z "$-" ]; then
                opts="-$-"
            fi
            # dash flips out of $opts is quoted, so don't.
            exec /usr/bin/env bash $opts "$0" "$@"
            echo "Unable to restart with bash shell"
            exit 1
        fi
        set -e
        ETCDIR=/etc/vmware-installer
        OLDETCDIR="/etc/vmware"
        ### Offsets ###
        # These are offsets that are later used relative to EOF.
        FOOTER_SIZE=52
        # This won't work with non-GNU stat.
        FILE_SIZE=`stat --format "%s" "$0"`
        offset=$(($FILE_SIZE - 4))
        MAGIC_OFFSET=$offset
        offset=$(($offset - 4))
        CHECKSUM_OFFSET=$offset
        offset=$(($offset - 4))
        VERSION_OFFSET=$offset
        offset=$(($offset - 4))
        PREPAYLOAD_OFFSET=$offset
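    Since the file starts with a bash self-extracting stub, it is almost certainly the normal VMware .bundle installer that was simply saved with a .txt extension; it should run if handed to a shell as root. A sketch using the filename from the question:
        chmod +x VMware-Workstation-Full-8.0.2-591240.x86_64.txt
        sudo sh ./VMware-Workstation-Full-8.0.2-591240.x86_64.txt
    Renaming it to the usual .bundle extension first is optional but avoids confusion later.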

    Read the article

  • Joining computers from a workgroup to Active Directory

    - by George R
    I have several computers at my office that I want to put on our domain. These computers are currently used by employees with local computer accounts, and they have information stored under those accounts. When I join their computers to the domain, how could I keep their current computer accounts and add them to Active Directory so they could log in as usual and access the network resources? Is this possible, or do I need to just start from scratch with their accounts and locally stored files? We are using Windows 2008 R2 and all systems being added have Windows 7 Pro or higher. All I really want to do is add the systems to the domain and have their accounts in Active Directory so they can log in, access files which are already on their computer, and use network resources. If I could add their computer and then add the same username and password to Active Directory to get this all to work, that would be fine. I am just looking for minimal impact on the user to get this done.
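    Joining the machine itself is straightforward; the catch is that a new domain logon gets a new, empty profile, because local profiles are not automatically attached to domain accounts even when the usernames match (tools such as ForensiT's User Profile Wizard are commonly used to migrate the old profile to the new domain account). A sketch of the join step with a hypothetical domain and admin account, run from each Windows 7 machine:
        rem netdom ships with RSAT on Windows 7; the System Properties dialog achieves the same join
        netdom join %COMPUTERNAME% /domain:corp.example.local /userd:CORP\djoinadmin /passwordd:*
        shutdown /r /t 0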

    Read the article

  • How do I install Windows 7 from the network?

    - by George
    Hello SuperUser. My question is: is it possible to install Windows 7 (the current RTM version) on a computer without using removable media like a DVD or USB stick? The first thing that comes to mind is the network, but I don't have experience doing a fresh install of Windows 7 over the network. How do I install Windows 7 via the network without any removable media? P.S. I know some may think that doing so is just a waste of time and that it's easier to do with removable media, but in the current situation the target PC neither has a CD/DVD drive nor supports booting from USB. In addition, the target computer is connected to the network via wireless (I don't know if that will cause any problem with the installation).
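    With no optical drive, no USB boot and only a wireless NIC (which rules out a normal PXE/WDS boot), the usual fallback is to start the installer from inside the operating system already on the machine: share the contents of the Windows 7 ISO from another PC and run setup.exe across the network, choosing a Custom (clean) install. A sketch with hypothetical share and account names:
        net use Z: \\otherpc\win7media /user:otherpc\installer
        Z:\setup.exe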

    Read the article

  • Windows XP laptop doesn't appear in WSUS All computers list

    - by George
    I have this one laptop that doesn't appear in the WSUS all computers list. We have about 23-25 PCs/laptops/servers in the network, all but one listed in WSUS. This is what I have done so far:
    1) Changed the update settings on the local PC: Go to your Windows XP client and start a new Microsoft Management Console (MMC). At Start, Run, type MMC. Use Ctrl+M to add a new snap-in. Click Add, and then add the Group Policy Object Editor for the Local Computer. Click Close, and then click OK. Expand the Local Computer Policy. Under Computer Configuration, go to Administrative Templates, Windows Components, Windows Update. In the right-hand pane, double-click Specify intranet Microsoft update service location. Configure the settings to reflect my WSUS server. Click OK and then close the MMC without saving it. Executed wuauclt.exe /detectnow.
    2) Edited the registry key to be pushed to the PCs using GPO:
        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "WUServer"=http://wsusserver
        "TargetGroupEnabled"=dword:00000001
        "TargetGroup"="WINXP"
        "WUStatusServer"=http://wsuswerver
    3) Executed wuauclt /resetauthorization /detectnow.
    4) Synchronised and refreshed the group.
    I am running out of ideas here. The client is running Windows XP Pro, the WSUS version is 3.0 and it is running on Windows 2008 R2 64-bit. Please, help! Thanks!
    EDIT 13.IX.2012 @ 15.40: I should have also mentioned that we do have a Windows Update GPO for the workstations group and that laptop is a part of that group.
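    One very common reason a single machine never shows up in WSUS, especially if it was imaged or cloned, is a duplicate SusClientId; in that case it silently reports as another computer. A sketch of the standard reset, run on the laptop itself (if the second value does not exist on XP, that delete simply fails harmlessly):
        net stop wuauserv
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientId /f
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientIdValidation /f
        net start wuauserv
        wuauclt /resetauthorization /detectnow
        wuauclt /reportnow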

    Read the article

  • Access Java based keystore directly on Sun ONE Webserver 6.1

    - by George Bailey
    The keystore seems to reside in one of:
        /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-cert8.db
        /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-key3.db
    What tool would I use to access this file? I have tried these commands, which did not work:
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -certreq -keyalg RSA -file /tmp/test.csr -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-cert8.db
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -certreq -keyalg RSA -file /tmp/test.csr -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-key3.db
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -list -storepass password -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-cert8.db
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -list -storepass password -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-key3.db
    They all gave me the error message: keytool error: java.io.IOException: Invalid keystore format
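    The "Invalid keystore format" error is expected here: cert8.db/key3.db are NSS certificate databases, not Java keystores, so keytool cannot read them. The NSS certutil tool (not the unrelated Windows certutil) is the usual way in; its location varies, on Solaris often /usr/sfw/bin/certutil. A sketch using the alias directory and prefix from the question and a hypothetical subject:
        certutil -L -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname-
        certutil -R -s "CN=sub.domain.ext" -a -o /tmp/test.csr -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname-
    The Sun ONE admin console's certificate request wizard drives the same database, so that is an alternative if certutil is not installed.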

    Read the article

  • Verify server performance

    - by George Kesler
    I'm looking for a quick and SIMPLE way to verify that new servers are performing as expected. The most important metric is disk performance, second is network performance. I’m trying to prevent problems caused by misconfiguration of RAID arrays, NIC teaming etc. The solution should work with both physical and virtual servers. I don’t need sophisticated analysis with different workloads, just one set of benchmarks which I would run against a reference server and later compare to new ones. One problem is that most benchmarks are not giving accurate results when running on a VM.
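    For a repeatable sanity check rather than a full benchmark, a small script run once on the reference server and once on each new server covers the two metrics mentioned. This sketch assumes Linux hosts (or a live CD) and that iperf is installed on both ends; on Windows, tools such as diskspd and ntttcp fill the same roles:
        # sequential disk write/read, bypassing the page cache
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
        dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct
        rm -f /tmp/ddtest
        # network throughput against the reference server (hypothetical hostname)
        iperf -s                      # run on the reference server
        iperf -c refserver -t 30      # run on the server under test
    Comparing the numbers to the reference run makes a missing write cache or a broken NIC team stand out immediately.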

    Read the article

  • How can I get data off of a (probably) very corrupt drive?

    - by Mercury1964
    So, my younger brother wanted my help today. Apparently, he was helping a friend install Ubuntu to his laptop, and midway through the install (on a filled hard-drive with Windows 8, from the liveusb thing): 1) They realized that they had chosen "Erase and install" accidentally and decided the best course of action was to (in the middle of an install, remember) 2) Force shutdown the computer. After I had replaced my eyeballs back into their sockets, my bro asked if I could do anything about his friend's data, which he wanted back. This, however, drifts out of my normal comfort zone. I know this about the install: 1) It was on a fairly new Windows 8 laptop, so it had whatever filesystem it uses nowadays on the entire drive 2) They didn't choose the "zero out drive data" option 3) They stopped the install at some point (not sure if before or after wipe) I can imagine that it now has two corrupted filesystems on it (whatever Windows 8 was on and ext3) and that some of the data still exists (assuming it wasn't overwritten already). Is there anything that can be done for whatever data is left on the drive?
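    Whatever was not yet overwritten may still be recoverable, but only if nothing else is written to the disk; the usual approach is to image the drive first and then run recovery tools against the image. A sketch assuming the laptop drive shows up as /dev/sdb when attached to another Linux machine (the device name is hypothetical - check with lsblk):
        ddrescue /dev/sdb laptop.img laptop.map      # GNU ddrescue: copy what is readable, log the rest
        testdisk laptop.img                          # look for the old NTFS partition/file tree
        photorec laptop.img                          # carve raw files (names and folders are lost)
    TestDisk can sometimes restore the previous partition table outright; PhotoRec is the fallback when the filesystem itself is gone.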

    Read the article

  • Add IPv6 support to DirectAdmin server

    - by George Boot
    I just set up a new DirectAdmin server, and I want to prepare it for IPv6 use. My ISP has given me a range of IPv6 addresses that I can use. Let's say that address is 2a01:7c8:**:1f::. My network adapter uses DHCP to obtain its IP addresses. When I type ifconfig eth0 I get the following result:
        eth0      Link encap:Ethernet  HWaddr 52:**:**:**:ce:f3
                  inet addr:37.**.**.44  Bcast:37.**.**.255  Mask:255.255.255.0
                  inet6 addr: 2a01:7c8:****:1f::/64 Scope:Global
                  inet6 addr: fe80::5054:ff:fe87:cef3/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:38941 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:29439 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:3779534 (3.6 MiB)  TX bytes:5089379 (4.8 MiB)
    As you can see, I have an IPv6 address set, but I can't ping6 an IPv6 host; I get the error connect: Network is unreachable. I decided that I needed a gateway, so I tried to add one: ip -6 route add default via 2a01:7c8:****::1 dev eth0 (2a01:7c8:**::1 is the gateway of my ISP). But it throws an error: RTNETLINK answers: No route to host. Does somebody know what to do and how to solve this issue? Thanks a lot!
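    RTNETLINK answers: No route to host usually means the gateway address is not covered by any on-link prefix on eth0 (it sits outside the configured /64), so the kernel refuses the default route. A common two-step workaround, reusing the masked addresses exactly as they appear in the question, is to make the gateway reachable on-link first and then add the default route:
        ip -6 route add 2a01:7c8:****::1 dev eth0
        ip -6 route add default via 2a01:7c8:****::1 dev eth0
        ping6 -c 3 2001:4860:4860::8888    # quick reachability test (Google public DNS)
    If that works, the same routes can be made persistent via the ifcfg-eth0 and route6-eth0 files under /etc/sysconfig/network-scripts.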

    Read the article

  • IOGEAR KVM cannot reset admin account

    - by George Horlacher
    According to the manual you can do a cold reset by holding the reset button in for 3 seconds. This does seem to reset it, but the username and password are not reset. I've tried the keyboard hotkeys - Num Lock and minus sign (-) - to get into hotkey mode and then used the "r" and "" to reset to factory defaults, but that doesn't seem to do anything either; it just reverts to User1. I'm not sure what the default admin username should be, but admin doesn't seem to work. I need to get to the admin menu to name the ports but cannot get access. Any ideas? I'm using the GCS1758 manual with no luck.

    Read the article

  • Varnish, hide port number

    - by George Reith
    My set up is as follows:
    OS: CentOS 6.2 running on an OpenVZ virtual machine.
    Web server: Nginx listening on port 8080.
    Reverse proxy: Varnish listening on port 80.
    The problem is that Varnish redirects my requests to port 8080 and this appears in the address bar like so: http://mysite.com:8080/directory/, causing relative links on the site to include the port number (8080) in the request and thus bypassing Varnish. The site is powered by WordPress. How do I allow Varnish to use Nginx as the backend on port 8080 without appending the port number to the address?
    Edit: Varnish is set up like so. I have told the Varnish daemon to listen to port 80 by default:
        VARNISH_VCL_CONF=/etc/varnish/default.vcl
        #
        # # Default address and port to bind to
        # # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
        # # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
        VARNISH_LISTEN_ADDRESS=
        VARNISH_LISTEN_PORT=80
        #
        # # Telnet admin interface listen address and port
        VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
        VARNISH_ADMIN_LISTEN_PORT=6082
        #
        # # Shared secret file for admin interface
        VARNISH_SECRET_FILE=/etc/varnish/secret
        #
        # # The minimum number of worker threads to start
        VARNISH_MIN_THREADS=1
        #
        # # The Maximum number of worker threads to start
        VARNISH_MAX_THREADS=1000
        #
        # # Idle timeout for worker threads
        VARNISH_THREAD_TIMEOUT=120
        #
        # # Cache file location
        VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
        #
        # # Cache file size: in bytes, optionally using k / M / G / T suffix,
        # # or in percentage of available disk space using the % suffix.
        VARNISH_STORAGE_SIZE=1G
        #
        # # Backend storage specification
        VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
        #
        # # Default TTL used when the backend does not specify one
        VARNISH_TTL=120
    The VCL file that Varnish calls (through an include in default.vcl) consists of:
        backend playwithbits {
            .host = "127.0.0.1";
            .port = "8080";
        }
        acl purge {
            "127.0.0.1";
        }
        sub vcl_recv {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                set req.backend = playwithbits;
                set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
                if (req.request == "PURGE") {
                    if (!client.ip ~ purge) {
                        error 405 "Not allowed.";
                    }
                    return(lookup);
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }
        sub vcl_hit {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    set obj.ttl = 0s;
                    error 200 "Purged.";
                }
            }
        }
        sub vcl_miss {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    error 404 "Not in cache.";
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset req.http.cookie;
                }
                if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") {
                    unset req.http.cookie;
                    set req.url = regsub(req.url, "\?.$", "");
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }
        sub vcl_fetch {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.url ~ "^/$") {
                    unset beresp.http.set-cookie;
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset beresp.http.set-cookie;
                }
            }
        }
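    The VCL above already strips the port from the Host header, so the :8080 that ends up in the address bar most likely comes from the backend issuing a redirect (nginx adding the port when it redirects /directory to /directory/) or from WordPress having :8080 baked into its siteurl/home options. A quick check from the server itself, with the usual fixes noted as comments:
        # Does the backend put :8080 into redirects?
        curl -sI http://127.0.0.1:8080/directory | grep -i '^Location'
        # If so: add "port_in_redirect off;" to the nginx server block listening on 8080,
        # and make sure the WordPress siteurl/home options are http://playwithbits.com with no port.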

    Read the article

  • chrooting user causes "connection closed" message when using sftp

    - by George Reith
    First off I am a linux newbie so please don't assume much knowledge. I am using CentOS 5.8 (final) and OpenSSH version 5.8p1. I have made a user playwithbits and I am attempting to chroot them to the directory /home/nginx/domains/playwithbits/public. I am using the following match statement in my sshd_config file:
        Match group web-root-locked
            ChrootDirectory /home/nginx/domains/%u/public
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand /usr/libexec/openssh/sftp-server
    Running # id playwithbits returns: uid=504(playwithbits) gid=504(playwithbits) groups=504(playwithbits),507(web-root-locked)
    I have changed the user's home directory to /home/nginx/domains/playwithbits/public. Now when I attempt to sftp in with this user I instantly get the message: connection closed. Does anyone know what I am doing wrong?
    Edit: Following advice from @Dennis Williamson I have connected in debug mode (I think... correct me if I'm wrong). I have made a bit of progress by using chmod to recursively set permissions of all files in the directory to 700. Now I get the following messages when I attempt to log on (still connection refused):
        Connection from [My ip address] port 38737
        debug1: Client protocol version 2.0; client software version OpenSSH_5.6
        debug1: match: OpenSSH_5.6 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8
        debug1: permanently_set_uid: 74/74
        debug1: list_hostkey_types: ssh-rsa,ssh-dss
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received
        debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT
        debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: KEX done
        debug1: userauth-request for user playwithbits service ssh-connection method none
        debug1: attempt 0 failures 0
        debug1: user playwithbits matched group list web-root-locked at line 91
        debug1: PAM: initializing for "playwithbits"
        debug1: PAM: setting PAM_RHOST to [My host info]
        debug1: PAM: setting PAM_TTY to "ssh"
        debug1: userauth-request for user playwithbits service ssh-connection method password
        debug1: attempt 1 failures 0
        debug1: PAM: password authentication accepted for playwithbits
        debug1: do_pam_account: called
        Accepted password for playwithbits from [My ip address] port 38737 ssh2
        debug1: monitor_child_preauth: playwithbits has been authenticated by privileged process
        debug1: SELinux support disabled
        debug1: PAM: establishing credentials
        User child is on pid 3942
        debug1: PAM: establishing credentials
        Changed root directory to "/home/nginx/domains/playwithbits/public"
        debug1: permanently_set_uid: 504/504
        debug1: Entering interactive session for SSH2.
        debug1: server_init_dispatch_20
        debug1: server_input_channel_open: ctype session rchan 0 win 2097152 max 32768
        debug1: input_session_request
        debug1: channel 0: new [server-session]
        debug1: session_new: session 0
        debug1: session_open: channel 0
        debug1: session_open: session 0: link with channel 0
        debug1: server_input_channel_open: confirm session
        debug1: server_input_global_request: rtype [email protected] want_reply 0
        debug1: server_input_channel_req: channel 0 request env reply 0
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req env
        debug1: server_input_channel_req: channel 0 request subsystem reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req subsystem
        subsystem request for sftp by user playwithbits
        debug1: subsystem: cannot stat /usr/libexec/openssh/sftp-server: Permission denied
        debug1: subsystem: exec() /usr/libexec/openssh/sftp-server
        debug1: Forced command (config) '/usr/libexec/openssh/sftp-server'
        debug1: session_new: session 0
        debug1: Received SIGCHLD.
        debug1: session_by_pid: pid 3943
        debug1: session_exit_message: session 0 channel 0 pid 3943
        debug1: session_exit_message: release channel 0
        debug1: session_by_channel: session 0 channel 0
        debug1: session_close_by_channel: channel 0 child 0
        debug1: session_close: session 0 pid 0
        debug1: channel 0: free: server-session, nchannels 1
        Received disconnect from [My ip address]: 11: disconnected by user
        debug1: do_cleanup
        debug1: do_cleanup
        debug1: PAM: cleanup
        debug1: PAM: closing session
        debug1: PAM: deleting credentials
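    The telling line in the log is "debug1: subsystem: cannot stat /usr/libexec/openssh/sftp-server: Permission denied" - once the session is chrooted, the ForceCommand path is resolved inside the chroot, where that binary does not exist. The usual fix is OpenSSH's in-process SFTP server, which needs no files inside the chroot; a sketch of the adjusted block (note also that sshd requires the chroot directory and every parent component to be root-owned and not group/world-writable):
        Match group web-root-locked
            ChrootDirectory /home/nginx/domains/%u/public
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no
    Reload sshd afterwards (service sshd reload) and test with sftp -v playwithbits@host.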

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >