Search Results

Search found 29245 results on 1170 pages for 'self service'.


  • Unable to understand why Alfresco doesn't start on Tomcat

    - by Infernalsirius
    I have a problem that I've been investigating for a while now, googling and everything, but cannot begin to understand. I'm really not used to Java, even less Tomcat. So there it is. First, the setup: CentOS 5.3 on a virtualized server, Bitnami native Alfresco stack (Tomcat 5.5, MySQL 5, Java, Java JDK, JDBC). Here is the content of catalina.log, since it's the shortest and where I found my first clue to what is going wrong:

      SEVERE: Error listenerStart
      Aug 27, 2009 5:32:58 PM org.apache.coyote.http11.Http11BaseProtocol init
      INFO: Initializing Coyote HTTP/1.1 on http-8080
      Aug 27, 2009 5:32:58 PM org.apache.catalina.startup.Catalina load
      INFO: Initialization processed in 229 ms
      Aug 27, 2009 5:32:58 PM org.apache.catalina.core.StandardService start
      INFO: Starting service Catalina
      Aug 27, 2009 5:32:58 PM org.apache.catalina.core.StandardEngine start
      INFO: Starting Servlet Engine: Apache Tomcat/5.5.25
      Aug 27, 2009 5:32:58 PM org.apache.catalina.core.StandardHost start
      INFO: XML validation disabled
      Aug 27, 2009 5:34:47 PM org.apache.catalina.core.StandardContext start
      SEVERE: Error listenerStart
      Aug 27, 2009 5:34:47 PM org.apache.catalina.core.StandardContext start
      SEVERE: Context [/alfresco] startup failed due to previous errors
      Aug 27, 2009 5:34:48 PM org.apache.coyote.http11.Http11BaseProtocol start
      INFO: Starting Coyote HTTP/1.1 on http-8080
      Aug 27, 2009 5:34:48 PM org.apache.jk.common.ChannelSocket init
      INFO: JK: ajp13 listening on /0.0.0.0:8009
      Aug 27, 2009 5:34:48 PM org.apache.jk.server.JkMain start
      INFO: Jk running ID=0 time=0/11 config=null
      Aug 27, 2009 5:34:48 PM org.apache.catalina.storeconfig.StoreLoader load
      INFO: Find registry server-registry.xml at classpath resource
      Aug 27, 2009 5:34:48 PM org.apache.catalina.startup.Catalina start
      INFO: Server startup in 110327 ms
      Aug 27, 2009 5:38:27 PM org.apache.coyote.http11.Http11BaseProtocol pause
      INFO: Pausing Coyote HTTP/1.1 on http-8080
      Aug 27, 2009 5:38:28 PM org.apache.catalina.core.StandardService stop
      INFO: Stopping service Catalina
      Aug 27, 2009 5:38:29 PM org.apache.coyote.http11.Http11BaseProtocol destroy
      INFO: Stopping Coyote HTTP/1.1 on http-8080

    The content of catalina.out seems to be a stack trace or application trace of the error, is that right? Catalina.out gist on github. Requesting the site gives a 404 error telling me this: The requested resource (/alfresco/) is not available. That's it, I think.
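
    For what it's worth, a diagnostic sketch (not from the original post): "SEVERE: Error listenerStart" hides the real exception, and since Tomcat 5.5 uses JULI logging, a per-webapp logging.properties can usually surface it. The file location and levels below are illustrative.

      # webapps/alfresco/WEB-INF/classes/logging.properties (illustrative)
      handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
      org.apache.juli.FileHandler.level = FINE
      org.apache.juli.FileHandler.directory = ${catalina.base}/logs
      org.apache.juli.FileHandler.prefix = alfresco.
      java.util.logging.ConsoleHandler.level = FINE
      java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

    After a restart, a logs/alfresco.<date>.log file should contain the stack trace behind the listenerStart failure, which is usually a database connection or configuration problem in Alfresco's case.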

    Read the article

  • Why is SMF manifest losing configuration data when exported on SmartOS?

    - by Scott Lowe
    I'm running a server process under SMF (Service Management Facility) on Joyent's Base64 1.8.1 SmartOS image. For those not acquainted with SmartOS, it is a cloud-based distribution of illumos with KVM. But essentially it is like Solaris and inherits from OpenSolaris, so even if you've not used SmartOS, I'm hoping to tap into some Solaris knowledge on ServerFault. My issue is that I want an unprivileged user to be allowed to restart a service that they own. I have worked out how to do that by using RBAC, adding an authorisation to /etc/security/auth_attr and associating that authorisation with my user. I then added the following to my SMF manifest for the service:

      <property_group name='general' type='framework'>
        <!-- Allow to be restarted -->
        <propval name='action_authorization' type='astring'
                 value='solaris.smf.manage.my-server-process' />
        <!-- Allow to be started and stopped -->
        <propval name='value_authorization' type='astring'
                 value='solaris.smf.manage.my-server-process' />
      </property_group>

    And this works well when imported: my unprivileged user is allowed to restart, start and stop its own server process (this is for automated code deployments). However, if I export the SMF manifest, this configuration data is gone... all I see in that section is this:

      <property_group name='general' type='framework'>
        <property name='action_authorization' type='astring'/>
        <property name='value_authorization' type='astring'/>
      </property_group>

    Does anybody know why this is happening? Is my syntax wrong, or am I simply using SMF incorrectly?
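
    A couple of verification commands that might help narrow down where the properties live (the service name below is a guess; substitute the real FMRI):

      # Confirm the values are really present in the repository for the instance
      svcprop -p general/action_authorization my-server-process
      svcprop -p general/value_authorization my-server-process

      # Compare what svccfg thinks is configured vs what export writes out
      svccfg -s my-server-process listprop general
      svccfg export my-server-process > /tmp/exported.xml

    If svcprop and listprop show the authorizations but the exported manifest does not, the settings are still in the repository and only the export output is dropping them, which narrows the question to how export decides what to write.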

    Read the article

  • How to install a non-Microsoft USB device driver on a PC without admin rights?

    - by Ron
    I am running a Win 7 x32 secured corporate laptop. My USB audio device has an .exe installer file which I cannot execute because I have no admin rights. Is it possible to embed the driver files in the system without running the installer? All attempts to unpack the .exe file have failed: 7-Zip extracts files without extensions, and Universal Extractor says the .exe file is a 7-Zip self-extracting archive. Thank you, Ron

    Read the article

  • Postfix: Relay access denied

    - by Joseph Silvashy
    When I telnet to my server that's running Postfix and try to send an email:

      MAIL FROM:<[email protected]>
      #=> 250 2.1.0 Ok
      RCPT TO:<[email protected]>
      #=> 554 5.7.1 <[email protected]>: Relay access denied

    I couldn't really find the answer on the site or by looking at other users' questions/answers, so I'm not sure where to start. Ideas?

    Update: So basically, looking at the docs (http://www.postfix.org/SMTPD_ACCESS_README.html, section: Getting selective with SMTP access restriction lists), I don't seem to have any of those directives in /etc/postfix/main.cf, like smtpd_client_restrictions = permit_mynetworks, reject, or any of the other ones, so I'm quite confused. But really I'm going to have a Rails app connect to the server and send the emails, so I'm not sure how to handle it. Here is what my config file looks like:

      # See /usr/share/postfix/main.cf.dist for a commented, more complete version

      # Debian specific: Specifying a file name will cause the first
      # line of that file to be used as the name. The Debian default
      # is /etc/mailname.
      #myorigin = /etc/mailname

      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no

      # appending .domain is the MUA's job.
      append_dot_mydomain = no

      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h

      readme_directory = no

      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_use_tls=yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

      # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
      # information on enabling SSL in the smtp client.

      myhostname = rerecipe-utils
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination = $myhostname, localhost.$mydomain, localhost, mail.rerecipe.com, rerecipe.com
      relayhost =
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all
      inet_protocols = all
      mynetworks = 127.0.0.0/8 204.232.207.0/24 10.177.64.0/19 [::1]/128 [fe80::%eth0]/64 [fe80::%eth1]/64

    Something to note is that relayhost is blank; this is the default configuration file that was created when I installed Postfix. When testing the connection with openssl I get this:

      ~% openssl s_client -connect mail.myhostname.com:25 -starttls smtp
      CONNECTED(00000003)
      depth=0 /CN=myhostname
      verify error:num=18:self signed certificate
      verify return:1
      depth=0 /CN=myhostname
      verify return:1
      ---
      Certificate chain
       0 s:/CN=myhostname
         i:/CN=myhostname
      ---
      Server certificate
      -----BEGIN CERTIFICATE-----
      MIIBqTCCARICCQDDxVr+420qvjANBgkqhkiG9w0BAQUFADAZMRcwFQYDVQQDEw5y
      ZXJlY2lwZS11dGlsczAeFw0xMDEwMTMwNjU1MTVaFw0yMDEwMTAwNjU1MTVaMBkx
      FzAVBgNVBAMTDnJlcmVjaXBlLXV0aWxzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
      iQKBgQDODh2w4A1k0qiPNPhkrPj8sfkxpKPTk28AuZhgOEBYBLeHacTKNH0jXxPv
      P3TyhINijvvdDPzyuPJoTTliR2EHR/nL4DLhr5FzhV+PB4PsIFUER7arx+1sMjz6
      5l/Ubu1ppMzW9U0IFNbaPm2AiiGBQRCQN8L0bLUjzVzwoSRMOQIDAQABMA0GCSqG
      SIb3DQEBBQUAA4GBALi2vvk9TGKJubXYJbU0PKmVmsfzFK35yLqr0keiDBhK2Leg
      274sWxEH3ds8mUaRftuFlXb7RYAGNlVyTuMTY3CEcnqIsH7F2McCUTpjMzu/o1mZ
      O/B21CelKetBd1u79Gkrv2vWyN7Csft6uTx5NIGG2+pGi3r0gX2r0Hbu2K94
      -----END CERTIFICATE-----
      subject=/CN=myhostname
      issuer=/CN=myhostname
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 1203 bytes and written 360 bytes
      ---
      New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
      Server public key is 1024 bit
      Compression: NONE
      Expansion: NONE
      SSL-Session:
          Protocol  : TLSv1
          Cipher    : DHE-RSA-AES256-SHA
          Session-ID: 1AA4B8BFAAA85DA9ED4755194C50311670E57C35B8C51F9C2749936DA11918E4
          Session-ID-ctx:
          Master-Key: 9B432F1DE9F3580DCC6208C76F96631DC5A4BC517BDBADD5F514414DCF34AC526C30687B96C5C4742E9583555A118232
          Key-Arg   : None
          Start Time: 1292985376
          Timeout   : 300 (sec)
          Verify return code: 18 (self signed certificate)
      ---
      250 DSN

    Oddly enough, when I try to send an email from the machine itself it does work:

      echo test | mail -s "test subject" [email protected]
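
    For reference, a minimal sketch of the kind of restriction stanza the linked README describes; this is the usual shape rather than a tested config, and the SASL setup the Rails app would need is not shown:

      # /etc/postfix/main.cf (sketch)
      smtpd_recipient_restrictions =
          permit_mynetworks,
          permit_sasl_authenticated,
          reject_unauth_destination

    With something like this in place, hosts listed in mynetworks and authenticated clients may relay, while everyone else is limited to recipients in the domains this box accepts mail for, which is what the 554 "Relay access denied" response is enforcing.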

    Read the article

  • Best user portal?

    - by John
    Hi guys, can anyone recommend a good user portal? I'm looking to create a portal for users, to provide them with some self-help material. It has to be REALLY simple! Thanks, John

    Read the article

  • Unable to send mail to hotmail from rackspace cloud

    - by Jo Erlang
    I'm having an issue sending mail from Postfix on a Rackspace Cloud instance for my domain. Hotmail says "550 SC-001 (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent. Please contact your Internet service provider since part of their network is on our block list." Here is the mail log:

      Sep 20 08:02:59 mydomain postfix/smtpd[1810]: disconnect from localhost[127.0.0.1]
      Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: to=<[email protected]>, relay=mx3.hotmail.com[65.55.92.184]:25, delay=0.19, delays=0.1/0.01/0.06/0.01, dsn=5.0.0, status=bounced (host mx3.hotmail.com[65.55.92.184] said: 550 SC-001 (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent. Please contact your Internet service provider since part of their network is on our block list. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL FROM command))
      Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: lost connection with mx3.hotmail.com[65.55.92.184] while sending RCPT TO

    I have implemented rDNS, SPF and DKIM, and they all look fine. I have checked my IP and domain on most of the spam blacklists and they are listed as OK (not listed as a spamming IP). What should I try next?
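
    A few quick checks that can confirm the DNS side from the outside (the DKIM selector name below is a guess; use whatever the signer is configured with, and keep the real IP in place of 198.101.x.x):

      dig +short -x 198.101.x.x                        # rDNS / PTR
      dig +short TXT mydomain.com                      # SPF record
      dig +short TXT selector1._domainkey.mydomain.com # DKIM public key

    If all of those come back clean, the SC-001 block is usually about the reputation of the surrounding provider IP range rather than this particular host, which is what the bounce text itself hints at.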

    Read the article

  • Typical outbound port list for guest access?

    - by Steve
    I manage a weekly rental house that includes wireless Internet access. I've allowed all outbound ports on my router, but my ISP has disabled my Internet access twice now because guests have downloaded (or served up) copyrighted content. So I'd like to institute some port filtering to discourage p2p sharing (see disclaimer below), but I don't want to inconvenience the 99.9% of folks who keep things above-board. My question is: what outbound ports are typically open for rental/hotel wireless Internet access, or where can I find such a list? TCP 80, 443, 25, 110 at a minimum. My own email service uses 995 and 465 for SSL, some may use IMAP, and I personally use SSH and FTP, so I'll open those too. Roughly, I figure I need to open access to the privileged ports and close 1024 & above. Is there a whitelist I should institute for commonly used high ports? And does it make sense to block UDP above 1024?

    Disclaimer: I realize anyone replying to this message could circumvent the port filtering and share content to their heart's content. I do not need comprehensive p2p blocking, which requires more than a port whitelist. Anyone staying at the house shoulders the responsibility for their Internet use, per the rental contract. Also, anyone savvy enough to circumvent the port filters would hopefully be savvy enough to use some sort of peer blocking, thereby preventing the ISP from taking down the service.
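
    For illustration only, the kind of outbound whitelist this might translate to on a Linux gateway; the interface name and exact port list are assumptions, not a recommendation:

      # Guest traffic: allow common client protocols out, reject the rest
      iptables -A FORWARD -i br-guest -p udp --dport 53 -j ACCEPT
      iptables -A FORWARD -i br-guest -p tcp -m multiport \
          --dports 22,25,53,80,110,143,443,465,587,993,995 -j ACCEPT
      iptables -A FORWARD -i br-guest -j REJECT --reject-with icmp-port-unreachable

    Hotspot and hotel gateways commonly leave at least web, mail and SSH/VPN ports open; anything beyond that is the judgment call described in the disclaimer above.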

    Read the article

  • Export-Mailbox Error

    - by tuck918
    All, I am using export-mailbox to move some data and it is working fine until I get this error:

      StatusMessage : Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000

    This is the command I am using:

      export-mailbox -identity mailboxA -targetmailbox mailboxB -targetfolder folderA -allowmerge

    We are on SP2 and I am running this under an account that is not a domain or enterprise admin. The account has the Exchange Server Administrator permission on both the source and target Exchange mailbox servers. The account is a member of the local Administrators group on both the source and target Exchange mailbox servers. This account has Full Access permission on both the target and source servers. The issue happens at any time, and I am only trying to run this on one mailbox, the only mailbox I need to run it on. The event log entry is "Error Exchange Migration Export Mailbox Event 1008". The migration log just shows that it was running okay, then it gives the same error as above: "Error was found for mailboxA ([email protected]) because: Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000, error code: -1056749164". Any ideas on what to do/try?
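
    One variation often worth trying when the move dies mid-copy is to let the cmdlet skip corrupted items and log verbosely; this assumes the Exchange 2007 export-mailbox parameters below are available in your SP2 environment:

      export-mailbox -Identity mailboxA -TargetMailbox mailboxB -TargetFolder folderA `
          -AllowMerge -BadItemLimit 50 -Verbose

    If it then completes while reporting skipped items, the MAPI error points at specific corrupted messages in the source mailbox rather than a permissions problem.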

    Read the article

  • Setting up DKIM DNS records from an existing certificate

    - by jneves
    I have successfully set up DKIM with dkimproxy using a self-signed certificate. Now I want to use an existing X.509 certificate. The script that comes with dkimproxy on Ubuntu to generate the DNS records produces the following broken output (only the start is shown):

      postfix._domainkey IN TXT "k=rsa; p=-----BEGIN CERTIFICATE-----
      MIIHCDCCBfCgAwIBAgICP4AwDQYJKoZIhvcNAQEFBQAwgYwxCzAJBgNVBAYTAklM
      MRYwFAYDVQQKEw1TdGFydENvbSBMdG

    This seems broken to me, but I haven't found out: what format should public.key be in for dkimproxy, and how do I extract that information from the certificate file?
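
    If it helps, DKIM expects the bare RSA public key (not the whole certificate) as the p= value. A sketch of pulling it out of an existing certificate with openssl; the file names are placeholders:

      # Extract just the public key in PEM form
      openssl x509 -in existing-cert.pem -pubkey -noout > public.key

      # Strip the PEM armour and newlines to get the base64 blob for p=
      grep -v '^-----' public.key | tr -d '\n'; echo

    The resulting single-line base64 string is what goes after p= in the TXT record.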

    Read the article

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "ServerFault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace... Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together.

      Hardware Configuration(s)
        - Server room configuration
        - Server room temperature
        - Firmware Updates and Scheduling
      Storage Configuration(s)
        - Selecting a NAS box
        - Linux: Dealing with /tmp
        - Linux: Install apps in /var or /opt?
      Network Configuration(s)
        - checking DNS health and compliance
      Security Practice(s)
        - Password (General) Best Practices
        - Password sharing methods
      Windows Update
        - Updating Windows Servers that are hosts for VMs
      Network Service(s)
      User Service(s)
        - User Naming & Deletion
      Upgrade Process(es)
      Disaster Recovery
        - Checking Backups
        - Documenting an outage for a post-mortem review

    Last Edit: 2010-02-17

    Read the article

  • placing shell script under systemd control

    - by Calvin Cheng
    Assuming I have a shell script like this:

      #!/bin/sh
      # cherrypy_server.sh

      PROCESSES=10
      THREADS=1        # threads per process
      BASE_PORT=3035   # the first port used
      # you need to make the PIDFILE dir and insure it has the right permissions
      PIDFILE="/var/run/cherrypy/myproject.pid"

      WORKDIR=`dirname "$0"`
      cd "$WORKDIR"

      cp_start_proc() {
        N=$1
        P=$(( $BASE_PORT + $N - 1 ))
        ./manage.py runcpserver daemonize=1 port=$P pidfile="$PIDFILE-$N" threads=$THREADS request_queue_size=0 verbose=0
      }

      cp_start() {
        for N in `seq 1 $PROCESSES`; do
          cp_start_proc $N
        done
      }

      cp_stop_proc() {
        N=$1
        #[ -f "$PIDFILE-$N" ] && kill `cat "$PIDFILE-$N"`
        [ -f "$PIDFILE-$N" ] && ./manage.py runcpserver pidfile="$PIDFILE-$N" stop
        rm -f "$PIDFILE-$N"
      }

      cp_stop() {
        for N in `seq 1 $PROCESSES`; do
          cp_stop_proc $N
        done
      }

      cp_restart_proc() {
        N=$1
        cp_stop_proc $N
        #sleep 1
        cp_start_proc $N
      }

      cp_restart() {
        for N in `seq 1 $PROCESSES`; do
          cp_restart_proc $N
        done
      }

      case "$1" in
        "start")   cp_start ;;
        "stop")    cp_stop ;;
        "restart") cp_restart ;;
        *)         "$@" ;;
      esac

    From the shell script, we can essentially do 3 things:

      - start the CherryPy server by calling ./cherrypy_server.sh start
      - stop the CherryPy server by calling ./cherrypy_server.sh stop
      - restart the CherryPy server by calling ./cherrypy_server.sh restart

    How would I place this shell script under systemd's control as a cherrypy.service file (with the obvious goal of having systemd start up the CherryPy server when a machine has been rebooted)? Reference systemd service file example here: https://wiki.archlinux.org/index.php/Systemd#Using_service_file
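
    A minimal sketch of what such a unit might look like, built around the script above; the paths are assumptions and this is untested. Since the manage.py processes daemonize themselves, a forking-type unit is the closest match:

      # /etc/systemd/system/cherrypy.service (sketch)
      [Unit]
      Description=CherryPy server pool for myproject
      After=network.target

      [Service]
      Type=forking
      WorkingDirectory=/path/to/myproject
      ExecStart=/path/to/myproject/cherrypy_server.sh start
      ExecStop=/path/to/myproject/cherrypy_server.sh stop
      ExecReload=/path/to/myproject/cherrypy_server.sh restart

      [Install]
      WantedBy=multi-user.target

    After writing the file, something like systemctl enable cherrypy.service followed by systemctl start cherrypy.service registers it for boot and starts it; systemd will also need /var/run/cherrypy to exist at boot (for example via a tmpfiles.d entry), as the comment in the script warns.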

    Read the article

  • Cannot change PostgreSQL port

    - by Jerec TheSith
    I run PostgreSQL 8.4 as a service on a CentOS 6.2 server. I set port = 21444 and listen_addresses = '*' in /var/lib/pgsql/data/postgresql.conf, and I changed 5432 to 21444 in postmaster.opts and restarted Postgres, but when I run netstat -lntp, PostgreSQL is still listening on port 5432:

      tcp   0   0 0.0.0.0:5432   0.0.0.0:*   LISTEN   20276/postmaster

    When I restart PostgreSQL I get a write error warning on /proc/self/oom_adj, but the service starts anyway. I read that we could get this error when using virtualized servers, but I don't really know if this has any impact on the PostgreSQL listening port. The correct pgsql config directory is loaded, /var/lib/pgsql/data:

      [root@srv02 ~]# ps -ef | grep postgres
      root      1358 22140  0 09:42 pts/0   00:00:00 grep postgres
      postgres  9519     1  0 Mar16 ?       00:00:01 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
      postgres  9573  9519  0 Mar16 ?       00:00:00 postgres: logger process
      postgres  9575  9519  0 Mar16 ?       00:00:05 postgres: writer process
      postgres  9576  9519  0 Mar16 ?       00:00:03 postgres: wal writer process
      postgres  9577  9519  0 Mar16 ?       00:00:01 postgres: autovacuum launcher process
      postgres  9578  9519  0 Mar16 ?       00:00:01 postgres: stats collector process

    Any thoughts? Thanks, Jerec
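
    Worth noting from the ps output above: the postmaster was started with an explicit -p 5432, and a port given on the command line overrides postgresql.conf (postmaster.opts is written by the server at startup and is not meant to be edited). A couple of checks, with the usual CentOS paths assumed:

      # Look for a hard-coded port in the init script or its sysconfig file
      grep -n 'PGPORT\|5432' /etc/init.d/postgresql /etc/sysconfig/pgsql/* 2>/dev/null

      # After a clean restart, confirm what the server itself thinks
      su - postgres -c "psql -tAc 'SHOW port;'"

    Setting the port in the init script's sysconfig file (or removing the -p from wherever it is injected) and restarting should let the value from postgresql.conf take effect.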

    Read the article

  • Can't log in using second domain controller when first DC is unreachable

    - by rbeier
    Hi, we're a small web development company. Our domain has two DCs: a main one (BEEHIVE, 192.168.3.20) in the datacenter and a second one (SPHERE2, 10.0.66.19) in the office. The office is connected to the datacenter via a VPN. We recently had a brief network outage in the office. During this outage, we weren't able to access the domain from our office machines. I had hoped that they would fail over to the DC in the office, but that didn't happen, so I'm trying to figure out why. I'm not an expert on Active Directory, so maybe I'm missing something obvious. Both domain controllers are running a DNS server. Each office workstation is configured to use the datacenter DC as its primary DNS server, and the office DC as its secondary:

      DNS Servers . . . . . . . . . . . : 192.168.3.20
                                          10.0.66.19

    Both DNS servers are working, and both domain controllers are working (at least, I can connect to them both using AD Users + Computers). Here are the SRV records that point to the domain controllers (I've changed the domain name but I've left the rest alone):

      C:\>nslookup
      Default Server:  beehive.ourcorp.com
      Address:  192.168.3.20

      > set type=srv
      > _ldap._tcp.ourcorp.com
      Server:  beehive.ourcorp.com
      Address:  192.168.3.20

      _ldap._tcp.ourcorp.com  SRV service location:
                priority       = 0
                weight         = 100
                port           = 389
                svr hostname   = beehive.ourcorp.com
      _ldap._tcp.ourcorp.com  SRV service location:
                priority       = 0
                weight         = 100
                port           = 389
                svr hostname   = sphere2.ourcorp.com
      beehive.ourcorp.com     internet address = 192.168.3.20
      sphere2.ourcorp.com     internet address = 10.0.66.19

    Does anyone have any ideas? Thanks, Richard

    Read the article

  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday side-by-side with the existing MX7 install to start testing. The install was smooth and detected MX7, adjusted CF9's port numbers to avoid conflicts, etc. Testing started well: MX7 over regular and SSL still worked, and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, Firefox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this:

      Secure Connection Failed
      An error occurred during a connection to localhost:9101.
      Peer reports it experienced an internal error.
      (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml:

      <service class="jrun.servlet.http.SSLService" name="SSLService">
        <attribute name="enabled">true</attribute>
        <attribute name="interface">*</attribute>
        <attribute name="port">9101</attribute>
        <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
        <attribute name="keyStorePassword">*deleted*</attribute>
        <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
        <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
        <attribute name="deactivated">false</attribute>
        <attribute name="bindAddress">*</attribute>
        <attribute name="clientAuth">false</attribute>
      </service>

    Does anyone here know of any issues with setting up SSL and CF9? Has anyone had success with it? Dave
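
    One check that is frequently suggested for JRun SSL setups is making sure the key was generated with -keyalg RSA, since keytool's historic default algorithm is DSA and that can trip up browser handshakes. A hedged sketch of regenerating the CF9 keystore; the alias, path, password and validity are placeholders:

      keytool -genkey -alias jrun -keyalg RSA -keysize 1024 -keystore "C:\JRun4\lib\mykey" -storepass changeit -validity 365

    Since the MX7 keystore works, comparing the two with keytool -list -v -keystore <file> can also show quickly whether the key algorithms or validity periods differ.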

    Read the article

  • Is there the equivalent of cloud computing for modems?

    - by morpheous
    I asked this question on SF, and someone recommended that I ask it here (I don't think I have enough points to move a question from SF to SO, and in any case I don't know how to do it), so here is the question again: I am interested in the concept of PaaS (platform as a service). However, all talk about SaaS/PaaS seems to focus only on the computer itself, not its peripherals. Is it possible to 'outsource' modems as a resource, so that an app running remotely can pump data to a modem in the cloud? As a bit of background to the question, a group of us are thinking of starting a company that offers similar services to companies like Twilio etc., but I want to 'outsource' both the computing hardware (that's PaaS, the easy bit) and the modems (that's what I can't seem to find any info on). Does anyone know if modems can be bundled as part of a PaaS service? Alternatively, is there a way that an application running on one computer can communicate (i.e. pump data) with a remote modem residing on another machine? I assume I can come up with some protocol over UDP or TCP, but there is no point reinventing the wheel if such a protocol already exists (or if some open source software allows one to do this). Any suggestions on how to solve this problem?

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal was previously infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software, typically a week or so before it was updated. I have configured the event logs to overwrite as required and to be 5056 KB in maximum size. I have also attempted:

      - Disabling the Event Log service & restarting
      - Renewing the EVT files in the Windows\system32\config directory
      - Restarting the Event Log service and restarting
      - Clearing the event log in the Services MMC
      - Resetting the filters to default in the Services MMC
      - Using the EVENTCREATE command remotely from a CMD window on the server to force an event creation event

    So far the only operation to have any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time the computer has managed to create events is while it is being restarted. Has anyone got any ideas on how to proceed? I'm thinking of possibly refreshing the Windows\system32\config\SystemProfile folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to be running on up-time for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that allow me to control the event logging options or default security options so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.
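
    For reference, a concrete form of the EVENTCREATE test mentioned above, run from the server against the problem terminal; the host name, source and event ID are examples only:

      eventcreate /s TERMINAL01 /l APPLICATION /t INFORMATION /so EventLogTest /id 100 /d "Remote event log write test"

    If that event lands in the terminal's Application log but locally generated events still do not appear, the Event Log service itself is writable and the problem more likely lies in the local policy or service configuration than in the EVT files.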

    Read the article

  • Dell Optiplex GX620 > Windows XP Home Edition OU7670

    - by Ren D
    My question is that my Dell Optiplex GX620 is not working properly. When I turn it on it goes automatically to a boot-options screen that asks me to choose Safe Mode, Safe Mode with Networking, Last Known Good Configuration, or start normally. When I choose normal it comes straight back to that screen; when I choose Safe Mode it works for about 5 minutes and then reboots itself, every time. What can I do? I checked whether there was a virus and it's clean. Please help.

    Read the article

  • Starting nginx with systemctl fails, but running the command manually doesn't

    - by Ivan
    On Arch Linux, for some reason, when I try to start nginx with the command "systemctl start nginx", it fails. This is the output of "systemctl status nginx":

      Loaded: loaded (/etc/systemd/system/nginx.service; enabled)
      Active: failed (Result: exit-code) since Wed 2013-10-30 16:22:17 EDT; 5s ago
      Process: 9835 ExecStop=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g pid /run/nginx.pid; -s quit (code=exited, status=126)
      Process: 3982 ExecStart=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=0/SUCCESS)
      Process: 10967 ExecStartPre=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=126)
      Main PID: 3984 (code=exited, status=0/SUCCESS)
      CGroup: /system.slice/nginx.service

    ...but when I run

      /usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g "pid /run/nginx.pid; daemon on; master_process on;"

    and then

      /usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g "pid /run/nginx.pid; daemon on; master_process on;"

    as root, all it does is return a warning, but works just fine:

      nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1

    Why is it doing that?

    Read the article

  • Instantiating COM object hnetcfg.fwpolicy2 on Remote Server

    - by Pavan Keerthi
    I locked myself out by inadvertently changing the RDP firewall rule to use IPSec, but without completing the proper steps to set up an IPSec channel from my laptop to the server. Luckily all WMI remoting on the server works, so I am trying to edit the rule with PowerShell. When I enter the code below, the COM object is instantiated on the local machine. How can I invoke it on the remote machine?

      Enter-PSSession $Session
      $fw = New-Object -ComObject hnetcfg.fwpolicy2
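
    A sketch of one way to force the COM object to be created inside the remote session rather than locally, using Invoke-Command; the rule-name filter is illustrative:

      Invoke-Command -Session $Session -ScriptBlock {
          $fw = New-Object -ComObject hnetcfg.fwpolicy2
          # List the RDP-related rules as seen on the remote machine
          $fw.Rules | Where-Object { $_.Name -like '*Remote Desktop*' } |
              Select-Object Name, Enabled, Protocol, LocalPorts
      }

    Because the script block runs entirely on the remote host, the firewall policy it reads (and any changes made through it) apply there, not on the laptop.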

    Read the article

  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with:

      oradim -edit -sid ORCL -startmode auto

    However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says:

      Thu Jun 10 14:14:48 2010
      C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0
      Thu Jun 10 14:14:48 2010
      ORA-12560: TNS:protocol adapter error

    sqlnet.log in the same folder has:

      Fatal NI connect error 12560, connecting to:
      (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM))))

      VERSION INFORMATION:
        TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
        Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production
      Time: 10-JUN-2010 14:14:48
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12560
        TNS-12560: TNS:protocol adapter error
        ns secondary err code: 0
        nt main err code: 530
        TNS-00530: Protocol adapter error
        nt secondary err code: 2
        nt OS err code: 0

    The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting - and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing, which shows:

      [10-JUN-2010 15:09:33.919] snlpcss: entry
      [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2.
      [10-JUN-2010 15:09:34.419] snlpcall: exit

    On a hunch that error 2 is Windows error 2 (file not found), I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run oradim -startup -sid ORCL when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.

    Read the article

  • Have only read access to Samba

    - by Tahir Malik
    Hi, I've been struggling a lot with Samba on CentOS 5.5 lately. I develop on Windows 7 and send files through scp (an ant task), but it's too slow, so I wanted to set up Samba properly. After installing and following some guides I've done the following:

      - Disabled the firewall (iptables)
      - Disabled SELinux (didn't do that at the start, but it didn't help either)
      - Set up my smbusers file to map my Windows user to root (root = "Tahir Malik" -- works)
      - Added a current user, mitco, to the Samba password database with the command smbpasswd -a mitco, because the Windows user had only read access

    So both users have read access to my share. Here is my smb.conf snippet:

      [global]
        workgroup = MITCO
        server string = Samba Server Version %v
        netbios name = centos
      ; interfaces = lo eth0 192.168.12.2/24 192.168.13.2/24
      ; hosts allow = 127. 192.168.12. 192.168.13.

      [alf4]
        comment = Alfresco 4
        path = /opt
        read only = no
        valid users = mitco, mitco
        force user = root
        force group = root
        admin users = mitco , mitco
        writeable = yes
      ; browseable = yes

    What may also be important is that /opt is only writable by root, but that shouldn't matter because I use the force user and group or admin users. The log file:

      [2012/09/29 07:43:44, 0] smbd/server.c:main(958)
        smbd version 3.0.33-3.39.el5_8 started.
        Copyright Andrew Tridgell and the Samba Team 1992-2008
      [2012/09/29 07:43:59, 1] smbd/service.c:make_connection_snum(1085)
        mitco-tahir (192.168.13.1) connect to service alf4 initially as user root (uid=0, gid=0) (pid 5228)
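
    Two low-risk checks that might help pin down whether this is a Samba-side or filesystem-side restriction (commands as they would be run on the CentOS box):

      # Show the share definitions Samba actually loaded, after defaults are applied
      testparm -s

      # Confirm the Unix permissions and ownership on the exported path
      ls -ld /opt

    If testparm shows the share as writeable with the expected force user and admin users, the remaining suspects are usually the underlying permissions on /opt and the files inside it.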

    Read the article

  • Ubuntu karmic, chroot, Ubuntu server 8.04 LTS and Plesk

    - by morpheous
    I am pretty new to the Linux environment, and although I have been using the Ubuntu Karmic desktop for development work, I have always preferred the GUI tools and very rarely use the terminal. I am about to launch a site (a VPS) which will be running Ubuntu Server 8.04 LTS and Plesk. I want to install both the server (8.04 LTS) and Plesk in a 'chroot' on my Karmic desktop. I also want to test-install some packages I have written for the server, and generally familiarize myself with the environment before I sign up to the hosting service. I will be running a streamlined server (no GUI), with only the following software:

      - PHP
      - Python
      - MySQL
      - PostgreSQL
      - Apache
      - Plesk
      - Third-party packages I developed

    In terms of mail, I think the hosting provider provides a mail service, so I don't know if I will need to run my own mail server. I would like some advice on the following:

      1. How can I install Ubuntu Server 8.04 LTS in a chroot?
      2. How can I install Plesk in Ubuntu Server 8.04 LTS (from the command line)?
      3. Can anyone recommend how I back up the remote server when I go live (i.e. what tools to use)? I will need to back up both databases, as well as some config data and installation packages.
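
    On point 1, a rough sketch of building an 8.04 (hardy) chroot with debootstrap; the target directory is arbitrary and the mirror may need to be archive.ubuntu.com or old-releases.ubuntu.com depending on when it is run:

      sudo apt-get install debootstrap
      sudo debootstrap hardy /srv/hardy-chroot http://old-releases.ubuntu.com/ubuntu/
      sudo chroot /srv/hardy-chroot /bin/bash

    Inside the chroot you then have a minimal hardy system where apt-get can pull in Apache, PHP, MySQL and the rest; Plesk's own installer would have to be run inside the chroot as well, which is the part most likely to need experimentation.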

    Read the article

  • XP Pro freezes on welcome screen

    - by Peter B
    I have a problem that sounds much the same as http://superuser.com/questions/83101/how-to-diagnose-a-freeze-on-startup-in-windows-xp. About every 2nd or 3rd boot, XP Pro freezes on the Welcome page (the one showing the user name icons). The mouse moves the cursor OK, but clicking on an icon does nothing, and neither does any keystroke. If you press too many keys there is a beep, and after that the mouse won't move the cursor anymore. The work-around is always to reboot into Safe Mode and request a CHKDSK /R. After this, the next boot is fine. There are no related entries in the Event Log when the problem occurs. The only two entries are:

      "Microsoft (R) Windows (R) 5.01. 2600 Service Pack 3 Multiprocessor Free."
      "The Event log service was started."

    Update 1: Many thanks for that, but I found no problems with any diagnostics I have run. However, I have managed to locate, and code around, the source of the problem, which is with whatever processing goes on behind the XP Welcome (as opposed to "classic") logon screen. Unfortunately, having classic logon means you don't get the useful Fast User Switching (FUS) logon/switch mode. So, to retain this, my fix is:

      1. Add a Windows shutdown script (using gpedit.msc) to force "classic" mode for the first logon after the next XP startup, by running:

         reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v LogonType /t REG_DWORD /d 0 /f

      2. Add a Scheduled Task to run at user logon that re-enables the Welcome screen (and hence FUS) by running the same command with 1 instead of 0 after the /d flag. The task is run as a privileged user (who can run "reg add").

    Read the article

  • RHEL 5.2 installing on ProLiant BL460c - hangs at 'now booting the kernel'

    - by Dr Rocket Mr Socket
    As the title states, I have a problem. This server and the installation disc are also on the other side of the world from me, so... So far, I have tried to start the install with the parameters:

      linux text noapic noacpi no=apic no=acpi

    which results in the same hang. I have also disabled a PCI ethernet adapter. I am uneasy about disabling the onboard ethernet adapter, as I do not know if iLO uses it. Anyone have any advice? Much appreciated.

    EDIT: full output after trying to begin the installation:

      boot: linux text
      Loading initrd.img..................
      Loading vmlinuz.......
      Uncompressing Linux...done.
      Now booting the kernel

    It stays on this for hours.

    EDIT2: adding the 'mem=40960M' parameter (the server has 40 gigs of RAM) allows it to proceed, but the following output appears directly after 'Now booting the kernel':

      Memory: sized by int13 0e801h
      initrd extends beyond end of memory (0x00ef2090 > 0x00000000)
      disabling initrd
      Console: 16 point font, 400 scans
      Console: colour VGA+ 80x25, 1 virtual console (max 63)
      pcibios_init : BIOS32 Service Directory Structure at 0x000ffee0
      pcibios_init : BIOS32 Service Directory entry at 0xf0000

    Read the article

  • Blue screen error code 1000008e

    - by Kas
    I'm getting blue screens, mostly when trying to start a program that requires a lot of memory (games, photo editing software). So far I've only managed to catch one set of error codes:

      BCCode: 1000008e
      BCP1: C0000005
      BCP2: ADA393BA
      BCP3: E9BCEBC4
      BCP4: 00000000
      OS Version: 6_0_6002
      Service Pack: 2_0
      Product: 768_1

    It's a Sony VAIO laptop, VGN FW-41E, running Vista with Service Pack 2. Besides these codes, it lists two 'temporary' files that were related to this crash:

      ...AppData\Local\Temp\WER-134925-0.sysdata.xml
      ...AppData\Local\Temp\WERDA66.tmp.version.txt

    When I googled these files, some site said they were linked to a worm called 'yodo', but virus scans don't return any results (Hitman Pro, Malwarebytes and Avast antivirus all turn up empty). Upon further searching about this yodo worm, I came across Security Stronghold, where someone posted that they had acquired this worm when downloading Access and Excel templates. Now, I actually did download templates for the same programs; they might have been the same, they may be related, or I might be grasping at straws here. I have not noticed any other performance issues as of late, just BSODs when I start software that requires some memory, but I never had issues with these exact same programs before. Help and/or hints are needed on how to figure out the root of this BSOD issue and how I can fix it. Do you reckon it's actually a virus? What program should be able to remove the yodo worm?
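
    If more detail than the raw bugcheck codes would help: Vista normally also writes a minidump under C:\Windows\Minidump, which the Debugging Tools for Windows can decode. A sketch, with the install path and dump file name as examples only:

      cd /d "C:\Program Files\Debugging Tools for Windows (x86)"
      kd -z C:\Windows\Minidump\Mini000000-01.dmp -c "!analyze -v; q"

    The !analyze -v output names the faulting module, which is usually the quickest way to tell a driver or RAM problem (0x8E with a C0000005 access violation is commonly caused by either) apart from malware.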

    Read the article
