Search Results


  • How to assign permissions to ApplicationPoolIdentity account

    - by Triynko
    In IIS 7 on Windows Server 2008, application pools can be run as the "ApplicationPoolIdentity" account instead of the NetworkService account. How do I assign permissions to this "ApplicationPoolIdentity" account? It does not appear as a local user on the machine, nor as a group anywhere - nothing remotely like it appears anywhere. When I browse for local users, groups, and built-in accounts, it does not appear in the list, nor does anything similar. What is going on? I'm not the only one with this problem: see "Trouble with ApplicationPoolIdentity in IIS 7.5 + Windows 7" for an example. "This is unfortunately a limitation of the object picker on Windows Server 2008/Windows Vista - as several people have discovered it already, you can still manipulate the ACL for the app-pool identity using command line tools like icacls."
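    Going by that quoted workaround, a minimal icacls sketch (the path, pool name, and permission level here are placeholders, not from the original question; app-pool identities take the form "IIS AppPool\<pool name>"):

        icacls "C:\inetpub\wwwroot\MyApp" /grant "IIS AppPool\MyAppPool":(OI)(CI)M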


  • Weird resolving of path to command

    - by Eldamir
    I have the terminal editor 'nano' installed in two places on my Mac: /usr/bin/nano and /opt/local/bin/nano. The installations are different versions: the one in /usr does not support my configuration in ~/.nanorc, and the one in /opt does. When I open a file with the command 'nano file', errors are displayed, indicating that the one in /usr was used; however, if I run 'which nano', the one in /opt shows up. Isn't 'which' meant to search the PATH for the default? And why wouldn't a call to 'nano' resolve to the same path? EDIT: I made a workaround by adding the following line to ~/.profile: alias nano='/opt/local/bin/nano'
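    For what it's worth, a quick way to see what the shell is actually resolving (a sketch, assuming a bash-like shell - note that 'which' is an external program, so it can disagree with the shell's own lookup when an alias, function, or stale hash entry is in play):

        type -a nano   # lists every alias, function, and PATH match, in the order the shell uses them
        hash -r        # clears bash's cached command locations, in case a stale entry points at /usr/bin/nano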


  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers, it is quite common that it is necessary to provide a test bed where the client is able to get an image, or better said, a feeling for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers, and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and data at any time, and the client is able to view it as it suits best for her/him.

    Limited or lack of online connectivity

    But what is going to happen when your client is not capable to be online - no matter for what reason? Here are some of the more obvious ones:

    - No internet connection (permanently or temporarily)
    - Expensive connection, ie. mobile data package, stay at a hotel, etc.
    - Presentation devices at an exhibition, ie. using tablets or iPads
    - Being abroad for a certain time, and only occasionally online
    - No network coverage, especially on mobile
    - Bad infrastructure, like ie. in Third World countries
    - Providing a catalogue on CD or USB pen drive

    Anyway, it doesn't matter really. We should be able to provide a solution for the circumstances of our customers.

    Presentation during an exhibition

    Recently, we had the following request from a customer: "Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings." Yes, sure we can do that. Eventually, you might think: why don't they simply use 3G-enabled iPads for that purpose? As stated above, there might be several reasons for that - low coverage, expensive data packages, etc. Anyway, it is not a question of how to circumvent the request but of how to deliver a solution for it.

    Possible solutions... or not?

    We already did offline websites earlier, and even established complete mirrors of one or two web sites on our systems. There are actually several possibilities to handle this kind of request, and it mainly depends on the system or device on which the offline site should be available. Here, it is clearly expressed that we have to address this on an Apple iPad - well actually, I think that they'd like to use multiple devices during their exhibitions. Following is an overview of possible solutions depending on the technology or device in use, and how it can be done:

    - Replication of source files and database: The above-mentioned web site is running on ASP.NET, IIS and SQL Server. In case a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database, and transfer them as a local installation to those Windows machines. This approach would be fully operational on the local machine.
    - Saving pages for offline usage: This is actually a quite tedious job, but still practicable for small web sites.
    - Tool-based approach to 'harvest' the web site: There are quite some tools in the wild that could handle this job, namely wget, HTTrack, WebCopier, etc. (see the command-line sketches further down).
    - Screenshots bundled as a PDF document: Not really... ;-)
    - Creating a screencast or video: Simply navigate through your website and record your desktop session.
    Actually, we are using this kind of approach to track down difficult problems in order to see and understand exactly what the user was doing to cause an error. Of course, this list isn't complete and I'd love to get more of your ideas in the comments section below the article.

    Preparations for offline browsing

    The original web site is dynamic and data-driven by ASP.NET. As we have to put the result onto iPads, we are going to choose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site you might have to check which of the applications produces the best results for you. My usual choice is wget, but in this case we ran into problems related to the rewriting of hyperlinks. As a consequence, we opted for HTTrack. HTTrack comes in different flavours: as a console application, but also as either a GUI (WinHTTrack on Windows) or a web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack: "HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the 'mirrored' website in your browser, and you can browse the site from link to link, as if you were viewing it online." And there is extensive documentation for all options and switches online. The general recommendation is to go through the HTTrack Users Guide by Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we ran the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various externally linked web sites - we finally had a more or less complete offline version available. A very handsome feature of HTTrack is the error/warning log after completing the download. It contains detailed information about errors that appeared on the pages and in the links within the pages that have been processed:

        Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)
        Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)
        Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html)

    In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-)

    Quality assurance on the full-featured desktop

    Unfortunately, the generated output of HTTrack was still incomplete, but luckily only images were missing. Being directly on the web server, we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks.
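    For reference, minimal command-line sketches of the two harvesters discussed above - the URL for wget, the output paths, and the filter are illustrative placeholders, as the exact switches used for the project aren't given in the article:

        # wget: mirror a site and rewrite links for local browsing
        wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://www.example.com/

        # HTTrack console client: mirror into an output directory, restricted to the site's own domain
        httrack "http://www.resortwork.co.uk/" -O /tmp/resortwork-mirror "+*.resortwork.co.uk/*" -v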
    From that point on, it wasn't necessary to get any more files from the original web server, and we could focus completely on browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses postback calls for interactions like search and pagination, and partly for navigation. This is the main field for improving the offline experience. Of course, just as for standard web development, it is advisable to test with various browsers, and strangely we discovered that the offline version looked pretty good in Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this: there are conditional CSS inclusions based on the user agent. HTTrack does not act as Internet Explorer, so we didn't have the necessary overrides for this browser. Not problematic after all in our case, but you might have to pay attention to this and fetch the IE-specific files explicitly. And while having a view at the source code, we also found out that HTTrack actually modifies the generated HTML output. On several occasions we discovered that <div> elements had been converted into <table> constructs for no obvious reason - even nested structures.

    Search 'n' destroy - sed (or Notepad++) to the rescue

    During our intensive root cause analysis of a couple of HTML/CSS problems that needed some extra attention, it was very helpful to be familiar with an editor that allows search and replace over multiple files - like sed, the stream editor for filtering and transforming text on Linux, or my personal favourite, Notepad++, on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and JavaScript code that still pointed at ASP.NET files instead of their generated HTML counterparts, like so:

        grep -lr -e '\.aspx' * | xargs sed -i -e 's/\.aspx/.html?/g'

    The additional question mark after the HTML extension helps to separate the query string from the actual target, and solved all our missing hyperlinks very fast. The same can be done in Notepad++ on Windows, too: just use the 'Replace in Files' feature and you are settled, especially in combination with regular expressions (regex).

    Landscape of browsers

    Okay, after several runs of HTML/CSS code analysis, and searching and replacing strings in a pool of more than 4,000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred that modified set of files to a Windows 8 machine for review in Firefox, Chrome and Internet Explorer 7 to 10, and to a Mac mini running Mac OS X 10.7 to check the output in Safari and again in Chrome. Apart from IE, for the reasons already mentioned above, the results were identical. And last but not least, it was time to check the web site on tablets. Please continue with the following articles:

    - Taking web sites offline for demonstration on Galaxy Tablet
    - Taking web sites offline for demonstration on iPad


  • Hear about Oracle Supply Chain at Pella, Apr 27-29 '10

    - by [email protected]
    Oracle Customer Showcase, April 27-29, 2010, featuring Pella Corp.: Delivering Greater Customer Value - "Discovering the Lean Value Chain". Pella is once again hosting Oracle customers at a mega-reference event in Pella, Iowa, on April 27-29. The agenda features a cross-stack set of topics and issues, including strategies for delivering customer value, improving the customer experience, Value Chain Planning / Manufacturing / Enterprise Performance Management, and Lean practices. Several executives will keynote, including Pella CIO Steve Printz. The event includes a demo grounds, round-table discussion groups, plant tours, and networking opportunities.


  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or to diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then push their DB changes to integration / QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need several different Oracle instances for their applications?


  • Remote Desktop doesn't recognize username change

    - by Unsigned
    There are two active user accounts on the Windows 7 Professional server: Owner and Guest. Owner is an Administrator with a password. Guest is the default Guest account with no password, but has been added to Remote Desktop Users. When attempting to connect to the server from a Windows 7 Professional client, Guest accepts RD connections fine; however, Owner throws the error "Unable to connect to Local Security Authority." I created a new Administrator account, named Remote, with the same password as Owner. Remote Desktop worked perfectly. I then deleted Owner and renamed Remote to Owner. Now, Remote Desktop gives the same error ("Unable to connect to Local Security Authority") when attempting to log into the new Owner. However, attempting to log into Remote (even though it was renamed to Owner) works. Completely at a loss here - what is going on? Why won't Owner work, and why does Remote Desktop still use the old name on the renamed account?


  • SharePoint 2010 not seeing Active Directory users

    - by user117927
    I'm pretty new to Active Directory and SharePoint, but I was given to understand they are supposed to play well together. I have successfully set up AD with multiple user accounts that work on any member computer. I have also successfully installed SharePoint 2010 Server on an AD machine. The AD server and the SharePoint server are separate machines (VMs running on ESXi, to be precise). However, I can only log on with user accounts I create on the local server, and the user browser for adding users will only see local users. I've followed the advice at http://technet.microsoft.com/en-us/library/cc262350.aspx#section2 for Classic authentication and also for NTLM claims-based authentication, but to no avail. Is there something fundamental I am getting wrong here? I'd be really thankful for any help you can lend me; I've been googling and scratching my head for a couple of days now.


  • Our server was rooted, but the exploit doesn't work?

    - by Salina Odelva
    Hi everyone. My friend's hosting server got rooted and we have traced some of the attacker's commands. We've found some exploits under the /tmp/.idc directory. We've disconnected the server and are now testing some of the local kernel exploits that the attacker tried on it. Here is our kernel version: 2.4.21-4.ELsmp #1 SMP. We think that he got root access via the modified uselib() local root exploit, but the exploit doesn't work!

        loki@danaria {/tmp}# ./mail -l ./lib
        [+] SLAB cleanup
        child 1 VMAs 32768

    The exploit hangs like this. I've waited over 5 minutes but nothing has happened. I've also tried other exploits, but they didn't work either. Any ideas or experiences with this exploit? We need to find the issue and patch our kernel, but we can't understand how he used this exploit to get root... Thanks


  • Virtual DNS recommended setup...

    - by luison
    Hi. We are new to virtualization, which we are setting up with Proxmox VE (OpenVZ + KVM). I am a bit lost about the recommended DNS forwarder config, especially in the OpenVZ (Virtuozzo-type) environment. Our intention was to have a small dnsmasq running in one of the VMs, acting as a backup DHCP server and serving our in-office local addresses (and PCs) via the additional resolv.conf-style file that dnsmasq supports, but I've read that all VMs should share DNS pointing to the host machine, in which case it would make more sense to have it there. My problem is that I would like to have as few apps as possible on the host, so that a reinstall of the environment (Proxmox VE) and a machine restore can be as quick as possible. Does anyone have a similar setup? Does it make sense to have the first virtual machine running the local DNS forwarder? Also... dnsmasq seems to want root permissions when running in an OpenVZ container... are there any workarounds anyone knows for that?
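    As a rough illustration of the dnsmasq role described above - a sketch only, where the file paths, domain, and DHCP range are assumptions rather than details from the question:

        # /etc/dnsmasq.conf
        resolv-file=/etc/resolv.upstream              # read upstream nameservers from a separate file
        dhcp-range=192.168.1.100,192.168.1.200,12h    # act as a (backup) DHCP server for office PCs
        local=/office.lan/                            # answer queries for the in-office domain locally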


  • SharePointBeginners: A new group for a global noob community

    - by PeterBrunone
    Recently, a discussion broke out (go figure) on a SharePoint list that I frequent. It had grown in size to the point where the more advanced members were sometimes turned off by the volume of questions that appeared TOO simple. This happens all the time, as something becomes larger and specialization is necessary.

    Anyhoo, my response was to create the SharePoint Beginners group. Come out and join us at http://groups.google.com/group/sharepointbeginners , where no question is too simple; all we ask is that you show us you tried to find the answer.


  • Can't connect to SMTP (Postfix, Dovecot) after making a change and trying to change it back

    - by UberBrainChild
    I am using Postfix and Dovecot along with ZPanel. I tried enabling SSL and then turned it off, as I did not have SSL configured yet and realized enabling it first was a bit premature. I am using CentOS 6.4. I get the following error in the mail log (I changed my host name to "myhostname" and my domain to "mydomain.com"):

        Oct 20 01:49:06 myhostname postfix/smtpd[4714]: connect from mydomain.com[127.0.0.1]
        Oct 20 01:49:16 myhostname postfix/smtpd[4714]: fatal: no SASL authentication mechanisms
        Oct 20 01:49:17 myhostname postfix/master[4708]: warning: process /usr/libexec/postfix/smtpd pid 4714 exit status 1
        Oct 20 01:49:17 myhostname postfix/master[4708]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    Reading forums and similar questions, I figured it was just a service that was not running or installed. However, I can see that saslauthd is currently up and running on my system, and restarting it does not help. Here is my Postfix master.cf:

        #
        # Postfix master process configuration file. For details on the format
        # of the file, see the Postfix master(5) manual page.
        #
        # ***** Unused items removed *****
        #
        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp       inet  n - n - - smtpd
        #  -o content_filter=smtp-amavis:127.0.0.1:10024
        #  -o receive_override_options=no_address_mappings
        pickup     fifo  n - n 60 1 pickup
        submission inet  n - - - - smtpd
          -o content_filter=
          -o receive_override_options=no_header_body_checks
        cleanup    unix  n - n - 0 cleanup
        qmgr       fifo  n - n 300 1 qmgr
        #qmgr      fifo  n - n 300 1 oqmgr
        tlsmgr     unix  - - n 1000? 1 tlsmgr
        rewrite    unix  - - n - - trivial-rewrite
        bounce     unix  - - n - 0 bounce
        defer      unix  - - n - 0 bounce
        trace      unix  - - n - 0 bounce
        verify     unix  - - n - 1 verify
        flush      unix  n - n 1000? 0 flush
        proxymap   unix  - - n - - proxymap
        smtp       unix  - - n - - smtp
        smtps      inet  n - - - - smtpd
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay      unix  - - n - - smtp
          -o fallback_relay=
        #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq      unix  n - n - - showq
        error      unix  - - n - - error
        discard    unix  - - n - - discard
        local      unix  - n n - - local
        virtual    unix  - n n - - virtual
        lmtp       unix  - - n - - lmtp
        anvil      unix  - - n - 1 anvil
        scache     unix  - - n - 1 scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.
        # ====================================================================
        maildrop   unix  - n n - - pipe
          flags=DRhu user=vmail argv=/usr/local/bin/maildrop -d ${recipient}
        uucp       unix  - n n - - pipe
          flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        ifmail     unix  - n n - - pipe
          flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp      unix  - n n - - pipe
          flags=Fq. user=foo argv=/usr/local/sbin/bsmtp -f $sender $nexthop $recipient
        #
        # spam/virus section
        #
        smtp-amavis unix - - y - 2 smtp
          -o smtp_data_done_timeout=1200
          -o disable_dns_lookups=yes
          -o smtp_send_xforward_command=yes
        127.0.0.1:10025 inet n - y - - smtpd
          -o content_filter=
          -o smtpd_helo_restrictions=
          -o smtpd_sender_restrictions=
          -o smtpd_recipient_restrictions=permit_mynetworks,reject
          -o mynetworks=127.0.0.0/8
          -o smtpd_error_sleep_time=0
          -o smtpd_soft_error_limit=1001
          -o smtpd_hard_error_limit=1000
          -o receive_override_options=no_header_body_checks
          -o smtpd_bind_address=127.0.0.1
          -o smtpd_helo_required=no
          -o smtpd_client_restrictions=
          -o smtpd_restriction_classes=
          -o disable_vrfy_command=no
          -o strict_rfc821_envelopes=yes
        #
        # Dovecot LDA
        dovecot    unix  - n n - - pipe
          flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient}
        #
        # Vacation mail
        vacation   unix  - n n - - pipe
          flags=Rq user=vacation argv=/var/spool/vacation/vacation.pl -f ${sender} -- ${recipient}

    And here is my Dovecot config:

        ##
        ## Dovecot config file
        ##
        listen = *
        disable_plaintext_auth = no
        protocols = imap pop3 lmtp sieve
        auth_mechanisms = plain login
        passdb {
          driver = sql
          args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf
        }
        userdb {
          driver = sql
        }
        userdb {
          driver = sql
          args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf
        }
        mail_location = maildir:/var/zpanel/vmail/%d/%n
        first_valid_uid = 101
        #last_valid_uid = 0
        first_valid_gid = 12
        #last_valid_gid = 0
        #mail_plugins =
        mailbox_idle_check_interval = 30 secs
        maildir_copy_with_hardlinks = yes
        service imap-login {
          inet_listener imap {
            port = 143
          }
        }
        service pop3-login {
          inet_listener pop3 {
            port = 110
          }
        }
        service lmtp {
          unix_listener lmtp {
            #mode = 0666
          }
        }
        service imap {
          vsz_limit = 256M
        }
        service pop3 {
        }
        service auth {
          unix_listener auth-userdb {
            mode = 0666
            user = vmail
            group = mail
          }
          # Postfix smtp-auth
          unix_listener /var/spool/postfix/private/auth {
            mode = 0666
            user = postfix
            group = postfix
          }
        }
        service auth-worker {
        }
        service dict {
          unix_listener dict {
            mode = 0666
            user = vmail
            group = mail
          }
        }
        service managesieve-login {
          inet_listener sieve {
            port = 4190
          }
          service_count = 1
          process_min_avail = 0
          vsz_limit = 64M
        }
        service managesieve {
        }
        lda_mailbox_autocreate = yes
        lda_mailbox_autosubscribe = yes
        protocol lda {
          mail_plugins = quota sieve
          postmaster_address = [email protected]
        }
        protocol imap {
          mail_plugins = quota imap_quota trash
          imap_client_workarounds = delay-newmail
        }
        lmtp_save_to_detail_mailbox = yes
        protocol lmtp {
          mail_plugins = quota sieve
        }
        protocol pop3 {
          mail_plugins = quota
          pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
        }
        protocol sieve {
          managesieve_max_line_length = 65536
          managesieve_implementation_string = Dovecot Pigeonhole
          managesieve_max_compile_errors = 5
        }
        dict {
          quotadict = mysql:/etc/zpanel/configs/dovecot2/dovecot-dict-quota.conf
        }
        plugin {
          # quota = dict:User quota::proxy::quotadict
          quota = maildir:User quota
          acl = vfile:/etc/dovecot/acls
          trash = /etc/zpanel/configs/dovecot2/dovecot-trash.conf
          sieve_global_path = /var/zpanel/sieve/globalfilter.sieve
          sieve = ~/dovecot.sieve
          sieve_dir = ~/sieve
          sieve_global_dir = /var/zpanel/sieve/
          #sieve_extensions = +notify +imapflags
          sieve_max_script_size = 1M
          #sieve_max_actions = 32
          #sieve_max_redirects = 4
        }
        log_path = /var/log/dovecot.log
        info_log_path = /var/log/dovecot-info.log
        debug_log_path = /var/log/dovecot-debug.log
        mail_debug=yes
        ssl = no

    Does anyone have any ideas or tips on what I can try to get this working?
    Thanks for all the help.

    EDIT: Output of postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        delay_warning_time = 4
        disable_vrfy_command = yes
        html_directory = no
        inet_interfaces = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = localhost.$mydomain, localhost
        mydomain = control.yourdomain.com
        myhostname = control.yourdomain.com
        mynetworks = all
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.2.2/README_FILES
        recipient_delimiter = +
        relay_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-relay_domains_maps.cf
        sample_directory = /usr/share/doc/postfix-2.2.2/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_use_tls = no
        smtpd_client_restrictions =
        smtpd_data_restrictions = reject_unauth_pipelining
        smtpd_helo_required = yes
        smtpd_helo_restrictions =
        smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_recipient_domain
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions =
        smtpd_use_tls = no
        soft_bounce = yes
        transport_maps = hash:/etc/postfix/transport
        unknown_local_recipient_reject_code = 550
        virtual_alias_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_alias_maps.cf, regexp:/etc/zpanel/configs/postfix/virtual_regexp
        virtual_gid_maps = static:12
        virtual_mailbox_base = /var/zpanel/vmail
        virtual_mailbox_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_domains_maps.cf
        virtual_mailbox_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_mailbox_maps.cf
        virtual_minimum_uid = 101
        virtual_transport = dovecot
        virtual_uid_maps = static:101
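    One sanity check consistent with the configuration above (a hedged sketch, not a known fix): with smtpd_sasl_type = dovecot and smtpd_sasl_path = private/auth, Postfix expects Dovecot's auth listener to exist inside its queue directory, so it is worth verifying both the settings and the socket:

        postconf smtpd_sasl_type smtpd_sasl_path      # should print dovecot / private/auth
        ls -l /var/spool/postfix/private/auth         # the socket Dovecot's service auth block should create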


  • Database Replication check script not running

    - by Tarun
    I'm trying to create a database replication checking script, but I'm getting an error while executing it. Here is the script:

        #!/bin/bash
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        export PATH

        # Server Name
        Server="Test Server"

        # MySQL Username and Password
        User=root
        Password="a"

        # Maximum Slave Time Delay
        Delay="60"

        # File Path to store error and email the same
        Log_File=/tmp/replicationcheck.txt

        # Email Settings
        Subject="$Server Replication Error"
        Sender_Name=TestServer
        Recipients="[email protected]"

        # Mail Alert Function
        mailalert(){
            sendmail -F $Sender_Name -it <<END_MESSAGE
        To: $Recipients
        Subject: $Subject

        $Message_Replication_Error
        `/bin/cat $Log_File`
        END_MESSAGE
        }

        # Show Slave Status
        Show_Slave_Status=`echo "show slave status \G;" | mysql -u $User -p$Password 2>&1`

        # Getting list of queries in mysql
        $Show_Slave_Status | grep "Last_" > $Log_File

        # Check if slave running
        $Show_Slave_Status | grep "Slave_IO_Running: No"
        if [ "$?" -eq "0" ]; then
            Message_Replication_Error="$Server Replication error please check. The Slave_IO_Running state is No."
            mailalert
            exit 1
        else
            $Show_Slave_Status | grep "Slave_IO_Running: Connecting"
            if [ "$?" -eq "0" ]; then
                Message_Replication_Error="$Server Replication error please check. The Slave_IO_Running state is Connecting."
                mailalert
                exit 1
            fi
        fi

        # Check if replication delayed
        Seconds_Behind_Master=$Show_Slave_Status | grep "Seconds_Behind_Master" | awk -F": " {' print $2 '}
        if [ "$Seconds_Behind_Master" -ge "$Delay" ]; then
            Message_Replication_Error="Replication Delayed by $Seconds_Behind_Master."
            mailalert
        else
            if [ "$Seconds_Behind_Master" = "NULL" ]; then
                Message_Replication_Error="$Server Replication error please check. The Seconds_Behind_Master state is NULL."
                mailalert
            fi
        fi
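    For what it's worth, two lines in the script look like likely culprits: `$Show_Slave_Status | grep ...` executes the variable's contents as a command instead of printing it, and the Seconds_Behind_Master line pipes the output of a variable assignment (which runs in a subshell and is lost). A hedged sketch of corrected versions, keeping the script's own names:

        # Print the variable instead of executing its contents:
        echo "$Show_Slave_Status" | grep "Last_" > $Log_File
        # Capture via command substitution, with the awk quoting fixed:
        Seconds_Behind_Master=$(echo "$Show_Slave_Status" | grep "Seconds_Behind_Master" | awk -F": " '{ print $2 }')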


  • How to convert laptop drive for use as VMware image?

    - by jnman
    I have a Windows laptop that recently died (dead motherboard). It being a 7-year-old laptop, I decided to give Apple a try this time around and to use VMware to access my old data if necessary. In order to do this, I need to convert the physical drive to a VMware image. Googling around, it looks like I might be able to use VMware Converter to do this. My original intent was to plug the laptop drive into a Windows desktop via an external USB enclosure and create the image that way. However, upon further investigation, it looks like VMware Converter only supports converting a local machine (the desktop) or a remote machine (via IP), but not a laptop drive plugged into the local machine. So with that in mind, I'm looking for suggestions and help on how to convert this laptop drive into something I can use on my new MacBook Pro.
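    For reference, one hedged alternative route (not from the original post) that sidesteps VMware Converter entirely: image the drive raw from the USB enclosure and convert the image with qemu-img; the device name is a placeholder, and the resulting VMDK can then be attached to a VM.

        sudo dd if=/dev/disk2 of=laptop.img bs=1m       # raw image of the USB-attached laptop drive
        qemu-img convert -f raw -O vmdk laptop.img laptop.vmdk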


  • MySQL Workbench sends computer name with login, not IP

    - by Android Addict
    I am attempting to connect MySQL Workbench to a remote MySQL server. The server has granted access to user@IPAddress. However, when I try to connect, MySQL Workbench sends user@computername instead. How do I configure the connection in MySQL Workbench to use the IP address instead? Reference: the remote server is on the local network, so I need to use the local IP address assigned to my client.

    EDIT: What I have tried so far, from the server:

        mysql -u user@IPAddress -p --host=(ServerIPAddress)

    This returns the mysql> prompt, so that tells me the user account is operational. Furthermore, I confirmed it exists using:

        select user from mysql.user;

    which returns a table of all users, and the user I am using is present. I have also opened port 3306:

        /sbin/iptables -A INPUT -i eth0 -s clientIPAddress -p tcp --destination-port 3306 -j ACCEPT

    Still, I encounter "Access denied".


  • How to diagnose and resolve: /usr/lib64/libz.so.1: no version information available

    - by matchew
    I had a hell of a time installing lxml for Python 2.7 on CentOS 5.6. For some background, Python 2.7 is an alternative installation of Python on CentOS 5.6, which ships with Python 2.4 installed. It was built from source per its instructions:

        ./configure
        make
        make altinstall

    However, after about 20 hours of trying, I managed to find a workable solution and was able to install lxml. Until I noticed the following error at the top of the interpreter:

        python2.7: /usr/lib64/libz.so.1: no version information available (required by python2.7)
        Python 2.7.2 (default, Jun 30 2011, 18:55:26)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> print 'Sheeeeut!'

    This error is printed out every time I run a script. For example:

        $ ./test.py
        /usr/local/bin/python2.7: /usr/lib64/libz.so.1: no version information available (required by /usr/local/bin/python2.7)

    The script runs flawlessly, but this error is bothersome. After some digging, I have come to believe I have a wrong version of libz installed - that it is either an older version or built for a different platform. I'm not quite sure how; I've only installed libz through yum, as far as I know. Although, I can't quite remember every little thing I tried in my twenty hours of trying. You may also be interested in what my lib64 folder looks like; here is some information:

        $ ls -ltrh libz*
        -rwxr-xr-x 1 root root  84K Jan  9  2007 libz.so.1.2.3
        -rwxr-xr-x 1 root root 107K Jan  9  2007 libz.a
        -rwxr-xr-x 1 root root 154K Feb 22 23:30 libzdb.so.7.0.2
        lrwxrwxrwx 1 root root   13 Apr 20 20:46 libz.so.1 -> libz.so.1.2.3
        lrwxrwxrwx 1 root root   15 Jun 30 18:43 libzdb.so.7 -> libzdb.so.7.0.2
        lrwxrwxrwx 1 root root   13 Jul  1 11:35 libz.so -> libz.so.1.2.3
        lrwxrwxrwx 1 root root   15 Jul  1 11:35 libzdb.so -> libzdb.so.7.0.2

    Notice: the items that say Jul 1st or Jun 30th are from me. I had initially moved these files into a backup folder, as they seemed to be 1. duplicates and 2. dated after/during the problems I alluded to earlier with lxml. One inclination is to completely remove Python 2.7 and reinstall. I think having it install to /usr/local/ was a poor default choice. However, without a make uninstall target, it seems to be a time-consuming task for a solution I am not quite sure would solve my problem.
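    A hedged diagnostic for the warning itself (not from the original post): "no version information available" means the libz being loaded exports no versioned symbols, while the Python binary was linked against one that did. Listing the version tags in the library is a quick way to see what the installed copy actually provides (path taken from the listing above):

        strings /usr/lib64/libz.so.1.2.3 | grep ZLIB    # a distro-built zlib typically shows ZLIB_1.2.x tags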


  • Trouble joining Windows Server 2008 to Domain

    - by Jim R
    When I try to join my new server to my existing domain I get the following error: "An attempt to resolve the DNS name of a DC in the domain being joined has failed. Please verify this client is configured to reach a DNS server that can resolve DNS names in the target domain." I have tried all of the following already:

    - Successfully pinged the domain controller.
    - Pinged the new server from the domain controller by IP address and by DNS name.
    - Pinged the DC server from the new server by IP address and by DNS name.
    - Changed the network to DHCP (it was originally static). No joy as static or DHCP.
    - Turned off all firewall settings.
    - Added the domain name to the 'hosts' file.
    - Added the server name of the primary domain controller to the 'hosts' file on the new server.

    Any ideas? Thanks in advance for any help! Jim

    Update: With help from J. Brian Kelly (thanks) I have managed to narrow down the problem to a DNS issue. Specifically, UDP/53 packets are being sent (they are seen in Network Monitor) but are not getting to the DNS server. I do not yet know why.

    Update: The requested output from IPCONFIG for the Hyper-V host and the virtual machine.

    IPCONFIG from Hyper-V server:

        Windows IP Configuration

           Host Name . . . . . . . . . . . . : HYPER
           Primary Dns Suffix  . . . . . . . : sfi-wfc.com
           Node Type . . . . . . . . . . . . : Hybrid
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No
           DNS Suffix Search List. . . . . . : sfi-wfc.com

        Ethernet adapter Local Area Connection 4:

           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Primary Network
           Physical Address. . . . . . . . . : 00-30-48-CA-CC-7A
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           Link-local IPv6 Address . . . . . : fe80::cd16:3ac2:3d4f:e275%679(Preferred)
           IPv4 Address. . . . . . . . . . . : 192.168.100.1(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.100.10
           DHCPv6 IAID . . . . . . . . . . . : -1476382648
           DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-12-10-20-E9-00-30-48-CA-CC-7A
           DNS Servers . . . . . . . . . . . : 192.168.100.5
           NetBIOS over Tcpip. . . . . . . . : Enabled

        Ethernet adapter Local Area Connection 3:

           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . : sfi
           Description . . . . . . . . . . . : Intel(R) 82576 Gigabit Dual Port Network Connection #2
           Physical Address. . . . . . . . . : 00-30-48-CA-CC-7B
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes

    IPCONFIG from virtual machine:

        Windows IP Configuration

           Host Name . . . . . . . . . . . . : DB
           Primary Dns Suffix  . . . . . . . :
           Node Type . . . . . . . . . . . . : Hybrid
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No
           DNS Suffix Search List. . . . . . : sfi

        Ethernet adapter Local Area Connection 2:

           Connection-specific DNS Suffix  . : sfi
           Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter
           Physical Address. . . . . . . . . : 00-15-5D-66-03-02
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 192.168.100.128(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Lease Obtained. . . . . . . . . . : Saturday, August 29, 2009 10:44:45 AM
           Lease Expires . . . . . . . . . . : Tuesday, September 01, 2009 3:08:33 PM
           Default Gateway . . . . . . . . . : 192.168.100.10
           DHCP Server . . . . . . . . . . . : 192.168.100.5
           DNS Servers . . . . . . . . . . . : 192.168.102.5
           Primary WINS Server . . . . . . . : 192.168.100.5
           NetBIOS over Tcpip. . . . . . . . : Enabled

        Tunnel adapter Local Area Connection* 8:

           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . : sfi
           Description . . . . . . . . . . . : isatap.sfi
           Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes

        Tunnel adapter Local Area Connection* 9:

           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface
           Physical Address. . . . . . . . . : 02-00-54-55-4E-01
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
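    One detail visible in the outputs above: the Hyper-V host resolves via 192.168.100.5, while the VM lists its DNS server as 192.168.102.5. A minimal hedged check from the new server (assuming the DC's DNS service really is at 192.168.100.5) would be to query it directly:

        nslookup sfi-wfc.com 192.168.100.5      # query the DC's DNS directly, bypassing the configured server
        portqry -n 192.168.100.5 -p udp -e 53   # Microsoft PortQry: is UDP/53 reachable at all?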


  • Claims-based Identity in .NET 4.5 and Windows 8

    - by Your DisplayName here!
    There was not a ton of new information about WIF and related technologies at Build, but Samuel Devasahayam did a great talk about claims-based access control that contained some very interesting bits of information with regards to future directions. From his slides:

    Windows 8
    - Bring existing identity claims model into the Windows platform
    - Domain controller issues groups & claims
    - Claims (user and device) sourced from identity attributes in AD
    - Claims delivered in Kerberos PAC
    - NT Token has a new claims section
    - Enhanced SDDL APIs to work with claims
    - Enhanced user-mode CheckAccess APIs to work with claims
    - New ACL-UX
    - Target audits with claims-based expressions

    WIF & .NET 4.5
    - WIF is in the box with .NET Framework 4.5
    - Every principal in .NET 4.5 is a ClaimsPrincipal

    ADFS 2.1
    - ADFS 2.1 is available now as an in-box server role in Windows 8
    - Adds support for issuing device claims from the Kerberos ticket


  • How to enable pdo mysql in CentOS 5.8

    - by nacho3d
    I have a VPS with CentOS 5.8. phpinfo() displays:

        './configure' '--disable-fileinfo' '--disable-pdo' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-libxml' '--enable-magic-quotes' '--enable-sockets' '--prefix=/usr/local' '--with-apxs2=/usr/local/apache/bin/apxs' '--with-curl=/opt/curlssl/' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-kerberos' '--with-libdir=lib64' '--with-libxml-dir=/opt/xml2/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pcre-regex=/opt/pcre' '--with-pic' '--with-zlib' '--with-zlib-dir=/usr'

    I've tried this: http://www.host1free.com/forum/vps-technical-support/7248-tutoria-centos-apache-webserver-mysql-php-eaccelerator-apc.html and apparently it installed php-pdo:

        # rpm -qa | grep php
        php-5.3.13-1.el5.remi
        php-xml-5.3.13-1.el5.remi
        php-common-5.3.13-1.el5.remi
        php-cli-5.3.13-1.el5.remi
        php-pdo-5.3.13-1.el5.remi
        php-xmlrpc-5.3.13-1.el5.remi
        php-mcrypt-5.3.13-1.el5.remi

    But I've restarted Apache and phpinfo() still says '--disable-pdo'. Should I rebuild PHP? Do I need to do some other step?
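    One quick hedged check before rebuilding: the Remi php-pdo package installs the extension for the packaged PHP, which may not be the same binary as the custom-compiled one Apache loads (built with --prefix=/usr/local per the configure line above - the path below is a guess based on that prefix). Asking each binary which modules it has loaded makes any mismatch visible:

        php -m | grep -i pdo                 # the PHP found first in $PATH
        /usr/local/bin/php -m | grep -i pdo  # the custom build, assuming the default prefix layout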


  • Day 1 of Oracle OpenWorld 2012 September 30

    - by Maria Colgan
    Howard Street in San Francisco is closed. The large Oracle tent is up! Attendees are arriving by the planeload at SFO. It can only mean one thing... That's right! Oracle OpenWorld officially starts today with the Oracle Users Forum. Tons of great technical sessions selected by the Oracle user groups get under way this morning at 8 am (Doh!). And of course, Larry's keynote is this evening, 5:00 pm–7:00 pm, Moscone North. A must-see, as he is bound to make some exciting announcements to get the show started! Hope to see ya there! +Maria Colgan


  • Add a remote printer over ssh on OSX?

    - by GradGuy
    I have a printer at my office that is connected to the local network, and my Linux box at work can see it on that network. However, it is not visible to the outside world. I was trying to figure out a way to add it on my MacBook Air and so far have found two options: 1) Using an ssh tunnel via the CLI: cat file.pdf | ssh user@linuxbox lpr. 2) With Chrome installed on the Linux box, using the Google Cloud Print service on the remote box and Automator on my MacBook Air, I can add the printer to the Cmd+P dialog box. I like the first method since it does not require Chrome to be installed, and the second since it allows using Cmd+P inside all applications. I was wondering if there is a way to combine the two by using Automator to run the first command-line script. What about port forwarding? Is it possible to forward the remote CUPS port 631 to a local port and then add the printer normally? What other methods would you recommend?
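    On the port-forwarding idea, a hedged sketch of how it could look (the local port and queue name are placeholders; CUPS exposes queues as ipp://host:port/printers/<name>):

        ssh -N -L 6310:localhost:631 user@linuxbox   # forward the office CUPS port to the Mac
        # then add a printer pointed at: ipp://localhost:6310/printers/office-printer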


  • KERPOOOOW!

    - by Matt Christian
    Recently I discovered the colorful world of comic books. In the past I've read comics a few times but never really got into them. When I wanted to start a collection I had to decide between video games and comics, yet I stayed away from comics because I am less familiar with them. In any case, I stopped by my local comic shop and picked up a few comics and a few trade paperbacks. After reading them and understanding their basic flow, I began to enjoy not only the stories but the art styles hiding behind those little white bubbles of text (well, they're USUALLY white). My first stop at the comic store I ended up with:

    - Nemesis #1 (cover A)
    - Shuddertown #1 (cover A, I think)
    - Daredevil: King of Hell's Kitchen trade paperback
    - Peter Parker: Spiderman - One Small Break trade paperback

    It took me about 3-4 days to read all of that, including re-reading the single issues and glancing over the beginning of Daredevil again. After a week of looking around online I knew a little more about the comics I wanted to pick up and the kind of art style I enjoyed. While Peter Parker: Spiderman was OK, I really enjoyed the detailed, realistic look of Daredevil and Shuddertown. Now, a few years back I picked up the game The Darkness for PS3. I knew it was based off a comic but never read the comic. I decided I'd pick up a few issues of it and ended up with:

    - The Darkness #80 (cover A)
    - The Darkness #81 (cover A)
    - The Darkness #82 (cover A)
    - The Darkness #83 (cover A)
    - The Darkness Shadows and Flame #1 (one-shot; cover A)
    - The Darkness Origins: Volume 1 trade paperback (contains The Darkness #1-6)
    - New Age boards and bags for storing my comics

    The Darkness is relatively good, though jumping from issue #6 to issue #80 I lost a bit on who the enemy in the current series is. I think out of all of them, issue #83 was my favorite. I'm signed up at the local shop to continue getting Nemesis, The Darkness, and Shuddertown, and I'll probably pick up a few different ones this weekend...


  • Squirrelmail gives an error after WHM installation

    - by Pleerock
    I just installed WHM/cPanel, created some accounts, created a mail account in one of them, and entered SquirrelMail to check the mail. Unfortunately it gives me an error:

        Warning: fsockopen() [function.fsockopen]: php_network_getaddresses: getaddrinfo failed: Name or service not known in /usr/local/cpanel/base/3rdparty/squirrelmail/plugins/login_auth/functions.php on line 129
        Warning: fsockopen() [function.fsockopen]: unable to connect to localhost:143 (php_network_getaddresses: getaddrinfo failed: Name or service not known) in /usr/local/cpanel/base/3rdparty/squirrelmail/plugins/login_auth/functions.php on line 129

    How do I fix it? I don't know exactly, but I read some sites and maybe the problem is in DNS? I changed ns1.com to ns1.myhost.com in Basic cPanel & WHM Setup. P.S. I'm sure that it's a server configuration problem, not SquirrelMail; other mail clients are not working either.
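    Given that both warnings trace back to getaddrinfo failing for 'localhost', a minimal hedged first check (not from the original post) is whether the box can resolve localhost at all:

        getent hosts localhost      # should print 127.0.0.1
        grep localhost /etc/hosts   # the entry SquirrelMail's fsockopen('localhost', 143) depends on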


  • Imitating Exchange Server's "RBAC AuthZ" in my own application... (is there something similar?)

    - by makerofthings7
    Exchange 2010 has a delegation model where groups of WinRM cmdlets are essentially grouped into roles, and the roles are assigned to users. This is a great and flexible model, considering how I can leverage all the benefits of PowerShell while using the right low-level technologies (WCF, SOAP, etc.) and requiring no additional software on the client side.

    Question(s):
    - Is there a way for me to leverage Exchange's delegation model in my .NET application?
    - Has anyone attempted to imitate this model?
    - If I must start from scratch, how would I go about imitating this approach?


  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes:

    "Virtual mail users are those that do not exist as Unix system users. They thus don't use the standard Unix methods of authentication or mail delivery and don't have home directories. That is how we are managing things here: mail users are defined in the database created by Postfix Admin rather than existing as system users. Mail will be kept in subfolders per domain and account under /var/vmail - e.g. [email protected] will have a mail directory of /var/vmail/example.com/me."

    But when he gives instructions about configuring Postfix Admin, he suggests this to be contained in Postfix Admin's config.inc.php:

        // Mailboxes
        // If you want to store the mailboxes per domain set this to 'YES'.
        // Examples:
        //   YES: /usr/local/virtual/domain.tld/[email protected]
        //   NO:  /usr/local/virtual/[email protected]
        $CONF['domain_path'] = 'NO';

    Is there an inconsistency?


  • SharePoint Server 2007 generates event log entry every 5 minutes - "The SSP Timer Job Distribution List Import Job was not run"

    - by Teevus
    I get the following error logged in the Event Log every 5 minutes: "The SSP Timer Job Distribution List Import Job was not run. Reason: Logon failure: the user has not been granted the requested logon type at this computer." In addition, OWSTimer.exe periodically gets into a state where it is consuming almost all the CPU, and only killing the process or restarting the SharePoint services fixes it (although I'm not sure if this is a related or separate issue). I have tried the following (based on various suggestions floating around the web), all to no avail:

    - iisreset (no effect)
    - Added the SharePoint and SharePoint Search service accounts to the "Log on as a batch job" and "Log on as a service" policies in the Group Policies for the domain. I went into the Local Computer Policy on the SharePoint server and verified that those policies had actually been applied.
    - Verified that the SharePoint and SharePoint Search service accounts are both in the WSS_WPG group.
    - Verified in dcomcnfg that the WSS_WPG group (and indeed the SharePoint and SharePoint Search service accounts) has local activation rights for SPSearch.

    Any more suggestions would be valued. Thanks

