Search Results

Search found 24177 results on 968 pages for 'true'.

Page 671/968 | < Previous Page | 667 668 669 670 671 672 673 674 675 676 677 678  | Next Page >

  • Linux shutdown hang with wifi CIFS mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a cifs mount via puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection it shuts down just fine, but if it's on the wifi at the time (and has no wired connection) it hangs at the Fedora "f" logo. I'm not sure if it hangs indefinitely or just for a really long while, but I'll give it a test when I shut this machine down in a second. Needless to say it's pretty annoying, so is there a way of making the machine shut down even if network connectivity has been lost at unmount time, or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom shutdown script, which is a bit of a kludge)? It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue I'd kind of assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet. Additional info: I can confirm that manually unmounting the shares and then shutting down (sudo shutdown or the xfce shutdown button) works fine; it only hangs if the mounts are still mounted. The puppet config that sets the mount looks like this (now with the _netdev entry, which is indeed pushed to clients successfully but makes no difference):

        file { "/mnt/share":
          ensure => directory,
        }
        mount { "/mnt/share":
          atboot   => true,
          ensure   => mounted,
          remounts => false,
          fstype   => cifs,
          device   => "//srv/share",
          options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
          require  => [ File["/mnt/share"], Group["shareusers"] ],
        }
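
    A minimal sketch of one common workaround, assuming a systemd-based release (Fedora 15 and 16 are): a oneshot unit ordered after network.target, so that at shutdown its ExecStop (the unmount) runs while the network is still up. The unit name and file path below are made up for illustration.

        # /etc/systemd/system/umount-cifs.service (hypothetical name)
        [Unit]
        Description=Unmount CIFS shares before the network goes down
        After=network.target remote-fs.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/true
        ExecStop=/bin/umount -a -t cifs -l

        [Install]
        WantedBy=multi-user.target

    Enable it with systemctl enable umount-cifs.service; since stop order is the reverse of start order, the lazy unmount fires before the wifi interface is taken down.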


  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of my module definition: Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml"). This works fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters and it will update any known .ps1xml file, but that only works if the file has already been loaded. Can I list somewhere which format data files are loaded? Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like this:

        [CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target", "Deploying MyModule module")) {
                if (!(Test-Path $target)) {
                    New-Item -ItemType Directory -Path $target | Out-Null
                }
                Get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    Copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed. You can import it using :
        Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line: Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml"). As I updated my $profile to always load this module, Update-FormatData has already been called by the time I run the install script. In the install script I force-import the module, which fires the module's first statement again, and with it the failing Update-FormatData call.
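
    A hedged sketch of one way to make the call idempotent: treat the "already present" failure as harmless rather than trying to enumerate what is loaded. This is only a sketch of the idea, not a documented pattern.

        # At the top of MyModule.psm1 -- a sketch
        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot '*.ps1xml') -ErrorAction Stop
        }
        catch {
            # After Import-Module -Force the files are often already registered;
            # ignore that case and rethrow anything else.
            if ($_ -notmatch 'already present') { throw }
        }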


  • Firefox won't start

    - by Daniel R Hicks
    OK, I've got this problem again, only this time it only seems to affect Firefox and Thunderbird. I've rebooted several times. I tried resetting to the last restore point, but that didn't work. I tried setting up a new Firefox profile, and that didn't work either. The symptom is that you click on the Firefox or Thunderbird icon, the process appears in the Process Explorer list, but the window never opens. Curiously, if Firefox has been "started" this way, Internet Explorer hangs on startup until I kill the Firefox process. Any ideas? I suppose the next thing to try is uninstalling and reinstalling Firefox/Thunderbird, but this whole thing is getting old. The box is a Sony Vaio running Windows Vista. It was completely restored from scratch less than two weeks ago, after the last fiasco. (I suspect my aborted install of Acronis True Image may have mucked things up this time.) Sigh! Another symptom: it occurred to me to try printing something, but if I open "Printers" it just sits there "searching". So something is rotten in the bowels of Windows. Minor update: it occurred to me to kill Internet Explorer (where I'd attempted printing). Then Printers comes up fairly quickly -- with no printers defined. Clicking "Add a printer" does nothing. Update: well, following this suggestion to stop and restart the print spooler brought the printers back. And, wonder of wonders, Firefox now starts OK. Stopping and restarting the print spooler!!
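
    For reference, restarting the spooler from an elevated command prompt is just:

        net stop spooler
        net start spooler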


  • Why does Google Analytics use two domains?

    - by AKeller
    I'm building a distributed widget that is comparable to Google Analytics. Users will add a <script> tag to their site that references my widget's JavaScript file. The Google Analytics tracking code looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
        _gaq.push(['_trackPageview']);

        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    Can anyone explain the reasoning behind separate HTTP and HTTPS hostnames? My instinct is to just secure the www address and then use the protocol-less syntax, like //www.google-analytics.com/ga.js. But I'm sure the Google Analytics architects put a lot of thought into this approach. I'd love to understand their logic before I follow/ignore their model.
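
    For comparison, a minimal sketch of the protocol-relative loader the question alludes to (the hostname is a placeholder; it assumes a single host serves the same file over both HTTP and HTTPS):

        (function () {
            var s = document.createElement('script');
            s.async = true;
            // The browser reuses the page's scheme, so one hostname must
            // answer on both http:// and https://
            s.src = '//widgets.example.com/widget.js';
            var first = document.getElementsByTagName('script')[0];
            first.parentNode.insertBefore(s, first);
        })();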


  • Some doubts about the use of usermod and groupmod command

    - by AndreaNobili
    I am not yet a true "Linux guy" and I have the following doubts about a shell procedure (a list of command steps) found in a tutorial that I am following (I want to deeply understand what I am doing before I do it):

        sudo passwd root
        # then log in again as root
        usermod -l miner pi
        usermod -m -d /home/miner miner
        groupmod -n miner pi
        exit

    So at the beginning it enables the root account and I have to log in to the system again as root... this is perfectly clear to me. And now I have the following doubts:

    1) The usermod command:

        usermod -l miner pi
        usermod -m -d /home/miner miner

    Reading the official documentation of the usermod command, I understand that this command modifies the information related to an existing account. It seems to me that the -l parameter changes the name of the user pi to miner, and that the -m -d parameters move the contents of the old home directory to the new one (/home/miner) and use that new directory as the home directory. My doubt is: what exactly do these operations do? I think they: rename the existing pi user to miner; then move the content of the old home directory (the pi home directory? or what?) into a new directory (/home/miner) that is now the home directory for the miner user. Is that right?

    The second doubt is related to this command:

        groupmod -n miner pi

    It seems to me that it changes the group name from pi to miner. But what exactly is a group in Linux and why is it used? Thanks
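
    An annotated sketch of the same sequence, since that is essentially what the steps do (the comments are my reading of the man pages, not the tutorial's):

        sudo passwd root                  # give root a password so you can log in as root
        usermod -l miner pi               # rename the login: the account "pi" is now called "miner"
        usermod -m -d /home/miner miner   # make /home/miner the home directory and move the old
                                          # home's contents (i.e. /home/pi) into it
        groupmod -n miner pi              # rename the primary group "pi" to "miner"
        exit

    A group is just a named set of users that file permissions can refer to; here the pi user's primary group is being renamed to match the new account name.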


  • Configuring nginx to check for files on disk in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:

        +-- public
        |   +-- components
        |   +-- css
        |   +-- img
        +-- routes
        +-- views

    Essentially, I have the root set to public. I want all requests destined for /components/, /css/ and /img/ to check whether their destinations exist on disk. However, I don't want requests to other directories to even run an IO operation: /foo/asdf, /bar, /baz/index.html -- none of those should result in the disk being touched. I have a stanza that does the proxy to node.js:

        location @proxy {
            internal;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }

    I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:

        location /components/ { try_files $uri @proxy; }
        location /css/        { try_files $uri @proxy; }
        location /img/        { try_files $uri @proxy; }

    However, there is nothing I can find that will give me:

        location / { try_files @proxy; }

    How do I get the effect I want?
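
    One way this is commonly arranged, as an untested sketch (the root path is an assumption based on the tree above): let the whitelisted prefixes fall back to the named location, and proxy everything else directly so no filesystem lookup happens for those paths.

        root /path/to/public;   # assumption: wherever the public/ tree lives

        location ~ ^/(components|css|img)/ {
            try_files $uri @proxy;
        }

        location / {
            # no try_files here, so nginx never touches the disk for these URIs
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }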


  • Trying to set up an SMTP server on Windows Server 2012

    - by datc
    I'm working on a website, and I need to test the functionality of sending email messages from ASP.NET, something like this:

        Dim msg As New MailMessage("email1", "email2")
        msg.Subject = "Subject"
        msg.IsBodyHtml = True
        msg.Body = "Click <a href='site'>here</a>."
        Dim client As SmtpClient = New SmtpClient()
        client.Host = "My-Server"
        client.Port = 25
        client.DeliveryMethod = SmtpDeliveryMethod.Network
        client.Send(msg)

    This is running from a Windows 8 workstation. I've installed the SMTP server on my Windows Server 2012 machine. The mail shows up in the mailroot/Queue folder and sits there, eventually getting deposited into Badmail. Now I have AT&T U-verse at home, and a few devices connected to the gateway, including one we'll call "My-Server." When I run SmtpDiag from, say, datc@... to [email protected], I get "SOA serial number match passed", the Local DNS (99-135-60-233.lightspeed.bcvloh.sbcglobal.net) and Remote DNS (hotmail.com) tests *not* passed, and ultimately "Connecting to the server failed. Error: 10060. Failed to submit mail to mx2.hotmail.com." When I set My-Server's IP to static and equal to the external IP, 99.135.60.233, and run SmtpDiag again, I get the SOA, Local DNS, and Remote DNS tests passed, but the same 10060 error. Same for yahoo.com, gmail.com, and so forth. Is it my ISP's job to fix this? Some PTR record missing somewhere? Is it at all possible to have a home-based SMTP server? All I want is to test my email code. Perhaps my IP address is just not "trusted" somehow. Thanks.
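
    For testing purposes, a sketch of relaying through an authenticated smarthost instead of delivering straight to the recipients' MX hosts, since residential ISPs routinely block or distrust direct port-25 traffic (the host, port and credentials below are placeholders):

        ' VB.NET sketch -- substitute your mail provider's SMTP submission settings
        Dim client As New SmtpClient("smtp.example.com", 587)
        client.EnableSsl = True
        client.DeliveryMethod = SmtpDeliveryMethod.Network
        client.Credentials = New System.Net.NetworkCredential("user", "password")
        client.Send(msg)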


  • Shortcuts located in "D:\Program Data\..." not working even though they're pointing to the right targets (Windows 7)

    - by Kevin
    I just did a fresh install of Windows 7 Home Premium using my laptop's recovery disks (HP Pavilion dv6-2151cl) with minimal settings. After the install, I moved "Program Data" and "Users" to my D partition to save space, by changing the folders in the registry. Then I updated Windows (including W7 SP1) and installed all other programs. After installing all other programs I noticed that the icons of all new programs (those not included in the Windows install) in "All Programs" showed a blank sheet as the icon, and they don't do anything. Looking at "D:\Program Data\Microsoft\Windows\Start Menu\Programs" in Windows Explorer, the same is true there. All the shortcuts in C: and "D:\Users..." work both in Windows Explorer and in "All Programs". I also noticed that the shortcuts do display the right icons inside "Open" dialog boxes, and if I copy the shortcuts in "D:\Program Data..." to the desktop they also work as expected. I checked the file association for .lnk and it was OK, but I also tried the registry fixers for this file association and they had no effect. There are no missing programs that I can tell in the "All Programs" menu; they just don't do anything if they live in "D:\Program Data...". Any thoughts on how to make Windows 7 treat shortcuts in "D:\Program Data..." as it should?


  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04. /etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check wheter script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I create /home/user/web/id.php containing <?php system('id'); ?>, the result I get is: uid=33(www-data) gid=33(www-data) groups=33(www-data). I have no idea what to do, so I was hoping the community could help. Thanks.
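
    A hedged sketch of the usual first check on Debian/Ubuntu: if mod_php is still enabled it handles the scripts itself and everything runs as www-data, regardless of the suPHP directives. Module names below are the stock Ubuntu ones.

        apache2ctl -M | grep -Ei 'php|suphp'   # see which handlers are actually loaded
        sudo a2dismod php5                     # let mod_suphp take over .php
        sudo a2enmod suphp
        sudo service apache2 restart

    Note also that the vhost hands .php to application/x-httpd-php, while the [handlers] section above only maps application/x-httpd-suphp -- worth aligning the two.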


  • How to resume XMPP groupchat window in Irssi (using bitlbee)?

    - by mcnesium
    I use Bitlbee to chat on XMPP networks within my IRC client Irssi. This works great so far, and recently I started using XMPP multi-user chats as an alternative to IRC channels. I set up a channel using chat add <account> <[email protected]> in the &bitlbee control window, set chan <room> set autojoin true, and entered /join #room in the &bitlbee window to join that groupchat. It then appears as a unique Irssi window in the status bar. This seems to work OK too, but with one exception: since I idle in the channels 24/7, my Irssi has to cope with the every-night 24h DSL disconnection by the ISP. After it automatically reconnects, it does kind of rejoin the XMPP groupchat, but the traffic of the groupchat does not go back to the unique Irssi window; instead it keeps flooding &bitlbee with messages from root telling me about a "Groupchat Message from unknown JID <jid>: <message>" -- which is the traffic of the groupchat. The unique groupchat window is gone after the reconnect, and I will again have to /join #room in &bitlbee to get it back. Even worse, the window number is unused until I rejoin the groupchat, and if I get a query from any network, that query nests in the unused spot, so I first have to move the query away and then move the rejoined groupchat back to its old window number. I want my groupchat window to resume after the reconnect just like every other IRC channel. How can I get this done? Any ideas?


  • Verification of downloaded package with rpm

    - by moooeeeep
    I wanted to install a package on CentOS 6 via rpm (e.g., the current epel-release). EDIT: Of course I would always prefer installation via yum, but somehow I failed to get that specific package installed using the normal approach. As such, the EPEL FAQ recommends Version 2. As I'm downloading the package through an insecure channel (http), I want to make sure that the integrity of the file is verified using information that is not provided with the downloaded file itself. Is that true for all of these approaches? I've seen various approaches to this on the internet:

        # Version 1
        rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

        # Version 2
        rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

        # Version 3
        wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
        rpm --import https://fedoraproject.org/static/0608B895.txt
        rpm -K epel-release-6-7.noarch.rpm
        rpm -i epel-release-6-7.noarch.rpm

    I do not know rpm very well, so I wondered how they might differ. My guess (after reading the manpage) is that the first should only be used when the package is not previously installed, the second would additionally remove previous versions of the package after installation, and the first two omit some verification steps that rpm -K performs before the actual installation. So my main questions at this point are: Are my guesses correct, or am I missing something? Is the rpm --import ... implicitly done for the first two approaches as well, and if not, isn't it necessary to do so after all? Are the additional checks performed by rpm -K ... relevant at all? What is the best (most secure, most reliable, most maintainable, ...) way of installing packages via rpm in general?
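
    An annotated sketch of the verify-then-install flow from Version 3, with what each step checks (same URLs as above):

        rpm --import https://fedoraproject.org/static/0608B895.txt   # trust the EPEL signing key
        wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
        rpm -K epel-release-6-7.noarch.rpm     # verify digests and the GPG signature against imported keys
        rpm -qip epel-release-6-7.noarch.rpm   # optional: inspect the package header before installing
        sudo rpm -ivh epel-release-6-7.noarch.rpm

    The key point is that the signing key comes over https from a different host than the package, which is the out-of-band information the question asks for; -i and -U on their own verify signatures only against keys that are already imported (otherwise they merely warn about a missing key).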


  • Error 503: Can't deploy Rails 3 app with Apache + thin (Bitnami Ruby stack)

    - by Pacu
    As you'll notice, I'm a bit of a noob at Rails. Here's the thing: I have an EC2 Bitnami RubyStack AMI running. I'm trying to deploy the sample project to be sure I'm doing the right thing, but I'm not getting anywhere at all -- I just get a 503 error. I'm following Bitnami's docs on thin + apache. Here are my files. The httpd.conf snippet I include in the main httpd.conf:

        Alias /sample "/home/bitnami/stack/projects/sample/public"
        <Directory "/home/bitnami/stack/projects/sample/public">
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        ProxyPass /sample balancer://appcluster
        ProxyPassReverse /sample balancer://appcluster
        <Proxy balancer://appcluster>
            BalancerMember http://127.0.0.1:3001/sample
            BalancerMember http://127.0.0.1:3002/sample
            BalancerMember http://127.0.0.1:3003/sample
            BalancerMember http://127.0.0.1:3004/sample
        </Proxy>

    The thin.yml file:

        chdir: /opt/bitnami/projects/sample
        environment: production
        address: 127.0.0.1
        port: 3000
        timeout: 30
        log: log/thin.log
        pid: tmp/pids/thin.pid
        max_conns: 1024
        max_persistent_conns: 512
        require: []
        wait: 30
        servers: 5
        prefix: /sample
        daemonize: true

    I'm able to start and stop Apache, but thin does not stop correctly. When I try to stop thin, I get this output:

        /opt/bitnami/projects/sample$ sudo thin -C config/thin.yml stop
        Stopping server on 127.0.0.1:3000 ...
        Can't stop process, no PID found in tmp/pids/thin.3000.pid
        Stopping server on 127.0.0.1:3001 ...
        Can't stop process, no PID found in tmp/pids/thin.3001.pid
        Stopping server on 127.0.0.1:3002 ...
        Can't stop process, no PID found in tmp/pids/thin.3002.pid
        Stopping server on 127.0.0.1:3003 ...
        Can't stop process, no PID found in tmp/pids/thin.3003.pid
        Stopping server on 127.0.0.1:3004 ...
        Can't stop process, no PID found in tmp/pids/thin.3004.pid

    I've tried nginx as well, without any luck unfortunately. Thank you for your time and help!
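
    A few sanity checks, as a sketch (paths are the ones from the question): the "no PID found" lines suggest the workers never started, and the 503 is Apache finding no live balancer member to hand the request to.

        cd /opt/bitnami/projects/sample
        sudo thin -C config/thin.yml start      # watch stdout for Ruby/Bundler errors
        ls tmp/pids/                            # PID files appear only if workers really came up
        tail -n 50 log/thin.log                 # thin's own log usually names the failure
        curl -I http://127.0.0.1:3001/sample    # a response here means Apache, not thin, is at fault

    It is also worth noting that the Apache config and thin.yml point at different directories (/home/bitnami/stack/projects/sample vs /opt/bitnami/projects/sample), which is easy to overlook when copying the Bitnami examples.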


  • Exporting only visible DataGridView columns to Excel

    - by Suresh E
    I need help exporting only the visible DataGridView columns to Excel. I have this code for hiding columns in the DataGridView:

        this.dg1.Columns[0].Visible = false;

    And then I have a button click event for exporting to Excel:

        // creating Excel Application
        Microsoft.Office.Interop.Excel._Application app = new Microsoft.Office.Interop.Excel._Application();
        // creating new WorkBook within Excel application
        Microsoft.Office.Interop.Excel._Workbook workbook = app.Workbooks.Add(Type.Missing);
        // creating new Excelsheet in workbook
        Microsoft.Office.Interop.Excel._Worksheet worksheet = null;
        // see the excel sheet behind the program
        app.Visible = true;
        // get the reference of first sheet. By default its name is Sheet1.
        // store its reference to worksheet
        worksheet = workbook.Sheets["Sheet1"];
        worksheet = workbook.ActiveSheet;
        // changing the name of active sheet
        worksheet.Name = "PIN korisnici";
        // storing header part in Excel
        for (int i = 1; i < dg1.Columns.Count + 1; i++)
        {
            worksheet.Cells[1, i] = dg1.Columns[i - 1].HeaderText;
        }
        // storing Each row and column value to excel sheet
        for (int i = 0; i < dg1.Rows.Count - 1; i++)
        {
            for (int j = 0; j < dg1.Columns.Count; j++)
            {
                worksheet.Cells[i + 2, j + 1] = dg1.Rows[i].Cells[j].Value.ToString();
            }
        }

    But I want to export only the visible columns, while this gives me all of them. Can anyone help with this?
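
    A sketch of the header/body loops rewritten to skip hidden columns by checking DataGridViewColumn.Visible (same worksheet and dg1 objects as above; untested against the rest of the form):

        int excelCol = 1;
        foreach (DataGridViewColumn col in dg1.Columns)
        {
            if (!col.Visible) continue;          // skip anything the grid is hiding

            worksheet.Cells[1, excelCol] = col.HeaderText;
            for (int row = 0; row < dg1.Rows.Count - 1; row++)
            {
                object value = dg1.Rows[row].Cells[col.Index].Value;
                worksheet.Cells[row + 2, excelCol] = value == null ? "" : value.ToString();
            }
            excelCol++;                          // only advance for columns we actually wrote
        }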


  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets hit with DDoS attacks. At the moment I have an off-site backup which has filled up with 1.7TB of data. I'm currently paying as much for the backup as I am for the server, and I was looking for advice from experienced people as to the best option from here. Would it be a viable solution to ditch the offsite backup and instead purchase an additional server which is an exact duplicate of the first, so that if the first server is down, users are re-routed to the second server without noticing the first is even down? This would create an automatic backup of the first server (albeit not offsite) and remove the need for the expensive offsite backup. Is the above a true substitute for the pricey backup, or is offsite backup absolutely necessary? And how would I go about doing this (obviously it's pretty complex, so just links to some reading material or the terminology for the procedure would be great)? Appreciate the help and advice.


  • What does the 'Burst Rate' stat mean in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which begs some questions... e.g. if this is the highest, then how did the benchmarking tool record the 103 MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in the hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks.


  • #550 5.1.1 RESOLVER.ADR.ExRecipNotFound; not found ##

    - by gtaylor85
    I've searched Server Fault and found this question pop up quite a bit. Unfortunately others' problems aren't exactly like mine, and because I'm a true beginner I wanted some more "specific to me" help, if you don't mind. I just set up a new computer for a user and copied over her auto-complete cache and archived emails. Her email, for the most part, works fine. But when she tries to send anything to [email protected] she gets the #550 5.1.1 error. If she uses the Exchange web app she does not have the issue. I can send email to BSMITH, and so can everyone else. The user, as far as I can tell from the EMC reports, is the only person having emails bounced back to them, and only from BSMITH. I have googled the crap out of this and attempted some of the solutions, to no avail. I've looked for the BSMITH account in the disabled accounts, and copied and attempted to add "IMCEAEX-_O=CHILD+20STUDY+20CENTER_OU=FIRST+20ADMINISTRATIVE+20GROUP_CN=RECIPIENTS_CN=BSMITH@mydomain.com" as an X500 email address. I honestly am just following instructions, though, and I don't really understand what it is I'm doing.

        Diagnostic information for administrators:
        Generating server: FS2.FS1D.local
        IMCEAEX-_O=CHILD+20STUDY+20CENTER_OU=FIRST+20ADMINISTRATIVE+20GROUP_CN=RECIPIENTS_CN=BSMITH@mydomain.com
        #550 5.1.1 RESOLVER.ADR.ExRecipNotFound; not found ##

    BTW I love this site and only found out about it a few weeks ago. My girlfriend now loves photo.stackexchange. So thanks for such a helpful community.
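
    A hedged sketch of the usual fix (Exchange Management Shell, 2010 SP1 or later syntax; this assumes the cached entry in the sender's Outlook points at BSMITH's old LegacyExchangeDN, which the IMCEAEX address above encodes). The X500 value is that IMCEAEX string converted per Microsoft's rules: "_" becomes "/" and "+20" becomes a space.

        Set-Mailbox BSMITH -EmailAddresses @{add="X500:/O=CHILD STUDY CENTER/OU=FIRST ADMINISTRATIVE GROUP/CN=RECIPIENTS/CN=BSMITH"}

    The quicker alternative is simply deleting BSMITH's stale entry from the sender's Outlook auto-complete list so Outlook re-resolves the address.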


  • Manual Http error response code in non-existent folder via routing

    - by Slytherin
    Apache server running on an Ubuntu-like Linux. I am getting unexpected behaviour when I try to send an error response manually. If my .htaccess is responsible for the error response, then the appropriate error document is loaded and displayed, with the corresponding response code in the browser console. However, if my router is the origin of the response code, then I get a blank screen, but the correct response code. The .htaccess looks like this:

        RewriteEngine On
        # RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L]

        ErrorDocument 404 /err/404.html
        ErrorDocument 403 /err/403.html
        ErrorDocument 500 /err/500.html

    The part of my router that sends the response is the following:

        header("HTTP/1.1 403 Forbidden");

    Trying this format didn't help either:

        header("HTTP/1.1 403 Forbidden", TRUE, 403);

    I also tried HTTP/1.0. Furthermore, I was thinking that maybe the relative path to the error page might be an issue, but I discarded this idea after successfully accessing a document that is forbidden via .htaccess. EDIT: I should also point out that this scenario happens when a URL for a non-existing article is requested. Is it possible that the server is looking for a .htaccess file in a folder based on the URL? E.g. for domain/blog/non-existent, is the server looking for a blog folder? I am specifically asking this because there is no blog folder.
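
    A sketch of the likely missing piece: Apache's ErrorDocument only kicks in for errors Apache itself generates, so when the PHP router sets the status it also has to emit the body. The paths below reuse the /err/ pages from the .htaccess.

        // In the router, instead of only sending the header:
        http_response_code(403);   // PHP 5.4+; otherwise header("HTTP/1.1 403 Forbidden");
        readfile($_SERVER['DOCUMENT_ROOT'] . '/err/403.html');
        exit;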


  • Do I need a helper column, or can I do this with a formula?

    - by dwwilson66
    I am using this formula:

        =IF((LEFT($B26,2)="<p"),0,IF($B26="",0,IF($F26<>"",0,(FIND("""../",$B26)))))

    to parse data similar to the following:

        <nobr>&nbsp;&nbsp;&nbsp;&nbsp;contractor information</nobr><br>
        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="../City_Electrical_Inspectors.htm"><b> City Electrical Inspectors</b></a><br>
        <nobr>&nbsp;&nbsp;&nbsp;&nbsp;<a href="../City_Electrical_Inspectors.htm"><b>inspection</b></a></nobr><br>

    My problem comes in cases such as the first line, in which the line is neither a new paragraph nor a link, and my FIND returns a #VALUE! error. I'd like to create an IF test that checks the line for the existence of the pattern in my FIND statement before running that statement. I figured that looking for an error condition may be the way to go. However, the only way I can envision this is as a self-referential formula, similar to the following pseudocode:

        IF(ISERROR($L26)=TRUE,$L26=0,L$26=the-result-of-the-formula-above)

    Can this be done with a formula, or do I need to use a new helper column? Thanks.
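
    A sketch using IFERROR (available in Excel 2007 and later), which wraps the existing formula so the #VALUE! case becomes 0 without a helper column:

        =IFERROR(IF(LEFT($B26,2)="<p",0,IF($B26="",0,IF($F26<>"",0,FIND("""../",$B26)))),0)

    On older versions the same effect needs the ISERROR pattern with the formula repeated: =IF(ISERROR(original),0,original).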


  • Host is missing hostname and/or domain

    - by anlawang
    I use Puppet 0.25.4 on Ubuntu 10.04. When Puppet was installed, I got the info below:

        Nov 29 10:30:30 puppet puppetmasterd[4422]: Host is missing hostname and/or domain: pclient.example.com
        Nov 29 10:30:30 puppet puppetmasterd[4422]: Compiled catalog for pclient.example.com in 0.02 seconds

    I don't know how to fix it; can anyone help? Thank you! My configuration (I used apt-get to install Puppet, so some of the configuration was already set up): puppet.conf on the client:

        [main]
        server=puppet.example.com
        logdir=/var/log/puppet
        vardir=/var/lib/puppet
        ssldir=/var/lib/puppet/ssl
        rundir=/var/run/puppet
        factpath=$vardir/lib/facter
        pluginsync=false
        templatedir=$confdir/templates
        prerun_command=/etc/puppet/etckeeper-commit-pre
        postrun_command=/etc/puppet/etckeeper-commit-post
        certname=pclient.example.com
        node_name=cert

        [puppetd]
        runinterval=30

    puppet.conf on the server:

        [main]
        logdir=/var/log/puppet
        vardir=/var/lib/puppet
        ssldir=/var/lib/puppet/ssl
        rundir=/var/run/puppet
        factpath=$vardir/lib/facter
        pluginsync=true
        templatedir=$confdir/templates
        prerun_command=/etc/puppet/etckeeper-commit-pre
        postrun_command=/etc/puppet/etckeeper-commit-post

    I use the default node in site.pp. I am new to Puppet, so I don't know the reason for these problems. Thank you again!
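
    A hedged sketch of the usual checks on the client: the warning generally means the agent's facts do not yield a fully qualified hostname plus domain for the node (the /etc/hosts line is only an example).

        hostname -f                    # should print pclient.example.com, not just pclient
        facter fqdn hostname domain    # what puppet/facter actually sees
        cat /etc/resolv.conf           # a "domain" or "search" entry supplies the domain part
        # or pin it in /etc/hosts, e.g.:
        # 192.168.1.10  pclient.example.com  pclient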


  • PHP Not Automatically Updating With ?reload=1

    - by user32007
    A friend built a ranking system on his site and I am trying to host it on mine via WordPress and GoDaddy. It updates for him, but when I load it on my site it works for 6 hours, and as soon as the reload is supposed to occur it errors out and I get a 500 timeout error. His page is at jeremynoeljohnson .com/yakezieclub. My page is currently at http://sweatingthebigstuff.com/yakezieclub, but when you add ?reload=1 it will give the error. Any idea why this might be happening? Any settings that I might need to change? Here is the top of the index.php file. I'm not sure which part of it is messing up; I literally uploaded the same code as him. Here's the reload part:

        $cachefile = "rankings.html";
        $daycachefile = "rankings_history.xml";
        $cachetime = (60 * 60) * 6; // every 6 hours, the cache refreshes
        $daycachetime = (60 * 60) * 24; // every 24 hours, the history will be written to - or whenever the page is requested after 24 hours has passed
        $writenewdata = false;
        if (!empty($_GET['reload'])) {
            if ($_GET['reload'] == 1) {
                $cachetime = 1;
            }
        }
        if (!empty($_GET['reloadhistory'])) {
            if ($_GET['reloadhistory'] == 1) {
                $daycachetime = 1;
                $cachetime = 1;
            }
        }
        if (file_exists($daycachefile) && (time() - $daycachetime < filemtime($daycachefile))) {
            // Do nothing
        } else {
            $writenewdata = true;
            $cachetime = 1;
        }
        // Serve from the cache if it is younger than $cachetime
        if (file_exists($cachefile) && (time() - $cachetime < filemtime($cachefile))) {
            include($cachefile);
            echo "";
            exit;
        }
        ob_start(); // start the output buffer
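
    A small diagnostic sketch, since a 500 on ?reload=1 usually means the rebuild itself is failing or overrunning the host's limits rather than the caching logic above (these lines are for debugging only, and removing them afterwards is assumed):

        // temporarily, at the very top of index.php
        ini_set('display_errors', '1');
        error_reporting(E_ALL);
        set_time_limit(300);   // the ranking rebuild may exceed the default max_execution_time

    The server's error log should then name the real failure.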


  • Concurrent users with Quickbooks?

    - by airietis
    I work at a company with 3 people who regularly use the same QuickBooks file. However, they work remotely on different networks. I need to implement a solution that allows all three of us to access QuickBooks at the same time remotely (and each make changes at the same time). We have a spare desktop PC that can be utilized as a server. So, my question is: what is the cheapest and most hassle-free solution to this problem? I've considered application cloud hosting; however, it is very expensive ($40 per user a month) and we are on a tight budget. Is it possible to install QuickBooks on my own server and have them connect to it remotely? If so, what is the best way to accomplish this? Remote Desktop Protocol? Or is there a built-in feature for this in QuickBooks Premier 2013? EDIT: As MDMarra mentioned, I am looking for a solution that offers true simultaneous access. Would using a dedicated server and having users connect over a VPN be a viable solution?


  • Debian 100% cpu every 30 minutes but not loggable?

    - by user654123
    I have a Debian 7 x64 machine running at DigitalOcean that hits 100% CPU usage for about a minute every 30 minutes. A couple of days ago it stayed there for a couple of hours, so the server finally crashed and I had to repair my MySQL databases. The server is a pure web server running Apache2 and MySQL. I tried tracing which processes use the CPU, but with no luck. The script I used:

        #!/bin/sh
        while true; do
            ps -A -eo pcpu,pid,user,args | sort -k 1 -r | head -3 >> proclog.txt;
            echo "\n" >> proclog.txt;
            sleep 2;
        done

    I was monitoring htop as well while this was happening, but the top processes' CPU usage didn't even add up to 15%, even though htop's CPU meter showed a constant 100%. htop was configured to show all users' processes, and both user and kernel threads. Edit: by stopping Apache2 and MySQL prior to the expected 100% usage, I can tell that neither is responsible for it; the 100% usage occurred anyway. This is what the graph looked like over the past hours:
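
    A sketch of checks for CPU time that a per-process listing can miss, such as kernel threads, iowait, or "steal" from a busy neighbour on the droplet's host, plus the obvious 30-minute suspect, cron:

        vmstat 5 12                   # watch the us/sy/wa/st columns across one spike
        pidstat 5 12                  # per-process %CPU sampled over time (package: sysstat)
        grep -r . /etc/cron.d/ /etc/crontab 2>/dev/null   # anything scheduled every 30 minutes?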


  • can't install anything anymore with apt-get

    - by Aymane Shuichi
    Welcome. This is the log I get when trying to install anything (php5-fpm here, after removing it):

        apt-get install php5-fpm
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-fpm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up php5-fpm (5.4.4-14+deb7u10) ...
        insserv: warning: script 'S55IptabLes' missing LSB tags and overrides
        insserv: warning: script 'S55IptabLex' missing LSB tags and overrides
        insserv: There is a loop between service IptabLes and mountnfs if started
        insserv: loop involving service mountnfs at depth 8
        insserv: loop involving service networking at depth 7
        insserv: loop involving service mountnfs-bootclean at depth 10
        insserv: There is a loop between service rc.local and mountall if started
        insserv: loop involving service mountall at depth 6
        insserv: loop involving service checkfs at depth 5
        insserv: loop involving service kbd at depth 11
        insserv: There is a loop between service rc.local and mountall-bootclean if started
        insserv: loop involving service mountall-bootclean at depth 7
        insserv: loop involving service urandom at depth 9
        insserv: There is a loop between service IptabLes and mountdevsubfs if started
        insserv: loop involving service mountdevsubfs at depth 2
        insserv: loop involving service udev at depth 1
        insserv: There is a loop at service rc.local if started
        insserv: There is a loop at service IptabLes if started
        insserv: Starting IptabLes depends on rc.local and therefore on system facility `$all' which can not be true!
        (x99 times repeated)
        insserv: Max recursions depth 99 reached
        insserv: loop involving service postfix at depth 2
        insserv: There is a loop between service IptabLes and udev if started
        insserv: loop involving service mountkernfs at depth 1
        insserv: loop involving service IptabLes at depth 1

    And now here is the error I get:

        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing php5-fpm (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         php5-fpm
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The biggest operation I performed before this was upgrading nginx from 1.2 to 1.6, following this guide: How to upgrade nginx from 1.2 to 1.6 on debian 7. Please help!
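
    A sketch of where to look next: the loop involves the 'S55IptabLes' and 'S55IptabLex' init scripts, which carry no LSB header and are not shipped by any stock Debian package, so insserv cannot order the boot sequence.

        ls -l /etc/init.d/ | grep -i iptab        # find the exact script names
        dpkg -S /etc/init.d/S55IptabLes           # "no path found..." means no package owns it
        less /etc/init.d/S55IptabLes              # see what it actually starts
        # once inspected, either add proper LSB header blocks or drop them
        # from the boot order so insserv can finish, then retry the install:
        update-rc.d -f S55IptabLes remove
        update-rc.d -f S55IptabLex remove
        apt-get -f install

    Scripts with these names are not part of Debian or of an nginx upgrade, and similar names have been reported as dropped by Linux malware, so it is worth reading their contents before deleting anything.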


  • Prevent 'Run-time error '7' out of memory' error in Excel when using macro

    - by MasterJedi
    I keep getting this error whenever I run a macro in my Excel file. Is there any way I can prevent this? My code is below. Debugging highlights the following line as the issue:

        ActiveSheet.Shapes.SelectAll

    My macro:

        Private Sub Save()
            Dim sh As Worksheet
            ActiveWorkbook.Sheets("Report").Copy 'Create new workbook with Sheets("Report"(2)) as only sheet.
            Set sh = ActiveWorkbook.Sheets(1) 'Set the new sheet to a variable. New workbook is now active workbook.
            sh.Name = sh.Range("B9") & "_" & Format(Date, "mmyyyy") 'Rename the new sheet to B9 value + date.
            With sh.UsedRange.Cells
                .Value = .Value 'eliminate all formulas
                .Validation.Delete 'remove all validation
                .FormatConditions.Delete 'remove all conditional formatting
                ActiveSheet.Buttons.Delete
                ActiveSheet.Shapes.SelectAll
                Selection.Delete
                lrow = Range("I" & Rows.Count).End(xlUp).Row 'select rows from bottom up to last containing data in column I
                Rows(lrow + 1 & ":" & Rows.Count).Delete 'delete rows with no data in column I
                Application.ScreenUpdating = False
                .Range("A410:XFD1048576").Delete Shift:=xlUp 'delete all cells outwith report range
                Application.ScreenUpdating = True
                Dim counter
                Dim nameCount
                nameCount = ActiveWorkbook.Names.Count
                counter = nameCount
                Do While counter > 0
                    ActiveWorkbook.Names(counter).Delete
                    counter = counter - 1
                Loop 'remove named ranges from workbook
            End With
            ActiveWorkbook.SaveAs "\\Marko\Report\" & sh.Name & ".xlsx" 'Save new workbook using same name as new sheet.
            ActiveWorkbook.Close False 'Close the new workbook.
            MsgBox ("Export complete. Choose the next ADP in cell B9 and click 'Calculate'.") 'Display message box to inform user that report has been saved.
        End Sub

    I'm not sure how to make this more efficient or how to prevent the error.
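
    A sketch of one change that often avoids the error: delete the shapes directly instead of selecting them all first, since the large selection is what the highlighted line builds (this would replace the Shapes.SelectAll / Selection.Delete pair above):

        ' Delete every shape on the copied sheet without using Selection
        Dim shp As Shape
        For Each shp In sh.Shapes
            shp.Delete
        Next shp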


  • Trying to set up multiple primary partitions on Ubuntu Linux [migrated]

    - by JohnMerlino
    I currently have Ubuntu Desktop installed on a hard drive. I want to partition the drive so that I can reserve 30 gigs for Ubuntu Server and 30 gigs for Ubuntu Desktop; the drive has 300 gigs available. Right now I am booting from the DVD drive and installing Ubuntu Server. I selected "Guided partitioning" and created a 30 gig primary partition with an ext4 journaling filesystem, set "yes, format it" for the format option, and set the bootable flag to on. I intend to use this 30 gig partition to hold Ubuntu Server and allow me to boot from it. Now I have two other partitions. They are both set to "logical"; one is currently using 285.8 gigs and is ext4 (when I try to set its bootable flag to true, it gives the warning "You are trying to set the bootable flag on a logical partition. The bootable flag is only useful on primary partitions"). More alarmingly, it says "No existing file system was detected in this partition". Actually, I'm thinking that this is the partition that is supposed to be holding my current Ubuntu Desktop, and of course I want it to be bootable and be a primary partition, so I could dual boot from it and the server partition. The third partition is also set to logical and is being used as swap space. My question is about that second partition: it's supposed to be a primary partition holding my existing Ubuntu Desktop edition. How do I switch it to primary and make sure that it's pointing to my existing desktop installation?

