Search Results



  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use when they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}
            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}" | mysql -B -r $conn $sourcedb | tail -n +2 | cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all INNODB tables. Thanks!
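
    One commonly suggested alternative worth sketching here (my addition, not from the question): if the server is MySQL 5.6+ with innodb_file_per_table enabled, transportable tablespaces copy each table's .ibd file instead of re-inserting every row, which is far faster for large tables. A rough, untested sketch with placeholder names:

        -- Assumes MySQL 5.6+ and innodb_file_per_table=1; names are placeholders.
        CREATE TABLE destdb.t LIKE sourcedb.t;
        ALTER TABLE destdb.t DISCARD TABLESPACE;
        FLUSH TABLES sourcedb.t FOR EXPORT;   -- quiesces the table, writes t.cfg
        -- from a shell: cp /var/lib/mysql/sourcedb/t.{ibd,cfg} /var/lib/mysql/destdb/
        UNLOCK TABLES;
        ALTER TABLE destdb.t IMPORT TABLESPACE;

    On older servers, an in-server INSERT ... SELECT as in the script above is usually already the fastest option.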


  • Web Hosting: Any web host that supports more than 50,000 files?

    - by Devner
    Hi all. For my PHP & MySQL based application, I am trying to buy website hosting from a host that does not limit the number of files I can carry in my hosting account. Almost all hosts set a common limit of 50,000 files (some call it 50,000 nodes), and the rest (to the extent of my search) are not even close. I have gone through various websites, googled a lot of information, and spoken with the customer service of the hosting companies; they said that they have a limit of 50,000 files, and that's why they call it the LIMIT. Now, my application is a kind of social networking website where people can upload various files of varying size. So if 50,000 users were to join the website and upload 1 file each, the limit of 50,000 would be reached very easily, and my 50,001st customer would start facing file upload problems (& so would my account). So I would like to know if there are any website hosting services that do NOT levy such restrictions. In summary, I need the following options:

    - No maximum file limit (more than 50,000 files in the account).
    - No maximum file upload limit in the server settings (10MB, 12MB, 15MB, 20MB, etc.).
    - Ability to upload files of various types (zip, flv, jpg, png, etc.).
    - Ability to stream audio and video (live audio & video not necessary).
    - Access to .htaccess.
    - Access to php.ini, my.cnf or my.ini (this would be a plus).
    - Supports SSL.
    - Provides dedicated hosting (& IP) as well.
    - Monthly payments without contracts are a plus.

    If you know of any such website hosting services, please post a reply (a link to the same will be appreciated). Thank you.


  • SQL Server 2005 to 2008 DB attach help please!

    - by Brandon
    I have SQL Server 2005 Standard on my personal machine, where I created a very big DB, about 21 GB. I made a backup and transferred the .bak file via an FTP program to my dedicated server, which runs SQL Server 2008 Enterprise Edition. I tried to restore the transferred .bak file but got an error. I posted the error on here and was told the database is corrupt. How? I don't know. The connection was not interrupted during the FTP transfer, and the DB works on my own machine. So then I detached the DB on my own machine and transferred the .mdf and .ldf files to my dedicated server, again through FTP and again with no interruptions. Now when I try to attach the DB I get this error:

        The header for file 'DB.mdf' is not a valid database file header. The FILE SIZE property is incorrect. (Microsoft SQL Server, Error: 5172)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.1442&EvtSrc=MSSQLServer&EvtID=5172&LinkId=20476

    I already used up 21 GB transferring the .bak file, and now another 21 transferring the .mdf plus the .ldf file. Please tell me there's a solution. The DB detaches and attaches fine on my machine in SQL Server 2005, but not in SQL Server 2008 on my server.
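
    One thing worth ruling out (my assumption, not something the poster confirmed): many FTP clients default to ASCII transfer mode, which silently rewrites line-ending bytes and corrupts binary files such as .bak/.mdf in exactly this way, without any visible interruption. Forcing binary (image) mode before the transfer looks like:

        ftp> binary
        200 Type set to I.
        ftp> put DB.bak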


  • Bash shell prompt: where is $RET?

    - by Evgeni Sergeev
    I was reading https://wiki.archlinux.org/index.php/Color_Bash_Prompt and ended up with the following:

        # Stores the status of each command in $RET
        PROMPT_COMMAND='RET=$?;'

        # A colour.
        RED_SHELL='\e[0;36m'

        # Prints "Status 1" if RET is 1, for example.
        RET_VISUALISE='$(if [[ $RET != 0 ]]; then echo -ne "Status \[$RED_SHELL\]$RET\n" && RET=0; fi;)'

        # What to print for each prompt.
        PS1="$RET_VISUALISE\[\e]0;\w\a\]\n\[\e[32m\]\u@\h \t \[\e[33m\]\w\[\e[0m\]\n\$ "

    This does almost what I want, except when I press Enter, Enter, Enter multiple times after a command that returned a status != 0. In this case it prints "Status 1" every time I press Enter. This is what the && RET=0; part was supposed to get rid of. Also, I don't understand why env | grep RET only shows the PS1 contents. What is the scope of $RET?
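
    On the env question, a known bit of shell behaviour (general bash semantics, not specific to this prompt): env lists only exported variables, and RET is set by PROMPT_COMMAND as an ordinary shell variable, so it is visible to the shell and to the $( ... ) expansion in PS1 but never enters the environment; the grep only matches the literal $RET text embedded in the exported PS1 string. A quick sketch:

        $ RET=5              # plain shell variable, not exported
        $ env | grep RET     # matches only the $RET text inside the PS1 string
        $ export RET
        $ env | grep RET     # now RET=5 appears as well

    Relatedly, the RET=0 inside $( ... ) runs in a subshell, which would explain why it cannot reset RET for the next prompt.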


  • DVI splitter not working as expected/confusion between DVI-D and -I

    - by Freakishly
    Hey guys, thanks for looking. I have an ATI FirePro™ V3700 in my desktop machine, and I have been running a dual-monitor setup quite effortlessly, thanks to the two DVI ports on the card. I came upon a third monitor and wanted to extend my desktop to 3 screens, so I purchased a DVI splitter from Amazon. Now I can only duplicate the second monitor onto the third, not extend it. I've tried all possible combinations of input to no avail. Here's the setup:

    - The ATI FirePro™ V3700 has two Dual-Link DVI-I outputs.
    - The splitter splits a single Dual-Link DVI-I port into two Dual-Link DVI-I outputs.
    - Two of the monitors are NEC E222W, and the third monitor is a Dell 2001FP. Each monitor has one D-Sub and one Dual-Link DVI-D input.
    - Cables going from the video card to the monitors are two Dual-Link DVI-D to the NECs and one Single-Link DVI-D to the Dell.

    Is the problem likely with the DVI-D/DVI-I mismatch? Or is it with the cable on the Dell that is only Single-Link? The cables are easily replaceable, the monitors not so much. Thanks for your time, I really appreciate it.

    http://www.amd.com/us/products/workstation/graphics/ati-firepro-3d/v3700/Pages/v3700-specs.aspx
    http://www.amazon.com/Cables-Unlimited-DVI-D-Splitter-PCM-2260/product-reviews/B000H09RFM/ref=dp_top_cm_cr_acr_txt?ie=UTF8&showViewpoints=1
    www dot newegg dot com/Product/Product.aspx?Item=N82E16824002495
    accessories dot us dot dell dot com/sna/PopupProductDetail.aspx?cs=19&l=en&c=us&sku=320-1578

    Apologies for the fudged links, I'm new here and they won't let me post more than two :P


  • Nginx flv audio pseudo stream works but video is not loading

    - by sarah
    I am working on a development server for a company & they want the nginx webserver to work with it. Their requirements are that it should be capable of the following: hotlink protection, mp4 & flv pseudo-streaming, and secure streaming. Nginx fulfills these requirements, and I have been configuring their server for the past 2 days. As I am new to this field, I've only achieved hotlinking prevention in those 2 days. The problem I am stuck on is flv pseudo-streaming: getting mp4 pseudo-streaming to work was easy, but flv has me completely stuck. I converted my flv videos with the flvmdi tool to insert many keyframes, but when I try to seek the video from one of the keyframes flvmdi generated, e.g. test.flv?start=2681223, the video does not load, although the audio pseudo-stream works fine. So it seems there is no problem with my flv configuration in nginx.conf. The guide I used to compile my nginx-1.2.1 is http://h264.code-shop.com/trac/wiki/Mod-H264-Streaming-Nginx-Version2, adding the additional module --with-http_flv_module. This forum is really active, so I hope you guys can give me some guidance to resolve my problem.
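
    For reference, a minimal sketch of the flv location block this setup implies (assuming nginx was indeed built with --with-http_flv_module; the root path is a placeholder):

        location ~ \.flv$ {
            flv;                    # honours the ?start=<byte offset> argument
            root /var/www/videos;
        }

    Note that the start value is a byte offset that must line up with a keyframe position recorded in the injected metadata, which is why the metadata injection step matters for seeking.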


  • Diagnose remote desktop freezes in Windows 7 when no BSOD?

    - by Paul Smith
    Okay, I'm getting no joy from Asus or Microsoft on this, so I'm hoping for some clues on how to narrow down the cause. I have very frequent OS freezes, always & only when running the Remote Desktop Client (mstsc) in Windows 7 x64. I never get a bluescreen, and there is never a minidump. The display & input just freeze -- no keyboard, no mouse, and sound will just continue the last wavelength, if any. So far I can't find a way to trap the hang given that there's no bluescreen; the advanced startup & recovery settings for system failure are "Write an event" checked, "Automatically restart" checked, and "Kernel memory dump". I've updated to the latest BIOS and tried a few different graphics drivers, both generic & ATI. I've also tried disabling Aero and everything about the remote desktop experience (incrementally unchecked every box in the mstsc - options - experience tab), and even disabled/unplugged the external monitor to make sure it wasn't a dual-monitor issue. My specs are:

    - Asus G73jh notebook
    - 8GB RAM
    - ATI Mobility Radeon HD 5800 Series graphics (recently tried driver versions 8.791.0.0, 8.801.0.0)
    - American Megatrends G73jh.211 BIOS (7/27/2010)
    - Windows 7 Home Premium x64

    Windows Memory Diagnostic passed all of the following at least 3 times with no errors: MATS+, INVC, LRAND, Stride6, WMATS+, WINVC. This notebook is better than most at removing heat (laudable vent design), so I'm not inclined to suspect thermal causes, especially since running 1080p video for hours has never caused a freeze, but mstsc does, reliably, within 5 minutes to an hour. This did seem to start happening after a Windows Update, but I've since reverted every patch applied since a week before the first occurrence, with no joy. (And I'd only had the PC for a couple of weeks before that, so it could have been chance + less actual time spent remoting at the beginning.) I'm at my wits' end, and I bought this laptop primarily as a remote terminal client (go figure, right?). Any ideas on how to identify the cause of this? Thanks!


  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/tshirtshop
            <Directory /var/www/tshirtshop>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

        <IfModule mod_rewrite.c>
            # Enable mod_rewrite
            RewriteEngine On
            # Specify the folder in which the application resides.
            # Use / if the application is in the root.
            RewriteBase /tshirtshop
            #RewriteBase /
            # Rewrite to correct domain to avoid canonicalization problems
            # RewriteCond %{HTTP_HOST} !^www\.example\.com
            # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
            # Rewrite URLs ending in /index.php or /index.html to /
            RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
            RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
            # Rewrite category pages
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
            # Rewrite department pages
            RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
            RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
            # Rewrite subpages of the home page
            RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
            # Rewrite product details pages
            RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
        </IfModule>

    The site works on localhost, but as if no .htaccess rules were specified: if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. What is the mistake, if anyone can point it out in the above configuration, or what else do I need to check?

        $ sudo apache2ctl -M
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         auth_basic_module (shared)
         authn_file_module (shared)
         authz_default_module (shared)
         authz_groupfile_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         autoindex_module (shared)
         cgi_module (shared)
         deflate_module (shared)
         dir_module (shared)
         env_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         php5_module (shared)
         reqtimeout_module (shared)
         rewrite_module (shared)
         setenvif_module (shared)
         status_module (shared)
        Syntax OK

    I am using Apache2 on Ubuntu 12.04.
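
    One way to watch what mod_rewrite is actually doing here, sketched for Apache 2.2 (an assumption based on Ubuntu 12.04; Apache 2.4 replaced these directives with LogLevel rewrite:traceN): add the following to the vhost, not the .htaccess, then tail the log while requesting /tshirtshop/nature-d2:

        RewriteLog ${APACHE_LOG_DIR}/rewrite.log
        RewriteLogLevel 3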


  • umask seems to vary by user

    - by paullb
    I've got a development Ubuntu system for which I have several users: myself (with full sudo) and about 5 other users. (I've set the system up so that everything in this respect is still at its default setting.) I'm trying to arrange things so that multiple people can collaborate in a single directory by using groups, and I want the default permissions to be 664. However, when some users edit files, the permissions come out as 644. After a lot of investigating: most users have a umask (checked at the prompt) of 0002, and when they create files they are 664 (as expected), but there are 2 users (myself and one other) who have a 0022 umask, so the files that come out are 644 and nobody else can write to them. I've looked everywhere but can't figure out why a couple of users wind up with a different umask (e.g. there is nothing in .bash_profile or anything like that). Any ideas as to the source of the discrepancy?

    /etc/bashrc:

        if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
            umask 002
        else
            umask 022
        fi

    /etc/profile:

        if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
            umask 002
        else
            umask 022
        fi

    EDIT: My (bad) ~/.bashrc:

        # .bashrc

        # Source global definitions
        if [ -f /etc/bashrc ]; then
            . /etc/bashrc
        fi

        # User specific aliases and functions
        export LANG=en_US.utf8

    Other user's (good) .bashrc:

        # .bashrc

        # Source global definitions
        if [ -f /etc/bashrc ]; then
            . /etc/bashrc
        fi

        # User specific aliases and functions
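
    Given the snippets above, the 0022 accounts would be explained if they fail the "primary group name equals user name" test in that umask logic (this is my reading of the quoted scripts, not something confirmed in the post). A quick check to run as each affected user; the example names are hypothetical:

        $ id -un    # login name
        paullb
        $ id -gn    # primary group name: if this differs from the login name
        users       # (or $UID is <= 199), the scripts above pick umask 022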


  • How to access original files from before a symlink gets updated, which have since been moved to another dir

    - by Luke Cousins
    We have a website, and our deployment process goes somewhat like the following (with lots of irrelevant steps excluded):

        echo "Remove previous, if it exists, we don't need that anymore"
        rm -rf /home/[XXX]/php_code/previous

        echo "Create the current dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/php_code/current

        echo "Create the var_www dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/var_www

        echo "Copy current to previous so we can use temporarily"
        cp -R /home/[XXX]/php_code/current/* /home/[XXX]/php_code/previous/

        echo "Atomically swap the symbolic link to use previous instead of current"
        ln -s /home/[XXX]/php_code/previous /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

        # Rsync latest code into the current dir, code not shown here

        echo "Atomically swap the symbolic link to use current instead of previous"
        ln -s /home/[XXX]/php_code/current /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

    The problem we are having, and would like help with, is that the first thing any website page load does is work out the base dir of the application and define it as a constant (we use PHP). If a deployment occurs during that page load, the system tries to include() a file using the original full path and will get the new version of that file. We need it to get the old one from the old dir, which has now moved, as in:

    1. System starts the page load and determines the SYSTEM_ROOT_PATH constant to be /home/[XXX]/var_www/live, or, by using PHP's realpath(), it could be /home/[XXX]/php_code/current.
    2. The symlink /home/[XXX]/var_www/live gets updated to point to /home/[XXX]/php_code/previous instead of /home/[XXX]/php_code/current, where it pointed originally.
    3. The system tries to load /home/[XXX]/var_www/live/something.php and gets /home/[XXX]/php_code/current/something.php instead of /home/[XXX]/php_code/previous/something.php.

    I'm sorry if that is not explained very well. I'd really appreciate some ideas on how to get around this problem if someone can. Thank you.
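
    A sketch of one common way around this (my suggestion, not from the post): resolve the symlink once, at the very start of the request, and build every subsequent include from the resolved physical path, so a mid-request swap cannot mix the two trees:

        <?php
        // Resolve /var_www/live to whichever tree it points at *right now*;
        // all later includes reuse this frozen path, not the symlink.
        define('APP_ROOT', realpath('/home/[XXX]/var_www/live'));
        require APP_ROOT . '/something.php';

    This only helps if the tree being swapped away from stays on disk for the lifetime of in-flight requests, which the previous/current scheme above already provides.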


  • Separate external and intranet portals using the same functions .htaccess

    - by jezzipin
    We are currently struggling to set up rules in a .htaccess file for a website built on our company's product. The product is built using PL/SQL, and its procedures can be accessed using URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags. So the tag [user_menu] is always replaced with:

        /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for external sites, and:

        /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for internal sites. The issue we are having is twofold. First, we need to write our .htaccess rules so that users can access the functionality whether they are internal or external. The links should work as follows:

        http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    or:

        http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    This shows the other problem: as you can see from the internal link above, the procedure needs to be prefixed with "internal" instead of "intranet". We cannot change this in our standard tags, as that would affect other sites, so we need to achieve this using .htaccess as well. Could anyone assist with this issue? I apologise if this is brief or confusing, but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code in this post; I am a front-end developer with no prior experience of .htaccess, so please bear with me.
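
    As a starting point, a hedged, untested sketch of the kind of rule this seems to call for (the pattern and flags are my guesses, not taken from the product's documentation): internally map the /internal/ prefix onto the /intranet/ procedure path while leaving external URLs untouched:

        RewriteEngine On
        # /internal/wd_portal_cand.menu?... -> /intranet/wd_portal_cand.menu?...
        RewriteRule ^internal/(wd_portal_.*)$ /intranet/$1 [PT,QSA,L]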


  • Is it better to always copy and delete, rather than move?

    - by nbolton
    Generally speaking, I find myself panicking when I realise that if I cancel a file move, it could leave the target or source incomplete. This question applies to Windows and Unix-based platforms. I can never remember exactly how the move command works in either case. For example, if you're moving a directory, does it copy the entire directory and then delete it afterwards, or does it copy then delete each file individually? I always realise after typing something like:

        mv verybigdir dest

    that I really should have typed:

        cp -R verybigdir dest && rm -r verybigdir

    (where the && operator only moves on to the next command if the first was successful) -- or is this pointless? What happens exactly when I press Ctrl+C half way through a move? Likewise, what exactly happens on Windows when I press the cancel button? I can't count the number of times I've moved something (the last time was when using svn) and ended up with two directories with split contents. I guess the answer is difficult, because not all applications move groups of files in the same way.
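
    For what it's worth, the underlying behaviour on Unix-like systems is well defined: within a single filesystem, mv is one atomic rename() call that copies no data, so cancelling it cannot split a directory's contents; only a move across filesystems degrades into copy-then-delete, file by file, and that is the window where an interrupt leaves both sides partial. A sketch (paths are illustrative):

        # Same filesystem: a single rename(2) -- atomic and instant, even for huge trees
        mv /data/verybigdir /data/dest

        # Across filesystems (e.g. onto another mount): copy each file, then unlink
        # the source; interrupting *this* one can leave split contents
        mv /data/verybigdir /mnt/backup/dest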


  • How to resume XMPP groupchat window in Irssi (using bitlbee)?

    - by mcnesium
    I use Bitlbee to chat on XMPP networks within my IRC client, Irssi. This works great so far, and recently I started using XMPP multi-user chats as an alternative to IRC channels. I set up a channel using chat add <account> <room@server> in the &bitlbee control window, set chan <room> set autojoin true, and entered /join #room in the &bitlbee window to join that groupchat. It then appears as a unique Irssi window in the status bar. This seems to work OK too, but with one exception: since I idle in the channels 24/7, my Irssi has to cope with the every-night 24h DSL disconnection by the ISP. After it automatically reconnects, it does kind of rejoin the XMPP groupchat, but the traffic of the groupchat does not go back to the unique Irssi window; instead it keeps flooding &bitlbee with messages from root telling me about a Groupchat Message from unknown JID <jid>: <message> -- which is the traffic of the groupchat. The unique groupchat window is gone after the reconnect, and I will again have to go /join #room in &bitlbee to get it back. Even worse, the window number is unused before I rejoin the groupchat, and if I get a query from any network, the window nests in that unused spot, so I will first have to remove that query from the spot and then move the rejoined groupchat to that window number. I want my groupchat window to resume after the reconnect just like every other IRC channel. How can I get this done? Any ideas?


  • Never getting a JSON response from a server-side PHP proxy script, but I do with others

    - by Dohk
    I'm on PHP 5.3.4 and Apache 2.2, by the way. I'm using (or trying to use) Simple PHP Proxy. When I enter a URL on the author's example page, it works fine: I see the JSON response and all the headers. However, when I copy the exact same request, changing only the host to localhost, I get both empty headers and no JSON. Assuming that the script on his site is the same one I downloaded, could this be due to a multitude of things, or a setting in Apache and/or php.ini? For example, this returns a ton of info:

        benalman.com/code/projects/php-simple-proxy/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

    Now, changing to localhost:

        http://localhost/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

        {"headers":[],"status":{"url":"https:\/\/github.com\/","content_type":"text\/html","http_code":301,"header_size":194,"request_size":182,"filetime":-1,"ssl_verify_result":0,"redirect_count":1,"total_time":0.094,"namelookup_time":0,"connect_time":0.047,"pretransfer_time":0,"size_upload":0,"size_download":185,"speed_download":1968,"speed_upload":0,"download_content_length":185,"upload_content_length":0,"starttransfer_time":0,"redirect_time":0.047,"certinfo":[]},"contents":null}

    I even went basic and just used some bare curl calls, and of course got empty objects back, other than false for my content and the URL I set in my JSON. Any help is deeply appreciated, as are any ideas.
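
    One detail in that JSON worth chasing: http_code is 301 and contents is null, i.e. the local curl request stops at github.com's redirect instead of following it. A hedged test sketch (my hypothesis, not confirmed by the post: PHP 5.3 refuses CURLOPT_FOLLOWLOCATION when safe_mode or open_basedir is set, which a local setup may have enabled):

        <?php
        $ch = curl_init('http://github.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        // Returns false with a warning if safe_mode/open_basedir blocks it:
        var_dump(curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true));
        var_dump(curl_exec($ch), curl_error($ch));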


  • How to continue an HTTrack mirroring session from the command line?

    - by isme
    I want to drive my mirroring project from the Command Prompt instead of the WinHTTrack interface, so that I can script and schedule the mirroring session more easily. The output of httrack --help gives a simple command for continuing an interrupted mirroring session:

        example: httrack --continue
        continues a mirror in the current folder

    When I try httrack --continue in my HTTrack project folder, all I get is output like this:

        Example: -%F "<!-- Mirrored from %s by HTTrack Website Copier/3.x [XR&CO'2010], %s -->"
        * Option %F needs to be followed by a blank space, and a footer string

    With each parameter on a new line for readability, the first line of my doit.log file looks like this:

        -qiC1%P0s0b0u1j0%s%u0N0%I0p1DaK0c1T30H0%kf2E1800A25000%c0.1%f#f
        -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        -%F ""
        -%l "en, en, *"
        http://saa.gov.uk/search.php?SEARCHED=1&SEARCH_TABLE=council_tax&SEARCH_TERM=City+of+Edinburgh&DISPLAY_COUNT=100
        -O1 "C:\\Users\\Iain\\Projects\\Council Tax Analysis\\Code\\HTTrack\\Council Tax Valuation List"
        -* \
        +*search.php?SEARCHED=1*
        -*DISPLAY_MODE=FULL*

    The parameter %F "" should tell HTTrack to use an empty footer. I used the WinHTTrack interface to create the project and start the mirroring session, and I can interrupt and continue the mirroring session using the interface. The HTML files saved by WinHTTrack have no footer.


  • Is there any reason this cronjob would fail in cron, but not on the command line?

    - by Treffynnon
    I have written a little one-liner that will email me when a list of files changes -- I used sha512sum to generate a list of hashes and then periodically check that those hashes still match:

        */5 * * * * /usr/bin/sha512sum --status -c /sha512.sumlist && echo "Success" > /dev/null || echo "Check robots.txt and index.html in /var/www as staging sites are now potentially exposed to the world and the damned googlebot" | /usr/bin/mail -s "Default staging server files have changed" [email protected]

    It works fine on the command line with:

        /usr/bin/sha512sum --status -c /sha512.sumlist && echo "Success" > /dev/null || echo "Check robots.txt and index.html in /var/www as staging sites are now potentially exposed to the world and the damned googlebot" | /usr/bin/mail -s "Default staging server files have changed" [email protected]

    As soon as I run it as a cronjob, though, it emails the failure message every time it runs, instead of only when the sha512sum check should fail. Is there something silly I have missed in a rush? I forgot to mention that I am running an Ubuntu machine.
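
    A hedged first debugging step (generic cron practice, not specific to this job): cron runs with a minimal environment and a different shell than an interactive login, so capture the command's own output and exit status from inside cron and compare with the interactive run:

        */5 * * * * /usr/bin/sha512sum --status -c /sha512.sumlist >/tmp/sha512.cron.log 2>&1; echo "exit=$?" >> /tmp/sha512.cron.log

    If the log shows a read error rather than a hash mismatch, the likely difference is permissions: the crontab's user may not be able to read /sha512.sumlist or the files it lists.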


  • PHP Not Automatically Updating With ?reload=1

    - by user32007
    A friend built a ranking system on his site and I am trying to host it on mine via WordPress and GoDaddy. It updates for him, but when I load it on my site, it works for 6 hours; then, as soon as the reload is supposed to occur, it errors and I get a 500 timeout error. His page is at jeremynoeljohnson.com/yakezieclub. My page is currently at http://sweatingthebigstuff.com/yakezieclub, but when you add ?reload=1 it will give the error. Any idea why this might be happening? Any settings that I might need to change? Here is the top of the index.php file. I'm not sure which part of it is messing up; I literally uploaded the same code as him. Here's the reload part:

        $cachefile = "rankings.html";
        $daycachefile = "rankings_history.xml";
        $cachetime = (60 * 60) * 6;     // every 6 hours, the cache refreshes
        $daycachetime = (60 * 60) * 24; // every 24 hours, the history will be written to -
                                        // or whenever the page is requested after 24 hours has passed
        $writenewdata = false;

        if (!empty($_GET['reload'])) {
            if ($_GET['reload'] == 1) {
                $cachetime = 1;
            }
        }
        if (!empty($_GET['reloadhistory'])) {
            if ($_GET['reloadhistory'] == 1) {
                $daycachetime = 1;
                $cachetime = 1;
            }
        }

        if (file_exists($daycachefile) && (time() - $daycachetime < filemtime($daycachefile))) {
            // Do nothing
        } else {
            $writenewdata = true;
            $cachetime = 1;
        }

        // Serve from the cache if it is younger than $cachetime
        if (file_exists($cachefile) && (time() - $cachetime < filemtime($cachefile))) {
            include($cachefile);
            echo "";
            exit;
        }

        ob_start(); // start the output buffer


  • sql server uninstallation issue

    - by angel
    I'm unable to remove SQL Server 2008 SP1 completely from my system. I'm using Windows 7 Ultimate. Every time I try uninstalling it I get the following error. How can I remove it? Here is the log:

        Overall summary:
          Final result:                  Failed: see details below
          Exit code (Decimal):           -2068643839
          Exit facility code:            1203
          Exit error code:               1
          Exit message:                  Failed: see details below
          Start time:                    2013-06-24 21:10:38
          End time:                      2013-06-24 21:21:17
          Requested action:              Uninstall
          Log with failure:              C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\sql_rs_Cpu64_1.log
          Exception help link:           http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22

        Machine Properties:
          Machine name:                  ABHI-PC
          Machine processor count:       4
          OS version:                    Windows Vista
          OS service pack:               Service Pack 1
          OS region:                     United States
          OS language:                   English (United States)
          OS architecture:               x64
          Process architecture:          64 Bit
          OS clustered:                  No

        Product features discovered:
          Product          Instance     Instance ID         Feature                   Language  Edition             Version       Clustered
          Sql Server 2008  MSSQLSERVER  MSRS10.MSSQLSERVER  Reporting Services        1033      Enterprise Edition  10.0.1600.22  No
          Sql Server 2008                                   Management Tools - Basic                                10.0.1600.22  No

        Package properties:
          Description:                   SQL Server Database Services 2008
          SQLProductFamilyCode:          {628F8F38-600E-493D-9946-F4178F20A8A9}
          ProductName:                   SQL2008
          Type:                          RTM
          Version:                       10
          SPLevel:                       0
          Installation edition:          ENTERPRISE

        User Input Settings:
          ACTION:                        Uninstall
          CONFIGURATIONFILE:             C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini
          FEATURES:                      RS,SSMS,SNAC_SDK,CE_RUNTIME,CE_TOOLS,SNAC
          HELP:                          False
          INDICATEPROGRESS:              False
          INSTANCEID:
          INSTANCENAME:                  MSSQLSERVER
          MEDIASOURCE:
          QUIET:                         False
          QUIETSIMPLE:                   False
          X86:                           False
          Configuration file:            C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini

        Detailed results:
          Feature:                       SQL Client Connectivity
          Status:                        Skipped
          MSI status:                    Passed
          Configuration status:          Passed

          Feature:                       SQL Client Connectivity SDK
          Status:                        Skipped
          MSI status:                    Passed
          Configuration status:          Passed

          Feature:                       Reporting Services
          Status:                        Failed: see logs for details
          MSI status:                    Passed
          Configuration status:          Failed: see details below
          Configuration error code:      0xFFD65603
          Configuration error description: Input string was not in a correct format.
          Configuration log:             C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\Detail.txt

          Feature:                       SQL Compact Edition Tools
          Status:                        Passed
          MSI status:                    Passed
          Configuration status:          Passed

          Feature:                       SQL Compact Edition Runtime
          Status:                        Skipped
          MSI status:                    Passed
          Configuration status:          Passed

          Feature:                       Management Tools - Basic
          Status:                        Failed: see logs for details
          MSI status:                    Passed
          Configuration status:          Passed

        Rules with failures:
          Global rules:
          There are no scenario-specific rules.

        Rules report file:               C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\SystemConfigurationCheck_Report.htm


  • converting apache rewrite rules to nginx for xenforo

    - by nick
    Hi all, I am migrating some forums from vBulletin 3.8.x to XenForo, and trying to keep my old link structure alive. Basically, XF provides some PHP files that I can redirect the old URL style to, and they handle the proper 301 redirection. Regardless of that end, I am having difficulty rewriting the rules, which I can only find defined in Apache's rewrite style:

        RewriteRule [\d]+-[^/]+/.+-([\d]+)/([\d]+)/ showthread.php?t=$1&page=$2 [NC,L]
        RewriteRule [\d]+-[^/]+/.+-([\d]+)/ showthread.php?t=$1 [NC,L]
        RewriteRule ([\d]+)-[^/]+/([\d]+)/ forumdisplay.php?f=$1&page=$2 [NC,L]
        RewriteRule ([\d]+)-[^/]+/ forumdisplay.php?f=$1 [NC,L]

    I have been experimenting and thought this should work, but obviously not:

        if (!-e $request_filename) {
            rewrite [0-9a-zA-Z\-]/[0-9a-zA-Z\-]-([0-9])/([0-9])/ /showthread.php?t=$1&page=$2 last;
            rewrite [0-9a-zA-Z\-]/[0-9a-zA-Z\-]-([0-9])/ /showthread.php?t=$1 last;
            rewrite ([0-9])-[0-9a-zA-Z\-]/([0-9])/ /forumdisplay.php?f=$1&page=$2 last;
            rewrite ([0-9])-[0-9a-zA-Z\-]/ /forumdisplay.php?f=$1 last;
            rewrite ^(.*)$ /index.php last;
        }

    Old vB showthread format: website.tld/233-website-issues-requests/wiki-down-73789/
    New XF showthread format: website.tld/threads/the-wiki-is-down.65509/
    Old vB forumdisplay format: website.tld/233-website-issues-requests/
    New XF forumdisplay format: website.tld/forums/website-issues-and-requests.253/
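
    One concrete difference jumps out between the two versions (my observation, untested against this site): the Apache rules quantify with + ([\d]+, [^/]+), while the nginx attempt dropped the +, so ([0-9]) matches exactly one digit and [0-9a-zA-Z\-] exactly one character. A sketch with the quantifiers restored:

        if (!-e $request_filename) {
            rewrite "[0-9]+-[^/]+/.+-([0-9]+)/([0-9]+)/" /showthread.php?t=$1&page=$2 last;
            rewrite "[0-9]+-[^/]+/.+-([0-9]+)/" /showthread.php?t=$1 last;
            rewrite "([0-9]+)-[^/]+/([0-9]+)/" /forumdisplay.php?f=$1&page=$2 last;
            rewrite "([0-9]+)-[^/]+/" /forumdisplay.php?f=$1 last;
            rewrite ^ /index.php last;
        }

    (Quoting the regexes avoids nginx config-parsing surprises; note also that "if" inside a location block has well-known pitfalls, so try_files is often preferred where it fits.)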


  • Ping results "echo'd" out to a text file on Windows XP machine has unusual character rather than the outcome

    - by Richard Rice
    Here is the situation. I have a Windows XP machine that I would like to have ping another machine and send the output to a text file for review at a later date. This is quite a standard question, with several answers on this website and others. The issue I have is that when reviewing the txt file afterwards, the outcome of the ping appears as a strange character. A screenshot of the text file output can be found at the following link. Please note it's the first 8 ping outputs that have a strange character. The %%B was me hoping that would output what I wanted! (Cannot upload the image to any hosting websites, because the company I work for seems to be afraid of them...) Here is the code I'm using:

        @echo off
        For /f "tokens=1-2 delims=/:" %%b in ("%TIME%") do (set mytime=%%a%%b)

        PING 1.1.1.1 -n 1 -w 1000 >NUL
        for /f "tokens=*" %%A in ('ping 192.168.5.2 -w 10 -n 1') do (
            echo %date% %time% %%A >> C:\Temp\Ping_Checks\GarrickSide.txt && GOTO Ping
        )

        :Ping
        PING 1.1.1.1 -n 1 -w 1000 >NUL
        for /f "tokens=* skip=2" %%A in ('ping 192.168.5.2 -w 10 -n 1') do (
            echo %date% %time% %%A >> C:\Temp\Ping_Checks\GarrickSide.txt && GOTO Ping
        )

    This code works fine on a Windows 7 machine, but not on a Windows XP machine. It appears that the echo command isn't outputting the right data, but I don't understand this enough to correct it. Can anyone spot where I'm going wrong?


  • How to set up a server without a hosting control panel

    - by A4J
    I have always used a control panel on my dedicated servers -- from cPanel to Plesk to Virtualmin -- and I am now considering ditching a CP altogether and manually editing config files. My requirements are fairly simple: I will host multiple sites on the server, some Apache with PHP & MySQL and some Passenger with Rails & Postgres. All will require email SMTP/POP. FTP/stats will not be required. Could someone please give me a quick run-down of what I would need to do in terms of installing software and configuration? My server will come with a base install of CentOS 6.4 minimal. My thoughts so far (a package sketch follows the list):

    1. Install/update the latest versions of MySQL & Postgres (are they 'safe' out of the box, or do I need to do anything else, like set up root passwords etc.?)
    2. Install Apache & PHP (again, are the base installs good to go, or do they require security tweaks?)
    3. Set up nameservers/hostnames/reverse DNS etc. (any guides on how to do this, please?)
    4. Install RubyGems
    5. Install and configure Dovecot and Postfix (any tips on doing this, or links to how-tos that cover it, please?)
    6. Set up each website -- any links to guides on how to do this?
    7. Install/configure the firewall (or is the default install good to go?)

    Any other tips or advice would be greatly appreciated, as would links to guides or how-tos.
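
    As a rough starting point for steps 1-5, a sketch of the stock CentOS 6 packages involved (package names from the base repos; the exact set is my assumption, and newer Postgres or Passenger builds need extra repositories):

        # web, PHP, databases, mail
        yum install httpd php php-mysql mysql-server postgresql-server postfix dovecot
        # start at boot, then start now (CentOS 6 still uses SysV init)
        for svc in httpd mysqld postgresql postfix dovecot; do
            chkconfig $svc on && service $svc start
        done

    The base installs are deliberately minimal: MySQL ships with an empty root password until you run mysql_secure_installation, and PostgreSQL needs an initial "service postgresql initdb" before its first start, so both require hardening steps beyond the bare install.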


  • Same script, different behavior [migrated]

    - by Antoine_935
    I just stumbled upon an interesting bug... Still trying to figure out what exactly is happening. Maybe you can help. First, the context: I'm currently building yet another man-to-HTML converter (for some reasons I won't motivate here, but I need it). So, have a look at the screenshot below (see the link), more precisely at the outlined spots. See? On the upper shell I have &lt; and &gt;, that is, escaped HTML, while on the shell below I have < and > directly. But as you can see (or do I seriously need a looking glass?), the command man 2 semget | webmanner is the same on both sides, as is the which webmanner. The two are executed at roughly the same moment, with no modification made to the script in between.

    [Oops, cannot post pictures just yet... Here comes the link]
    http://aspyct.org/media/webmanner-bug.png

    But the shell below is older (opened about 1 hour ago). Newer shells all print out &lt;. So my first guess was that it somehow had a cached reference to the old inode of the file, or old blocks, or whatever. So I modified parts of the script, at the start and then at the end, to print different messages. And, surprise, the messages showed up on both terminals. But still, the same difference between &lt; and <. I'm confused... How to explain that behavior? I'm working on OSX 10.8 (Mountain Lion). EDIT: OK, there is one big difference: the shell below uses Ruby 1.9.3, while above is 1.8.7. Is there any known difference in string handling between the two versions?


  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] \
            && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
            fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. On my CPU usage graph, the cleanup runs show up as teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute times. At 15:00 I removed the 39-minute time from cron, so a cleanup job twice the size runs half as often (the peaks get twice as wide and half as frequent). The corresponding graphs for IO time and disk operations show the same pattern: at the peak, where there were about 14,000 sessions active, the cleanup runs for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.


  • GPG - why am I encrypting with subkey instead of primary key?

    - by khedron
    When encrypting a file to send to a collaborator, I see this message:

        gpg: using subkey XXXX instead of primary key YYYY

    Why would that be? I've noticed that when they send me an encrypted file, it also appears to be encrypted towards my subkey instead of my primary key. For me, this doesn't appear to be a problem; gpg (1.4.x, MacOSX) just handles it and moves on. But for them, with their automated tool setup, this seems to be an issue, and they've requested that I be sure to use their primary key. I've tried to do some reading, and I have Michael Lucas's "PGP & GPG" book on order, but I'm not seeing why there's this distinction. I have read that the key used for signing and the key used for encryption would be different, but I assumed at first that was about public vs private keys. In case it was a trust/validation issue, I went through the process of comparing fingerprints and verifying that, yes, I trust this key. While I was doing that, I noticed the primary key & subkey had different "usage" notes:

        primary: usage: SCA
        subkey:  usage: E

    "E" seems likely to mean "Encryption", but I haven't been able to find any documentation on this. Moreover, my collaborator has been using these tools & techniques for some years now, so why would this only be a problem for me?
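
    For the record, the usage letters in GnuPG's key listings are standard capability flags: C = certify, S = sign, E = encrypt, A = authenticate. So the layout above reads as:

        primary: usage: SCA   # can sign, certify other keys, authenticate -- but not encrypt
        subkey:  usage: E     # the only key with encrypt capability

    That is why gpg reports "using subkey ... instead of primary key": with no E flag on the primary, the encryption subkey is the only valid target, and this split is the normal, recommended OpenPGP key layout rather than an error.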


  • Anyone recommend a program to print multiple HTML files at once for end users?

    - by Keith Bentrup
    I have some clients with multiple HTML files in folders that are occasionally updated & printed. They would like to be able to print them all at once without having to open each one. I typically do this with a quick command for myself, but I'm unaware of any freeware to do it. After a Google search I'm not finding one, so I'm hoping someone can help. I'd rather not use a script to do this, for various security/ease-of-use/familiarity reasons; I'd rather be able to just point to a simple program they can download and use on their Windows desktops. Anyone know of one, or some other easy solution to do this? Maybe I'm overlooking the obvious. If anyone's curious, this is what I do for myself (not for my clients):

        for %h in (*.html) do type "%h" >> all.htm

    then open all.htm & print. If I need a page break after each doc, I just search and replace </body> in all.htm with:

        <p style="page-break-after:always">&nbsp;</p></body>

    It's quick & simple, but too unfamiliar for them. Thanks!

