Search Results

Search found 8279 results on 332 pages for 'template toolkit'.

  • How to copy a bunch of pages? Is there a 3rd party tool?

    - by unknown (yahoo)
    (I asked the following question at the DNN forum, and also at Snowcovered. Nobody knew of such an obvious time-saver being for sale. I'm posting here in case anybody knows of a freeware module that might do this.) By "groups of DNN pages", I mean pages that form a hierarchy (not necessarily a hierarchy headed by a page at the same level as the Home page). I know that I can copy web pages, one by one, using the admin login via the web-based DNN interface. But I'd prefer a script or wizard of some sort (one that runs scripts behind the scenes) that allows me to:
    1) specify a web page that I want to copy (along with the hierarchy of pages under it)
    2) specify the names and titles of the new top-level pages
    3) specify whether the contained modules of the top-level page that I want to copy are to be: ( ) New ( ) Copy ( ) Reference (as in the web-based interface)
    4) repeat 3) for each of the source pages in the hierarchy that I want to copy
    You might say that I am looking to do something similar to creating a portal web site based on a template, except that it's not an entirely new website - instead it's a section of the current web site. I might want to do this because I have an organization which is broken into chapters, and I want each chapter to have, say, its own General Information page (which acts like its home page), and underneath that, in its hierarchy, a Contact Info page and an Events page. So:

        Home Page
          General Information Page
            Contact Info
            Events

        --

        Home Page
          General Information Page
            Contact Info
            Events
          General Information Page Kiwanis - Bloomfield
            Contact Info
            Events
          General Information Page Kiwanis - Dayton
            Contact Info
            Events

    If I have 200 chapters, I certainly don't want to copy those 3 web pages using the web-based interface, as that would take a long time. (And imagine if each chapter's new sub-website had 30 pages!) I just want to specify the parameters of a copy process, then press a button, and let the system do the rest.

    Read the article

  • grepping a substring from a grep result

    - by allentown
    Given a log file, I will usually do something like this: grep 'marker-1234' filter_log. What is the difference between using '', "", or nothing around the pattern? The above grep command will yield many thousands of lines, which is what I want. Within those lines there is usually one chunk of data I am after. Sometimes I use awk to print out the fields I am after, but in this case the log format changes, so I can't rely on position exclusively - not to mention that the actual logged data can push the position forward. To make this understandable, let's say the log line contained an IP address and that was all I was after, so I can later pipe it to sort and uniq and get some tally counts. An example may be:

        2010-04-08 some logged data, indeterminate chars - [marker-1234] (123.123.123.123) from: [email protected] to [email protected] [stat-xyz9876]

    The first grep command will give me many thousands of lines like the above; from there, I want to pipe it to something, probably sed, which can pull out a pattern within and print only the pattern. For this example, the IP address would suffice. I tried. Is sed not able to understand [0-9]{1,3}\. as a pattern? I had to use [0-9][0-9][0-9]\., which yielded strange results until the entire pattern was created. This is not specific to an IP address - the pattern will change - but I can use that as a learning template. Thank you all.
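
    A minimal sketch of the kind of pipeline being asked about, assuming GNU grep and sed; the IP pattern and file name are only illustrative. Note that plain sed uses basic regular expressions, so {1,3} has to be written \{1,3\} unless you pass -r (or -E on newer versions) for extended syntax, which is likely why the shorter pattern seemed not to work:

        # count occurrences of each IP address seen on the marker lines
        grep 'marker-1234' filter_log \
          | grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' \
          | sort | uniq -c | sort -rn

        # equivalent extraction with sed, keying off the surrounding parentheses
        grep 'marker-1234' filter_log \
          | sed -rn 's/.*\(([0-9]{1,3}(\.[0-9]{1,3}){3})\).*/\1/p'

    On the quoting question: single quotes stop the shell from expanding anything in the pattern, double quotes still allow $variable expansion, and no quotes leaves the pattern exposed to globbing and word splitting, so single quotes are the safe default.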

    Read the article

  • performance-wise htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance:

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
        BrowserMatch MSIE ie
        Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
        </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
        Header append Vary User-Agent
        </IfModule>

        # Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
        ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
        ExpiresDefault A31622400
        Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
        <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$" >
        SetOutputFilter DEFLATE
        </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
        Order Allow,Deny
        Deny from all
        </FilesMatch>

    Read the article

  • IIS permission configuration issue

    - by Dan
    Sorry, the title of this question is a little ambiguous, but I don't really have any idea where the issue lies - I'm seeking some clarification of the server error logs. Basically, I had a dedicated server running Windows 2003 and Plesk (v8 I think). Last week the server hardware failed and the entire thing had to be rebuilt from scratch. New hardware was put in, a new operating system (Win2008), a new Plesk installation (v9.5), new software (MSSQL etc.), then all data was ported over manually from the old C and D drives to restore all 30 client sites. It was hell! All has been okay for a couple of days now, but about an hour ago - POP! Suddenly all sites went down giving a 500 error. Restarting all services eventually brought everything back online, but I'm now living in total fear. It can - and probably will - happen again. The guys on support gave me the following errors from the server log:

        The Template Persistent Cache initialization failed for Application Pool 'ASP.NET v4.0 Classic' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.

        The worker process for application pool 'domain1.com(domain)(2.0)(pool)' encountered an error 'Cannot read configuration file' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\domain1.com(domain)(2.0)(pool).config', line number '0'. The data field contains the error code.

        The worker process for application pool 'PleskControlPanel' encountered an error 'Cannot read configuration file' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\PleskControlPanel.config', line number '0'. The data field contains the error code.

    The support guys are so ambiguous about this and it scares me horribly. Can anyone positively identify the cause of this error, which led to all client websites going offline? What can be done to prevent it from happening again? Any pointers would be very much appreciated! Thanks folks...

    Read the article

  • CentOS 6.3 can't install Git

    - by drivard
    I am trying to install git on an OpenVZ container using the CentOS 6.3 precreated template. When I run yum install git I get the message:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: www.cubiculestudio.com
         * epel: mirror.csclub.uwaterloo.ca
         * extras: www.cubiculestudio.com
         * rpmforge: mirror.us.leaseweb.net
         * updates: www.cubiculestudio.com
        Setting up Install Process
        No package git available.
        Error: Nothing to do

    As I understand it, the git package should be in the CentOS 6 base repository: http://pkgs.org/centos-6-rhel-6/centos-rhel-x86_64/git-1.7.1-2.el6_0.1.x86_64.rpm.html But it doesn't find it. I even have the EPEL and RPMForge repos enabled, but it still can't find the git package.

        yum repolist
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: www.cubiculestudio.com
         * epel: mirror.csclub.uwaterloo.ca
         * extras: www.cubiculestudio.com
         * rpmforge: mirror.us.leaseweb.net
         * updates: www.cubiculestudio.com
        repo id       repo name                                      status
        base          CentOS-6 - Base                                 4,776
        epel          Extra Packages for Enterprise Linux 6 - i386    6,523
        extras        CentOS-6 - Extras                                   4
        rpmforge      RHEL 6 - RPMforge.net - dag                     4,501
        updates       CentOS-6 - Updates                                596
        vz-base       vz-base                                             3
        vz-updates    vz-updates                                          0
        repolist: 16,403

    The weirdest thing is that my OpenVZ host server is running CentOS 6.3 as well, and I was able to install git on it without any issue. Can you help me understand why it doesn't find the package? Thank you in advance.
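
    One hedged thing worth checking (an assumption on my part, since trimmed-down container templates sometimes ship a yum configuration that filters packages): whether git is being hidden by an exclude= line, and whether it appears once excludes are switched off:

        # look for exclude lines in the yum configuration
        grep -ri '^exclude' /etc/yum.conf /etc/yum.repos.d/

        # list/install the package with excludes disabled
        yum --disableexcludes=all list git
        yum --disableexcludes=all install git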

    Read the article

  • How can I add subdomains of the default accepted domain in Exchange 2010

    - by Christoph
    I have an Exchange 2010 server that has several accepted domains. Now I want this server to accept - besides the default SMTP domain - all subdomains of the default domain. The documentation on TechNet states:

        When you create an accepted domain, you can use a wildcard character (*) in the address space to indicate that all subdomains of the SMTP address space are also accepted by the Exchange organization. For example, to configure Contoso.com and all its subdomains as accepted domains, enter *.Contoso.com as the SMTP address space.

    It is, however, not possible to add e.g. *.contoso.com if contoso.com is already configured; Exchange complains in this case that the domain is already configured. It is also not possible to edit the "value", i.e. the domain name, of an accepted domain. I know that I cannot modify the default accepted domain, but changing the default to another one does not help either, because the domain name itself can never be edited. The last idea was deleting the accepted domain and re-creating it with "*." prepended. This is, however, also impossible, because it is of course not possible to delete or modify the default address policy, and if a domain name is used in an address template it cannot be removed from the accepted domains. The question is: how can I make my Exchange 2010 server accept any subdomain of its default accepted domain with a wildcard?
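
    For reference, the wildcard form that the TechNet passage describes would normally be created from the Exchange Management Shell along these lines (only a sketch - the whole point of the question is that Exchange rejects it while the bare domain already exists as the default accepted domain):

        New-AcceptedDomain -Name "Contoso subdomains" -DomainName "*.contoso.com" -DomainType Authoritative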

    Read the article

  • Logging into Local Statusnet instance on Apache causes browser to download a file

    - by DilbertDave
    I've installed StatusNet 0.9.1 on a Windows Server via the WAMP stack and on the whole it seems to be fine. However, when logging in using IE7 or Chrome the browsers invoke a file download, i.e. the File Download dialog is displayed. In IE7 the file is called "notice" with the content below (some parts starred out):

        <?xml version="1.0" encoding="UTF-8"?>
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
          <ShortName>Mumble Notice Search</ShortName>
          <Contact>david.carson@*****.com</Contact>
          <Url type="text/html" method="get" template="http://voice.*****.com/mumble/search/notice?q={searchTerms}"></Url>
          <Image height="16" width="16" type="image/vnd.microsoft.icon">http://voice.*****.com/mumble/favicon.ico</Image>
          <Image height="50" width="50" type="image/png">http://voice.******.com/mumble/theme/cloudy/logo.png</Image>
          <AdultContent>false</AdultContent>
          <Language>en_GB</Language>
          <OutputEncoding>UTF-8</OutputEncoding>
          <InputEncoding>UTF-8</InputEncoding>
        </OpenSearchDescription>

    In Chrome (Linux and Windows!) the file is called "people" and contains similar XML. This is not an issue when logging in using Firefox. This is obviously a configuration issue, but I'm not having much luck tracking it down. I tested the previous version of StatusNet on an Ubuntu Server VM on our network and it worked fine for months. Thanks in advance.

    Read the article

  • CentOS centralised logging, syslogd, rsyslog, syslog-ng, logstash sender?

    - by benbradley
    I'm trying to figure out the best way to set up a central place to store and interrogate server logs: syslog, Apache, MySQL etc. I've found a few different options but I'm not sure what would be best. I'm looking for something that is easy to install and keep updated on many virtual machines. I can add it to a VM template going forward, but I'd also like it to be easy to install to keep the VM complexity down. The options I've found so far are:

        - syslogd
        - syslog-ng
        - rsyslog
        - syslogd/syslog-ng/rsyslog to logstash/ElasticSearch
        - logstash agent in each log "client" to send to Redis/logstash/ElasticSearch

    And all sorts of permutations of the above. What's the most resilient and lightest option from the log "client" perspective? I'd like to avoid the situation where log "clients" hang because they are unable to send their logs to the logging server. Also, I would still like to keep local logging and the rotation/retention provided by logrotate in place. Any ideas/suggestions or reasons for or against any of the above? Or suggestions of a different structure entirely? Cheers, B
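
    For what it's worth, the plain rsyslog option can be kept very light on the client side. A minimal client sketch (legacy rsyslog directives, hypothetical central host name) that forwards everything over TCP with a disk-assisted queue, so clients buffer locally instead of hanging when the central server is unreachable:

        # /etc/rsyslog.d/forward.conf (sketch only)
        # queue in memory, spill to disk, and keep retrying rather than dropping
        $ActionQueueType LinkedList
        $ActionQueueFileName fwdq
        $ActionQueueMaxDiskSpace 1g
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1
        # @@ forwards over TCP (a single @ would be UDP)
        *.* @@logs.example.com:514

    Local log files and logrotate keep working as before, since this only adds a forwarding action.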

    Read the article

  • Kickstart: Serve dynamic kickstart images via a CGI or PHP script?

    - by Stefan Lasiewski
    I'd like to kickstart a couple dozen RHEL6/SL6 servers. However, some of these servers are different and I don't want to create a new ks.cfg file for each class of server. Are there any products which can generate a Kickstart file dynamically on the fly, from a template? For example, if I append a line like this to the KERNEL line:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi

    Then the script ks.cgi can determine what host this is (via the MAC address), and print out Kickstart options which are appropriate for that host. I could optionally override some options by passing parameters to the script, like this:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?NODETYPE=production&IP=192.168.2.80

    After we kickstart the server, we activate Cfengine/Puppet on the system and manage it using our favorite configuration management product. We're experimenting with xCAT but it is proving too cumbersome. I've looked into Cobbler, but I'm not sure it does this. Update: A roll-your-own solution is discussed in the O'Reilly book Managing RPM-Based Systems with Kickstart and Yum, Chapter 3, "Customizing Your Kickstart Install", section "Dynamic ks.cfg", which echoes some of the comments in this thread:

        To implement such a tool is beyond the scope of this Short Cut, but I can walk through the high-level design. Any such solution would mix a data store (the things that change) with a templating solution (the things that don't change). The data store would hold the per-machine data, such as the IP address and hostname. You would also need a unique identifier, perhaps the hostname, such that you could pick up a given machine's data. The data store could be a flat file, XML data, or a relational database such as PostgreSQL or MySQL. In turn, to invoke the system, you pass a machine's unique identifier as a URL parameter. For example:

            boot: linux ks=http://your.kickstart.server/gen_config?host-server25

    In this example, the CGI (or servlet, or whatever) generates a ks.cfg for the machine server25. But where, oh where, is the code for ks.cgi?
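
    As for where the code for ks.cgi lives: you end up writing it yourself. A deliberately tiny sketch (bash CGI; the fragment paths and the NODETYPE parameter are hypothetical) that assembles a kickstart file from fragments chosen by a query parameter:

        #!/bin/bash
        # ks.cgi - emit a kickstart file assembled from fragments (sketch only)
        echo "Content-type: text/plain"
        echo ""

        # pull NODETYPE out of the query string, defaulting to "default"
        # ($REMOTE_ADDR is also available here if you prefer to key on the host)
        NODETYPE=$(printf '%s' "$QUERY_STRING" | sed -n 's/.*NODETYPE=\([^&]*\).*/\1/p')
        NODETYPE=${NODETYPE:-default}

        cat /var/www/kickstart/common.cfg
        cat "/var/www/kickstart/${NODETYPE}.cfg"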

    Read the article

  • Auto Log-Off Windows users - Windows 2003 domain

    - by thehatter
    Hi! I am trying to make Windows clients automatically log off after some time. I have been trying to use winexit.scr, which I have seen working elsewhere in a similar environment. After working through these instructions (I did read the comments and noticed the original ADM provided is buggy) I've had no joy whatsoever! Winexit.scr refuses to read any settings in the registry, even though, while using a test account, I can access the required reg key(s): edit, add, and remove values. Essentially winexit.scr always uses its default values: 30 second timeout, no forced log-out. What I really want is a 30 minute timeout with a forced log-out, closing all the user's apps etc. I've tried removing and re-adding the ADM template, creating the GPO from scratch several times, and giving various registry permissions - including full control to "Everybody" just for fun! Oh, clients are all Win XP SP3, the DC is Win 2003 R2 SP2. So, can anybody suggest something? Cheers!

    Read the article

  • Permissions nightmare - tried all I know

    - by Ben
    Working on a new client's dev site, which is a WordPress install on a Plesk box. I have SSH root access, and FTP access through a separate account.

    What I've done so far: initially I couldn't make any changes to any files at all. The permissions on all the template files looked a little screwy (644), so I figured I'd change them to allow the group, and add myself to the group:

        - Ran a recursive chmod on the theme folder to set everything to 664.
        - Quickly realised I'd broken it; set the folders to 755 and kept files as 664.
        - Ownership on all files is a mixture of root:root and 500:500 (there is no user nor group with the ID of 500 on the server).
        - Added myself to the group 'root' so I could modify the files too.

    The problem: this worked OK in terms of being able to edit the existing files, so I began working. However, I can't upload to the directory, even having run chown -R root:root templatefolder/ and being in the root group. I feel like I must be missing something obvious, and it's doing my head in.

    Questions:

        - Files in the install are owned by 500 with group 500 - I've looked in /etc/group and /etc/passwd and there is no user nor group with this ID. Is that left over from another developer's setup or the previous server (they moved recently)?
        - Is being in the 'root' group enough, or do I need to own the theme folder as 'myftpuser' in order to upload and create new files?

    Like I say, I have edit access, so I got myself this far. I'm now questioning what to do next!
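
    A hedged sketch of the usual way out of this (user and path names are hypothetical, and on Plesk the FTP account normally has its own system user, which may well be what the 500:500 ownership used to map to): make the FTP user the owner, keep the group permissive, and set the setgid bit on directories so new uploads inherit the group:

        # confirm which uid/gid the FTP account actually uses
        id myftpuser

        # hand ownership of the theme to that account
        # (psacln is Plesk's usual web-content group; adjust if yours differs)
        chown -R myftpuser:psacln wp-content/themes/templatefolder

        # directories traversable/writable for user and group, files group-writable
        chmod -R u+rwX,g+rwX,o+rX wp-content/themes/templatefolder

        # make files created inside inherit the directory's group
        find wp-content/themes/templatefolder -type d -exec chmod g+s {} \;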

    Read the article

  • Recommendation for robust, customizable, open source, Java servlet-based forum software?

    - by Erik Hermansen
    There is a lot of forum software out there, but it seems to me that a lot of the popular choices are PHP-based. And for my project, I'd like something based on Java servlets so my team can make customizations to it. Another important feature is that I can completely change the pages to hide unwanted elements without too much work. So I'm looking either for a template system or easily editable scripts (i.e. JSPs) that have a clean view separation. Just having skin changes or CSS customization is not enough. I understand that if I have open source, I can change anything I want, but my point is that it should be easy and not requiring mastery of a complex code base. Finally, I want something that has been around for at least a year and deployed on some high-traffic sites. Clustering support (one database, multiple web servers) is highly desirable. Up-time is crucial since I have an SLA to support. What do you think?

    Read the article

  • Trouble with backslash characters and rsyslog writing to postgres

    - by Flimzy
    I have rsyslog 4.6.4 configured to write mail logs to a PostgreSQL database. It all works fine, until the log message contains a backslash, as in this example:

        Jun 12 11:37:46 dc5 postfix/smtp[26475]: Vk0nYDKdH3sI: to=<[email protected], relay=----.---[---.---.---.---]:25, delay=1.5, delays=0.77/0.07/0.3/0.35, dsn=4.3.0, status=deferred (host ----.---[199.85.216.241] said: 451 4.3.0 Error writing to file d:\pmta\spool\B\00000414, status = ERROR_DISK_FULL in "DATA" (in reply to end of DATA command))

    The above is the log entry, as written to /var/log/mail.log. It is correct. The trouble is that the backslash characters in the file name are interpreted as escapes when sent to the following SQL recipe:

        $template dcdb, "SELECT rsyslog_insert(('%timereported:::date-rfc3339%'::TIMESTAMPTZ)::TIMESTAMP,'%msg:::escape-cc%'::TEXT,'%syslogtag%'::VARCHAR)",STDSQL
        :syslogtag, startswith, "postfix" :ompgsql:/var/run/postgresql,dc,root,;dcdb

    As a result, the rsyslog_insert() stored procedure gets the following value as msg:

        Vk0nYDKdH3sI: to=<[email protected], relay=----.---[---.---.---.---]:25, delay=1.5, delays=0.77/0.07/0.3/0.35, dsn=4.3.0, status=deferred (host ----.---[199.85.216.241] said: 451 4.3.0 Error writing to file d:pmtaspoolB

    The \p, \s, \B and \0 in the file name are interpreted by PostgreSQL as literal p, s, and B followed by a NULL character, thus early-terminating the string. This behavior can be easily confirmed with:

        dc=# SELECT 'd:\pmta\spool\B\00000414';
           ?column?
        --------------
         d:pmtaspoolB
        (1 row)

        dc=#

    Is there a way to correct this problem? Is there a way I'm not finding in the rsyslog docs to turn \ into \\?
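
    One thing worth testing (hedged, since it depends on the server's configuration): PostgreSQL only treats backslashes in ordinary string literals as escapes when standard_conforming_strings is off, which was the default before 9.1. The psql session below shows both the workaround of doubling the backslashes and the effect of turning the setting on:

        -- check the current setting (off was the default before PostgreSQL 9.1)
        SHOW standard_conforming_strings;

        -- with it off, backslashes act as escapes; doubling them is one workaround
        SELECT 'd:\\pmta\\spool\\B\\00000414';

        -- with it on, ordinary literals keep their backslashes untouched
        SET standard_conforming_strings = on;
        SELECT 'd:\pmta\spool\B\00000414';

    The other half of the fix, getting rsyslog to double the backslashes before they reach ompgsql, would still need to be handled on the rsyslog side.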

    Read the article

  • Word 2010, Multiple Columns, Vertical center one column only

    - by Nancy N Jones
    I am creating a document with two columns in Microsoft Word 2010. I want the first column to be centered vertically, and I want the second column to be on the same page with its vertical placement starting from the top. I highlight the text in the first column that I want centered vertically, then go to Page Layout > Margins > Custom Margins > Layout tab, where you can choose to center the vertical alignment. I have chosen "Section Start" to be "Column" and have also tried "Continuous". In all cases it always shifts all of my second column's content to a new page. I don't want my second column text to be on a new page; I want it to be on the same page and vertically aligned from the top - not the center. Am I understanding the functionality of the Section Start setting on the Layout tab correctly? Maybe page layout isn't the correct formatting to use. What I am really doing is formatting columns, and I haven't found anywhere to format the columns for this. Am I missing some important column formatting features? I know that I can use paragraph formatting and add space above the first line of text to make it look like it is centered vertically. However, this is a template for a master document and will be changed frequently. I really would like the first column text to be automatically centered vertically without having to go in and manually change the space above the paragraph every time. Your assistance would be greatly appreciated.

    Read the article

  • Installing checkinstall on x86_64

    - by SephMerah
    I downloaded the source for checkinstall: checkinstall-1.6.2.tar.gz. I then ran tar -xzvf checkinstall-1.6.2.tar.gz and then make. It prints this error:

        [root@ip-50-63-180-135 checkinstall-1.6.2]# make
        for file in locale/checkinstall-*.po ; do \
            case ${file} in \
                locale/checkinstall-template.po) ;; \
                *) \
                    out=`echo $file | sed -s 's/po/mo/'` ; \
                    msgfmt -o ${out} ${file} ; \
                    if [ $? != 0 ] ; then \
                        exit 1 ; \
                    fi ; \
                    ;; \
            esac ; \
        done
        make -C installwatch
        make[1]: Entering directory `/home/sofiane/checkinstall-1.6.2/installwatch'
        gcc -Wall -c -D_GNU_SOURCE -DPIC -fPIC -D_REENTRANT -DVERSION=\"0.7.0beta7\" installwatch.c
        installwatch.c:2942: error: conflicting types for 'readlink'
        /usr/include/unistd.h:828: note: previous declaration of 'readlink' was here
        installwatch.c:3080: error: conflicting types for 'scandir'
        /usr/include/dirent.h:252: note: previous declaration of 'scandir' was here
        installwatch.c:3692: error: conflicting types for 'scandir64'
        /usr/include/dirent.h:275: note: previous declaration of 'scandir64' was here
        make[1]: *** [installwatch.o] Error 1
        make[1]: Leaving directory `/home/sofiane/checkinstall-1.6.2/installwatch'
        make: *** [all] Error 2

    I searched extensively on this issue and this solution looks promising. Should I attempt to install checkinstall as an fpm? What would be the best way to go about that? CentOS 6.3 x86_64.

    Read the article

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

    When I access a non-existent URL at my website, this rewrite rule gets hit, redirects to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this HTTP Live Headers output example:

        http://www.borngeek.com/nothere/
        GET /nothere/ HTTP/1.1
        Host: www.borngeek.com
        {...}
        HTTP/1.1 404 Not Found

    However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity):

        {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...}

    This makes some sense to me, seeing as the original request was redirected to a page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc). I'd love to have them show up on the Apache side as 404s so that they get filtered out from the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous redirect rules):

        ErrorDocument 404 /index.php?error=404

    Does anyone have a clever way to fix this annoyance? Additional info: the OS is Debian 6.0.4, and the Apache version looks to be 2.2.22-3 (hosted on DreamHost). The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere).

    Read the article

  • Wordpress advanced navigation [closed]

    - by codedude
    Ok, so I've made a site template in pure HTML and I need to convert it into WordPress. The navigation code looks like this:

        <div id="header">
          <div class="wrap">
            <ul id="leftnav">
              <li><a href="about.html">About</a><span class="arrow"></span></li>
              <li><a href="work.html">Work</a><span class="arrow"/></li>
            </ul>
            <div id="logo">
              <h1><a href="index.html">T.Wiersema</a></h1>
              <span class="arrow"></span>
            </div>
            <ul id="rightnav">
              <li><a href="blog.html">Blog</a><span class="arrow"/></li>
              <li class="contact"><a href="#">Contact</a><span class="arrow"/></li>
            </ul>
          </div>
        </div>

    The leftnav is absolutely positioned to the left and the rightnav is absolutely positioned to the right, with the logo centered between the two. Notice also how I have a span inside each li element. Is there a way to reproduce this in my WordPress theme without just manually copying it into the theme?
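
    A minimal sketch of the WordPress-native route (register_nav_menus and wp_nav_menu are real WordPress functions; the location names and file placement are just assumptions): register two menu locations, then render each list with wp_nav_menu, using its 'after' argument to emit the arrow span inside every li:

        <?php
        // functions.php: register the two header menu locations
        register_nav_menus( array(
            'leftnav'  => 'Header Left',
            'rightnav' => 'Header Right',
        ) );

        // header.php: render one menu; repeat with 'rightnav' for the other side
        wp_nav_menu( array(
            'theme_location' => 'leftnav',
            'container'      => false,
            'menu_id'        => 'leftnav',
            'after'          => '<span class="arrow"></span>',
        ) );

    The logo block can stay hard-coded in header.php between the two wp_nav_menu calls, so the surrounding markup ends up matching the static template.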

    Read the article

  • Failed to re-publish a page - Tridion 2011 SP1

    - by Wilson Yu
    We are getting a strange error when re-publishing the same page. The page was published successfully the first time, and we can see the page on the presentation server. It failed with the following error (see below) when we tried to publish it again (with no change to the page). The page renders OK within Template Builder and we get the correct HTML output; it fails in the last committing deployment step (Prepare Transport, Transporting, Preparing Deployment and Deploying are all successful). Once it fails to publish the second time, it always fails to publish, and we can't un-publish it either. Also, when we make a copy of the failed page and create a new page, we can publish the new page the first time, and the new page then fails to publish the second time with the same error. Does anyone know what would cause this error? Any help would be greatly appreciated. Here is the error message:

        Committing Deployment Failed
        Phase: Deployment Prepare Commit Phase failed, Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: "", Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: ""

    Read the article

  • Installing and running two postgresql versions on different ports (or two instances of same server)

    - by Andrius
    I have PostgreSQL 9.1 installed on my machine (Ubuntu). I need another PostgreSQL server that would run next to the old one. The exact version does not matter, but I'm thinking of using version 9.2. How could I properly install and run another PostgreSQL version without breaking the old one (as an upgrade would)? Those versions would run independently on different ports: the old one on 5432 and the new one on 5433, for example. The reason I need this is the databases for two OpenERP versions. If I run two OpenERP servers (with different versions) against a single PostgreSQL port, it crashes, because the new OpenERP version detects the old version's database and tries to run it, but it uses different schemas. P.S. Or maybe I could just run the same PostgreSQL server on two ports?

    Update: So far I tried this:

        /usr/lib/postgresql/9.1/bin/pg_ctl initdb -D main2

    It created a new cluster. I changed the port to 5433 in the new cluster's postgresql.conf file. Then I ran this:

        /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start

    I got the response "server starting". But when I tried to enter the new cluster's template database with:

        psql template1 -p 5433

    I got this error:

        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5433"?

    Also, now when I try to stop the server with:

        /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile stop

    I get this error:

        pg_ctl: PID file "main2/postmaster.pid" does not exist
        Is server running?

    So I don't understand whether the server is running and what I'm missing here.

    Update: Found what was wrong. Stupid me. I didn't notice that when I changed the port in the .conf file, that line was already commented out. So actually I didn't change anything the first time, but thought I did, and it used the default 5432 port.
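
    For anyone hitting the same wall, a condensed sketch of the sequence that ends up working with the 9.1 binaries (paths as in the question; the crucial detail is that the port line in the second cluster's postgresql.conf must actually be uncommented):

        # create the second cluster
        /usr/lib/postgresql/9.1/bin/pg_ctl initdb -D main2

        # in main2/postgresql.conf, remove the leading '#' and set:
        #   port = 5433

        # start the second cluster and connect to it explicitly
        /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start
        psql -p 5433 template1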

    Read the article

  • Preventing applications from performing run once tasks for multiple users

    - by JohnyV
    In our environment we have several applications installed that need to show a little prompt the first time they run, e.g. Media Player, Google Earth etc. The problem is we have many users on many different computers, and the computers have DeepFreeze running on them, which removes the user's profile once the computer is restarted. So the next time that user logs in they have to go through the whole run-once routine again. I have managed to prevent the IE run-once using Group Policy, and the Office run-once using the Office Customization Tool. Is there a way to make this happen for other applications? On Windows XP we used to take a profile from a user that had run all the apps and place it into the default profile, so that new users get that profile as a template. Now with Windows 7 the process of copying profiles is not as easy. Is there an easy way to copy profiles in Windows 7, or is there a better way (e.g. modifying the registry or app data) to prevent apps from performing an initial run? Thanks

    Read the article

  • Updated my WAMP Server and MySQL is eating up 580 MB of memory

    - by Jon
    I updated my dev box's WampServer, and along with updating PHP and Apache, MySQL updated to 5.6.12. After doing that, I copied the data folder from my old (5.1.36) install to the new one, and now MySQL takes up 580 MB, which is way too much, since I'm the only person using it (locally) and there are only 20 or so databases on it, none of which have MEMORY tables. How can I get this down to a decent amount? My my.ini:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Database info:

        Storage Engine    Data Size    Index Size    Total Size
        InnoDB            48.00 KB     0.00 B        48.00 KB
        MEMORY            0.00 B       0.00 B        0.00 B
        MyISAM            163.64 MB    122.49 MB     286.13 MB
        Total             163.69 MB    122.49 MB     286.18 MB
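
    A hedged guess at where most of that memory goes: MySQL 5.6 turns the Performance Schema on by default, and its pre-allocated buffers alone can account for a few hundred megabytes, whereas 5.1 didn't have it enabled. A small dev-box sketch for the existing [mysqld] section (the values are only suggestions):

        [mysqld]
        # Performance Schema is on by default in 5.6 and pre-allocates a lot of memory
        performance_schema = OFF

        # keep the various caches modest on a single-user dev box
        table_definition_cache = 400
        table_open_cache = 400
        innodb_buffer_pool_size = 64M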

    Read the article

  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components (and here I'm using a definition of "component" which means "can fail independently of other components" - e.g. an Apache server, a WebLogic web app, an FTP server, an ejabberd server, etc). There are a number of WebLogic web apps - and one thing we need to decide is how many WebLogic containers to run these web apps in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL. Our datacentre team will handle things like VLAN design, racking, server specification and build. So the kinds of decisions we still need to make are:

        - How to map components to physical servers (and WebLogic containers).
        - Identify all communication paths, and ensure all are either resilient or that there's an "upstream" comms path that is resilient, with failover of that depending on all single points of failure "downstream".
        - Decide where to terminate SSL (on load balancers, or on Apache servers, for instance).

    My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layout, and for more logical/software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment this with hostnames, ports, whether each comms link is resilient, etc. This all feels like something that must have been done many times before. Are there standards for documenting this?

    Read the article

  • Certificate enrollment request chain not trusted

    - by makerofthings7
    I am working on a MSFT lab for Direct Access, and need to create a web certificate. The instructions ask me to do the following:

        1. On EDGE1, click Start, type mmc, and then press ENTER. Click Yes at the User Account Control prompt.
        2. Click File, and then click Add/Remove Snap-ins.
        3. Click Certificates, click Add, click Computer account, click Next, select Local computer, click Finish, and then click OK.
        4. In the console tree of the Certificates snap-in, open Certificates (Local Computer)\Personal\Certificates.
        5. Right-click Certificates, point to All Tasks, and then click Request New Certificate.
        6. Click Next twice.
        7. On the Request Certificates page, click Web Server, and then click More information is required to enroll for this certificate.
        8. On the Subject tab of the Certificate Properties dialog box, in Subject name, for Type, select Common Name.
        9. In Value, type edge1.contoso.com, and then click Add.
        10. Click OK, click Enroll, and then click Finish.
        11. In the details pane of the Certificates snap-in, verify that a new certificate with the name edge1.contoso.com was enrolled with Intended Purposes of Server Authentication.
        12. Right-click the certificate, and then click Properties.
        13. In Friendly Name, type IP-HTTPS Certificate, and then click OK.
        14. Close the console window. If you are prompted to save settings, click No.

    In production, our company has overridden the Web Server template, and it doesn't seem to be issuing certificates with the full CA chain. When I look at the issued certificate's properties, both tiers of the two-tier CA hierarchy are missing. How can I fix this? I'm not sure where to look outside the GUI.
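
    A quick, hedged way to see exactly what is missing from the chain (run against a copy of the issued certificate; the file name here is hypothetical): certutil walks the chain, fetches the AIA URLs, and reports which intermediate or root it cannot find:

        certutil -verify -urlfetch edge1-webserver.cer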

    Read the article

  • Cookieless Domain redirect in WHM/cPANEL

    - by Patrick Lanfranco
    I am currently trying to get my head around how to set up a "cookieless" domain using WHM/cPanel - unfortunately without any success at this moment. I have a Magento store and I would like to use cookieless domains for my media, skin (template) and js files. Magento has a nice feature to define the URLs for those folders. My current setup is as follows:

        www.mydomain.com    <- main store
        media.mydomain.com  <- subdomain pointing to the media folder (mydomain.com/media/)
        skin.mydomain.com   <- subdomain pointing to the skin folder (mydomain.com/skin/)
        js.mydomain.com     <- subdomain pointing to the js folder (mydomain.com/js/)

    I think it's pointless to have them used as "cookieless domains", since my Magento installation uses .mydomain.com as the cookie domain, so what I would like to achieve is to register a new additional domain and have it point via WHM/cPanel to those specific locations. I have tried to change the A and CNAME records, although without any success, as they were just simply redirecting from one page to another in the browser (newdomain.com -> jump to old.com). What kind of records do I have to set to have this working properly? Some advice would be highly appreciated.

    Read the article
