Search Results

Search found 34252 results on 1371 pages for 'html tag'.

Page 567/1371

  • I can't figure out my PHP problem. Can anyone help with the PHP code? [closed]

    - by Jeffery
    when I click the submit button it gives me an error page. Here is the site http://nealconstruction.com/estimate.html $emailSubject = 'Estimate' $webMaster = '[email protected]' /* Gathering Info */ $emailField = $_POST ['email']; $nameField = $_POST ['name']; $phoneField = $_POST ['phone']; $typeField = $_POST ['type']; $locationField = $_POST ['location']; $infoField = $_POST ['info']; $contactField = $_POST ['contact']; $body = <<<EOD Email: $email Name: $name Phone Number: $phone Type Of Job: $type Location: $location Additional Info: $info How to Contact: $contact EOD; $headers = "From: $email\r\n"; $headers .= "Content-Type: text/html\r\n"; $success = mail($webMaster; $emailSubject; $body; $headers); /* Results rendered as html */ $theResults = << JakesWorks - travel made easy-Homepage Thank you for your information! We will contact you very soon! EOD; echo "$theResults"; ?
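
    A minimal corrected sketch of the script (assuming the form fields shown above and that PHP's mail() is available on the host). The quoted code fails because the first two assignments are missing semicolons, mail()'s arguments are separated with semicolons instead of commas, the heredoc variables don't match the ones that were assigned, and the second heredoc opener is truncated:

        <?php
        $emailSubject = 'Estimate';
        $webMaster    = '[email protected]';

        // Gather the posted fields
        $email    = $_POST['email'];
        $name     = $_POST['name'];
        $phone    = $_POST['phone'];
        $type     = $_POST['type'];
        $location = $_POST['location'];
        $info     = $_POST['info'];
        $contact  = $_POST['contact'];

        // Build the message body and headers
        $body = "Email: $email\nName: $name\nPhone Number: $phone\nType Of Job: $type\n"
              . "Location: $location\nAdditional Info: $info\nHow to Contact: $contact\n";
        $headers  = "From: $email\r\n";
        $headers .= "Content-Type: text/html\r\n";

        // mail() takes commas between its arguments
        $success = mail($webMaster, $emailSubject, $body, $headers);

        echo 'Thank you for your information! We will contact you very soon!';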

    Read the article

  • Chrome will not load a web page with an <embed> element

    - by rossmcm
    I have been trying to get a simple sound web page going: Sound.html <script> function PlaySound () { var sounder = document.getElementById ("ToneA") ; sounder.Play () ; } </script> <embed id="ToneA" height="1" width="1" src="https://dl.dropboxusercontent.com/u/311035/ToneA.mp3" autostart="false" enablejavascript="true"//> <button onclick="PlaySound () ;">Play</button> The test web page is here. It plays in IE, but not in Firefox or Chrome. My problem: Chrome reports "Could not load VLC Plugin". It seems to be a known problem that the VLC community don't necessarily feel like fixing at the moment, and is a result of Google choosing not to allow some certain kind of plugin. If I disable the plugin I no longer get the message but nothing happens when I click the button. Looking at the console in a debug window I see Uncaught TypeError: undefined is not a function Sound.html:7 PlaySound Sound.html:7 onclick which suggests Chrome could not find anything else to handle the sound file. How to I tell Chrome to use (e.g.) Windows Media Player? * UPDATE * This is apparently because the VLC plugin is a NPAPI plugin and Chrome no longer supports these. I have uninstalled VLC and this has removed the error on loading the webpage with an embedded sound element, but it still doesn't invoke WMP instead.
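
    Since the NPAPI-based VLC plugin is no longer an option in Chrome, a hedged alternative is the HTML5 audio element, which needs no plugin at all. A minimal sketch reusing the MP3 URL from the question (browser MP3 support is assumed):

        <audio id="ToneA" src="https://dl.dropboxusercontent.com/u/311035/ToneA.mp3" preload="auto"></audio>
        <button onclick="document.getElementById('ToneA').play();">Play</button>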

    Read the article

  • Reference manager for Ubuntu

    - by user36511
    I'm in dire need of a reference/citation manager in Ubuntu. The features I need the most are: 1) Metadata extraction/editing of PDFs 2) Fetch metadata from online databases such as Google Scholar 3) Attach a PDF or other file to a reference 4) Tag references and recall those with a given tag or set of tags 5) Provide APA-style citations for references (integrating with OOffice and/or LaTeX) Optional: it would be great if it can annotate/highlight PDFs. Mendeley probably does all of these, but its behavior has driven me insane, especially when the number of references it's trying to handle is large. It constantly tries to sync with the web and creates duplicate references. I've tried JabRef, and while it looks like a decent piece of freeware, it doesn't do some of the above. I found others like Bibus, Referencer, etc. to be lacking, buggy, or no longer in active development. Is there another option, or should I give up the search?

    Read the article

  • Using nginx to rewrite urls inside outgoing responses

    - by Kev
    We have a customer with a site running on Apache. Recently the site has been seeing increased load and as a stop gap we want to shift all the static content on the site to a cookieless domains, e.g. http://static.thedomain.com. The application is not well understood. So to give the developers time to amend the code to point their links to the static content server (http://static.thedomain.com) I thought about proxying the site through nginx and rewriting the outgoing responses such that links to /images/... are rewritten as http://static.thedomain.com/images/.... So for example, in the response from Apache to nginx there is a blob of Headers + HTML. In the HTML returned from Apache we have <img> tags that look like: <img src="/images/someimage.png" /> I want to transform this to: <img src="http://static.thedomain.com/images/someimage.png" /> So that the browser upon receiving the HTML page then requests the images directly from the static content server. Is this possible with nginx (or HAProxy)? I have had a cursory glance through the docs but nothing jumped out at me except rewriting inbound urls.
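
    nginx can do this kind of response-body rewriting with the sub module (ngx_http_sub_module, only available if nginx was built --with-http_sub_module). A minimal sketch, assuming an upstream named apache_backend and that only src="/images/..." links need rewriting:

        location / {
            proxy_pass http://apache_backend;
            proxy_set_header Accept-Encoding "";   # sub_filter cannot rewrite gzip-compressed upstream responses
            sub_filter 'src="/images/' 'src="http://static.thedomain.com/images/';
            sub_filter_once off;                   # rewrite every occurrence, not just the first
        }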

    Read the article

  • How to configure a Web.Config file to allow custom 404 handling while still displaying on-page 500 error detail?

    - by Mark
    To customize 404 handling and based on the hosting company's suggestion, we are currently using the following web.config setup. However, we quickly realized that with this configuration, any page error (500 error) are also getting redirected to this custom error page. How can I modify this config file so we can continue to handle 404 with custom file while still able to view on-page error? <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.webServer> <httpErrors errorMode="DetailedLocalOnly" defaultPath="/Custom404.html" defaultResponseMode="ExecuteURL"> <remove statusCode="404" subStatusCode="-1" /> <error statusCode="404" prefixLanguageFilePath="" path="/Custom404.html" responseMode="ExecuteURL" /> </httpErrors> </system.webServer> <system.web> <customErrors mode="On"> <error statusCode="404" redirect="/Custom404.html" /> </customErrors> </system.web> </configuration>
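
    One hedged way to keep the custom 404 while letting other errors surface with full detail: drop the defaultPath/defaultResponseMode pair so httpErrors only overrides 404, and relax customErrors, which is what intercepts ASP.NET exceptions. A sketch (attribute values are suggestions, not the only valid ones):

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.webServer>
            <!-- no defaultPath/defaultResponseMode, so only 404 is replaced -->
            <httpErrors errorMode="DetailedLocalOnly">
              <remove statusCode="404" subStatusCode="-1" />
              <error statusCode="404" path="/Custom404.html" responseMode="ExecuteURL" />
            </httpErrors>
          </system.webServer>
          <system.web>
            <!-- RemoteOnly shows detailed errors when browsing from the server itself;
                 switch to mode="Off" temporarily to see them from remote machines -->
            <customErrors mode="RemoteOnly">
              <error statusCode="404" redirect="/Custom404.html" />
            </customErrors>
          </system.web>
        </configuration>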

    Read the article

  • Setting up Virtual Host in Fedora Core 15 using apache

    - by Roland
    I'm trying to setup a couple of Virtual Host files on my Localhost PC running Fedora Core 15. Now I get this working, but now onloy one Virtual Host site works, and if I type in 127.0.0.1/test/testApp.php which is not related to the Virtual Host site , I get redirected to the Virtual Host site. Here's what I did. I created a new folder called virtualhosts in /etc/httpd/ where all my host files are stored in the following format site.conf In /etc/conf/httpd.conf I enabled NameVirtualHost *:80 and included the host files at the bottom of the config page like this Include virtualhosts/*.conf In /etc/hosts I added the line 127.0.0.1 website No when I run sudo httpd -t I get Syntax OK I restart apache and then the Virtualhost works, but as soon as I add other hosts and only use 127.0.0.1 as above it still links to the original host. Am I doing anything wrong here or left out something? An example of my Virtual Host file looks like this <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /var/www/html/website/ ServerName website ServerAlias website ErrorLog logs/dev-error_log CustomLog logs/dev-access_log common Alias /blog /var/www/html/blog/ <Directory /var/www/html/website/> Options FollowSymLinks Allow Override All Order allow,deny allow from all </Directory> #php_value error_reporting E_ALL & ~E_NOTICE & ~E_DEPRECATED php_flag display_errors On php_value date.timezone Europe/London </VirtualHost>
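
    This is expected name-based virtual host behaviour: any request whose Host header matches no ServerName/ServerAlias (including a bare 127.0.0.1) is served by the first virtual host Apache loads. A hedged fix is an explicit catch-all that loads before the others, either included explicitly before the wildcard Include line or given a name that sorts first. The file name and DocumentRoot below are assumptions based on the stock /var/www/html layout:

        # /etc/httpd/virtualhosts/000-default.conf  (hypothetical file name)
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot /var/www/html
        </VirtualHost>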

    Read the article

  • Manual Http error response code in non-existent folder via routing

    - by Slytherin
    Apache server running on ubuntu-like linux I am getting unexpected behaviour when i try to manually send error response. If my .htaccess is responsible for the error response , then appropriate error document is loaded and displayed , with according response code in browser console. However , if my router is origin of the response code , then i get blank screen , but correct response code. .htaccess looks like this RewriteEngine On # RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L] ErrorDocument 404 /err/404.html ErrorDocument 403 /err/403.html ErrorDocument 500 /err/500.html part of my router that sends the response is the following header("HTTP/1.1 403 Forbidden"); trying this format didnt help either header("HTTP/1.1 403 Forbidden", TRUE, 403); I also tried HTTP/1.0. Furthermore i was thinking that maybe relative path to error page might be an issue , but discarded this idea after attempting to access a document that is forbidden via .htaccess EDIT I should also point out , this scenario happens when URL for not-existing article is requested. Is it possible that Server is looking for a .htaccess file in a folder based on URL ? Eg: domain/blog/non-existent , is server looking for blog folder ? I am specifically asking this because there is no blog folder
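
    When the 403 comes from PHP rather than from Apache itself, ErrorDocument never runs, so the script has to emit the error page body as well as the status line. A minimal hedged sketch, assuming the error pages live under the document root as in the .htaccess above:

        <?php
        // Router decides the request is forbidden: send the status AND the page body
        http_response_code(403);                               // PHP >= 5.4; header('HTTP/1.1 403 Forbidden') also works
        readfile($_SERVER['DOCUMENT_ROOT'] . '/err/403.html');
        exit;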

    Read the article

  • Komodo Edit - How to disable the 'Linter' for a language?

    - by TM.
    I've been using Komodo Edit to work on a Django project. It works great except for one little annoyance: When I am editing Django template files, Komodo likes to put red squiggly lines underneath the first HTML tag that follows a Django tag, because it thinks it is an invalid HTML doc (although it isn't, it just has Django template tags/filters in it). Note that this red squiggly line is called a "Linter error" in the docs that I can find. Is there some way to turn off this red squiggly for only a specific type of language? It's nice to have when working on Python code but it's annoying to have a red squiggly on every single one of my Django templates.

    Read the article

  • Make nginx config like apache2 virtualhosts

    - by user2104070
    I have web server with apache2 with many subdomains on it like, domain.com, abc.domain.com, def.domain.com etc. etc. Now I got a new nginx server and want to set it up like apache2, so to test I created configs (2 files in /etc/nginx/sites-available/ and link to them from sites-enabled/) as shown, domain.config: server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; root /srv/www/; index index.html index.htm; # Make site accessible from http://localhost/ server_name domain.com; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ =404; } } abc-domain config: server { listen 80; listen [::]:80; root /srv/www/tmp1/; index index.html index.htm; # Make site accessible from http://localhost/ server_name abc.domain.com; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ =404; } } but when I access with domain.com I am getting index.html from /var/www/tmp1 only. Is there something I'm doing wrong in the nginx config?

    Read the article

  • Redirect physical keyboard input to SSH

    - by Dimme
    I have a Raspberry Pi running Debian Linux with an RFID reader connected to it. The RFID reader behaves like a keyboard: every time I scan a tag it types the number of the tag followed by a carriage return. My problem is that I want to redirect the output of the RFID reader to my SSH session. That means anything that is typed on the physical keyboard of the Pi should be displayed in my SSH window. I have tried with: cat /dev/tty0 but it won't work because the user is not logged in. Is there a way to disable the login screen after the Pi boots and then redirect all input through SSH?

    Read the article

  • trouble executing php scripts with nginx

    - by lovesh
    My nginx config looks like this server { listen 80; server_name localhost; location / { root /var/www; index index.php index.html; autoindex on; } location /folder1 { root /var/www/folder1; index index.php index.html index.htm; try_files $uri $uri/ index.php?$query_string; } location /folder2 { root /var/www/folder2; index index.php index.html index.htm; try_files $uri $uri/ index.php?$query_string; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } The problem with the above setup is that i am not able to execute php files. Now as per my understanding of nginx config rules, when i am in my webroot(/) which is /var/www the value of $document_root becomes /var/www so when i request for localhost/hi.php the fastcgi_param SCRIPT_FILENAME becomes /var/www/hi.php and that is the actual path of the php script. Similarly when i request for localhost/folder1/hi.php the $document_root becomes /var/www/folder1 because this is specified as the root in folder1's location block so again the fastcgi_param SCRIPT_FILENAME becomes /var/www/folder1/hi.php. But because the above configuration does not work so there is something wrong with my understanding. Please help?
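
    A likely culprit is the per-location root: nginx appends the full request URI to root, so root /var/www/folder1 plus a request for /folder1/hi.php maps to /var/www/folder1/folder1/hi.php, and the php location (which sets no root at all) falls back to the compiled-in default for $document_root. A hedged sketch that declares root once at server level:

        server {
            listen 80;
            server_name localhost;
            root /var/www;                 # inherited by every location, including ~ \.php$
            index index.php index.html;

            location /folder1 {
                try_files $uri $uri/ /folder1/index.php?$query_string;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }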

    Read the article

  • apache access and error log written in same file

    - by user196075
    i have issue that access and error log are written in same file ! , the configuration in virtualhosts.conf as the following : <VirtualHost *:80> ServerName ************ ServerAdmin support@************8 DocumentRoot /var/www/html/*********.com ErrorLog /var/log/httpd/********/********.com_error_log CustomLog /var/log/httpd/********/********.com_access_log combined <Directory /var/www/html/***********.com> Options -Indexes FollowSymLinks AllowOverride All </Directory> </VirtualHost> as you see from the configuration each access and error logs should be save separately , but both logs are written in *.com_access_log , i have double check all permission , group and owner ... can't find anything wrong previous error in log file : [Thu Sep 19 14:15:02 2013] [error] [client 192.168.10.54] client denied by server configuration: /var/www/html/**********/show_has_offers.php i have tried to generate same error , i can find the hit in access log only as the following : 192.168.10.75 - - [24/Oct/2013:08:11:14 +0000] "GET /show_has_offers.php HTTP/1.1" 404 1586 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0" 0 17332 and nothing in error log !! Please advice ...

    Read the article

  • PHP pages are not parsed by Apache on CentOS

    - by Ram
    I have installed Centos 5.x, Apache 2.2, PHP 5.3 and MySQL 5.5. I also installed phpMyAdmin. I am able to access phpMyAdmin through the browser without any issues. However, when I create a simple index.php with phpinfo() function in the default directory, that page is served without php parsing. As we all know, phpMyAdmin is a php application. This is working fine from the same server but not the simple php page from the doc root directory ??!!!. Of course, I tried moving this page into phpMyAdmin folder and tried accessing it, but no success. Please note that I updated httpd.conf file with appropriate directives based on the php installation guide.Following directives were added to httpd.conf. AddTyoe application/x-httpd-php LoadModule php5_module /usr/lib/httpd/modules/libphp5.so <FilesMatch "\.php$"> SetHandler application/x-httpd-php </FilesMatch> File locations are: docroot - /var/www/html phpMyAdmin folder - /var/www/html/phpMyAdmin File privileges are: [root@linuxdev1 html]# ls -Z -rwxr-xr-x root root index.php drwxr-xr-x root root phpMyAdmin -rw-r--r-- root root phpMyAdmin-3.4.3.2-english.tar.gz drwxr-xr-x root root test1 Any help is appreciated.
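
    For comparison, a typical mod_php configuration on CentOS looks roughly like the sketch below (normally shipped as /etc/httpd/conf.d/php.conf). Note that the AddTyoe line quoted above is misspelled and is missing a file extension, so it has no effect:

        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so

        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

        AddType text/html .php
        DirectoryIndex index.html index.php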

    Read the article

  • How do I set up Tomcat 7's server.xml to access a network share with an different url?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 Server. Tomcat has access to a share '\server\share' that has a documents folder that I want to access using '/foo/Documents' in my web application. My application is able to access the documents when I set the file path to '//server/share/documents/doc1.doc'. I don't want the file server's path to be exposed on my link to the file in my application. I want to be able to set the path to '/foo/Documents/doc1.doc'. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html under 'Setting the Context Root Directory and Request URL of a Webapp' item number two says that I can rename the path by putting in a context to the server.xml file. So I put <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html Note: The pattern used is equivalent to using pattern="common" --> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" /> <Context path="/foo" docBase="//server/share" reloadable="false"></Context> </Host> The context at the bottum was added. Then I tried to pull the file using '/foo/Documents/doc1.doc' and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues that this may cause?

    Read the article

  • Strange Domain name under the same IP Address

    - by Mike Chip
    There's something really weird happening in my server. But first things first: I wanted to have my website and chose the domain name "myowndomain.com", Now on my domain registrar I point "myowndomain.com" to the address of my recently setup VPS, let's say 50.50.50.50 So I installed everything I needed to run my website, and I started to notice strange queries coming from different IP Addresses. Like these [client 123.123.123.123] File does not exist: /var/www/html/api, referer: http://www.strangedomain.com/api/manyou/my.php [client 456.456.456.456] File does not exist: /var/www/html/api, referer: http://www.strangedomain.com/api/manyou/my.php or like this (Really a long line, I cut some things) GET /?s=vod-show-id-22-area-%E5%85%B6%E4%BB%96-language-%E9%9F%A9%E8%AF%AD.html HTTP/1.1" 301 295 "http://v.strangedomain.com/?s=vod-s ...[cut]... spider" That above is happening the most. The 'strangedomain.com' returns the same IP address of my VPS which my website is hosted on. The whois of such domain shows it's registered to a chinese. But the street name didn't look so right (like a huge single word), so I think all of that info might be fake, but still might be a chinese. I also noticed that all 'clients' trying to access the 'strangedomain.com' is coming from china. If I type in the browser 'strangedomain.com', I see my website. I'm worried, because my website is actually an e-commerce. I don't know if 'strangedomain.com' WAS a website on 50.50.50.50 in the not so far past, or if it's something else.

    Read the article

  • Notepad++ or other application to replace string with numeric function

    - by user38963
    Hello. I am using Notepad++ and attempting to replace all numeric strings within a specific tag (<start>) of my XML document with a new value derived from the original. Here's an example: <start>501234</start>, where round(501234 * 0.9) = 451111, should become <start>451111</start>. Is there a way to automatically find all numeric values within the start tag and replace each one with the same value multiplied by 0.9? I don't have to use Notepad++ if there's another tool that can do this. Thanks!
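
    As far as I know Notepad++'s built-in replace cannot do arithmetic on the matched value, but a short script can. A hedged PHP sketch (the file name is a placeholder):

        <?php
        // Multiply every <start> value by 0.9 and round it, rewriting the file in place
        $xml = file_get_contents('input.xml');
        $xml = preg_replace_callback(
            '#<start>(\d+)</start>#',
            function ($m) { return '<start>' . round($m[1] * 0.9) . '</start>'; },
            $xml
        );
        file_put_contents('input.xml', $xml);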

    Read the article

  • nginx root directory not forwarding correctly

    - by user66700
    The server files are store in /var/www/ Everything was working perfectly, then I've been getting the following errors 2011/01/28 17:20:05 [error] 15415#0: *1117703 "/var/www/https:/secure.domain.com/index.html" is not found (2: No such file or directory), client: 119.110.28.211, server: secure.domain.com, request: "HEAD /https://secure.domain.com/ HTTP/1.1", host: "secure.domain.com" Heres my config: server { server_name secure.domain.com; listen 443; listen [::]:443 default ipv6only=on; gzip on; gzip_comp_level 1; gzip_types text/plain text/html text/css application/x-javascript text/xml text/javascript; error_log logs/ssl.error.log; gzip_static on; gzip_http_version 1.1; gzip_proxied any; gzip_disable "msie6"; gzip_vary on; ssl on; ssl_ciphers RC4:ALL:-LOW:-EXPORT:!ADH:!MD5; keepalive_timeout 0; ssl_certificate /root/server.pem; ssl_certificate_key /root/ssl.key; location / { root /var/www; index index.html index.htm index.php; } }

    Read the article

  • Faster caching method

    - by pataroulis
    I have a service that provides HTML code which at some point it is not updated anymore. The code is always generated dynamically from a database with 10 million entries so each HTML code page rendering searches there for say 60 or 70 of those entries and then renders the page. So, for those expired pages, I want to use a caching system which will be VERY simple (like just enter a record with the rendered HTML and (if I need) remove it). I tried to do it file-based but the search for the existence of a file and then passing it through php to actually render it , seems like too much for what I want to do. I was thinking of doing it on mysql with a table with MEDIUMBLOBs (each page is around 100k). It would hold about 150000 such records (for now, at least). My question is: Would it be faster to let mysql do the lookup of the file and the passing to php or is the file-based approach faster? The lookup code for the file based version looks like this: $page = @file_get_contents(getCacheFilename($pageId)); if($page!=NULL) { echo $page; } else { renderAndCachePage($pageId); } which does one lookup whether it finds the file or not. The mysql table would just have an ID (the page id) and the blob entry. The disk of the system is a simple SATA raid 1 , the mysql daemon can grab up to 2.5GB of memory (i have a proxy running too, eating the rest of the 16GB of the machine. ) In general the disk is quite busy already. My not using PEAR cache, is because I think (please feel free to correct me on this) it adds overhead I do not need because the page rendering code is called about 2M times per day and I wouldn't want to go through the whole code each time (and yes, I have eaccelerator to cache the code too). Any pointer to what direction I should go, would be greatly welcome. Thanks!
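
    For comparison, the MySQL-backed lookup the question describes would only be a few lines as well; a hedged sketch assuming a table page_cache(id INT PRIMARY KEY, html MEDIUMBLOB) and PDO:

        <?php
        $pdo  = new PDO('mysql:host=localhost;dbname=cache', 'user', 'pass');
        $stmt = $pdo->prepare('SELECT html FROM page_cache WHERE id = ?');
        $stmt->execute(array($pageId));
        $page = $stmt->fetchColumn();          // false when the page is not cached yet

        if ($page !== false) {
            echo $page;
        } else {
            renderAndCachePage($pageId);       // same fallback as the file-based version
        }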

    Read the article

  • Virtual hosting in lighttpd?

    - by lighttpdnewbie
    Ok, here it goes... I've seen some other posts dealing with this, but it didn't help that much. I am using windows XP. My problem is with trying to get lighttpd working with virtual hosts. Now, I managed to get everything up and working with the default /htdocs and the default page shows up just fine on the internet, but since I have several sites to host, I need virtual hosting. I managed to do it in apache, so I guessed it would work out just fine in lighttpd, but apparently I'm missing something. Ok, let's say I have domain (www.)example.org. I want everyone using that url going to the correct index.html, obviously. Let's say that index.html is in directory "websites/website1" placed under the lighttpd dir. (thus, the full path is c:/ProgramsFiles/lighttpd/websites/website1/index.html) Now: how, exactly, do I set up my virtual host (in the config file)? In detail, please, since I've tried for hours with the vague hints I got from fora and such, but it doesn't work. Also; is there something additional to do? Change the "server.bind" or get rid of the default server.document-root, or something? I appreciate all the help you can give! Especially if it's a verbatim/step-by-step solution you're offering! ;-p Edit: And, yes, my mod_simple_vhost has been enabled.
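
    A minimal hedged sketch of name-based hosting in lighttpd.conf, reusing the paths from the question (with explicit host conditionals like this, mod_simple_vhost is not needed):

        # fallback document root for requests that match no host below
        server.document-root = "C:/ProgramsFiles/lighttpd/htdocs/"

        $HTTP["host"] =~ "(^|\.)example\.org$" {
            server.document-root = "C:/ProgramsFiles/lighttpd/websites/website1/"
        }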

    Read the article

  • backup util for binary/media files. (to use with source control)

    - by acidzombie24
    I am using git for my source control. I don't back up media such as GIFs, PNGs, etc. I am thinking that every time I tag a release it would be a good idea to back up the media files as well, but I don't want to make several copies of the same file each time I create a tag. I'd like an app that handles checking whether a file already exists and handles restoring everything to a version I like. What util might I use to do this? I'm using Windows 7.

    Read the article

  • Autopostback select lists in ASP.NET MVC using jQuery

    - by rajbk
    This tiny snippet of code shows you how to have your select lists automatically post back their containing form when the selected value changes. When the DOM is fully loaded, we get all select nodes that have an attribute of “data-autopostback” with a value of “true”. We wire up the “change” JavaScript event to all these select nodes. This event is fired as soon as the user changes their selection with the mouse. When the event is fired, we find the closest form tag for the select node that raised the event and submit the form. $(document).ready(function () { $("select:[data-autopostback=true]").change(function () { $(this).closest("form").submit(); }); }); A select tag with autopostback enabled will look like this <select id="selCategory" name="Category" data-autopostback="true"> <option value='1'>Electronics</option> <option value='2'>Books</option> </select> The reason I am using the "data-" prefix in the attribute name is to be HTML5 compliant. A custom data attribute is an attribute in no namespace whose name starts with the string "data-", has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z). The snippet can be used with any HTML page.

    Read the article

  • When the canonical page itself changes url

    - by lulalala
    This is a continuation of the question: How to handle canonical url changes like Stack Overflow. Say I have the canon url: questions/11/car <---canonically-linked-from--- questions/11/ What will happen if I want to change the canon url to questions/11/car-with-sgx Obviously, questions/11/ will point to the new canon url. But how should the old questions/11/car change to the new one? There are two ways: 301 redirect that to new canon url the old canon url canonically link to the new canon url According to this post: [By using canonical link instead of redirect,] OldPage.html’s rankings will drop over time due to fewer internal links, but the canonical tag won’t make it disappear entirely. It could theoretically remain in their index until one of the following occurs: it is redirected permanently via 301 it returns a 404 for an extended period of time (they will keep checking for a while before dropping a URL) a meta robots “noindex” tag is added If this is true, I really need to use redirect from old canon url to the new canon url, which means I need to keep a log of previous old canon urls of this content, so I know when I can redirect. This is a bit of a hassle to do.
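
    A hedged sketch of the 301 half of this, in PHP for illustration ($requestedSlug and the current-slug lookup are placeholders for however the application actually routes questions/11/...):

        <?php
        $currentSlug   = 'car-with-sgx';                        // would come from the database row for question 11
        $requestedSlug = isset($_GET['slug']) ? $_GET['slug'] : '';

        if ($requestedSlug !== $currentSlug) {
            // old canonical URL (e.g. questions/11/car) permanently redirects to the new one
            header('Location: /questions/11/' . $currentSlug, true, 301);
            exit;
        }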

    Read the article

  • Simple Branching and Merging with SVN

    It's a good idea not to do too much work without checking something into source control. By too much work I mean typically on the order of a couple of hours at most, and certainly it's a good practice to check in anything you have before you leave the office for the day. But what if your changes break the build (on the build server; you do have a build server, don't you?) or would cause problems for others on your team if they get the latest code? The solution with Subversion is branching and merging (incidentally, if you're using Microsoft Visual Studio Team System, you can shelve your changes and share shelvesets with others, which accomplishes many of the same things as branching and merging, but is a bit simpler to do). Getting Started I'm going to assume you have Subversion installed along with the nearly ubiquitous client, TortoiseSVN. See my previous post on installing SVN server if you want to get it set up real quick (you can put it on your workstation/laptop just to learn how it works easily enough). Overview When you know you are going to be working on something that you won't be able to check in quickly, it's a good idea to start a branch. It's also perfectly fine to create the branch after the fact (have you ever started something thinking it would be an hour and 4 hours later realized you were nowhere near done?). In any event, the first thing you need to do is create a branch. A branch is simply a copy of the current trunk (a typical Subversion setup has root directories called trunk, tags, and branches; it's a good idea to keep this and to put your branches in the branches folder). Once you have a new branch, you need to switch your working copy so that it is bound to your branch. As you work, you may want to merge in changes that are happening in the trunk to your branch, and ultimately when you are done you'll want to merge your branch back into the trunk. When done, you can delete your branch (or not, but it may add clutter). To sum up: create a new branch; switch your local working copy to the new branch; develop in the branch (commit changes, etc.); merge changes from trunk into your branch; merge changes from branch into trunk; delete the branch. Create a new branch From the root of your repository, right-click and select TortoiseSVN > Branch/tag. This will bring up the Copy (Branch / Tag) interface. By default the "From WC at URL:" field should be pointing at the trunk of your repository. I recommend (after ensuring that you have the latest version) that you choose to make the copy from the HEAD revision in the repository (the first radio button). In the "To URL:" textbox, you should change the URL from /trunk to /branches/NAME_OF_BRANCH. You can name the branch anything you like, but it's often useful to give it your name (if it's just for your use) or some useful information (such as a datestamp, a bug/issue ID that it relates to, or perhaps just the name of the feature you are adding). When you're done with that, enter a log message for your new branch. If you want to immediately switch your local working copy to the new branch/tag, check the box at the bottom of the dialog (Switch working copy to new branch/tag). Assuming everything works, you should very quickly see a window telling you the copy finished. Switch Local Working Copy to New Branch If you followed the instructions above and checked the box when you created your branch, you don't need to do this step.
    However, if you have a branch that already exists and you would like to switch over to working on it, you can do so by using the Switch command. You'll find it in the explorer context menu under TortoiseSVN > Switch. This brings up a dialog that shows you your current binding and lets you enter a new URL to switch to; if you are currently bound to a branch, from here you could switch back to the trunk or to another branch. If you're not sure what to enter, you can click the [...] button next to the URL textbox to explore your repository and find the appropriate root URL to use. Also, the dropdown will show you URLs that might be a good fit (such as the trunk of the current repository). Develop in the Branch Once you have created a branch and switched your working copy to use it, you can make changes and Commit them as usual. Your commits are now going into the branch, so they won't impact other users or the build server that are working off of the trunk (or their own branches). In theory you can keep on doing this forever, but practically it's a good idea to periodically merge the trunk into your branch, and/or keep your branches short-lived and merge them back into the trunk before they get too far out of sync. Merge Changes from Trunk into your Branch Once you have been working in a branch for a little while, changes to the trunk will have occurred that you'll want to merge into your branch. It's much safer and easier to integrate changes in small increments than to wait for weeks or months and then try to merge two very different codebases. To perform the merge, simply go to the root of your branch working copy, right-click, and select TortoiseSVN > Merge. You'll be presented with a dialog; in this case you want to leave the default setting, Merge a range of revisions. Click Next. Now choose the URL to merge from. You should select the trunk of your current repository (which should be in the dropdown list, or you can click the [...] button to browse your repository for the correct URL). You can leave everything else blank since you want to merge everything. Click Next. Again you can leave the default settings. If you want to do something more granular than everything in the trunk, you can select a different Merge depth, to include merging just one item in the tree. You can also perform a Test merge to see what changes will take place before you click Merge (which is often a good idea). After clicking Merge (or Test merge) you should see a confirmation (it will say Test Only in the title if you click Test merge). Now you should build your solution, run all of your tests, and verify that your branch still works the way it should, given the updates that you've just integrated from the trunk. Once everything works, Commit your changes, and then continue with your work on the branch. Note that until you commit, nothing has actually changed in your branch on the server. Other team members who may also be working in this branch won't be impacted, etc. The Merge is purely a client-side operation until you perform a Commit. In a more real-world scenario, you may have conflicts. When you do, you'll be presented with a conflict resolution dialog; it's up to you which option you want to go with. The more frequently you Merge, the fewer of these you'll have to deal with. Also, be very sure that you're merging the right folders together.
    If you try to merge your trunk with some subfolder in your branch's structure, you'll end up with all kinds of conflicts and problems. Fortunately, they're only on your working copy (unless you commit them!) but if you see something like that, be sure to double-check your URL and your local file location. Merge Your Branch Back Into Trunk When you're done working in your branch, it's time to pull it back into the trunk. The first thing you should do is follow the previous step's instructions for merging the latest from the trunk into your branch. This lets you ensure that what you have in your branch works correctly with the current trunk. Once you've done that and committed your changes to your branch, you're ready to proceed with this step. Once you're confident your branch is good to go, you should go to its root folder and select TortoiseSVN > Merge (as above) from the explorer right-click menu. This time, select Reintegrate a branch. Click Next. You'll want it to merge with the trunk, which should be the default. Click Next. Leave the default settings. Click Test merge to see a test, and then if all looks good, click Merge. Note that if you haven't checked in your working copy changes, you'll get a warning; if on the other hand things are successful, the merge completes. After this step, it's likely you are finished working in your branch. Don't forget to use the TortoiseSVN > Switch command to change your working copy back to the trunk. Delete the Branch You don't have to delete the branch, but over time the branches area of your repository will get cluttered, and in any event if they're not actively being worked on the branches are just taking up space and adding to later confusion. Keeping your branches limited to things you're actively working on is simply a good habit to get into, just like making sure your codebase itself remains tidy and not filled with old commented-out bits of code. To delete the branch after you're finished with it, the simplest thing to do is choose TortoiseSVN > Repo Browser. From there, assuming you did this from your branch, it should already be highlighted. In any event, navigate to your branch in the treeview on the left, then right-click and select Delete. Enter a log message if you'd like, click OK, and it's gone. Don't be too afraid of this, though. You can still get to the files by viewing the log for branches and selecting a previous revision (anything before the delete action). If for some reason you needed something that was previously in this branch, you could easily get back to any changeset you checked in, so you should have absolutely no fear when it comes to deleting branches you're done with. Resources If you're using Eclipse, there's a nice write-up of the steps required by Zach Cox that I found helpful here.
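
    For reference, the same branch/switch/merge workflow with the Subversion command-line client looks roughly like this (the branch name and log messages are placeholders; the ^/ repository-relative syntax needs svn 1.6 or later):

        # create the branch (a cheap server-side copy)
        svn copy ^/trunk ^/branches/my-feature -m "Create feature branch"

        # bind the working copy to the branch
        svn switch ^/branches/my-feature

        # periodically pull trunk changes into the branch (run inside the branch working copy)
        svn merge ^/trunk
        svn commit -m "Sync branch with trunk"

        # when finished, reintegrate into trunk (run inside a trunk working copy)
        svn switch ^/trunk
        svn merge --reintegrate ^/branches/my-feature
        svn commit -m "Reintegrate my-feature"

        # optionally delete the branch
        svn delete ^/branches/my-feature -m "Remove merged branch"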

    Read the article

  • Bulletin Board System with tagging, email notification

    - by user678220
    I am looking for nice BBS system, Bulletin Board System, Discussion Board, or nice in-company communication platform. There are lots of people, about 30 people, joining in our project. We would like to share idea among us on that platform. We can post questions and concerns related with the project, and we would like to respond each other. Here is my list of functionality I want: Tagging Thread e.g) Announcement, Finance, Legal, Idea. One thread can have multiple Tags. members can set on/off to receive email when new comments are posted. They can set on/off on each Tag. e.g) one member on to receive email related with "Announcement", but off to receive "Finance". Thread owner can change threads' tag any time. Thread can have several type of post. Thread can be "vote" thread. Everyone can vote their opinion. Thread can be "action plan" thread. In this thread, "who" will "what" remains in the thread. By viewing all "action plan" thread, all action plans needed in the company is visualized.

    Read the article

  • Pagination, Duplicate Content, and SEO

    - by Iamtotallylost
    Please consider a list of items (forum comments, articles, shoes, doesn't matter) which are spread over multiple pages. Different sort orders are supported (by date, by popularity, by price, etc). So, an URL might look like this (I use the query style here to simplify things): /items?id=1234&page=42&sort=popularity /items?id=1234&page=5&sort=date Now, in terms of SEO, I think I should be worried about duplicate content. After all, each item appears at least as many times as there are sort orders. I've seen Matt Cutts talking about the rel=canonical link tag, but he also said that the canonical page should have very similar content. But this is not the case here because page #1 in a non-canonical sort order might have completely different items than page #1 in the canonical sort order. For a given non-canonical page, there is no clear canonical page listing all the same items, so I think rel=canonical won't help here. Then I thought about using the noindex meta tag on all pages with non-canonical sort order, and not using it on all pages with canonical sort order. However, if I use that method, what will happen with backlinks that are going to non-canonical pages -- will they still spread their page rank juice, even though the first page googlebot (or any other crawler) is going to encounter is marked as "noindex"? Can you please comment on my problem and what you think is the best solution? If you think you have a better solution, please consider that 1) I do not want to use Javascript for this, 2) I do not want all the items to be on one page. Thank you.
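
    A hedged sketch of what the <head> markup for the approach discussed above might look like (URLs reuse the question's examples; "noindex, follow" was the commonly recommended combination for this situation, and rel="prev"/rel="next" apply to the canonical sort order):

        <!-- page 5 of a non-canonical sort order: keep it out of the index, keep its links crawlable -->
        <meta name="robots" content="noindex, follow" />

        <!-- page 5 of the canonical (default) sort order: declare the pagination sequence -->
        <link rel="prev" href="/items?id=1234&amp;page=4" />
        <link rel="next" href="/items?id=1234&amp;page=6" />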

    Read the article
