Search Results

Search found 1317 results on 53 pages for 'webmaster sean'.

  • set virtual host on Apache2.2 and PHP 5.3

    - by Avinash
    Hi, I want to set up virtual hosts on Apache 2.2 so that I can access my sites using my IP address and a port number, like http://192.168.101.111:429 for one site, http://192.168.101.111:420 for another site, and so on. My machine's OS is Windows 7. I have tried the below in my httpd.conf file:

        Listen 192.168.101.83:82

        #chaffoteaux
        <Directory "Path to project folder">
            AllowOverride All
        </Directory>

        <VirtualHost 192.168.101.83:82>
            ServerAdmin [email protected]
            DirectoryIndex index.html index.htm index.php index.html.var
            DocumentRoot "Path to project folder"
            #ServerName dummy-host.example.com
            ErrorLog logs/Zara.log
            #ErrorLog logs/dummy-host.example.com-error_log
            #CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>

    Can you please suggest anything missing in my configuration? Thanks in advance, Avinash
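
    For the port-per-site scheme described, each port generally needs its own Listen directive plus a matching <VirtualHost>; note the snippet above listens on port 82 while the URLs use 429 and 420. A minimal, untested sketch in Apache 2.2 syntax (the ports come from the question; the Windows paths are placeholders):

        # Hypothetical sketch: two sites on two ports (Apache 2.2 syntax)
        Listen 429
        Listen 420

        <VirtualHost *:429>
            DocumentRoot "C:/sites/site1"      # placeholder path
            <Directory "C:/sites/site1">
                Order allow,deny
                Allow from all                 # Apache 2.2 access control
            </Directory>
        </VirtualHost>

        <VirtualHost *:420>
            DocumentRoot "C:/sites/site2"      # placeholder path
            <Directory "C:/sites/site2">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>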

    Read the article

  • Lost website rankings as a result of using a 302 redirect instead of a 301

    - by george peris
    I have an online store, and during the last 2 days I noticed a big change in its SERPs for some keywords. The site is indexed in local Google for 5 keywords, and the online store is based on Magento 1.7. 5 days ago, I set up an SSL certificate on the online store in order to get better ranking results. I set up the SSL, followed the instructions from Magento to make permanent 301 redirects from the HTTP URLs to the HTTPS URLs, and replaced all the URLs of the online store with HTTPS. When I saw in the SERPs that the rankings for some keywords were going up and down, I checked some URLs to see if all was going well with the 301 redirects, and found that the redirects were 302 and not 301, which is a big bug caused by Magento. I solved the problem with the .htaccess, but the rankings for the homepage have still disappeared. I fetched the site again using Google Webmaster Tools and am waiting to see. Can you please advise if I did something wrong, and what else I could do?
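
    For reference, the usual .htaccess shape of this fix: mod_rewrite's R flag issues a 302 unless a status code is given, so the redirect must state 301 explicitly. A sketch, assuming mod_rewrite is enabled (as it is in a stock Magento install):

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]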

    Read the article

  • How do I receive email sent to postmaster?

    - by jonescb
    I have a VPS server that I would like to get an SSL certificate for, and the CA needs an email address to verify that I own the domain. The options are: [email protected], [email protected], [email protected], and an address at @whoisguard.com. The server runs CentOS 5, and all I have set up for email is sendmail; I don't have POP3 or IMAP. According to the Wikipedia article on Postmaster (citing RFC 5321), all SMTP servers must support a postmaster address. Does sendmail conform to this? I tried sending a test mail to [email protected], but I don't know how to receive it on my server. Do I need to open up any ports? I haven't gotten a message back saying that my test mail failed to send, so my server must have gotten it.
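
    If sendmail is running and reachable, postmaster mail is normally routed through /etc/aliases; mail delivered locally lands in /var/spool/mail/<user> and can be read with the mail command on the server, with no POP3/IMAP needed. A sketch (the external address is a placeholder):

        # /etc/aliases - route postmaster to root, and root to a mailbox you read
        postmaster:    root
        root:          [email protected]    # placeholder external address

    After editing, run newaliases to rebuild the alias database. Note also that the stock CentOS sendmail config listens on 127.0.0.1 only, so receiving mail from the outside requires binding to all interfaces and opening port 25.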

    Read the article

  • apache virtual host concept and dns

    - by Subhransu
    I want to have around 60 project repositories served from a dedicated remote server (Ubuntu) with the help of a Mercurial server, so that all my developers will be able to push their changes. I have followed this article in order to do that, but got stuck at the Apache configuration step (sections 2.5 and 2.5.4). I have the following questions: What steps do I need to follow to make Apache serve /home/hg/repositories/private/hgweb.cgi when I enter dev.example.com/private? Is my virtual host file correct, or do I need to change anything? I bought example.com; how do I make it serve dev.example.com/private? Do I need to add an A record (like subdomain.example.com pointing to the IP of my server) in the cPanel of the hosting company?

        ServerAdmin webmaster@localhost
        ServerName dev.example.com

        ScriptAlias /private /home/hg/repositories/private/hgweb.cgi
        <Directory /home/hg/repositories/private/>
            Options ExecCGI FollowSymlinks
            AddHandler cgi-script .cgi
            DirectoryIndex hgweb.cgi
            AuthType Basic
            AuthName "Mercurial repositories"
            AuthUserFile /home/hg/tools/hgusers
            Require valid-user
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/dev.example.com_error.log
        # Possible values include: debug, info, notice, warn, error, cr$
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/dev.example.com_ssl_access.lo$

        SSLEngine on
        SSLCertificateFile "/etc/apache2/ssl/dev.example.com.crt"
        SSLCertificateKeyFile "/etc/apache2/ssl/dev.example.com.k$

    NOTE: The above is my virtual host file. I have not enabled the site yet, and I have not changed any host, hostname, or httpd.conf file either.
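
    On the DNS question: yes, dev.example.com needs its own record wherever the example.com zone is hosted (cPanel or the registrar's panel), pointing at the server. A zone-file sketch (203.0.113.10 is a placeholder documentation address, not a real IP):

        ; add an A record for the subdomain
        dev.example.com.    IN    A    203.0.113.10

    Once that propagates, Apache's ServerName dev.example.com will match the request and the ScriptAlias can take over.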

    Read the article

  • redirect to 404 wildcard subdomain

    - by Leandro Garcia
    I set up a wildcard A record with my domain registrar. Now if a user accesses a missing subdomain on my domain, they are redirected to the homepage. My initial setup was this:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            RewriteEngine On
            RewriteRule ^(.*)$ http://domain.com$1 [R]
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            # more below...
        </VirtualHost>

    Any wildcard subdomain, or my IP entered as a URL, will redirect to the homepage. Can I do something about this that will redirect (an HTTP redirect, perhaps) the wildcard subdomains to a 404 page instead of to the homepage?
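
    Apache hands any Host header it cannot match to the first name-based vhost it loaded, so one option is a dedicated first vhost that answers with a 404 while the real site keeps its own block. An untested sketch (the catch-all ServerName is an arbitrary placeholder):

        # First-loaded catch-all: anything that isn't domain.com gets a 404
        <VirtualHost *:80>
            ServerName catchall.domain.com    # placeholder name
            RedirectMatch 404 .*
        </VirtualHost>

        # The real site
        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /var/www
        </VirtualHost>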

    Read the article

  • Duplicate content issue after URL-change with 301-redirects

    - by David
    We have the following problem: We changed all URLs on our page from oldURL.html to newURL.html and set up 301 redirects (ca. 600 URLs). Google re-crawled our page and indexed all the new URLs (newURL.html), but didn't crawl the old URLs (oldURL.html) again, as there were no internal links pointing at them anymore after the URL change. This resulted in massive ranking drops, etc., because (i) Google thought oldURL.html had exactly the same content as newURL.html, causing duplicate content issues, and (ii) Google did not transfer the juice from oldURL to newURL, because the 301 redirect was never noticed. Now we have pointed all internal links at the old URLs again, which then redirect to the new URLs, in the hope that Google will re-crawl the pages once there are internal links pointing at them. This is partially happening, but at a really low speed, so it would take multiple months for Google to notice all the redirects. I guess that's because Google thinks: "Aah, I already know oldURL.html, so no need to re-crawl it." Possible solutions we thought of are: submitting as many of the old URLs as possible via Webmaster Tools, to manually trigger a crawl (doing that already); and submitting a sitemap with all the old URLs, though we are not sure if that is a good idea, because Google does not seem to like 301 redirects in a sitemap. Both solutions are imperfect, and we cannot wait three months just to regain our old rankings. What are your ideas? Best, David
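
    For the sitemap idea, the file itself is trivial to generate; a sketch with placeholder URLs (Google treats sitemap entries as crawl hints, so listing the redirecting URLs is a nudge to revisit them, not a guarantee):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://www.example.com/oldURL.html</loc></url>
          <!-- one <url> entry per redirected address -->
        </urlset>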

    Read the article

  • www.foobar.com works but foobar.com results in a 'Server not found' error

    - by Homunculus Reticulli
    I have just set up a minimal (hopefully secure? - comments welcome) Apache website using the following configuration file:

        <VirtualHost *:80>
            ServerName foobar.com
            ServerAlias www.foobar.com
            ServerAdmin [email protected]
            DocumentRoot /path/to/websites/foobar/web
            DirectoryIndex index.php

            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.errors.log"

            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>

            <Directory /path/to/websites/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I am able to access the website by using www.foobar.com, but when I type foobar.com I get the error 'Server not found'. Why is this? My second question concerns the security implications of the directive:

        <Directory /path/to/websites/>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    in the configuration above. What exactly is it doing, and is it necessary? From my (admittedly limited) understanding of Apache configuration files, this means that anyone will be able to access (write to?) the /path/to/websites/ folder. Is my understanding correct? And if yes, how is this not a security risk?
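
    On the first question: 'Server not found' is a DNS error rather than an Apache one; the browser never reaches the server, which suggests the bare domain has no DNS record even though www does. The ServerAlias line is already correct, but Apache only sees the request once DNS delivers it. A zone-file sketch (203.0.113.10 stands in for the real address):

        foobar.com.        IN    A    203.0.113.10
        www.foobar.com.    IN    A    203.0.113.10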

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've battled with this and failed to achieve my goal, which is to deny all access to every directory except the Public directory, and to allow access to the other directories only from specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled:

        sudo a2enmod ssl
        sudo a2enmod proxy
        sudo a2enmod proxy_http
        sudo a2enmod rewrite
        sudo a2ensite default-ssl

    Outside of the script I copied sites-available to sites-enabled, then reloaded Apache. I have a directory created for Railo CFML located at /var/www/Railo/. Navigating the browser to http://Server_IP_Address/Railo forces SSL and relocates to https://Server_IP_Address/Railo, which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appear to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this:

        Railo
          error
          Public
          NotPublic
          Sandbox

    These are my sites-enabled configurations:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            #Default Deny All to prevent walking backwards in file system
            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ ".*/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc

            RewriteEngine on
            RewriteCond %{SERVER_PORT} !^443$
            RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]
        </VirtualHost>

    and

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ "/var/www/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

            # A self-signed (snakeoil) certificate can be created by installing the
            # ssl-cert package. See /usr/share/doc/apache2.2-common/README.Debian.gz
            # for more info. If both key and certificate are stored in the same file,
            # only the SSLCertificateFile directive is needed.
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            # Server Certificate Chain:
            # Point SSLCertificateChainFile at a file containing the concatenation of
            # PEM encoded CA certificates which form the certificate chain for the
            # server certificate. Alternatively the referenced file can be the same as
            # SSLCertificateFile when the CA certificates are directly appended to the
            # server certificate for convinience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

            # Certificate Authority (CA):
            # Set the CA certificate verification path where to find CA certificates
            # for client authentication, or alternatively one huge file containing all
            # of them (file must be PEM encoded).
            # Note: Inside SSLCACertificatePath you need hash symlinks to point to the
            # certificate files. Use the provided Makefile to update the hash symlinks
            # after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

            # Certificate Revocation Lists (CRL):
            # Set the CA revocation path where to find CA CRLs for client
            # authentication, or alternatively one huge file containing all of them
            # (file must be PEM encoded).
            # Note: Inside SSLCARevocationPath you need hash symlinks to point to the
            # certificate files. Use the provided Makefile to update the hash symlinks
            # after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

            # Client Authentication (Type):
            # Client certificate verification type and depth. Types are none,
            # optional, require and optional_no_ca. Depth is a number which specifies
            # how deeply to verify the certificate issuer chain before deciding the
            # certificate is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth 10

            # Access Control:
            # With SSLRequire you can do per-directory access control based on
            # arbitrary complex boolean expressions containing server variable checks
            # and other lookup directives. The syntax is a mixture between C and Perl.
            # See the mod_ssl documentation for more details.
            #<Location />
            #SSLRequire (    %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            #            and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            #            and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            #            and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            #            and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
            #           or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>

            # SSL Engine Options:
            # Set various options for the SSL engine.
            # o FakeBasicAuth:
            #   Translate the client X.509 into a Basic Authorisation. This means that
            #   the standard Auth/DBMAuth methods can be used for access control. The
            #   user name is the `one line' version of the client's X.509 certificate.
            #   Note that no password is obtained from the user. Every entry in the
            #   user file needs this password: `xxj31ZMTZzkVA'.
            # o ExportCertData:
            #   This exports two additional environment variables: SSL_CLIENT_CERT and
            #   SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
            #   server (always existing) and the client (only existing when client
            #   authentication is used). This can be used to import the certificates
            #   into CGI scripts.
            # o StdEnvVars:
            #   This exports the standard SSL/TLS related `SSL_*' environment
            #   variables. Per default this exportation is switched off for
            #   performance reasons, because the extraction step is an expensive
            #   operation and is usually useless for serving static content. So one
            #   usually enables the exportation for CGI and SSI requests only.
            # o StrictRequire:
            #   This denies access when "SSLRequireSSL" or "SSLRequire" applied even
            #   under a "Satisfy any" situation, i.e. when it applies access is denied
            #   and no other module can change it.
            # o OptRenegotiate:
            #   This enables optimized SSL connection renegotiation handling when SSL
            #   directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            # SSL Protocol Adjustments:
            # The safe and default but still SSL/TLS standard compliant shutdown
            # approach is that mod_ssl sends the close notify alert but doesn't wait
            # for the close notify alert from client. When you need a different
            # shutdown approach you can use one of the following variables:
            # o ssl-unclean-shutdown:
            #   This forces an unclean shutdown when the connection is closed, i.e. no
            #   SSL close notify alert is send or allowed to received. This violates
            #   the SSL/TLS standard but is needed for some brain-dead browsers. Use
            #   this when you receive I/O errors because of the standard approach
            #   where mod_ssl sends the close notify alert.
            # o ssl-accurate-shutdown:
            #   This forces an accurate shutdown when the connection is closed, i.e. a
            #   SSL close notify alert is send and mod_ssl waits for the close notify
            #   alert of the client. This is 100% SSL/TLS standard compliant, but in
            #   practice often causes hanging connections with brain-dead browsers.
            #   Use this only for browsers where you know that their SSL
            #   implementation works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP
            # keep-alive facility, so you usually additionally want to disable
            # keep-alive for those clients, too. Use variable "nokeepalive" for this.
            # Similarly, one has to force some clients to use HTTP/1.0 to workaround
            # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
            # "force-response-1.0" for this.
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html

            #Proxy .cfm and cfc requests to Railo
            ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1
            ProxyPassReverse / http://127.0.0.1:8888/

            #Deny access to admin except for local clients
            <Location /railo-context/admin/>
                Order deny,allow
                Deny from all
                # Allow from <Omitted>
                # Allow from <Omitted>
                Allow from 127.0.0.1
            </Location>
        </VirtualHost>
        </IfModule>

    The apache2.conf includes the following:

        # Include the virtual host configurations:
        Include sites-enabled/

        <IfModule !mod_jk.c>
            LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        </IfModule>

        <IfModule mod_jk.c>
            JkMount /*.cfm ajp13
            JkMount /*.cfc ajp13
            JkMount /*.do ajp13
            JkMount /*.jsp ajp13
            JkMount /*.cfchart ajp13
            JkMount /*.cfm/* ajp13
            JkMount /*.cfml/* ajp13
            # Flex Gateway Mappings
            # JkMount /flex2gateway/* ajp13
            # JkMount /flashservices/gateway/* ajp13
            # JkMount /messagebroker/* ajp13
            JkMountCopy all
            JkLogFile /var/log/apache2/mod_jk.log
        </IfModule>

    I believe I understand most of this except the jk_module inclusion, which I've noticed produces an error in the logs that I can't sort out:

        [warn] No JkShmFile defined in httpd.conf. Using default /etc/apache2/logs/jk-runtime-status

    I've checked my regular expression against the paths of the directories with RegexBuddy just to be sure it was correct. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking Railo admin site access.
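
    An alternative worth sketching here: instead of a negative-lookahead regex, Apache applies <Directory> sections from least to most specific, so a blanket deny on the Railo root can be re-opened per subdirectory. A minimal, untested outline in the same Apache 2.2 syntax (paths match the tree above; 203.0.113.7 is a placeholder address):

        # Deny the whole Railo tree by default
        <Directory "/var/www/Railo/">
            Order Deny,Allow
            Deny from all
        </Directory>

        # Re-open Public to everyone
        <Directory "/var/www/Railo/Public/">
            Order Allow,Deny
            Allow from all
        </Directory>

        # Open the rest only to specific addresses
        <Directory "/var/www/Railo/NotPublic/">
            Order Deny,Allow
            Deny from all
            Allow from 203.0.113.7
        </Directory>

    One caveat: requests handed off by ProxyPassMatch or JkMount go to Tomcat and may never consult these filesystem sections, so .cfm URLs can need equivalent <Location> rules.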

    Read the article

  • Is meta description still relevant?

    - by Jeff Atwood
    I received this bit of advice about the meta description tag recently: "Meta descriptions are used by Google probably 80% of the time for the snippet. They don't help with rankings but you should probably use them. You could just auto generate them from the first part of the question." The description tag exists in the header, like so:

        <meta name="Description" content="A brief summary of the content on the page.">

    I'm not sure why we would need this field, as Google seems perfectly capable of showing the relevant search terms in context in the search result pages (I searched for "c# list performance"; screenshot omitted). In other words, where would a meta description summary improve these results? We want the page to show context around the actual search hits, not a random summary we inserted! Google Webmaster Central has this advice: "For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not 'spammy.' Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation." I'm struggling to think of any scenario when I would want the Google-generated summary, that is, actual context from the page for the search terms, to be replaced by a hard-coded meta description summary of the question itself.

    Read the article

  • Config xampp never to cache pages from localhost

    - by Michael Mao
    Hello everyone: I am not a webmaster, and that was exactly the reason why I installed XAMPP 1.7.2 on Windows XP SP2 instead of manually configuring Apache, MySQL, and PHP to cooperate with each other. Now I am having a problem disabling the caching of pages from localhost. Some suggested just forcing the browser not to cache, using the Firefox Web Developer toolbar or something similar, but I feel it would be better if I could configure the Apache server in XAMPP to never allow pages from localhost to be cached. I guess this is done somewhere in httpd.conf?

        LoadModule cache_module modules/mod_cache.so

    Would this module be helpful in this case? Doc here: mod_cache. I am not very sure this would resolve the problem. Could anyone confirm this approach is feasible? I'd like to work it out myself, given that I am on the right track... Many thanks in advance
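
    For what it's worth, mod_cache is a server-side cache and would not help here; what keeps pages out of the browser cache is the response headers. A sketch for httpd.conf, assuming mod_headers is loaded (XAMPP bundles it; uncomment its LoadModule line if needed):

        <IfModule mod_headers.c>
            # ask browsers not to cache anything served by this Apache
            Header set Cache-Control "no-cache, no-store, must-revalidate"
            Header set Pragma "no-cache"
            Header set Expires "0"
        </IfModule>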

    Read the article

  • Access forbidden! using xampp on macosx 10.5

    - by erikvold
    I installed XAMPP back around January 2009, and CF8 to test ColdFusion on my MacBook (note: I do not think this issue is related to CF, but only to XAMPP). I only ever used the Apache part of XAMPP, and this was working for over a year. Within the last couple of months at most, I've started getting the following error message (even for non-CF sites and non-.cfm pages; the error occurs for .html files too):

        Access forbidden!
        You don't have permission to access the requested object. It is either read-protected or not readable by the server.
        If you think this is a server error, please contact the webmaster
        Error 403
        erikvold.lan
        Sun Mar 21 20:58:45 2010
        Apache/2.2.11 (Unix) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.7l PHP/5.2.9 mod_perl/2.0.4 Perl/v5.10.0

    As far as I recall I haven't made any changes, so it was working for a year and then just stopped working...
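
    A 403 like this usually traces back to the <Directory> permissions for the document root in httpd.conf (sometimes reset by an update). A sketch of the relevant block, assuming XAMPP's default document root on OS X (verify the path on your install):

        # httpd.conf sketch (Apache 2.2 syntax, matching the version shown above)
        <Directory "/Applications/XAMPP/htdocs">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>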

    Read the article

  • Apache disabled virtual host domains resolve an enabled virtual host

    - by littleK
    I have three virtual hosts defined in Apache on my Ubuntu server, for three different domains. If I disable two of the virtual hosts (a2dissite) and try to resolve those two URLs in the browser, the one remaining enabled site is served instead. How can I configure Apache so that the domains for the disabled virtual hosts do not resolve? This is how all 3 virtual hosts are configured (info is masked):

        # domain: myfirstdomain.com
        # public: /home/me/public/myfirstdomain.com/
        <VirtualHost *:80>
            # Admin email, Server Name (domain name), and any aliases
            ServerAdmin [email protected]
            ServerName www.myfirstdomain.com
            ServerAlias myfirstdomain.com

            # Index file and Document Root (where the public files are located)
            DirectoryIndex index.html index.php
            DocumentRoot /home/me/public/myfirstdomain.com/public

            # Log file locations
            LogLevel warn
            ErrorLog /home/me/public/myfirstdomain.com/log/error.log
            CustomLog /home/me/public/myfirstdomain.com/log/access.log combined
        </VirtualHost>
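
    Disabling a site does not stop the domain resolving; DNS still points at the server, and with no matching vhost Apache falls back to the first enabled one. The usual cure is a default site that sorts first (e.g. saved as sites-available/000-catchall and enabled with a2ensite) and refuses unmatched names. An untested sketch:

        <VirtualHost *:80>
            ServerName default.invalid    # placeholder; never matched directly
            Redirect 404 /
        </VirtualHost>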

    Read the article

  • Oracle 5th Annual Maintenance Summit - Orlando March 22-23, 2011

    - by stephen.slade(at)oracle.com
    It's not too late to register today or tomorrow for this exclusive 'Maintenance Professionals Only' event. In 4 tracks, 27 customer and partner speakers will present case studies and success stories in these 'no-sell zone' sessions. The take-aways will be worth the trip! This '2 in 1' event combines a Customer Showcase featuring Orlando Utilities Commission (OUC) with a Maintenance Summit. OUC - the local municipal utility providing residential, commercial, and industrial customers with clean, reliable, and affordable electric and water services - will open the event with their CIO as keynote speaker, and host tours of their fleet, facility, and power generation operations. Recognized as a green leader, OUC has been the most reliable power provider in Florida for the past 9 years due, in large part, to the operational efficiencies of its plant and asset maintenance systems. This Summit will feature breakout session tracks for EBS, JD Edwards, PeopleSoft, and Sustainability. Highlights include over 12 Oracle solution demo stations, over 25 interactive breakout sessions, a pool-side networking reception with a live band, a partner exhibit pavilion, and a special appearance by Sean D. Tucker, Team Oracle stunt pilot!

        Dates: March 22-23, 2011
        Location: Orlando World Center Marriott, Orlando, Florida
        Evite: http://www.oracle.com/us/dm/h2fy11/65971-nafm10019768mpp191c003-oem-304204.html
        Highlights: Keynotes, Oracle Expert Demo Stations, Interactive Breakout Sessions, Networking Reception, Partner Pavilion, Speakers
        Tracks: EBS, JDE, PSFT, Sustainability
        Tours: Orlando Utility Operations, Fleet and Facility
        Oracle Demo Stations: Agile, AutoVue, Primavera, MOC/SSDM, Utilities, PIM, PDQ, UCM, On Demand, Business Accelerators, Facilities Work Management, EBS Enterprise Asset Management, PeopleSoft Maintenance Management, Technology, Hardware/Sun
        Partner-Sponsors: Viziya, Global PTM, MiPro, Asset Management Solutions, Ventureforth, Impac Services, EAM Master, LLC, Meridium

    Read the article

  • Displaying images in a ListView using ImageList (WinForms)

    - by Jason Ulloa
    Today we will see how to work with the Windows Forms ListView and ImageList controls to read and display a series of images. Before that, I should say there may be other ways to display images that require only one control (for example, a GridView), but that will be another post; for now we will focus on doing it with the controls mentioned above. The first thing we will do is create a new Windows Forms project, in my case using C#, and then add an ImageList control. This is the control we will use to store all the images once we have read them. If we inspect the control, we will see that we have the option of adding the images through the designer, that is, selecting them manually, or adding them through code, which is what we will do. The second step is to add a ListView control to the form; it will be in charge of displaying the images. Mind you, for now it will only display them; it will have no other functionality. Now we go to the code-behind, and in the form's Load event we start coding. The first step is to create a new DirectoryInfo variable, to which we give the path of our images folder. In this example we use Application.StartupPath to indicate that we are going to look inside our own project (in the debug folder for the moment):

        DirectoryInfo dir = new DirectoryInfo(Application.StartupPath + @"\images");

    Once we have created the reference to the path, we use a loop to get all the images found inside the folder we indicated, adding them to the ImageList and building our new collection:

        foreach (FileInfo file in dir.GetFiles())
        {
            try
            {
                this.imageList1.Images.Add(Image.FromFile(file.FullName));
            }
            catch
            {
                Console.WriteLine("Not an image file");
            }
        }

    Once we have filled our ImageList, we assign the ListView its properties to define how the images will be displayed. One thing to keep in mind here is the ImageSize property, since it defines the size the images will have in the ListView when they are shown:

        this.listView1.View = View.LargeIcon;
        this.imageList1.ImageSize = new Size(120, 100);
        this.listView1.LargeImageList = this.imageList1;

    Finally, with the help of another loop, we go through each of the elements our ImageList now holds and add it to the ListView to display it:

        for (int j = 0; j < this.imageList1.Images.Count; j++)
        {
            ListViewItem item = new ListViewItem();
            item.ImageIndex = j;
            this.listView1.Items.Add(item);
        }

    As you can see, even though we use two different controls, it is really simple to display the images in the ListView; in the end the ImageList control just works as a "bridge" that lets us read the images and then show them in another control. To finish, the sample projects: C# VB

    Read the article

  • Google Accounts

    - by Alex
    Hi all, I need to set up a bunch of accounts (~50) with Google which will later be hooked up to Analytics and Webmaster Tools, and which I will access via their APIs. The problem is that Google will stop me from creating accounts at a certain point because it thinks I'm spamming it. I've tried bypassing the issue by proxying through Tor, but that didn't work (I suspect the Tor nodes are already abused and blacklisted). I also drove around town with my iPhone, hopping towers, trying to set up accounts. That also didn't work. Phone verification doesn't work either, because my phone is already linked to too many accounts... So, I figure it's time to ask. How can I continue to set up Google accounts (NOT Gmail...) without being considered a spammer? If there's a service from Google to whitelist my IP, even for a fee, I would be glad to sign up for it. Any suggestions will be appreciated!

    Read the article

  • VHOST not working in Apache

    - by Starx
    I got a LAMP server working on Ubuntu 11.04. Now the problem is that the websites have to be enabled and disabled from the terminal, and all of them have to be accessed via http://localhost, which is not very efficient. So I created a vhost, using some tutorials off the net. Here is the code for it:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias www.site.com
            DocumentRoot /home/starx/public_html/site/public

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/starx/public_html/site/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/site-error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/site-access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Still, I can't access the page at http://site.com, but if I access it using http://localhost it works. I have disabled all other sites, including the default, and have enabled just one site, i.e. site. How do I fix this?
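
    The likely missing piece is name resolution: site.com must resolve to this machine before Apache's ServerName matching can ever see the request. For local testing, a hosts-file sketch:

        # /etc/hosts on the machine you browse from
        127.0.0.1    site.com    www.site.com

    From other machines, point the entry at the server's LAN IP instead, or use real DNS.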

    Read the article

  • How do I rate limit google's crawl of my class C IP block?

    - by Zak
    I have several sites in a Class C network that all get crawled by Google on a pretty regular basis. Normally this is fine. However, when Google starts crawling all the sites at the same time, the small set of servers that back this IP block can take a pretty big hit on load. With Google Webmaster Tools, you can rate-limit the Googlebot on a given domain, but I haven't found a way to limit the bot across an IP network yet. Anyone have experience with this? How did you fix it?

    Read the article

  • How do I get google to see keywords on a one page web application site?

    - by David
    I'm going to have to link to the web site to explain this: http://www.diagram.ly. It's a free service, so I hope this doesn't break advertising rules. Basically, it's a one-page web application; I don't want to create a web site for it. Some background text loads and, if JavaScript is enabled, the web application itself then loads. The problem is that Google only seems to be picking up the title of the page and the text in the footer, so the site only appears in Google searches for very limited text (based mostly on the title and meta description). I was hoping that search engines would pick up the background text and index that. The text is factual, not keyword-stuffed. Yahoo seems to pick up the text, just not Google. Does anyone have any experience of how Google would view such a site, and where I could put the text for a better result? Edit: I should mention that Google Webmaster Tools lists the site keywords as "Component, diagramly, feed, mxgraph, share and twitter" - basically the footer and little else.
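
    If the background copy is injected by script, crawlers of this era will never see it; the usual pattern is to keep the description in static markup that is always served, with the application layered on top of (or replacing) it at load time. A hypothetical sketch - the wording below is placeholder text, not Diagramly's actual copy:

        <!-- static, always-present description; the app hides or replaces it on load -->
        <div id="intro">
          <h1>Diagramly - a free online diagram editor</h1>
          <p>Draw flowcharts and diagrams in the browser. No sign-up required.</p>
        </div>
        <noscript>
          <p>This editor requires JavaScript.</p>
        </noscript>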

    Read the article

  • Warning: DocumentRoot * does not exist on Centos 6

    - by user1213807
    My virtual host lines:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/docs/example.com
            ServerName example.com
            ErrorLog logs/example.com-error_log
            CustomLog logs/example.com-access_log common
        </VirtualHost>

    And when I restart Apache (sudo apachectl -k stop) I get this error:

        Warning: DocumentRoot [/www/docs/example.com] does not exist

    I've checked some things: all file and directory permissions are OK, everything is 755. I thought maybe this error was about SELinux and disabled it, but that didn't work; I still get the same error. How can I fix this problem?
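
    The warning is literal: Apache cannot see that directory, either because it is missing or because SELinux hides it. A sketch of the usual checks on CentOS, using the path from the config above:

        # create the missing DocumentRoot
        sudo mkdir -p /www/docs/example.com

        # if SELinux is enforcing, give the tree the web-content context
        sudo chcon -R -t httpd_sys_content_t /www/docs/example.com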

    Read the article

  • Which shopping cart / ecommerce platform to choose?

    - by fabien7474
    I need to build an ecommerce website within a tight budget and schedule. Of course, I have never done that before, so I have googled what my options are, and I have concluded that the following are no longer valid candidates:

        Magento: steep learning curve
        osCommerce: old, bad design, buggy, and not user-friendly
        Zen Cart, CRE Loaded, CubeCart: based on osCommerce
        VirtueMart, Ubercart, eCart: based on a CMS (Joomla, Drupal, WordPress) that is not necessary for my use case

    So I finally narrowed down my choices to these solutions:

        PrestaShop: easy to use, great templating engine (Smarty), but many modules are not free, yet indispensable
        OpenCart: security issues and poor support from the main developer. See here and here.

    So, as you can see, I am a little bit confused, and if you can help me choose an easy-to-use, lightweight, and cheap (not necessarily free) ecommerce solution, I would really appreciate it. By the way, I am a Java/Grails programmer, but I am also familiar with PHP and .NET (not with Python or Ruby/Rails). EDIT: It seems this question is more appropriate for the Webmaster StackExchange site, so please move it where it belongs (I cannot do that) instead of downvoting it. BTW, I have found a quite similar question on SO (http://stackoverflow.com/questions/3315638/php-ecommerce-system-which-one-is-easiest-to-modify) which is quite popular.

    Read the article

  • Why My Adsense Account is not accepted

    - by Muhammad Adeel Zahid
    I created a blog on Blogger a few days ago and wrote two blog entries on it. When I try creating an AdSense account with the blog's address, the application is not accepted due to "Page Type". I have searched around the net and found that it's probably due to duplicate or insufficient content on the blog, but each of my blog entries is more than 2,000 words, and I literally typed 80 percent of the content myself, with only a few code blocks copied from other sites. Below is the content of the email I received in response to my application:

        Hello,
        Thank you for your interest in Google AdSense. Unfortunately, after reviewing your application, we're unable to accept you into AdSense at this time. We did not approve your application for the reasons listed below.
        Issues:
        - Page Type
        Further detail:
        Page Type: In order to participate in Google AdSense, publishers' websites and application information must satisfy the following guidelines:
        - Your website must be your own top-level domain (www.example.com and not www.example.com/mysite).
        - You must provide accurate personal information with your application that matches the information on your domain registration.
        - Your website must contain substantial, original content.
        - Your site must comply with Google AdSense program policies: https://www.google.com/adsense/policies which include Google's webmaster quality guidelines: http://www.google.com/support/webmasters/bin/answer.py?answer=35769#quality
        If your site satisfies the above criteria in the future, please resubmit your application and we'll review it as soon as possible.

    My personal credentials are the same as they appear in my Google account. I can't figure out the problem. Any help/suggestion is highly appreciated. Regards

    Read the article

  • CodePlex Daily Summary for Wednesday, February 29, 2012

    CodePlex Daily Summary for Wednesday, February 29, 2012Popular ReleasesZXing.Net: ZXing.Net 0.4.0.0: sync with rev. 2196 of the java version important fix for RGBLuminanceSource generating barcode bitmaps Windows Phone demo client (only tested with emulator, because I don't have a Windows Phone) Barcode generation support for Windows Forms demo client Webcam support for Windows Forms demo clientOrchard Project: Orchard 1.4: Please read our release notes for Orchard 1.4: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-NotesFluentData -Micro ORM with a fluent API that makes it simple to query a database: FluentData version 1.2: New features: - QueryValues method - Added support for automapping to enumerations (both int and string are supported). Fixed 2 reported issues.NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.15: 3.6.0.15 28-Feb-2012 • Fix: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state. Work Item 10435: http://netsqlazman.codeplex.com/workitem/10435 • Fix: Made StorageCache thread safe. Thanks to tangrl. • Fix: Members property of SqlAzManApplicationGroup is not functioning. Thanks to tangrl. Work Item 10267: http://netsqlazman.codeplex.com/workitem/10267 • Fix: Indexer are making database calls. Thanks to t...SCCM Client Actions Tool: Client Actions Tool v1.1: SCCM Client Actions Tool v1.1 is the latest version. It comes with following changes since last version: Added stop button to stop the ongoing process. Added action "Query update status". Added option "saveOnlineComputers" in config.ini to enable saving list of online computers from last session. Default value for "LatestClientVersion" set to SP2 R3 (4.00.6487.2157). Wuauserv service manual startup mode is considered healthy on Windows 7. Errors are now suppressed in checkReleases...Document.Editor: 2012.1: Whats new for Document.Editor 2012.1: Improved Recent Documents list Improved Insert Shape Improved Dialogs Minor Bug Fix's, improvements and speed upsKinect PowerPoint Control: Kinect PowerPoint Control v1.1: Updated for Kinect SDK 1.0.SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8: API Updates: SOLID Extract Method for Archives (7Zip and RAR). ExtractAllEntries method on Archive classes will extract archives as a streaming file. This can offer better 7Zip extraction performance if any of the entries are solid. The IsSolid method on 7Zip archives will return true if any are solid. Removed IExtractionListener was removed in favor of events. Unit tests show example. Bug fixes: PPMd passes tests plus other fixes (Thanks Pavel) Zip used to always write a Post Descri...Social Network Importer for NodeXL: SocialNetImporter(v.1.3): This new version includes: - Download new networks for Facebook fan pages. - New options for downloading more posts - Bug fixes To use the new graph data provider, do the following: Unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns") Open NodeXL template and you can access the new importer from the "Import" menuASP.NET REST Services Framework: Release 1.1 - Standard version: Beginning from v1.1 the REST-services Framework is compatible with ASP.NET Routing model as well with CRUD (Create, Read, Update, and Delete) principle. 
These two are often important when building REST API functionality within your application. It also includes ability to apply Filters to a class to target all WebRest methods, as well as some performance enhancements. New version includes Metadata Explorer providing ability exploring the existing services that becomes essential as the number ...SQL Live Monitor: SQL Live Monitor 1.31: A quick fix to make it this version work with SQL 2012. Version 2 already has 2012 working, but am still developing the UI in version 2, so this is just an interim fix to allow user to monitor SQL 2012.DotNet.Highcharts: DotNet.Highcharts 1.1 with Examples: Fixed small bug in JsonSerializer about the numbers represented as string. Fixed Issue 310: decimal values don't work Fixed Issue 345: Disable Animation Refactored Highcharts class. Implemented Issue 341: More charts on one page. Added new class Container which can combine and display multiple charts. Usage: new Container(new[] { chart1, chart2, chart3, chart4 }) Implemented Feature 302: Inside an UpdatePanel - Added method (InFunction) which create the Highchart inside JavaScript f...Content Slider Module for DotNetNuke: 01.02.00: This release has the following updates and new features: Feature: One-Click Enabling of Pager Setting Feature: Cache Sliders for Performance Feature: Configurable Cache Setting Enhancement: Transitions can be Selected Bug: Secure Folder Images not Viewable Bug: Sliders Disappear on Postback Bug: Remote Images Cause Error Bug: Deleted Images Cause Error System Requirements DotNetNuke v06.00.00 or newer .Net Framework v3.5 SP1 or newer SQL Server 2005 or newerImage Resizer for Windows: Image Resizer 3 Preview 3: Here is yet another iteration toward what will eventually become Image Resizer 3. This release is stable. However, I'm calling it a preview since there are still many features I'd still like to add before calling it complete. Updated on February 28 to fix an issue with installing on multi-user machines. As usual, here is my progress report. Done Preview 3 Fix: 3206 3076 3077 5688 Fix: 7420 Fix: 7527 Fix: 7576 7612 Preview 2 6308 6309 Fix: 7339 Fix: 7357 Preview 1 UI...Finestra Virtual Desktops: 2.5.4500: This is a bug fix release for version 2.5. It fixes several things and adds a couple of minor features. See the 2.5 release notes for more information on the major new features in that version. Important - If Finestra crashes on startup for you, you must install the Visual C++ 2010 runtime from http://www.microsoft.com/download/en/details.aspx?id=5555. Fixes a bug with window animations not refreshing the screen on XP and with DWM off Fixes a bug with with crashing on XP due to a bug in t...AutoExpandOver - Show popup on mouseover, focus, click in Silverlight: AutoExpandOver 2.1: Many fixes. All leaks are gone. Features added.FileSquirrel: FileSquirrel Alpha 1.2.1 32bit and x64: FileSquirrel Alpha 1.2.1Publishing SCSM Work Item to a Sharepoint Calendar: PublishWI command line tool (Updated): Updated for SCSM 2012 (RC) PublishWI command line tool Usage: PublishWI.exe WIID URL CalendarName WIID: ID of the Work Item to publish (for example, CR3333) URL: URL of the SharePoint site, such as http://www.sharepoint.com CalendarName: The name of the sharepoint calendar to publish WI to.HttpRider - Tool for Web Site Performance and Stress Tests: HttpRider 1.0: Please let me know any issues you face. ThanksMedia Companion: MC 3.432b Release: General Now remembers window location. 
Catching a few more exceptions when an image is blank TV A couple of UI tweaks Movies Fixed the actor name displaying HTML Fixed crash when using Save files as "movie.nfo", "movie.tbn", & "fanart.jpg" New CSV template for HTML output function Added <createdate> tag for HTML output A couple of UI tweaks Known Issues Multiepisodes are not handled correctly in MC. The created nfo is valid, but they are not displayed in MC correctly & saving the...New Projects.NET Implementation of Extensions for Financial Services: NXFS is planned to be a compliant .Net implementation of CEN/XFS (currently version 3.20) to overcome some deficits of native implementation such as not supporting highly productive programming languages (like C#) and supporting only Windows XP and x86 platform.BigCoder WebMaster Tools: BigCoder WebMaster ToolsBigCoder Whois Program: BigCoder Whois ProgrambiscuitCMS: biscuitCMS makes it easier for <target user group> to <activity>. You'll no longer have to <activity>. It's developed in <programming language>. BridgeSeismic: BridgeSeismicData Normilizer: Decomposer for SQL fact tables, for BI projectsDownload Indexed Cache: "Download Indexed Cache" implements the Bing API Version 2 to retrieve content indexed within the Bing Cache to support the "Search Engine Reconnaissance" section of the OWASP Testing Guide v3. FITSExplorer: FITS Explorer is designed to allow astronomers to quickly and easily browse and preview the image and metadata stored in FITS files. The application was developed using primarily WPF and C#, with some low-level data manipulation routines written in C++ for high-performance.ICalSync: icalsyncLeague Manager: Nosso gerenciador de campeonatosLemcube SISTRI: interfacciamento ai servizi di Interoperabilità SIS.: Lemcube SISTRI: interfacciamento ai servizi di Interoperabilità SIS.LittlePluginLib: A small plugin system based on the .net framework 3.5Log reading: Reading and parsing huge log files, download from ftp server and upload to msql database.Mail Aggregator Service: To address the problem of being 'spammed' by your own alert emails this service was created to group/batch mail messages sent to the same 'to' address. It can integrate into QuickMon or even be used as a stand-alone tool to send out alert notifications. MangoTicTacToe: Demo application of a simple Silverlight game for Windows Phone 7. This tictactoe game features some of the most basics requirements for making a Silverlight game for Windows Phone 7.MonkeyButt: Web alternatives to commercial software like Facebook and Google.PizzaSoft: PizzaSoftproject: project demoSharePoint 2010 Automatically Generated Solution Demo: In this demo project I show how to generate Sandboxed Solutions by code at runtime.SoftService: Software ServiceSPAdmin: SharePoint WarmUp ToolSSASNet: SSASNetSSIS Data Masker: A SSIS Data Flow Transformation Component To Provide Basic Data Masking Capabilities.UpiGppf: This is a high shcool projectXMPP/Media Library for .NET and Windows Phone 7.5: .NET libraries for XMPP, TLS, RTP, STUN, SOCKS and more for windows and windows phone.

    Read the article

  • How to find the Fastest DNS servers to host our domain?

    - by Denis Volovik
    The question was born because lately we've seen a pretty odd (well, at least for us, for the first time) error message in Google Webmaster Tools: "DNS lookup timeout"... I was pretty sure that with eNom's 5 DNS servers (dns1... to dns5.name-services.com) we were well set... But it appears that from Europe/Hungary, for example, dns1.name-services.com takes 170 ms to respond to a ping, while GoDaddy's ns75.domaincontrol.com takes only 40 ms, and at the same time dns2 to dns5.name-services.com each result in a timeout error (on ping)... This issue came to our attention right in the final stages of optimizing our web site (almost to death), basically just in time... I would love to move our domains to a fast (fastest?) and reliable DNS server, but how do I find one? Also, I did the ping tests from various geographic locations (we have servers in many countries) and GoDaddy seemed to be faster than eNom in almost every case. I'd be very thankful for any hints on this! Edited: Well... maybe this one does not have an answer after all...
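
    One caveat on methodology: ping measures ICMP round-trip, not actual DNS service time, and some nameservers drop ICMP entirely (which would explain the "timeouts"). A more direct comparison is to query each nameserver and read the reported query time; a sketch with dig (any domain the server is authoritative for works as the test name):

        # ask each candidate nameserver directly and compare "Query time"
        dig @dns1.name-services.com yourdomain.com A | grep "Query time"
        dig @ns75.domaincontrol.com yourdomain.com A | grep "Query time"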

    Read the article

  • Merging multiple top-level domains into a single domain

    - by user23089
    My client had multiple top-level domains. Each one represented an insurance program within a specific vertical. Across the sites at these alternate domains there was a 30/70 mix of duplicate vs. original content. Some of the alternate domains ranked very well for their target keyphrase groups, while others were absent from results pages. We advised the client to merge the multiple domains into their existing main domain, for usability and SEO reasons. We recently ran the merger. Here was our process:

        1. On the main domain, transfer the content such that it matches 1-for-1 the content on the various alternate domains
        2. Set up Google Webmaster Tools on the main domain
        3. Push the new content on the main domain live and submit a corresponding sitemap to Google
        4. Establish 301 redirects on the alternate domains, such that each alternate domain URL points to its respective page on the main domain

    We did this 12 days ago, and pages (previously on the alternate domains) that had ranked well on Google have now plummeted or are entirely non-existent. Did we do the right thing by merging multiple top-level domains into a single domain? Is this initial dip in rankings normal? How soon should we expect to see a return to normal rankings?
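
    For step 4 above, the per-URL mapping can live in a tiny vhost on each alternate domain; a sketch with placeholder names (Redirect preserves the path, while RedirectMatch handles pages whose path changed):

        <VirtualHost *:80>
            ServerName alternate-insurance-domain.com    # placeholder
            # same path on the main domain:
            Redirect permanent / http://www.maindomain.com/
            # or, for pages whose path changed:
            # RedirectMatch permanent ^/old-page\.html$ http://www.maindomain.com/vertical/new-page.html
        </VirtualHost>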

    Read the article
