Search Results

Search found 41414 results on 1657 pages for 'page views'.


  • Need help with PHP web app bootstrapping error potentially related to Zend [migrated]

    - by Matt Shepherd
    I am trying to get a program called OpenFISMA running on an Ubuntu AMI in AWS. The app is not really coded on the Ubuntu platform, but I am in my comfort zone there, and I have tried both CentOS and OpenSUSE (both are sort of "native" for the app) with the same or worse results. So, why not just get it working on Ubuntu? Anyway, the app is found here: www.openfisma.org and an install guide is found here: https://openfisma.atlassian.net/wiki/display/030100/Installation+Guide

    The install guide kind of sucks. It doesn't list dependencies in any coherent way or provide much detail (it does not even mention Zend once on the entire page), so I've done a lot of work to divine the information I do have. This page provided some dependency info (though again, Zend is not mentioned once): https://openfisma.atlassian.net/wiki/display/PUBLIC/RPM+Management#RPMManagement-BasicOverviewofRPMPackages

    Anyway, I got all the way through the install (so far as I could reconstruct it). I am going to the login page for the first time, and there should be some sort of bootstrapping occurring when I load the page. (I am not a programmer so I have no idea what it is doing there.) When I load it, I get a message on the web page that says: "An exception occurred while bootstrapping the application." So then I look in /var/www/data/logs/php.log and find this message:

        [22-Oct-2013 17:29:18 UTC] PHP Fatal error: Uncaught exception 'Zend_Exception' with message
        'No entry is registered for key 'Zend_Log'' in /var/www/library/Zend/Registry.php:147
        Stack trace:
        #0 /var/www/public/index.php(188): Zend_Registry::get('Zend_Log')
        #1 {main}
          thrown in /var/www/library/Zend/Registry.php on line 147

    This occurs every time I load the page. I gather there is an issue related to registering the Zend_Log variable in the Zend registry, but other than that I really have no idea what to do about it. Am I missing a package that it needs, or is this app not coded to register the variables properly? I have no clue. Any help is greatly appreciated. The application file referenced in the log message (index.php) is included below.

        <?php
        /**
         * Copyright (c) 2008 Endeavor Systems, Inc.
         *
         * This file is part of OpenFISMA.
         *
         * OpenFISMA is free software: you can redistribute it and/or modify it under the terms of the GNU General Public
         * License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later
         * version.
         *
         * OpenFISMA is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
         * warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
         * details.
         *
         * You should have received a copy of the GNU General Public License along with OpenFISMA. If not, see
         * {@link http://www.gnu.org/licenses/}.
         */
        try {
            defined('APPLICATION_PATH') || define(
                'APPLICATION_PATH',
                realpath(dirname(__FILE__) . '/../application')
            );

            // Define application environment
            defined('APPLICATION_ENV') || define(
                'APPLICATION_ENV',
                (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'production')
            );

            set_include_path(
                APPLICATION_PATH . '/../library/Symfony/Components'
                . PATH_SEPARATOR . APPLICATION_PATH . '/../library'
                . PATH_SEPARATOR . get_include_path()
            );

            require_once 'Fisma.php';
            require_once 'Zend/Application.php';

            $application = new Zend_Application(APPLICATION_ENV, APPLICATION_PATH . '/config/application.ini');
            Fisma::setAppConfig($application->getOptions());
            Fisma::initialize(Fisma::RUN_MODE_WEB_APP);
            $application->bootstrap()->run();
        } catch (Zend_Config_Exception $zce) {
            // A zend config exception indicates that the application may not be installed properly
            echo '<h1>The application is not installed correctly</h1>';
            $zceMsg = $zce->getMessage();
            if (stristr($zceMsg, 'parse_ini_file') !== false) {
                if (stristr($zceMsg, 'application.ini') !== false) {
                    if (stristr($zceMsg, 'No such file or directory') !== false) {
                        echo 'The ' . APPLICATION_PATH . '/config/application.ini file is missing.';
                    } elseif (stristr($zceMsg, 'Permission denied') !== false) {
                        echo 'The ' . APPLICATION_PATH . '/config/application.ini file does not have the '
                           . 'appropriate permissions set for the application to read it.';
                    } else {
                        echo 'An ini-parsing error has occurred in ' . APPLICATION_PATH . '/config/application.ini '
                           . '<br/>Please check this file and make sure everything is setup correctly.';
                    }
                } else if (stristr($zceMsg, 'database.ini') !== false) {
                    if (stristr($zceMsg, 'No such file or directory') !== false) {
                        echo 'The ' . APPLICATION_PATH . '/config/database.ini file is missing.<br/>';
                        echo 'If you find a database.ini.template file in the config directory, edit this file '
                           . 'appropriately and rename it to database.ini';
                    } elseif (stristr($zceMsg, 'Permission denied') !== false) {
                        echo 'The ' . APPLICATION_PATH . '/config/database.ini file does not have the appropriate '
                           . 'permissions set for the application to read it.';
                    } else {
                        echo 'An ini-parsing error has occurred in ' . APPLICATION_PATH . '/config/database.ini '
                           . '<br/>Please check this file and make sure everything is setup correctly.';
                    }
                } else {
                    echo 'An ini-parsing error has occurred. <br/>Please check all configuration files and make sure '
                       . 'everything is setup correctly';
                }
            } elseif (stristr($zceMsg, 'syntax error') !== false) {
                if (stristr($zceMsg, 'application.ini') !== false) {
                    echo 'There is a syntax error in ' . APPLICATION_PATH . '/config/application.ini '
                       . '<br/>Please check this file and make sure everything is setup correctly.';
                } elseif (stristr($zceMsg, 'database.ini') !== false) {
                    echo 'There is a syntax error in ' . APPLICATION_PATH . '/config/database.ini '
                       . '<br/>Please check this file and make sure everything is setup correctly.';
                } else {
                    echo 'A syntax error has been reached. <br/>Please check all configuration files and make sure '
                       . 'everything is setup correctly.';
                }
            } else {
                // Then the exception message says nothing about parse_ini_file nor 'syntax error'
                echo 'Please check all configuration files, and ensure all settings are valid.';
            }
            echo '<br/>For more information and help on installing OpenFISMA, please refer to the '
               . '<a target="_blank" href="http://manual.openfisma.org/display/ADMIN/Installation">'
               . 'Installation Guide</a>';
        } catch (Doctrine_Manager_Exception $dme) {
            echo '<h1>An exception occurred while bootstrapping the application.</h1>';
            // Does database.ini have valid settings? Or is it the same content as database.ini.template?
            $databaseIniFail = false;
            $iniData = file(APPLICATION_PATH . '/config/database.ini');
            $iniData = str_replace(chr(10), '', $iniData);
            if (in_array('db.adapter = ##DB_ADAPTER##', $iniData)) { $databaseIniFail = true; }
            if (in_array('db.host = ##DB_HOST##', $iniData)) { $databaseIniFail = true; }
            if (in_array('db.port = ##DB_PORT##', $iniData)) { $databaseIniFail = true; }
            if (in_array('db.username = ##DB_USER##', $iniData)) { $databaseIniFail = true; }
            if (in_array('db.password = ##DB_PASS##', $iniData)) { $databaseIniFail = true; }
            if (in_array('db.schema = ##DB_NAME##', $iniData)) { $databaseIniFail = true; }
            if ($databaseIniFail) {
                echo 'You have not applied the settings in ' . APPLICATION_PATH . '/config/database.ini appropriately. '
                   . 'Please review the contents of this file and try again.';
            } else {
                if (Fisma::debug()) {
                    echo '<p>' . get_class($dme) . '</p><p>' . $dme->getMessage() . '</p><p>'
                       . "<p><pre>Stack Trace:\n" . $dme->getTraceAsString() . '</pre></p>';
                } else {
                    $logString = get_class($dme) . "\n" . $dme->getMessage()
                               . "\nStack Trace:\n" . $dme->getTraceAsString() . "\n";
                    Zend_Registry::get('Zend_Log')->err($logString);
                }
            }
        } catch (Exception $exception) {
            // If a bootstrap exception occurs, that indicates a serious problem, such as a syntax error.
            // We won't be able to do anything except display an error.
            echo '<h1>An exception occurred while bootstrapping the application.</h1>';
            if (Fisma::debug()) {
                echo '<p>' . get_class($exception) . '</p><p>' . $exception->getMessage() . '</p><p>'
                   . "<p><pre>Stack Trace:\n" . $exception->getTraceAsString() . '</pre></p>';
            } else {
                $logString = get_class($exception) . "\n" . $exception->getMessage()
                           . "\nStack Trace:\n" . $exception->getTraceAsString() . "\n";
                Zend_Registry::get('Zend_Log')->err($logString);
            }
        }
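    Note that the fatal error above is a secondary failure: line 188 of index.php is the Zend_Registry::get('Zend_Log')->err($logString) call inside the final catch block, so some earlier exception is thrown during bootstrap and the attempt to log it then dies because nothing has registered 'Zend_Log' yet. A minimal diagnostic shim, assuming stock Zend Framework 1.x APIs and this installation's log path (a sketch, not OpenFISMA's own code), placed just after the set_include_path() call so the classes can be found:

        // Hypothetical fallback logger so the catch-all block can report the
        // real bootstrap exception instead of dying on the missing registry key.
        require_once 'Zend/Registry.php';
        require_once 'Zend/Log.php';
        require_once 'Zend/Log/Writer/Stream.php';

        $fallbackLog = new Zend_Log(new Zend_Log_Writer_Stream('/var/www/data/logs/php.log'));
        Zend_Registry::set('Zend_Log', $fallbackLog);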

    Read the article

  • What Gets Measured Gets Managed

    - by steve.diamond
    OK, so if I were to claim credit for inventing that expression, I guess I could share the mantle with Al Gore, creator of the Internet. But here's the point: how many of us acquire CRM systems without specifically benchmarking several key performance indicators across sales, marketing and service BEFORE and AFTER deployment of said system? Yes, this may sound obvious, and it might provoke a "Well, of course, Diamond!" response, but is YOUR company doing this? Can you define in quantitative terms the delta across multiple parameters? I just trolled the Web site of one of my favorite sales consultancy firms, The Alexander Group. Right on their home page is a brief appeal citing the importance of benchmarking. The corresponding landing page states, "The fact that hundreds of sales executives now track how their sales forces spend time means they attach great value to understanding how much time sellers actually devote to selling." The opportunity is to extend this conversation to benchmarking the success that companies derive from the investment they make in CRM systems, i.e., to the automation side of the equation. To a certain extent, the "game" is analogous to achieving optimal physical fitness. One may never quite get there, but beyond the 95% threshold of "excellence," she/he may be entering the realm of splitting infinitives. But at the very start, and to quote verbiage from the aforementioned Alexander Group Web page, what gets measured gets managed. And getting to that 95% level along several key indicators would be a high-quality problem indeed, don't you think? Yes, this could be a "That's so 90's" conversation, but is it really?

    Read the article

  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them, remote desktop access to an Azure virtual machine. Now I would like to talk about another cool one – Windows Azure Connect.

    What's Windows Azure Connect

    I would like to quote the definition of Windows Azure Connect from MSDN: "With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IPsec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services." There's an image on MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect, Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some of which are not. With Windows Azure Connect, roles deployed in the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to a local database and access local shared resources such as shared files, folders and printers, etc.

    Difference between Windows Azure Connect and AppFabric

    It may seem that Windows Azure Connect duplicates Windows Azure AppFabric. Both aim to solve the problem of how to communicate between resources in the cloud and inside the local network. The list below summarizes the differences as I understand them:

        - Purpose. Connect: an IPsec connection between local machines and Azure roles. AppFabric: an application service running in the cloud.
        - Connectivity. Connect: IPsec, domain-join. AppFabric: Net.TCP, HTTP, HTTPS.
        - Components. Connect: the Windows Azure Connect Driver. AppFabric: Service Bus, Access Control, Caching.
        - Usage. Connect: Azure roles connect to a local database server; Azure roles use local shared files, folders, printers, etc.; Azure roles join the local AD. AppFabric: expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache.

    And also some scenarios showing which of them should be used:

        - I have a service deployed on the intranet and I want people to be able to use it from the Internet: AppFabric.
        - I have a website deployed on Azure that needs a database deployed inside the company, and I don't want to expose the database to the Internet: Connect.
        - I have a service deployed on the intranet that uses AD authorization, and a website deployed on Azure that needs to use this service: Connect.
        - I have a service deployed on the intranet that some people on the Internet can use, but they need to be authorized and authenticated: AppFabric.
        - I have a service on the intranet and a website deployed on Azure; the service can be used from the Internet, and the website should be able to use it as well, with AD authorization for more functionality: both Connect and AppFabric.

    How to Enable Windows Azure Connect

    OK, we have talked a lot about Windows Azure Connect and its differences from Windows Azure AppFabric; now let's see how to enable and use it. First of all, since this feature is at the CTP stage, we have to apply before using it. On the Windows Azure Portal we can see our CTP feature status under the Home, Beta Program page. You can send an application to join the Beta Program to Microsoft from this page. After a few days, Microsoft will send an email to you (to your Live ID address) when it's available. In my case, Windows Azure Connect had been activated by Microsoft, and then we can click the Connect button on top, or click the Virtual Network item in the left navigation bar. The first thing we need to do, if it's our first time entering the Connect page, is to enable Windows Azure Connect. After that, we can see our Windows Azure Connect information on this page.

    Add a Local Machine to Azure Connect

    As explained above, Windows Azure Connect can make an IPsec connection between local machines and Azure role instances, so first we add a local machine to our Connect. To do this, click the Install Local Endpoint button on top, and the portal will give us a URL. Copy this URL to the machine we want to add and it will download the software, which we install on the local machines that should join the Connect. Once it's installed, a tray icon appears to indicate that this machine has joined our Connect. The local agent refreshes against the Windows Azure platform every 5 minutes, but we can click the Refresh button to have it retrieve the latest status at once. Currently my local machine is ready for Connect, and we can see it in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.

    Add a Windows Azure Role to Azure Connect

    Let's create a very simple Azure project with a basic ASP.NET web role inside. To make it available on Windows Azure Connect, open the role's properties from Solution Explorer in Visual Studio, select the Virtual Network tab, and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token; click it and the portal will display the token. We copy this token, paste it into the box in the Visual Studio tab, and then deploy the application to Azure. After the deployment completes, we can see the role instance listed in the Windows Azure Portal, Virtual Network section.

    Establish the Connect Group

    The final task is to create a Connect group which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create a group for them, and machines and instances can be used in one or more groups. In the Virtual Network section, click the Groups and Roles node in the left navigation bar and click the Create Group button on top. This brings up a dialog; all we need to do is specify a group name and description, and then select the local computers and Azure role instances to put into this group. After the Azure fabric updates the group settings, we can see the groups and the endpoints on the page. And if we switch back to the local machine, we can see that the tray icon has changed and the status has turned to connected. Windows Azure Connect updates the group information every 5 minutes; if you find the status is still Disconnected, right-click the tray icon and select the Refresh menu item to retrieve the latest group policy and make it connect.

    Test the Azure Connect between the Local Machine and the Azure Role Instance

    Now our local machine and Azure role instance are connected, which means each of them can communicate with the others at the IP level. For example, we can open the SQL Server port so that our Azure role can connect to it by using the machine name or the IP address. Windows Azure Connect uses IPv6 to connect between the local machines and role instances; you can get the IP address from the Windows Azure Portal Virtual Network section when selecting an endpoint. I don't want to give a full example of how to use Connect, but I would like to show two very simple tests. The first one is PING. When a local machine and a role instance are connected through Windows Azure Connect, we can PING either of them if we open the ICMP protocol in the Firewall settings. To do this, run the following command on both of them:

        netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6

    Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting; for the full discussion we had, please visit here. You can use the Remote Desktop Access feature to log on to the Azure role instance (please refer to my previous blog post on how to use Remote Desktop Access in Windows Azure). Then we can PING the machine or the role instance by specifying its name: below is the screen where I PING my local machine from my Azure instance. We can use the IPv6 address to PING each other as well, as in the following image, where I PING my role instance from my local machine through the IPv6 address.

    Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and then, after logging on to the role instance, we can see the folder's content from the file explorer window.

    Summary

    In this blog post I introduced another new feature – Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way we can have our Azure application use our local stuff, such as database servers, printers, etc., without exposing them to the Internet.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
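    To make the PING test concrete, here is the sequence as a sketch (the machine name and IPv6 address are placeholders; the address uses the 2001:db8:: documentation prefix, so substitute the values shown in your portal):

        REM run on both the local machine and the role instance (from the article)
        netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6

        REM then ping by machine name, or by the endpoint's IPv6 address from the portal
        ping MY-DEV-LAPTOP
        ping 2001:db8::1234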

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 3)

    Over the past two weeks I've shown how to build a store locator application using ASP.NET, the free Google Maps API and Google's geocoding service. Part 1 looked at creating the database to record the store locations. This database contains a table named Stores with columns capturing each store's address and latitude and longitude coordinates. Part 1 also showed how to use Google's geocoding service to translate a user-entered address into latitude and longitude coordinates, which could then be used to retrieve and display those stores within (roughly) a 15 mile area. At the end of Part 1, the results page listed the nearby stores in a grid. In Part 2 we used the Google Maps API to add an interactive map to the search results page, with each nearby store displayed on the map as a marker. The map added in Part 2 certainly improves the search results page, but the way the nearby stores are displayed on the map leaves a bit to be desired. For starters, each nearby store is displayed on the map using the same marker icon, namely a red pushpin. This makes it difficult to match up the nearby stores listed in the grid with those displayed on the map. Hovering the mouse over a marker on the map displays the store number in a tooltip, but ideally a user could click a marker to see more detailed information about the store, such as its address, phone number, a photo of the storefront, and so forth. This third and final installment shows how to enhance the map created in Part 2. Specifically, we'll see how to customize the marker icons displayed in the map to make it easier to identify which marker corresponds to which nearby store location. We'll also look at adding rich popup windows to each marker, which include detailed store information and can be updated further to include pictures and other HTML content. Read on to learn more!
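    As a rough preview of the two enhancements this installment covers, here is a sketch in Google Maps API v3 syntax (the series may target a different API revision, and every name below is illustrative rather than taken from the article):

        // numbered/lettered marker icons make grid rows and map markers easy to match up
        var letter = String.fromCharCode(65 + storeIndex); // A, B, C, ...
        var marker = new google.maps.Marker({
            position: storeLatLng,
            map: map,
            icon: "http://maps.google.com/mapfiles/marker" + letter + ".png",
            title: "Store " + storeNumber
        });

        // a rich popup window with the store's details, opened on click
        var infoWindow = new google.maps.InfoWindow({
            content: "<b>" + storeName + "</b><br/>" + storeAddress + "<br/>" + storePhone
        });
        google.maps.event.addListener(marker, "click", function () {
            infoWindow.open(map, marker);
        });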

    Read the article

  • Oracle Desktop Virtualization at HIMSS 2011

    - by chris.kawalek(at)oracle.com
    The HIMSS Conference is an extremely important industry trade show put on by the Healthcare Information and Management Systems Society. It's being held in Florida starting this Sunday, February 20th. Their slogan, "Linking people, potential, and progress," could be true of Oracle desktop virtualization as well! The Oracle desktop virtualization group has worked very closely with the Oracle healthcare business unit to have a large presence at this show, and I wanted to tell you a bit about what we're doing:

    - All Oracle demos are being done on Sun Ray Clients. That's right, every demo pod in the large Oracle booth will have a Sun Ray Client, with each demo tied to a smart card. Too many people at your demo station? Pop your card out and go to a different one. We'll also be demoing Oracle desktop virtualization at a dedicated demo station, too. This is great stuff! Find Oracle at booth #1651, and see Oracle's page about HIMSS.

    - Focus Group: Caregiver Mobility with Oracle Sun Ray Clients and Desktop Virtualization, Feb 22, 3:15-4:15 PM. This focus group will be for customers interested in Oracle desktop virtualization. It's invitation only, but you can comment on this blog post and we can give you info on how to attend (your comment won't be made public).

    - Solution Session: Fast, Secure, Workflow Optimized: Inexpensive Access to Care Information is Possible Inside and Outside of the Hospital, Feb 23, 4:15 PM, Booth #685, Wireless and Mobility Theatre. Oracle's Adam Workman will cover caregiver mobility and the benefits of Oracle desktop virtualization to healthcare organizations.

    - New healthcare solutions page on oracle.com. We've created a page dedicated to content involving desktop virtualization and healthcare. This will be your one-stop shop when looking for desktop virtualization and healthcare information.

    - New desktop virtualization and healthcare solution data sheet. This document outlines how we define "caregiver mobility" and how Oracle products are used to facilitate quicker, more secure access to patient data.

    We'll have some more updates from the show next week. It looks like it's going to be an exciting event! -Chris

    Read the article

  • Google Updates Google Pack – Pushes Firefox and Skype Away

    - by Gopinath
    Google Pack is a must-install software package for every new Windows PC. With a single installer, Google Pack delivers all the useful Google applications like Gtalk, Google Earth, Picasa, etc., and third-party applications like Firefox, Skype and Adobe Reader. Today Google updated the Google Pack collection and removed competitor products like Firefox and Skype from the main page, pushing them into the background. The main page of Google Pack now showcases the following software: Google Apps, Google Picasa, Adobe Reader, Google Toolbar for IE, Google Desktop, avast! Free Antivirus, Google Chrome and Google Earth. It's still possible to install Firefox and Skype through Google Pack by clicking the "All Applications" link available in the right-side menu and selecting the required installers. As most users use the main page to pick the showcased software, Firefox and Skype are going to lose much of the Google Pack love. Thanks labnol. This article, titled Google Updates Google Pack – Pushes Firefox and Skype Away, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • April 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m excited to announce the April 2013 release of the Ajax Control Toolkit. For this release, we focused on improving two controls: the AjaxFileUpload and the MaskedEdit controls. You can download the latest release from CodePlex at http://AjaxControlToolkit.CodePlex.com or, better yet, you can execute the following NuGet command within Visual Studio 2010/2012:

        Install-Package AjaxControlToolkit

    There are three builds of the Ajax Control Toolkit: .NET 3.5, .NET 4.0, and .NET 4.5.

    A Better AjaxFileUpload Control

    We completely rewrote the AjaxFileUpload control for this release. We had two primary goals. First, we wanted to support uploading really large files. In particular, we wanted to support uploading multi-gigabyte files such as video files or application files. Second, we wanted to support showing upload progress on as many browsers as possible. The previous version of the AjaxFileUpload could show upload progress when used with Google Chrome or Mozilla Firefox, but not when used with Apple Safari or Microsoft Internet Explorer. The new version of the AjaxFileUpload control shows upload progress when used with any browser.

    Using the AjaxFileUpload Control

    Let me walk through using the AjaxFileUpload in the most basic scenario; then, in the following sections, I can explain some of its more advanced features. Here's how you can declare the AjaxFileUpload control in a page:

        <ajaxToolkit:ToolkitScriptManager runat="server" />
        <ajaxToolkit:AjaxFileUpload
            ID="AjaxFileUpload1"
            AllowedFileTypes="mp4"
            OnUploadComplete="AjaxFileUpload1_UploadComplete"
            runat="server" />

    The exact appearance of the AjaxFileUpload control depends on the features that a browser supports, as in the case of Google Chrome, which supports drag-and-drop upload. Notice that the page above includes two Ajax Control Toolkit controls: the AjaxFileUpload and the ToolkitScriptManager control. You always need to include the ToolkitScriptManager on any page which uses Ajax Control Toolkit controls. The AjaxFileUpload control declared in the page above includes an event handler for its UploadComplete event. This event handler is declared in the code-behind page like this:

        protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
        {
            // Save uploaded file to App_Data folder
            AjaxFileUpload1.SaveAs(MapPath("~/App_Data/" + e.FileName));
        }

    This method saves the uploaded file to your website's App_Data folder. I'm assuming that you have an App_Data folder in your project; if you don't have one, then you need to create one or you will get an error. There is one more thing that you must do in order to get the AjaxFileUpload control to work. The AjaxFileUpload control relies on an HTTP Handler named AjaxFileUploadHandler.axd.
    You need to declare this handler in your application's root web.config file like this:

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.5" />
            <httpRuntime targetFramework="4.5" maxRequestLength="42949672" />
            <httpHandlers>
              <add verb="*" path="AjaxFileUploadHandler.axd"
                   type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
            </httpHandlers>
          </system.web>
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false"/>
            <handlers>
              <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd"
                   type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
            </handlers>
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="4294967295"/>
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>

    Notice that the web.config file above also contains configuration settings for the maxRequestLength and maxAllowedContentLength. You need to assign large values to these configuration settings, as I did in the web.config file above, in order to accept large file uploads.

    Supporting Chunked File Uploads

    Because one of our primary goals with this release was support for large file uploads, we added support for client-side chunking. When you upload a file using a browser which fully supports the HTML5 File API, such as Google Chrome or Mozilla Firefox, the file is uploaded in multiple chunks. You can see chunking in action by opening the F12 Developer Tools in your browser and observing the Network tab. Notice that there is a crazy number of distinct post requests made (about 360 distinct requests for a 1 gigabyte file). Each post request looks like this:

        http://localhost:24338/AjaxFileUploadHandler.axd?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&fileId=B7CCE31C-6AB1-BB28-2940-49E0C9B81C64&fileName=Sita_Sings_the_Blues_480p_2150kbps.mp4&chunked=true&firstChunk=false

    Each request posts another chunk of the file being uploaded. Notice that the request URL includes a chunked=true parameter which indicates that the browser is breaking the file being uploaded into multiple chunks.

    Showing Upload Progress on All Browsers

    The previous version of the AjaxFileUpload control could display upload progress only in the case of browsers which fully support the HTML5 File API. The new version of the AjaxFileUpload control can display upload progress in the case of all browsers. If a browser does not fully support the HTML5 File API then the browser polls the server every few seconds with an Ajax request to determine the percentage of the file that has been uploaded. This technique of displaying progress works with any browser which supports making Ajax requests. There is one catch. Be warned that this new feature only works with the .NET 4.0 and .NET 4.5 versions of the AjaxControlToolkit. To show upload progress, we are taking advantage of the new ASP.NET HttpRequest.GetBufferedInputStream() and HttpRequest.GetBufferlessInputStream() methods, which are not supported by .NET 3.5. For example, here is what the requests in the Network tab look like when you use the AjaxFileUpload with Microsoft Internet Explorer:

        GET /WebForm1.aspx?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&poll=1&guid=9206FF94-76F9-B197-D1BC-EA9AD282806B HTTP/1.1

    Notice that each request includes a poll=1 parameter. This parameter indicates that this is a polling request to get the size of the file buffered on the server.
    Here’s what the response body of a request looks like when about 20% of a file has been uploaded.

    Buffering to a Temporary File

    When you upload a file using the AjaxFileUpload control, the file upload is buffered to a temporary file located at Path.GetTempPath(). When you call the SaveAs() method, as we did in the sample page above, the temporary file is copied to a new file and then the temporary file is deleted. If you don't call the SaveAs() method, then you must ensure that the temporary file gets deleted yourself. For example, if you want to save the file to a database then you will never call the SaveAs() method, and you are responsible for deleting the file. The easiest way to delete the temporary file is to call the AjaxFileUploadEventArgs.DeleteTemporaryData() method in the UploadComplete handler:

        protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
        {
            // Save uploaded file to a database table
            e.DeleteTemporaryData();
        }

    You also can call the static AjaxFileUpload.CleanAllTemporaryData() method to delete all temporary data, and not only the temporary data related to the current file upload. For example, you might want to call this method on application start to ensure that all temporary data is removed whenever your application restarts.

    A Better MaskedEdit Extender

    This release of the Ajax Control Toolkit contains bug fixes for the top-voted issues related to the MaskedEdit control. We closed over 25 MaskedEdit issues. Here is a complete list of the issues addressed with this release:

        17302 MaskedEditExtender MaskType=Date, Mask=99/99/99 Undefined JS Error
        11758 MaskedEdit causes error in JScript when working with 2-digits year
        18810 Maskededitextender/validator Date validation issue
        23236 MaskEditValidator does not work with date input using format dd/mm/yyyy
        23042 Webkit based browsers (Safari, Chrome) and MaskedEditExtender
        26685 MaskedEditExtender (ClearMaskOnLostFocus=false) adds a zero character when you each focused to target textbox
        16109 MaskedEditExtender: Negative amount, followed by decimal, sets value to positive
        11522 MaskEditExtender of AjaxtoolKit-1.0.10618.0 does not work properly for Hungarian Culture
        25988 MaskedEditExtender – CultureName (HU-hu) > DateSeparator
        23221 MaskedEditExtender date separator problem
        15233 Day and month swap in Dynamic user control
        15492 MaskedEditExtender with ClearMaskOnLostFocus and with MaskedEditValidator with ClientValidationFunction
        9389 MaskedEditValidator – when on no entry
        11392 MaskedEdit Number format messed up
        11819 MaskedEditExtender erases all values beyond first comma separtor
        13423 MaskedEdit(Extender/Validator) combo problem
        16111 MaskedEditValidator cannot validate date with DayMonthYear in UserDateFormat of MaskedEditExtender
        10901 MaskedEdit: The months and date fields swap values when you hit submit if UserDateFormat is set.
        15190 MaskedEditValidator can't make use of MaskedEditExtender's UserDateFormat property
        13898 MaskedEdit Extender with custom date type mask gives javascript error
        14692 MaskedEdit error in "yy/MM/dd" format.
        16186 MaskedEditExtender does not handle century properly in a date mask
        26456 MaskedEditBehavior.ConvFmtTime : function(input,loadFirst) fails if this._CultureAMPMPlaceholder == ""
        21474 Error on MaskedEditExtender working with number format
        23023 MaskedEditExtender's ClearMaskOnLostFocus property causes problems for MaskedEditValidator when set to false
        13656 MaskedEditValidator Min/Max Date value issue

    Conclusion

    This latest release of the Ajax Control Toolkit required many hours of work by a team of talented developers. I want to thank the members of the Superexpert team for the long hours which they put into this release.

    Read the article

  • Take Control Of Web Control ClientID Values in ASP.NET 4.0

    Each server-side Web control in an ASP.NET Web Forms application has an ID property that identifies the Web control and is the name by which the Web control is accessed in the code-behind class. When rendered into HTML, the Web control turns its server-side ID value into a client-side id attribute. Ideally, there would be a one-to-one correspondence between the value of the server-side ID property and the generated client-side id, but in reality things aren't so simple. By default, the rendered client-side id is formed by taking the Web control's ID property and prefixing it with the ID properties of its naming containers. In short, a Web control with an ID of txtName can get rendered into an HTML element with a client-side id like ctl00_MainContent_txtName. This default translation from the server-side ID property value to the rendered client-side id attribute can introduce challenges when trying to access an HTML element via JavaScript, which is typically done by id, as the page developer building the web page and writing the JavaScript does not know what the id value of the rendered Web control will be at design time. (The client-side id value can be determined at runtime via the Web control's ClientID property.) ASP.NET 4.0 affords page developers much greater flexibility in how Web controls render their ID property into a client-side id. This article starts with an explanation of why and how ASP.NET translates the server-side ID value into the client-side id value, and then shows how to take control of this process using ASP.NET 4.0. Read on to learn more!
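    As a taste of what the article covers, ASP.NET 4.0's new ClientIDMode property lets you opt out of the prefixing entirely; a small sketch (the control name is illustrative, not from the article):

        <%-- ClientIDMode="Static" renders id="txtName", with no naming-container prefix --%>
        <asp:TextBox ID="txtName" runat="server" ClientIDMode="Static" />

        <script type="text/javascript">
            // safe to hard-code the id at design time now
            var nameBox = document.getElementById("txtName");
        </script>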

    Read the article

  • ASP.NET 4.0- Html Encoded Expressions

    - by Jalpesh P. Vadgama
    We all know the <%= expression %> feature in ASP.NET: we can print any string onto the page with it, and we use it a lot in ASP.NET MVC. ASP.NET 4.0 gives us a new feature, HTML-encoded expressions, which helps prevent cross-site scripting attacks because the output is HTML-encoded. ASP.NET 4.0 introduces the new expression syntax <%: expression %>, which automatically HTML-encodes the string. Let's take an example. I have created a protected "hello world" method which returns a simple string containing characters that need to be HTML-encoded. Below is the code for that:

        protected static string HelloWorld()
        {
            return "Hello World!!! returns from function()!!!>>>>>>>>>>>>>>>>>";
        }

    Now let's use that HelloWorld() method in our page HTML like below. I am going to use both expressions to show you the exact difference:

        <form id="form1" runat="server">
            <div>
                <strong><%: HelloWorld()%></strong>
            </div>
            <div>
                <strong><%= HelloWorld()%></strong>
            </div>
        </form>

    Now let's run the application: in the browser, both look similar. But when you look at the page source HTML in the browser, you can clearly see that one is HTML-encoded and the other one is not. That's it. It's cool. Stay tuned for more. Happy Programming. Technorati Tags: ASP.NET 4.0, HTMLEncode, C# 4.0
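    The original post showed the difference in a screenshot of the page source; as a hand-written sketch, the two rendered fragments would look like this (only <%: %> encodes the > run into &gt; entities):

        <strong>Hello World!!! returns from function()!!!&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;</strong>
        <strong>Hello World!!! returns from function()!!!>>>>>>>>>>>>>>>>></strong>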

    Read the article

  • Antenna Aligner Part 8: It’s Alive!!!

    - by Chris George
    Finally the day has come: Antenna Aligner v1.0.1 has been uploaded to the App Store and... "Waiting for review"... fast forward 7 days and much checking of emails later... WOO HOO! Now what? So I set my Facebook page to go live, https://www.facebook.com/AntennaAligner, and started by sending messages to my mates that have iPhones! Amazingly a few of them bought it! Similarly, some of my colleagues were also kind enough to support me and downloaded it too! Unfortunately the only way I knew they had bought it was from them telling me, as the iTunes Connect data is only updated daily at about midday GMT. This is a shame; surely they could provide more granular updates throughout the day? Although I suppose once an app has been out in the wild for a while, daily updates are enough. It would, however, be nice to get a ping when you make your first sale! I would have expected more feedback on my Facebook page as well; maybe I'm just expecting too much, or perhaps I've configured the page wrong. The new Facebook timeline layout is just confusing, and I'm not sure it's all public. I'll check that! So please take a look and see what you think! I would love to get some more feedback/reviews/suggestions... Oh, and watch out for the Android version coming soon!

    Read the article

  • ASP.NET Meta Keywords and Description

    - by Ben Griswold
    Some of the ASP.NET 4 improvements around SEO are neat.  The ASP.NET 4 Page.MetaKeywords and Page.MetaDescription properties, for example, are a welcomed change.  There’s nothing earth-shattering going on here – you can now set these meta tags via your Master page’s code behind rather than relying on updates to your markup alone.  It isn’t difficult to manage meta keywords and descriptions without these ASP.NET 4 properties but I still appreciate the attention SEO is getting.  It’s nice to get gentle reminder via new coding features that some of the more subtle aspects of one’s application deserve thought and attention too.  For the record, this is how I currently manage my meta: <meta name="keywords"     content="<%= Html.Encode(ConfigurationManager.AppSettings["Meta.Keywords"]) %>" /> <meta name="description"     content="<%= Html.Encode(ConfigurationManager.AppSettings["Meta.Description"]) %>" /> All Master pages assume the same keywords and description values as defined by the application settings.  Nothing fancy. Nothing dynamic. But it’s manageable.  It works, but I’m looking forward to the new way in ASP.NET 4.
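    For comparison, the ASP.NET 4 version of the same idea can be as small as this sketch (assuming the same AppSettings keys as the markup above):

        // In the Master page's code-behind; ConfigurationManager lives in System.Configuration.
        protected void Page_Load(object sender, EventArgs e)
        {
            Page.MetaKeywords = ConfigurationManager.AppSettings["Meta.Keywords"];
            Page.MetaDescription = ConfigurationManager.AppSettings["Meta.Description"];
        }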

    Read the article

  • How To Change The Screen Resolution in C#

    - by SAMIR BHOGAYTA
    All programmers face a common problem: how to change the screen resolution dynamically. In .NET (Visual Studio 2005) it's very easy to change the screen resolution. Here we will explain how to get the screen resolution, how to change it dynamically, and how to restore it when the page unloads. In .NET we can read the user's current screen resolution through the Screen class, and change it through a helper class such as the Resolution.CResolution class used below. Changing it also affects all running (and minimized) programs.

    Page_Load function:

        Screen Srn = Screen.PrimaryScreen;
        tempWidth = Srn.Bounds.Width;
        tempHeight = Srn.Bounds.Height;
        Page.ClientScript.RegisterStartupScript(this.GetType(), "Error",
            "alert('" + "Your Current Resolution is = " + tempWidth + " * " + tempHeight + "');");

        // If you want to change the resolution automatically at page load,
        // please uncomment this code.
        //if (tempHeight == 600) // if the system is at 800*600, change it to 1024*768
        //{
        //    FixWidth = 1024;
        //    FixHeight = 768;
        //    Resolution.CResolution ChangeRes = new Resolution.CResolution(FixWidth, FixHeight);
        //}

    Changing the resolution in C#:

        switch (cboRes.SelectedValue.ToString())
        {
            case "800*600":
                FixWidth = 800;
                FixHeight = 600;
                Resolution.CResolution ChangeRes600 = new Resolution.CResolution(FixWidth, FixHeight);
                break;
            case "1024*768":
                FixWidth = 1024;
                FixHeight = 768;
                Resolution.CResolution ChangeRes768 = new Resolution.CResolution(FixWidth, FixHeight);
                break;
            case "1280*1024":
                FixWidth = 1280;
                FixHeight = 1024;
                Resolution.CResolution ChangeRes1024 = new Resolution.CResolution(FixWidth, FixHeight);
                break;
        }
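    The Resolution.CResolution class used above is a third-party helper that the article never shows. Such helpers typically wrap the user32 ChangeDisplaySettings API; the following is a rough, hypothetical sketch of that standard interop pattern, not the article's actual class:

        using System;
        using System.Runtime.InteropServices;

        internal static class NativeResolution
        {
            // Display-device layout of the Win32 DEVMODE structure.
            [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
            private struct DEVMODE
            {
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
                public string dmDeviceName;
                public short dmSpecVersion, dmDriverVersion, dmSize, dmDriverExtra;
                public int dmFields;
                public int dmPositionX, dmPositionY, dmDisplayOrientation, dmDisplayFixedOutput;
                public short dmColor, dmDuplex, dmYResolution, dmTTOption, dmCollate;
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
                public string dmFormName;
                public short dmLogPixels;
                public int dmBitsPerPel, dmPelsWidth, dmPelsHeight, dmDisplayFlags, dmDisplayFrequency;
                public int dmICMMethod, dmICMIntent, dmMediaType, dmDitherType;
                public int dmReserved1, dmReserved2, dmPanningWidth, dmPanningHeight;
            }

            private const int ENUM_CURRENT_SETTINGS = -1;

            [DllImport("user32.dll")]
            private static extern bool EnumDisplaySettings(string deviceName, int modeNum, ref DEVMODE devMode);

            [DllImport("user32.dll")]
            private static extern int ChangeDisplaySettings(ref DEVMODE devMode, int flags);

            // Change the primary display to width x height, e.g. NativeResolution.Set(1024, 768);
            public static void Set(int width, int height)
            {
                DEVMODE dm = new DEVMODE();
                dm.dmSize = (short)Marshal.SizeOf(typeof(DEVMODE));
                if (EnumDisplaySettings(null, ENUM_CURRENT_SETTINGS, ref dm))
                {
                    dm.dmPelsWidth = width;   // new horizontal resolution
                    dm.dmPelsHeight = height; // new vertical resolution
                    ChangeDisplaySettings(ref dm, 0); // 0 = session-only, not saved to the registry
                }
            }
        }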

    Read the article

  • Replace Javascript click event with timed event?

    - by Rik
    Hi, I've found some JavaScript code that layers photos on top of each other when you click on them. Rather than having to click, I'd like the function to run automatically every 5 seconds. How can I change this event to a timed one?

        $('a#nextImage, #image img').click(function(event){

    Full code below. Thanks.

        <script type="text/javascript">
        $(document).ready(function(){
            $('#description').css({'display':'block'});

            $('#image img').hover(function() {
                $(this).addClass('hover');
            }, function() {
                $(this).removeClass('hover');
            });

            $('a#nextImage, #image img').click(function(event){
                event.preventDefault();
                $('#description p:first-child').css({'visibility':'hidden'});

                if($('#image img.current').next().length){
                    $('#image img.current').removeClass('current').next()
                        .fadeIn('normal').addClass('current').css({'position':'absolute'});
                }else{
                    $('#image img').removeClass('current').css({'display':'none'});
                    $('#image img:first-child').fadeIn('normal').addClass('current').css({'position':'absolute'});
                }

                if($('#image img.current').width()>=($('#page').width()-100)){
                    xPos=170;
                }else{
                    do{
                        xPos = 120 + (Math.floor(Math.random()*($('#page').width()-100)));
                    }while(xPos+$('#image img.current').width()>$('#page').width());
                }

                if($('#image img.current').height()>=300){
                    yPos=0;
                }else{
                    do{
                        yPos = Math.floor(Math.random()*300);
                    }while(yPos+$('#image img.current').height()>300);
                }

                $('#image img.current').css({'left':xPos,'top':yPos});
            });
        });
        </script>
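    One hedged way to do it (a sketch against the code above, not tested on the full page): keep the existing click handler and simply fire it from a timer with jQuery's trigger().

        // add inside $(document).ready(), after the click handler has been bound
        setInterval(function () {
            $('a#nextImage').trigger('click'); // simulate the click every 5 seconds
        }, 5000);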

    Read the article

  • Sitemaps - do I need to submit each sitemap in sitemap_index.xml to Google Webmaster tools?

    - by iSumitG
    I have a WordPress blog on my CentOS server. There is no sitemap.xml in the root directory, but there is a sitemap_index.xml file in the root directory which contains the following code:

        <?xml-stylesheet type="text/xsl" href="http://mywebsite.com/wp-content/plugins/wordpress-seo/css/xml-sitemap-xsl.php"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
            <sitemap>
                <loc>http://mywebsite.com/post-sitemap.xml</loc>
                <lastmod>2012-12-18T19:47:47+00:00</lastmod>
            </sitemap>
            <sitemap>
                <loc>http://mywebsite.com/page-sitemap.xml</loc>
                <lastmod>2012-12-18T17:32:49+00:00</lastmod>
            </sitemap>
        </sitemapindex>

    My question: which sitemap should I submit to Google Webmaster Tools? Options are:

    1. Only sitemap_index.xml
    2. Only post-sitemap.xml and page-sitemap.xml
    3. All 3 (sitemap_index.xml, post-sitemap.xml and page-sitemap.xml)

    Any other, please let me know.

    Read the article

  • Weird entry for robots.txt on a Naked Domain in Google Webmaster Tools

    - by Metalshark
    We own a .co.uk address and use an Internet hosting company that has made mistakes around DNS in the past. Our main site is hosted on www., and their reluctance to allow editing of AAAA records online means our naked domain does not resolve. Currently, when we attempt to reach the naked version, there is no entry for the browser to go to and it displays an unreachable page (nslookup just says "Name: <name of domain>" with no further entries such as an IP or canonical name). We recently added the relevant TXT records to verify us to view both the www. version and the naked version of the domain in Google Webmaster Tools (in anticipation of the requests to our Internet host coming to fruition). Imagine our shock when double-checking Site configuration > Crawler access and finding an (admittedly failing) robots.txt with a dynamically generated HTML page (full of crude pop-up JavaScript) with references to 3 of our most prominent competitors. What could cause this to happen? As we are in the UK, I am assuming some DNS server is serving Google bad information. We are going to contact the Internet hosting company to fix our A and AAAA records once and for all, then check that they work in the US (using something like OpenDNS). Should we be doing more though, for instance informing Google (through Webmaster Tools) that we are now aware there is something currently wrong with our naked domain? UPDATE: We have fixed our A records (not AAAA) and that has resolved the issue. But if there are further actions we should take for effectively having a parking page hosted on our active, visitor-heavy, SEO-rich domain that advertised our competitors to US visitors, what would they be?
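    One quick way to compare what a US resolver returns with what we see in the UK is to query a specific public DNS server directly; a sketch (the domain is a placeholder, and 208.67.222.222 is one of OpenDNS's resolvers):

        nslookup example.co.uk 208.67.222.222
        nslookup www.example.co.uk 208.67.222.222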

    Read the article

  • How do i make an AJAX block crawlable?

    - by Vikas Gulati
    I have a block with a few tabs. When the user clicks a tab, the content of that block gets loaded. Now I would like to make it crawlable by search engines, and at the same time I want to maintain a good user experience. I figured out a couple of alternatives, but each one has its own shortcomings. The approaches I could come up with:

    1. Use hashbangs and then use this. But hashbangs are not good and a thing of the past now. Secondly, it will make my content crawlable only by Googlebot, as Yahoo and Bing don't support it.

    2. Use a GET-parameterized fallback for the case when JavaScript doesn't work. This will work for all bots, and it would also be nice since it works without JavaScript. But then it will create duplicates of my page, as this block is only a very small section of my page and I have around 5-6 tabs. So that means many duplicates!

    Doing this without AJAX is not an option, as it would only increase the page load time, since all these blocks have heavy media content in them!
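    For what it's worth, the GET-parameterized fallback is usually built as progressive enhancement; a small sketch (selectors and URLs are invented for illustration):

        <!-- real links a crawler can follow; JavaScript upgrades them to AJAX in place -->
        <a class="tab" href="?tab=reviews">Reviews</a>
        <a class="tab" href="?tab=specs">Specs</a>

        <script type="text/javascript">
        $('a.tab').click(function (e) {
            e.preventDefault();
            // fetch the fallback page and keep only the tab block from it
            $('#tab-block').load(this.href + ' #tab-block > *');
        });
        </script>

    Pairing each parameterized URL with a rel="canonical" tag pointing at the main page is the usual way to defuse the duplicate-page concern raised above.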

    Read the article

  • Lancement du blog Oracle Applications France

    - by user816714
    Here it is, at last! Welcome to our new Oracle Applications France blog. Why a blog? To be closer to you, dear users! Because we know that a CIO, an HR director and a company head are not looking for the same solutions, we are setting up this communication channel to begin a constructive and open dialogue. To carry out this mission, our marketing team will be at the controls and will offer you advice and testimonials, as well as multimedia content in pictures and video. By following the Oracle Applications France blog, you will step behind the scenes of the big Oracle Applications news. We will try to offer you a different view of our events, you will be the first to be informed about our product launches, and you will meet our experts through exclusive interviews and analyses. Our mission? We want to become your guide and help you get a better grasp of the Oracle Applications offering. Within the extremely dense range of solutions Oracle offers, we will help you find your way more quickly. So please don't hesitate to send us your feedback and questions, and to comment on our future posts on the Oracle Applications France blog. We also recommend following our best experts on social media. Here is a first selection:

    Want to become unbeatable on our CRM solutions? Follow this Twitter account: http://twitter.com/#!/OracleCRM Become a fan of this Facebook page: http://www.facebook.com/OracleCRM Watch our YouTube videos: http://www.youtube.com/OracleCRM And for English speakers, head to the Oracle CRM blog in English: http://blogs.oracle.com/crm

    Do you swear only by product innovation, aka PLM? Follow this Twitter account: http://twitter.com/#!/agileplm Become a fan of this Facebook page: https://www.facebook.com/OracleAgilePLM Watch our YouTube videos: http://www.youtube.com/OracleAgilePLM And for English speakers, head to the Oracle PLM blog in English: http://blogs.oracle.com/plm/

    Can you no longer live without the Oracle Fusion Applications business management suite? Become a fan of this Facebook page: https://www.facebook.com/OracleApps Listen to our latest podcast in English: http://streaming.oracle.com/ebn/podcasts/media/10118954_Fusions_Applications_061011.mp3 And for English speakers, head to the Oracle Applications blog in English: http://blogs.oracle.com/applications/

    Read the article

  • Google Webmasters Tools strange 404 errors referred from same site

    - by Out of Control
    Starting about a month ago, I noticed a sudden increase in 404 errors in Webmaster Tools for one of my sites (over 1400 errors so far). All the errors are referred from my own site to non-existent pages. The 404 error URLs are all of the same format:

        URL: http://www.helloneighbour.com/save/1347208508000

    The number on the end appears to be a timestamp followed by 3 zeros. The referring page, in this case, is:

        Linked from http://www.helloneighbour.com/save/cmw-insurance-insurance-burnaby

    When I look at the source code of that page, or I use Webmaster Tools to view the page as Google sees it, I can't find any link that comes close to what is above. I built the site, and I can't find any place that might be causing these false links either. The server logs (access and error) don't show Google or anyone else trying to access these links. I've marked all these pages as fixed, and waited a couple of weeks, only to find the errors come back again over the last few days. I'm wondering if anyone else has seen anything strange like this, or if someone might have a way for me to debug or replicate this error myself.

    Read the article

  • Suggestions for html tag info required for jQuery Plugin

    - by Toby Allen
    I have written a tiny bit of jQuery which simply selects all the Select form elements on the page and sets the selected property to the correct value. Previously I had to write code to generate the Select in PHP and specify the Selected attribute for the option that was selected, or do loads of if statements in my PHP page or Smarty template. Obviously the information about which option is selected still needs to be specified somewhere in the page so the jQuery code can select it. I decided to create a new attribute on the Select item:

        <select name="MySelect" SelectedOption="2">   <!-- custom attribute: SelectedOption -->
            <option value="1">My Option 1</option>
            <option value="2">My Option 2</option>    <!-- this option will be selected when the jQuery code runs -->
            <option value="3">My Option 3</option>
            <option value="4">My Option 4</option>
        </select>

    Does anyone see a problem with using a custom attribute to do this? Is there a more accepted way, or a better jQuery way?
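    For reference, the jQuery half the poster describes can be as small as this sketch (my reconstruction, not the poster's actual code):

        // for every select carrying the custom attribute, select the matching option
        $('select[SelectedOption]').each(function () {
            $(this).val($(this).attr('SelectedOption'));
        });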

    Read the article

  • Fast Society Creates Mini and Mobile Temporary Social Networks

    - by ETC
    You’re out on the town or at a convention with a bunch of friends. How do you keep in touch with the entire group simultaneously? Fast Society offers a smartphone-based solution: a temporary social network for group talking, texting, and more. Fast Society was originally an iPhone-only application and has recently been updated to include an Android app too. The premise is simple: you set up a Fast Society group, link your friends into it, and for that night (or convention weekend) you're all part of the same mini group. You can text the entire group, share pictures, set up sub-groups (let's say half your group is going to stay up late and party while half need to hit the rack to get up early for presentations; you can create a new group for the night owls to communicate), share your location, and send in-app and SMS messages to the entire group. Check out the video above to see it in action or hit up the link below to read more and grab a copy. Fast Society [via Mashable]

    Read the article

  • Average SPA weight [on hold]

    - by Emmanuel Istace
    First, sorry for the noob questions, but I'm mainly a Windows developer and not a web developer :) I'm developing a single-page application with a lot of CSS and JavaScript. For now the page is 1.3 MB, composed of 5 sections. Here are the rounded stats:

    - Document: 10 kB
    - Style: 60 kB
    - Images: 450 kB (already compressed; includes a big gallery of thumbnails)
    - JavaScript: 700 kB, of which 600 kB is "framework" (jQuery, jQuery UI, Bootstrap, Modernizr, Waypoints, ...) and 100 kB is custom JS
    - Fonts: 125 kB

    And the site is not finished yet. (It will include the Google Maps API, and some others...) My questions are:

    - Do you have any statistics about the average weight of an SPA?
    - As this is the whole website, do you think it's acceptable?
    - Is lazy loading (for images) a solution? What would the impact be on SEO?
    - Is the "200 kB rule" of Google still relevant?
    - Do you know of good tools to detect which JavaScript code is not used during the execution of a page, and then the ability to optimize those 700 kB of framework JS?
    - Can a caching strategy be an answer?

    Thank you in advance for your help! Best regards

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    Upon reading over, this question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended.

    I have a website example.com with language translations. There used to be many translations, but I deleted them all so that only English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist; I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123. It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check if a 404 is being generated, because the page always loads; it's a parameter that needs to be de-indexed.
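    The 404 the poster mentions could be sent before the CMS routes the request; a hypothetical PHP sketch, since the actual CMS isn't named:

        // send a 404 (or 410 Gone) when the l parameter names a deleted translation
        $allowed = array('en_US', 'fr_FR');
        if (isset($_GET['l']) && !in_array($_GET['l'], $allowed, true)) {
            header('HTTP/1.1 410 Gone'); // 404 also works; 410 signals permanent removal
            exit;
        }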

    Read the article

  • Can prefixing a dash reduce the search engine rating?

    - by LeoMaheo
    Hi anyone! If I prefix a dash to GUIDs in my URLs on my Web site, in this manner: example.com/some/folders/-35x2ne5r579n32/page-name Will my SEO rating be affected? Background: On my site, people can look up pages by GUID, and by path. For example, both example.com/forum/-3v32nirn32/eat-animals-without-friends and example.com/forum/eat-animals-without-friends could map to the same page. To indicate that 3v32nirn32 is a GUID and not a page name, I thought I could prefix a - and then my webapp would understand. But I wouldn't want my search engine rating to drop. And prefixing a dash in this manner seems weird, so perhaps Googlebot lowers my rating. Hence my question: Do you know if my search engine rating might drop? (Today or in the future?) (I could also e.g. prefix id-, so the URL becomes example.com/forum/id-3v32nirn32, but then people cannot create pages that start with the word "id".) (I think I don't want URLs like this one: example.com/id/some-guid.) Kind regards, Magnus

    Read the article
