Search Results

Search found 31630 results on 1266 pages for 'content management'.


  • How to remove duplicate content, which is still indexed, but not linked to anymore?

    - by David
    A bug in the tool we use to create search-engine-friendly URLs changed our whole URL structure overnight, and we only noticed after Google had already indexed the pages. Now we have a massive duplicate content issue, causing a harsh drop in rankings. Webmaster Tools shows over 1,000 duplicate title tags, so I don't think Google understands what is going on. Right URL: abc.com/price/sharp-ah-l13-12000-btu.html Wrong URL: abc.com/item/sharp-l-series-ahl13-12000-btu.html (created by mistake) After that, we changed all URLs back to the "Right URLs", and a few days later set up a 301 redirect for every "Wrong URL". A massive number of pages is still in the index twice. As we no longer link internally to the "Wrong URLs", I am not sure Google will re-crawl them very soon. What can we do to solve this issue and tell Google that all the "Wrong URLs" now redirect to the "Right URLs"? Best, David
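
    (A sketch of what the redirect mapping could look like, assuming an Apache server - since the slugs differ between the two structures, each "Wrong URL" needs its own 301 rule rather than a single regex; the URLs are the examples from the question:)

        # .htaccess - one permanent redirect per remapped item,
        # generated from the tool's old/new URL table
        Redirect 301 /item/sharp-l-series-ahl13-12000-btu.html /price/sharp-ah-l13-12000-btu.html

    Beyond that, submitting an XML sitemap that still lists the "Wrong URLs" is a common way to prompt Google to re-crawl them and discover the 301s.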

    Read the article

  • ActionScript 3.0 Getting Size/Coordinates From Loader Content

    - by TheDarkIn1978
    I'm attempting to position a TextField at the bottom left of an image that is added to the display list via the Loader() class, but I don't know how to access the width/height information of the loaded image.

        var dragSprite:Sprite = new Sprite();
        this.addChild(dragSprite);

        var imageLoader:Loader = new Loader();
        imageLoader.load(new URLRequest("picture.jpg"));
        imageLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, displayPic, false, 0, true);

        function displayPic(evt:Event):void {
            dragSprite.addChild(evt.target.content);
            evt.target.removeEventListener(Event.COMPLETE, displayPic);
        }

        var tf:TextField = new TextField();
        tf.text = "Picture Title";
        tf.width = 200;
        tf.height = 14;
        tf.x // same x coordinate as dragSprite
        tf.y // same y coordinate as dragSprite, plus picture height, plus gap between picture and text
        addChild(tf);

    Within the displayPic function I could assign evt.target.content.height and evt.target.content.width to variables and use those to position the text field, but I assume there is a better way?
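
    (One tidier option, sketched below: do the positioning inside the COMPLETE handler itself, where the loaded image's dimensions are already valid - the names are those from the question, and the 5-pixel gap is arbitrary:)

        function displayPic(evt:Event):void {
            var pic:DisplayObject = evt.target.content; // dimensions are valid once COMPLETE fires
            dragSprite.addChild(pic);
            tf.x = dragSprite.x;                  // same x coordinate as dragSprite
            tf.y = dragSprite.y + pic.height + 5; // bottom left of the picture, plus a small gap
            addChild(tf);
            evt.target.removeEventListener(Event.COMPLETE, displayPic);
        }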

    Read the article

  • How to handle error with content-disposition

    - by František Žiacik
    Hi, how should I handle an exception that occurs after sending a Content-Disposition header for an attachment? I'm trying to generate a report on the server and send it as a file, but if an exception occurs during report generation, the error message itself is sent to the browser, which still treats it as the content of a file and shows a Save As dialog. The user cannot know there was an error generating the report, and saves a file which is now in the wrong format. Is there a way to cancel the response with this header and redirect to an error page? Or what else can I do to inform the user about the error? I could probably generate the report first and send the headers only if there was no error, but I want to render the report directly to the Response output stream so that it does not need to stay in memory. Here is my code:

        this.Response.ContentType = "application/octet-stream";
        this.Response.AddHeader("Content-Disposition", @"attachment; filename=""" + item.Name + @"""");
        this.Response.Flush();

        GenerateReportTo(this.Response.OutputStream); // Exception occurs

    Thanks for any suggestions
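
    (One common pattern, sketched under the obvious trade-off: render to a temporary buffer first and send the Content-Disposition header only after generation succeeds, at the cost of holding the report in memory; the error page URL is hypothetical:)

        var buffer = new System.IO.MemoryStream();
        try
        {
            GenerateReportTo(buffer); // any exception is raised before headers go out
        }
        catch (Exception)
        {
            this.Response.Redirect("~/ReportError.aspx"); // hypothetical error page
            return;
        }
        this.Response.ContentType = "application/octet-stream";
        this.Response.AddHeader("Content-Disposition", @"attachment; filename=""" + item.Name + @"""");
        buffer.Position = 0;
        buffer.WriteTo(this.Response.OutputStream);

    Writing to a temporary file instead of a MemoryStream is the same idea without keeping large reports in memory.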

    Read the article

  • How to decide on going into management?

    - by Rob Wells
    I read the transcript of a speech by Richard Hamming, included as part of this SO question, and the speech had a quote that got me thinking about when someone should move into management. When your vision of what you want to do is what you can do single-handedly, then you should pursue it. The day your vision, what you think needs to be done, is bigger than what you can do single-handedly, then you have to move toward management. And the bigger the vision is, the farther in management you have to go. Any other suggestions as to how you can decide if you want to move away from the coal face and into management?

    Read the article

  • Set button content in a label (custom control)

    - by user1881207
    Instead of just voting this question down, please answer and tell me what's wrong! I want to put the Button's Content into a Label inside the custom control's template, because as things stand the Content property is not visible and the button is rendered empty (no text).

    <Button x:Class="WpfApplication1.UserControl1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            mc:Ignorable="d" d:DesignHeight="35" d:DesignWidth="273" Content="Button">
      <Button.Template>
        <ControlTemplate>
          <Grid>
            <Rectangle Name="rGridBack" StrokeThickness="1">
              <Rectangle.Fill>
                <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                  <GradientStop Color="#FF4D4D4D" Offset="1" />
                  <GradientStop Color="#FF404040" Offset="0" />
                </LinearGradientBrush>
              </Rectangle.Fill>
              <Rectangle.Stroke>
                <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                  <GradientStop Color="#FF4F4F4F" Offset="0" />
                  <GradientStop Color="#FF5B5B5B" Offset="1" />
                </LinearGradientBrush>
              </Rectangle.Stroke>
            </Rectangle>
            <Rectangle Fill="#FF1E1E1E" Margin="1,1,1,1" Name="rThickness" />
            <Rectangle Margin="2,2,2,2" Name="rGridTop" StrokeThickness="1">
              <Rectangle.Fill>
                <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                  <GradientStop Color="#FF68686C" Offset="0" />
                  <GradientStop Color="#FF474747" Offset="1" />
                </LinearGradientBrush>
              </Rectangle.Fill>
              <Rectangle.Stroke>
                <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                  <GradientStop Color="#FF7F7F7F" Offset="0" />
                  <GradientStop Color="#FF575757" Offset="1" />
                </LinearGradientBrush>
              </Rectangle.Stroke>
            </Rectangle>
            <!-- This label is where I want to set the Button.Content property -->
            <Label FontWeight="Normal" Foreground="White" HorizontalContentAlignment="Center"
                   Name="tblckStep1Desc" Padding="0" VerticalContentAlignment="Center">
              <Label.Effect>
                <DropShadowEffect BlurRadius="2" Color="Black" Direction="330" Opacity="0.7" ShadowDepth="1.5" />
              </Label.Effect>
            </Label>
          </Grid>
          <ControlTemplate.Triggers>
            <Trigger Property="Button.Content" Value="">
              <Setter Property="Content" TargetName="tblckStep1Desc"></Setter>
            </Trigger>
            <Trigger Property="UIElement.IsMouseOver" Value="True">
              <Setter Property="Shape.Fill" TargetName="rGridTop">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF838383" Offset="0" />
                    <GradientStop Color="#FF545454" Offset="1" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Shape.Stroke" TargetName="rGridTop">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF595959" Offset="1" />
                    <GradientStop Color="#FF929292" Offset="0" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Shape.Stroke" TargetName="rGridBack">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF414141" Offset="0" />
                    <GradientStop Color="#FF565656" Offset="1" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Shape.Fill" TargetName="rThickness">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF181818" Offset="1" />
                    <GradientStop Color="#FF181818" Offset="0" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
            </Trigger>
            <Trigger Property="UIElement.IsEnabled" Value="False">
              <Setter Property="Shape.Fill" TargetName="rGridTop">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF68686C" Offset="0" />
                    <GradientStop Color="#FF474747" Offset="1" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Shape.Stroke" TargetName="rGridTop">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF7F7F7F" Offset="0" />
                    <GradientStop Color="#FF575757" Offset="1" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Shape.Stroke" TargetName="rGridBack">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF4F4F4F" Offset="0" />
                    <GradientStop Color="#FF5B5B5B" Offset="1" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Shape.Fill" TargetName="rThickness">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF1E1E1E" Offset="1" />
                    <GradientStop Color="#FF1E1E1E" Offset="0" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Control.Foreground" TargetName="tblckStep1Desc">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF898989" Offset="1" />
                    <GradientStop Color="#FF898989" Offset="0" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
            </Trigger>
            <Trigger Property="ButtonBase.IsPressed" Value="True">
              <Setter Property="Shape.Fill" TargetName="rGridTop">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF313131" Offset="1" />
                    <GradientStop Color="#FF2E2E2E" Offset="0" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Shape.Stroke" TargetName="rGridTop">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF1F1F1F" Offset="1" />
                    <GradientStop Color="#FF1F1F1F" Offset="0" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Shape.Stroke" TargetName="rGridBack">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF414141" Offset="1" />
                    <GradientStop Color="#FF565656" Offset="0" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
              <Setter Property="Shape.Fill" TargetName="rThickness">
                <Setter.Value>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FF0C0C0C" Offset="1" />
                    <GradientStop Color="#FF0C0C0C" Offset="0" />
                  </LinearGradientBrush>
                </Setter.Value>
              </Setter>
            </Trigger>
          </ControlTemplate.Triggers>
        </ControlTemplate>
      </Button.Template>
    </Button>

    The look of the button mimics the dialog buttons in the Adobe CS5 suite, which is why there is so much template code.

    Read the article

  • Approaches for cross server content sharing?

    - by Anonymity
    I've currently been tasked with finding the best solution for serving up content on our new site from another one of our sites. Several approaches that have been suggested to me, and that I've looked into, include: using SharePoint's Lists web service to grab the list through JavaScript, which runs into cross-site scripting (XSS) restrictions and is not an option. Another suggestion was to build a server-side custom web service and use SharePoint request forms to get the information; this is something I've only very briefly looked at. It's been suggested that I try permitting the requesting site in the HTTP headers of the serving site, since I have access to both. This ultimately resulted in a semi-working solution with major security holes (I had to include the username/password in the request to appease AD authentication). This was done by allowing Access-Control-Allow-Origin: *. The most direct approach I could think of was simply to build the web part in our new environment and have the authors manually update this content the same as they would on the other site. Is any one of these suggestions more valid than another? Which would be the best approach? Are there other suggestions I may be overlooking? I'm also not sure if web crawling or content scraping really holds water here...
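
    (A rough sketch of the server-side web service idea - the new site fetches the list server-to-server under a service account, so no credentials and no cross-origin request ever reach the browser; the class name, endpoint URL and account are all placeholders:)

        // Hypothetical ASP.NET handler on the requesting site, e.g. /ListProxy.ashx
        public class ListProxy : System.Web.IHttpHandler
        {
            public void ProcessRequest(System.Web.HttpContext context)
            {
                using (var client = new System.Net.WebClient())
                {
                    // a dedicated service account, never an end user's credentials
                    client.Credentials = new System.Net.NetworkCredential("svc_lists", "password", "DOMAIN");
                    string data = client.DownloadString("http://servingsite/some-list-endpoint"); // placeholder URL
                    context.Response.ContentType = "text/xml";
                    context.Response.Write(data);
                }
            }
            public bool IsReusable { get { return true; } }
        }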

    Read the article

  • How can I create multiple mini-sites with similar/duplicate content without hurting my search engine rank?

    - by ekpyrotic
    Essential background: I run a small company that lets members of the public post handwritten letters to their local politician (UK-based). Every week a number of early stage bills (called Early Day Motions) are submitted for debate in the House of Commons, and supporters of the motion will contact their local Members of Parliament, asking them to sign the motion. The crux: I want to target these EDMs with customised mini-sites, so when people search "EDM xxx", they find my customised mini-site, specifically targeting that EDM (i.e., "Send a handwritten letter to your MP asking them to sign EDM xxx"). At the moment, all these mini-sites (and my homepage) have duplicate content with only the relevant EDM name, number, and background image changed. (For example, http://mailmymp.com and http://mailmymp.com/edm/teaching-life-saving-skills-at-school-edm-550.php). The question: Firstly, will this hurt my potential search engine ranking? And, if so, what's the best way to target these political campaigns in an efficient manner without hurting my SEO prospects?

    Read the article

  • What Is SEO Friendly Content?

    SEO friendly content is simply content written while keeping in mind the way search engines will view it. Writing SEO friendly content does not mean writing for search engines, but rather taking into consideration the way they form their opinion of a page.

    Read the article

  • Website Content is King - True?

    There is truth to the line - Content is King! It is content that converts. With poor content your traffic is wasted and sales are lost. Great content not only speaks to readers but also to the search engines - letting them know what searches you are relevant to.

    Read the article

  • http-equiv=content-language alternative - the way of specifying document language

    - by tugberk
    Lots of web sites use the following meta tag to specify the default language of the document: <meta http-equiv="content-language" content="es-ES"> When I go to the W3C site: http://www.w3.org/TR/2011/WD-html-markup-20110113/meta.http-equiv.content-language.html#meta.http-equiv.content-language I get this: "Using the meta element to specify the document-wide default language is obsolete. Consider specifying the language on the root element instead." What is the correct way to specify the document language now?
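
    (Following the W3C's advice quoted above, the language now goes on the root element, e.g.:)

        <html lang="es-ES">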

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained, with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents rely on manual effort to cleanse and normalize data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches or entered using the inch symbol ("). These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but also to share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system.
Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting. Characteristics of Stellar MDM: when deciding on the right master data management technology, organizations should look for solutions that have four main characteristics: enterprise-grade MDM performance; complete technology that can be rapidly deployed and that addresses multiple business issues; end-to-end MDM process management with data quality monitoring and assurance; and pre-built, business-relevant MDM applications with data stores and workflows. These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers. Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.

    Read the article

  • Google I/O 2012 - Get your Content on Google TV

    Google I/O 2012 - Get your Content on Google TV. Christian Kurzke, Andrew Jeon, Mark Lindner. Google TV devices are typically the largest screen in the house, which makes them a prime platform for developers who want to distribute high quality, long form content right to the living room. We will talk about different options for hosting, streaming and securing your content on Google TV, and how to ensure your audience has a great experience viewing your content. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers. Time: 01:01:00.

    Read the article

  • SharePoint 2010 Hosting :: How to Create an External Content Type SharePoint 2010

    - by mbridge
    In this simple article I will try to show that an External Content Type against an external database is very easy to create with SharePoint Designer 2010, and that it can be integrated with our SharePoint portals. You can download SharePoint Designer 2010 here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d88a1505-849b-4587-b854-a7054ee28d66&displaylang=en For this example I will create a database in SQL Server and will use SharePoint Designer 2010 to create the connections, using a list and the database as mirrors of each other in our SharePoint portal. The first thing we need to do is connect to SQL Server, create our database, called "Contacts", and add the table "Contact" with the following fields. When we create the External Content Type, we need to associate the content type (in this case I am using the Generic List), and then we can create the connection to the external data source. After creating the connection to the database we can define which columns we will use and which operations we will add to our custom list. For this example I selected all the operations that come by default. These operations are very important because the business rules are defined in each operation. After we create the different operations we can create the custom list, define how each operation will behave, and add the name for our custom list. If you try to access the new custom list, called "Custom Contact", you will see that we do not yet have access to the Business Data Connectivity. To resolve this issue we need to give users access and permissions to the custom External Content Type BDC connection in Central Administration. Access the Central Administration page and select the option "Service Applications Tab > Manage Service Applications". There you select the service "Business Data Connectivity Service", then select "Manage". This option will list all External Content Types; choose the External Content Type we created and select the option "Set Object Permissions". This option allows you to add users to the BDC and manage the permissions for the custom list. After the correct permissions are given, we can access the data in our custom Contact list and start creating new items, along with all the other options and operations we defined for the list. Hope you like this little article about connecting database content to a SharePoint portal using External Content Types and BCS. Thank you.

    Read the article

  • <meta name="robots" content="noindex"> in "Fetch as Google"

    - by Rodrigo Azevedo
    I don't know why, but when I execute "Fetch as Google" it returns:

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: text/html
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        Set-Cookie: ASPSESSIONIDQACRADAQ=ECAINNFBMGNDEPAEBKBLOBOP; path=/
        X-Powered-By: ASP.NET
        Date: Wed, 26 Jun 2013 15:18:29 GMT
        Content-Length: 153

        <meta name="robots" content="noindex">

    That noindex tag does not exist in my page. Does anybody know what could be wrong?
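
    (One way to check whether some component on the server is serving different content to crawlers would be to request the page with Googlebot's user-agent string yourself and compare - a diagnostic sketch with a placeholder URL, not a fix:)

        curl --compressed -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://www.example.com/your-page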

    Read the article

  • Tips For Optimizing Web Content

    It is a well-known idea that 'content is king', and yes, it is true in many ways. It is your website content that lays out the facts about your business, and it is your content that plays an important role in persuading potential clients. Therefore, your content must be optimized - unique, error-free, and of good quality.

    Read the article

  • SEO Content Writing - A Flourishing Industry

    SEO content writers are in huge demand these days, and the reason for this is the increasing volume of sales generated online. The need for original content that can be marketed to customers will remain, because such content not only helps to increase conversions but also helps to attract customers through the various search engines. You might find that certain pages rank a lot better than others simply due to the kind of content present on them.

    Read the article

  • How to prevent duplication of content on a page with too many filters?

    - by Vikas Gulati
    I have a webpage where a user can search for items based on around 6 filters. Currently the page is implemented with one filter as the base filter (part of the url that gets indexed) and the other filters in the form of hash urls (which don't get indexed). This way the duplication is less. Something like this: example.com/filter1value-items#by-filter3-filter3value-filter2-filter2value As you can see, only one filter is within reach of the search engine while the rest are hashed. This way I could have 6 pages. Now the problem is that I also expect users to use two filters at times while searching. As per my analysis using the Google Keyword Analyzer, a fair number of users might use two filters in conjunction while searching. So how should I go about it? Having all the filters as part of the url would simply explode the number of pages, and sticking to the current way wouldn't let me target those users. I was thinking of going with at most 2 base filters and the rest as part of the hash url. But the only thing stopping me is that it would lead to duplication of content, as per Google Webmaster Tools' suggestions on url structure.
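
    (If two-filter combinations do get their own indexable URLs, a rel=canonical tag on the near-duplicate variants, pointing at the preferred version, is the standard mitigation - a sketch using the question's example structure:)

        <link rel="canonical" href="http://example.com/filter1value-items" />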

    Read the article

  • Content-light website and Google - Tell Google it's a listings site (as opposed to a shop, reviews or restaurants)

    - by Doug Firr
    I have a listings-style website. Due to the nature of this (listings), the site is content-light: each page is typically less than 50 words, but there are many pages. The site in question has had a ton of media coverage and so has some great inbound links from places like Wired, Fast Company, the Canadian Broadcasting Corporation and many many other bloggers, media websites and recycling-related niche authors (it's a recycling site). But Google really ignores it. Traffic from search is very, very low - less than 5% of all traffic. I know that using markup you can tell Google whether your site is a restaurant, article, review, shop, local business or one of a few other categories (https://www.google.com/webmasters/markup-helper/u/0/). Is there a way to tell Google that my site is a listings site? I suspect, but do not know for sure, that part of the problem is that Google simply does not know what my site is. It's a crowdmap where people post curb alerts. The information is useful to people, but it is presented in a short, concise way - a pin on a map, a picture and a short description. Adding anything further is not necessary for the site's intended purpose. First question: how best to tell the search engines what my site is - listings, and not some spammy website? Any recommendations for improving our site's search presence? You can take a look here if interested: http://tinyurl.com/lxg4hn7
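
    (Since Google's markup helper is already in the picture: marking each listing up with schema.org vocabulary is about the closest thing there is to telling Google "this is a listings site". A hypothetical microdata sketch for a single curb alert, with made-up values:)

        <div itemscope itemtype="http://schema.org/Offer">
            <span itemprop="name">Free couch at 5th and Main</span>
            <meta itemprop="price" content="0" />
            <span itemprop="description">Good condition; gone by Sunday.</span>
        </div>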

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to reconsider how we set up the system. Basically, the steps that need to happen are:

        - Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
        - Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
        - New users need to be created to manage the databases and run the models.
        - A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.
        - Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.

    I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except that there are a couple of use cases I am wondering about: during my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source; and we may have to deploy on machines that are isolated from the Internet - i.e., all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.
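
    (Of the tools mentioned, Fabric is perhaps the easiest to picture for the build-from-source step, since a task is plain Python - a minimal sketch in which every path, URL and schedule is hypothetical:)

        # fabfile.py (Fabric 1.x)
        from fabric.api import cd, run, sudo

        def build_wave_model():
            # compile a custom scientific model from source
            run('git clone http://example.org/wave-model.git /tmp/wave-model')
            with cd('/tmp/wave-model'):
                run('./configure --prefix=/opt/wave-model && make')
                sudo('make install')

        def install_crontab():
            # regenerate forecasts every 6 hours
            run("echo '0 */6 * * * /opt/scripts/run_forecast.sh' | crontab -")

    For the offline case, the same tasks can point at a local package mirror and source tarballs carried in on the USB key.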

    Read the article

  • Tokyo Tyrant ulog / update log management.

    - by Nathan Milford
    I'm testing Tokyo Tyrant in a master-master setup and have found that the ulog grows out of control and locks up the disk. At first the -ulim option seemed useful, as it limits the log file size; however, it simply rolls over to a new log, leaving the old ones to clutter up the partition. I suppose I'll write a shell script that deletes ulogs older than X, once I find out how far back in the update log Tokyo Tyrant needs to read in order to fail over. Does anyone have any experience with this in Tokyo Tyrant? Do you have a feel (acknowledging that every install is different based on what is being stored) for the optimal ulog size versus how far back a Tokyo Tyrant instance needs to look in the ulog to assume master status? Thanks, nathan
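
    (The cleanup script anticipated above can be a one-line cron job - the ulog path and the 7-day retention are placeholders until the required replication window is confirmed:)

        # delete update logs older than 7 days (validate the retention period first!)
        find /var/ttserver/ulog -name '*.ulog' -mtime +7 -delete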

    Read the article

  • Content Management System - where does the content need to reside?

    - by lee
    Hi, we are developing an online ordering website, and we are planning to be completely flexible, in the sense that the layout can be changed, along with colors and fonts, and a component (div) can be added to a page. In this case, do we need to store all of our view code, for example:

        <div id="MenuContent">
            <div>
                <h4>Your Order</h4>
                <hr />
                <%if (Model != null) { %>
                <table width="100%">
                    <thead>
                        <tr>
                            <th>Item</th>
                            <th>Quantity</th>
                            <th>Price</th>
                            <th>Remove</th>
                            <th>Total</th>
                        </tr>
                    </thead>
                    ......

    in the database, or can we just store the div id and, based on that, load a view which is available in the file system? Any pointers are greatly appreciated.
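
    (The second option - storing only an identifier and keeping the markup on disk - might look like this in an ASP.NET MVC controller; the repository and names are hypothetical:)

        public ActionResult Show(string pageId)
        {
            // the database stores only which view file renders this page/component
            string viewName = pageRepository.GetViewNameFor(pageId); // e.g. "MenuContent"
            return View(viewName, BuildModelFor(pageId));
        }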

    Read the article

  • SNMP based network discovery (switches), device (ports on switches) power management

    - by SaM
    In an enterprise network, what would be the right way to generate a list of (SNMP-managed) switches? Is it reasonable to ask the organization to supply a list such as this:

        - Switch name
        - IP address of switch
        - Location
        - SNMP community strings

    Or are there standard ways to run discovery scans - UDP broadcasts? After having generated a repository such as the above: given a single switch, how do I query it for the list of all devices attached to it? Finally, how do I selectively power ports down and up (remotely, using SNMP)? The platform is going to be .NET based (C#) and the library being used is SharpSNMP.
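
    (A rough SharpSNMP sketch of the query and power-control steps - reading sysDescr and setting the standard ifAdminStatus column (1 = up, 2 = down); the IP, community strings and ifIndex are placeholders, and the exact API can differ between SharpSnmpLib versions:)

        using System.Collections.Generic;
        using System.Net;
        using Lextm.SharpSnmpLib;
        using Lextm.SharpSnmpLib.Messaging;

        class SnmpSketch
        {
            static void Main()
            {
                var sw = new IPEndPoint(IPAddress.Parse("10.0.0.1"), 161); // placeholder switch

                // identify the device (sysDescr.0)
                var descr = Messenger.Get(VersionCode.V2, sw, new OctetString("public"),
                    new List<Variable> { new Variable(new ObjectIdentifier("1.3.6.1.2.1.1.1.0")) }, 5000);

                // administratively shut down the port with ifIndex 10 (ifAdminStatus = 2)
                Messenger.Set(VersionCode.V2, sw, new OctetString("private"),
                    new List<Variable> { new Variable(new ObjectIdentifier("1.3.6.1.2.1.2.2.1.7.10"),
                                                      new Integer32(2)) }, 5000);
            }
        }

    Walking the ifTable plus the bridge MIB's dot1dTpFdbTable is the usual way to map which MAC addresses hang off which port.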

    Read the article

  • Network Management Cable Labeling Techniques and their alternatives [closed]

    - by Alex
    Possible Duplicate: What is the most effective solution you used to label cables? Yes, I know there are a lot of howtos and already-answered questions on this topic, like this one: How do you organise the cables in your racks? Currently I am searching the web for different techniques (alternatives) for labeling the cables in server racks and/or data centers. Unfortunately I do not have any experience with labeling/documentation of network cables on a large scale. As far as I have been able to find out so far, the current labeling techniques are color coding and self-defined printed labels (numbering, text), possibly following a standard, which are what is usually used. I want to know whether QR codes, RFID (OK, RFID in a data center would be a bad idea due to the radio frequency, wouldn't it?), barcodes or similar have already been used by some administrators, or why they did not consider such techniques at all. Too complicated (with a QR scanner etc.) if you are in front of the cables and want quick feedback on what the cable is? What alternatives are out there? Advantages/disadvantages? Best practice? I would appreciate any help on this topic, thank you! Regards, Alex

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon. Clients: 10, all Apples running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution). The situation: I work in a small design company with 8 people; at present we are looking to consolidate all our image files in one location. At present we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files. Ideally, what is needed: centralised on one server, but searchable via Spotlight (not essential, but would be nice); searchable metadata such as date, location, title; open source or as low-cost as possible; and support for simultaneous users importing files. So far, I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also using a drive with no permissions, putting a library on it and letting each client read from it; however, when it comes to writing images into the library, it only supports one user at a time... Any ideas what could fulfill our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article
