Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.


  • Unsafe, super-fast cross-process memory buffer?

    - by John
    Cross-process memory buffers always carry some overhead, and my understanding is that it can be quite high. But suppose you're implementing a cross-process render buffer: the data isn't critically important in the way other data is, so are there techniques we can use to get 'raw' access to a chunk of memory from multiple processes, with no safety nets beyond it not crashing? Or do modern operating systems simply not expose unabstracted memory in a way that makes this possible? I'm working in C++, but the question applies to Windows XP/Vista/7 and Mac OS X 10.5+ (and, less importantly, Linux).
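
    For reference, the closest thing to a raw shared buffer on the POSIX side (Mac OS X and Linux) is a named shared-memory object mapped into each process with no locking at all; on Windows the equivalent pair is CreateFileMapping/MapViewOfFile. A minimal POSIX sketch follows; the buffer name and size are illustrative assumptions, not taken from the question:

        // Minimal POSIX shared-memory sketch: the first process creates the
        // buffer, later processes map the same name. There are no locks and
        // no safety nets: torn reads/writes are possible, which may be
        // acceptable for a render buffer.
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstddef>
        #include <cstdio>

        static const char*  kShmName = "/render_buffer"; // hypothetical name
        static const size_t kShmSize = 1920 * 1080 * 4;  // e.g. one RGBA frame

        int main() {
            // O_CREAT lets the same code run in both the first process and
            // any subsequent one.
            int fd = shm_open(kShmName, O_CREAT | O_RDWR, 0600);
            if (fd < 0) { perror("shm_open"); return 1; }
            if (ftruncate(fd, kShmSize) < 0) { perror("ftruncate"); return 1; }

            void* buf = mmap(nullptr, kShmSize, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
            if (buf == MAP_FAILED) { perror("mmap"); return 1; }

            // Every process that maps kShmName sees the same bytes; access
            // is a plain load/store, so the per-byte overhead is zero.
            static_cast<unsigned char*>(buf)[0] = 0xFF;

            munmap(buf, kShmSize);
            close(fd);
            // shm_unlink(kShmName) once no process needs the buffer anymore.
            return 0;
        }

    Once mapped, reads and writes are ordinary memory accesses; the overhead people usually measure comes from the synchronization layered on top, which this sketch deliberately omits. (Link with -lrt on older Linux systems.)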

    Read the article

  • Problems with viewing site in Internet Exploder

    - by Kevin
    I built a site and I'm just about finished. It displays properly in all the browsers I have (Safari, Chrome, and Firefox), but my client is not computer savvy at all and still uses Internet Explorer, so that's all he's using to view the site. I don't have IE to test with, so I've been using BrowserStack.com, and I can see that in IE the site is broken: the navigation bar has a white background and is pushed down a line, and the logo isn't appearing. Could anybody please help me figure out why the site isn't displaying properly in IE, and how to fix it? Help is greatly appreciated. Thanks. Site: WebuildCAhomes dot com

    Read the article

  • Include weather information in ASP.Net site from weather.com services

    - by sreejukg
    In this article, I am going to demonstrate how you can use the XMLOAP services (referred to as XOAP from here onwards) provided by weather.com to display weather information on your website. The XOAP services are available free of charge, provided you comply with the requirements from weather.com. I am writing this article from a technical point of view; if you are planning to use the weather.com XOAP services in your application, please refer to the terms and conditions on the weather.com website.

    In order to start using the XOAP services, you need to sign up for the XOAP data feed. The sign-up process is simple: browse to http://www.weather.com/services/xmloap.html and click on the sign-up button, which takes you to the registration page. Here you need to specify the site name you will use this feed for. Once you fill in all the mandatory information, click on the save and continue button. That's it. The registration is over. You will receive an email that contains your partner id, your license key and the SDK. The SDK, which comes as a zip archive, contains the terms of use and documentation about the available services, along with the logos and icons required to display the weather information.

    As per the SDK, there are currently 2 types of information available through XOAP:

    Current Conditions for over 30,000 U.S. and over 7,900 international Location IDs, updated at least hourly
    Five-Day Forecast (today + 4 additional forecast days in consecutive order beginning with tomorrow) for over 30,000 U.S. and over 7,900 international Location IDs, updated at least three times daily

    The SDK provides detailed information about the fields included in the response of each service. Additionally, there is a refresh rate that you need to comply with. As per the SDK: "Refresh Rate" shall mean the maximum frequency with which you may call the XML Feed for a given LocID requesting a data set for that LocID. During the time period in between refresh periods the data must be cached by you either in the memory on your servers or in Your Desktop Application.

    About the Services

    Weather.com will provide you with access to the XML Feed over the Internet through the hostname xoap.weather.com. The weather data must be requested for a specific location, so you need a location ID (LOC ID). The XML feed works with 2 types of location IDs: City Identifiers and 5-digit US postal codes. If you do not know your location ID, don't worry; there is a location ID search service available that retrieves the location ID for a city name. Since I am a resident of the Kingdom of Bahrain, I am going to retrieve the weather information for Manama (the capital of Bahrain). In order to get the location ID for Manama, type the following URL in your address bar:

    http://xoap.weather.com/search/search?where=manama

    I got the following XML output:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- This document is intended only for use by authorized licensees of The -->
        <!-- Weather Channel. Unauthorized use is prohibited. Copyright 1995-2011, -->
        <!-- The Weather Channel Interactive, Inc. All Rights Reserved. -->
        <search ver="3.0">
            <loc id="BAXX0001" type="1">Al Manama, Bahrain</loc>
        </search>

    You can try this with any city name; if the city is available it will return the location ID, otherwise it will return nothing.
    In order to get the weather information from XOAP, you need to pass certain parameters to the XOAP service. A brief description of the parameters follows; please refer to the SDK for more details.

    Parameter name / Possible value
    cc    - Optional; if you include this, the current conditions will be returned. The value can be anything, as it will be ignored, e.g. cc=*
    dayf  - If you want the forecast for 5 days, specify dayf=5. This is optional.
    link  - The value should be xoap
    par   - Your partner id. You can find this in your registration email from weather.com
    prod  - The value should be xoap
    key   - The license key assigned to you. This is also available in the registration email
    unit  - s or m (standard or metric; think Fahrenheit/Celsius). This is an optional field; if not specified, the unit will be standard (s)

    The URL host for the XOAP service is http://xoap.weather.com. So for my purpose, I need to make the following request to the XOAP service:

    http://xoap.weather.com/weather/local/BAXX0001?cc=*&link=xoap&prod=xoap&par=*********&key=**************

    (The ***** parts are to be replaced with the corresponding values.)

    The response XML has a root element "weather". Under the root element are the following sections:

    <head> - the metadata for the weather results returned.
    <loc>  - the location data block that provides information about the location for which the weather data was retrieved.
    <lnks> - the 4 promotional links you need to place along with the weather display. In addition to these 4 links, there should be another link, carrying the Weather Channel logo, to the home page of weather.com.
    <cc>   - the current conditions data. This element is present only if you specify the cc parameter in the request.
    <dayf> - the forecast data as you specified. This element is present only if you specify dayf in the request.

    In this walkthrough, I am going to capture the weather information for Manama (Location ID: BAXX0001). You need 2 applications to display the weather information on your website:

    1. A console application that retrieves data from XOAP and stores it in a SQL Server database (or any data store you prefer). This application will be scheduled to execute every 25 minutes using the Windows Task Scheduler, so that we comply with the refresh rate.
    2. A web application that displays the data from the SQL Server database.

    Retrieve the Weather from XOAP

    I have created a console application named WeatherService, and a SQL Server database table with the following columns. I named the table tblweather; you are free to choose any name.

    Column name         - Description
    lastUpdated         - Datetime; the last time the weather data was updated, i.e. the time the service ran
    TemparatureDateTime - The date and time returned by the XML feed
    Temparature         - The temperature returned by the XML feed
    TemparatureUnit     - The unit of the temperature returned by the XML feed
    iconId              - The id of the icon to be used. Currently 48 icons, numbered 0 to 47, are available
    WeatherDescription  - The weather description phrase returned by the feed
    Link1url            - The url of the first promo link
    Link1Text           - The text for the first promo link
    Link2url            - The url of the second promo link
    Link2Text           - The text for the second promo link
    Link3url            - The url of the third promo link
    Link3Text           - The text for the third promo link
    Link4url            - The url of the fourth promo link
    Link4Text           - The text for the fourth promo link

    Every time the service runs, the application updates these columns from the XOAP data feed.
    When the application starts, it fetches the data as XML from that URL. This demonstration uses LINQ to extract the necessary data from the fetched XML. The following code segment extracts the data from the weather XML using LINQ:

        // First, create an instance of the XDocument class with the XOAP URL.
        // Replace **** with the corresponding values.
        XDocument weather = XDocument.Load("http://xoap.weather.com/weather/local/BAXX0001?cc=*&link=xoap&prod=xoap&par=***********&key=c*********");

        // Construct a query using LINQ.
        var feedResult = from item in weather.Descendants()
                         select new
                         {
                             unit = item.Element("head").Element("ut").Value,
                             temp = item.Element("cc").Element("tmp").Value,
                             tempDate = item.Element("cc").Element("lsup").Value,
                             iconId = item.Element("cc").Element("icon").Value,
                             description = item.Element("cc").Element("t").Value,
                             links = from link in item.Elements("lnks").Elements("link")
                                     select new
                                     {
                                         url = link.Element("l").Value,
                                         text = link.Element("t").Value
                                     }
                         };

        // Load the root node into a variable; you may use a foreach construct instead.
        var item1 = feedResult.First();

    If you want to learn more about LINQ and XML, read this nice post from Scott Gu: http://weblogs.asp.net/scottgu/archive/2007/08/07/using-linq-to-xml-and-how-to-build-a-custom-rss-feed-reader-with-it.aspx

    Now you have all the required values in item1. For example, if you want the temperature, use item1.temp. Now I just need to execute a SQL query against the database. See the connection part:

        using (SqlConnection conn = new SqlConnection(@"Data Source=sreeju\sqlexpress;Initial Catalog=Sample;Integrated Security=True"))
        {
            string strSql = @"update tblweather set lastupdated=getdate(),
                temparatureDateTime = @temparatureDateTime, temparature=@temparature,
                temparatureUnit=@temparatureUnit, iconId = @iconId, description=@description,
                link1url=@link1url, link1text=@link1text, link2url=@link2url, link2text=@link2text,
                link3url=@link3url, link3text=@link3text, link4url=@link4url, link4text=@link4text";
            SqlCommand comm = new SqlCommand(strSql, conn);
            comm.Parameters.AddWithValue("temparatureDateTime", item1.tempDate);
            comm.Parameters.AddWithValue("temparature", item1.temp);
            comm.Parameters.AddWithValue("temparatureUnit", item1.unit);
            comm.Parameters.AddWithValue("description", item1.description);
            comm.Parameters.AddWithValue("iconId", item1.iconId);
            var lstLinks = item1.links;
            comm.Parameters.AddWithValue("link1url", lstLinks.ElementAt(0).url);
            comm.Parameters.AddWithValue("link1text", lstLinks.ElementAt(0).text);
            comm.Parameters.AddWithValue("link2url", lstLinks.ElementAt(1).url);
            comm.Parameters.AddWithValue("link2text", lstLinks.ElementAt(1).text);
            comm.Parameters.AddWithValue("link3url", lstLinks.ElementAt(2).url);
            comm.Parameters.AddWithValue("link3text", lstLinks.ElementAt(2).text);
            comm.Parameters.AddWithValue("link4url", lstLinks.ElementAt(3).url);
            comm.Parameters.AddWithValue("link4text", lstLinks.ElementAt(3).text);
            conn.Open();
            comm.ExecuteNonQuery();
            conn.Close();
            Console.WriteLine("database updated");
        }

    Now press Ctrl + F5 to run the service, then check your database and make sure the data has been updated with the latest information from the service. (Make sure you insert one row in the table before executing the service; otherwise you need to modify the application code to count the rows and conditionally perform an insert or an update.)

    Display the Weather Information in an ASP.Net Page

    Now you have all the data in the database.
    You just need to create a web application that displays the data from the database. I created a new ASP.Net web application with a default.aspx page. In order to comply with the terms of weather.com, you need to show the weather.com logo along with the weather display; you can find the necessary logos under the folder "logos" in the SDK. Additionally, copy one of the icon sets from the folder "icons" into your web application. I used the 93x93 icon set, but you are free to use any of the other available sizes.

    The page contains a heading, an image control (for displaying the weather icon), 2 label controls (for displaying the temperature and the weather description), 4 hyperlinks (for displaying the 4 promo links returned by the XOAP service) and the weather.com logo with a hyperlink to the weather.com home page. The following code updates the values of these controls from the values stored in the database by the service application described in the previous step. Go to the code-behind file for the web page and enter it in the Page_Load event handler:

        using (SqlConnection conn = new SqlConnection(@"Data Source=sreeju\sqlexpress;Initial Catalog=Sample;Integrated Security=True"))
        {
            SqlCommand comm = new SqlCommand("select top 1 * from tblweather", conn);
            conn.Open();
            SqlDataReader reader = comm.ExecuteReader();
            if (reader.Read())
            {
                lblTemparature.Text = reader["temparature"].ToString() + "&deg;" + reader["temparatureUnit"].ToString();
                lblWeatherDescription.Text = reader["description"].ToString();
                imgWeather.ImageUrl = "icons/" + reader["iconId"].ToString() + ".png";
                lnk1.Text = reader["link1text"].ToString();
                lnk1.NavigateUrl = reader["link1url"].ToString();
                lnk2.Text = reader["link2text"].ToString();
                lnk2.NavigateUrl = reader["link2url"].ToString();
                lnk3.Text = reader["link3text"].ToString();
                lnk3.NavigateUrl = reader["link3url"].ToString();
                lnk4.Text = reader["link4text"].ToString();
                lnk4.NavigateUrl = reader["link4url"].ToString();
            }
            conn.Close();
        }

    Press Ctrl + F5 to run the page. That's it. You now need to configure the console application to run every 25 minutes so that the database stays up to date. You can also fetch the forecast information, store it in the database, and retrieve it later in your web page. Since the data resides in your own database, you have full control over the display; just make sure your website complies with the weather.com license requirements. If you want the source code for this walkthrough, post your email address below. Hope you enjoy the reading.
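
    As an aside, one way to set up the 25-minute schedule mentioned above is the built-in schtasks utility; the task name and executable path below are illustrative assumptions, not values from the article:

        rem Run the weather console application every 25 minutes.
        schtasks /Create /SC MINUTE /MO 25 /TN "WeatherService" /TR "C:\apps\WeatherService.exe"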

    Read the article

  • What are modern and old compilers written in?

    - by ulum
    Since a compiler, unlike an interpreter, only needs to translate its input rather than run it, the compiler's own performance should not be as problematic as an interpreter's. Therefore, you wouldn't write an interpreter in, let's say, Ruby or PHP, because it would be far too slow. But what about compilers? If you wrote a compiler in a scripting language, perhaps even one featuring rapid development, you could possibly cut the source code and the initial development time in half; at least I think so. To be clear: by scripting language I mean interpreted languages having the typical features that make programming faster, easier and (usually) more enjoyable for the programmer. Examples: PHP, Ruby, Python, maybe JavaScript, though that may be an odd choice for a compiler. What are compilers normally written in? As I suppose you will respond with something low-level like C, C++ or even assembler: why? Are there compilers written in scripting languages? What are the (dis)advantages of using low- or high-level programming languages for compiler writing?

    Read the article

  • How to make an application scriptable in Linux

    - by arx
    I've written an application in C++ that takes a complex binary file format and translates it into human-readable text. Having edited the text, you can recompile it back into the binary file format. This would be more useful if the application's internal object model were scriptable. On Windows I'd expose the objects using COM or .Net, but I want this to work on Linux. I could embed a scripting language, but that's a fair bit of work and it limits users to the scripting language I choose. Ideally, I'm looking for some way of exposing a scriptable DOM from my application that is:

    Widely supported in scripting languages (without writing language-specific wrappers)
    Cross-platform (but Linux support is most important)
    In-process (but this isn't essential)

    Read the article

  • Bash Shell Scripting Errors: ./myDemo: 56: Syntax error: Unterminated quoted string [EDITED]

    - by ???
    Could someone take a look at this code and find out what's wrong with it?

        #!/bin/sh
        while :
        do
            echo " Select one of the following options:"
            echo " d or D) Display today's date and time"
            echo " l or L) List the contents of the present working directory"
            echo " w or W) See who is logged in"
            echo " p or P) Print the present working directory"
            echo " a or A) List the contents of a specified directory"
            echo " b or B) Create a backup copy of an ordinary file"
            echo " q or Q) Quit this program"
            echo " Enter your option and hit <Enter>: \c"
            read option
            case "$option" in
                d|D) date ;;
                l|L) ls $PWD ;;
                w|w) who ;;
                p|P) pwd ;;
                a|A) echo "Please specify the directory and hit <Enter>: \c"
                     read directory
                     if [ "$directory = "q" -o "Q" ]
                     then
                         exit 0
                     fi
                     while [ ! -d "$directory" ]
                     do
                         echo "Usage: "$directory" must be a directory."
                         echo "Re-enter the directory and hit <Enter>: \c"
                         read directory
                         if [ "$directory" = "q" -o "Q" ]
                         then
                             exit 0
                         fi
                     done
                     printf ls "$directory" ;;
                b|B) echo "Please specify the ordinary file for backup and hit <Enter>: \c"
                     read file
                     if [ "$file" = "q" -o "Q" ]
                     then
                         exit 0
                     fi
                     while [ ! -f "$file" ]
                     do
                         echo "Usage: \"$file\" must be an ordinary file."
                         echo "Re-enter the ordinary file for backup and hit <Enter>: \c"
                         read file
                         if [ "$file" = "q" -o "Q" ]
                         then
                             exit 0
                         fi
                     done
                     cp "$file" "$file.bkup" ;;
                q|Q) exit 0 ;;
            esac
            echo
        done
        exit 0

    There are some syntax errors that I can't figure out. I should note that on this Unix system echo -e doesn't work (don't ask me why; I don't know, I don't have any sort of permissions to change it, and even if I did I wouldn't be allowed to). The error is:

        ./myDemo: line 62: syntax error near unexpected token `done'
        ./myDemo: line 62: `

    EDIT: I fixed the while statement error; however, now when I run the script some things still aren't working correctly. It seems that in the b|B) case, cp $file $file.bkup doesn't actually copy the file to file.bkup? And in the a|A) case, ls "$directory" doesn't print the directory listing for the user to see?

        #!/bin/bash
        while $TRUE
        do
            echo " Select one of the following options:"
            echo " d or D) Display today's date and time"
            echo " l or L) List the contents of the present working directory"
            echo " w or W) See who is logged in"
            echo " p or P) Print the present working directory"
            echo " a or A) List the contents of a specified directory"
            echo " b or B) Create a backup copy of an ordinary file"
            echo " q or Q) Quit this program"
            echo " Enter your option and hit <Enter>: \c"
            read option
            case "$option" in
                d|D) date ;;
                l|L) ls pwd ;;
                w|w) who ;;
                p|P) pwd ;;
                a|A) echo "Please specify the directory and hit <Enter>: \c"
                     read directory
                     if [ ! -d "$directory" ]
                     then
                         while [ ! -d "$directory" ]
                         do
                             echo "Usage: "$directory" must be a directory."
                             echo "Specify the directory and hit <Enter>: \c"
                             read directory
                             if [ "$directory" = "q" -o "Q" ]
                             then
                                 exit 0
                             elif [ -d "$directory" ]
                             then
                                 ls "$directory"
                             else
                                 continue
                             fi
                         done
                     fi ;;
                b|B) echo "Specify the ordinary file for backup and hit <Enter>: \c"
                     read file
                     if [ ! -f "$file" ]
                     then
                         while [ ! -f "$file" ]
                         do
                             echo "Usage: "$file" must be an ordinary file."
                             echo "Specify the ordinary file for backup and hit <Enter>: \c"
                             read file
                             if [ "$file" = "q" -o "Q" ]
                             then
                                 exit 0
                             elif [ -f "$file" ]
                             then
                                 cp $file $file.bkup
                             fi
                         done
                     fi ;;
                q|Q) exit 0 ;;
            esac
            echo
        done
        exit 0

    Another thing... is there an editor I can use to auto-parse code? I.e. something similar to NetBeans?
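
    For readers hitting the same errors: the "unterminated quoted string" comes from the line if [ "$directory = "q" -o "Q" ] in the first script, where the closing quote after $directory is missing. The later misbehaviour comes from tests of the form [ "$var" = "q" -o "Q" ]: the -o "Q" clause merely checks that the literal string Q is non-empty, so the whole test is always true and the script exits (or never reaches the cp/ls branch) regardless of input. A corrected sketch of the two problem spots:

        # Compare the variable against both letters explicitly; the original
        # `-o "Q"` only tested that the literal string "Q" is non-empty.
        if [ "$directory" = "q" ] || [ "$directory" = "Q" ]
        then
            exit 0
        fi

        # `printf ls "$directory"` just prints the word "ls"; to show the
        # listing, invoke ls directly.
        ls "$directory"

    The same reasoning applies to the w|w) pattern (the second letter should be W) and to ls pwd, which tries to list a file literally named "pwd" rather than the working directory.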

    Read the article

  • Alternative to web of trust

    - by user23950
    Are there any alternatives to Web of Trust for Chrome and Firefox? I ask because I've found that WOT doesn't always ask whether you want to access a dangerous site. While browsing a while ago for a curriculum vitae template, I saw an image on Google that looked like one. I clicked it, but it brought me to a site with a red mark in WOT, and WOT didn't even bother to warn me first that the site was dangerous. Do you know of any alternatives?

    Read the article

  • Redirecting to a different exe for download based on user agent

    - by Ra
    I own a Linux/Apache site where I host exe files for download. When a user clicks this link to my site (published on another site): http://mysite.com/downloads/file.exe I need to dynamically check their user agent and redirect them to either http://mysite.com/downloads/file-1.exe or http://mysite.com/downloads/file-2.exe It seems to me that I have two options:

    Put a .htaccess file in place stating that .exe files should be treated as scripts, then write a script that checks the user agent and redirects to a real exe placed in another folder, and name this script file.exe.
    Use Apache mod_rewrite to point file.exe to redirect.php.

    Which of these is better? Any other considerations? Thanks.
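
    For reference, the mod_rewrite option can be sketched in a few lines of .htaccess. The MSIE pattern below is only an illustrative user-agent test, not something taken from the question:

        # Sketch: serve a different binary depending on the user agent.
        RewriteEngine On
        # Send, say, Internet Explorer visitors one build...
        RewriteCond %{HTTP_USER_AGENT} MSIE
        RewriteRule ^downloads/file\.exe$ /downloads/file-1.exe [L]
        # ...and everyone else the other build.
        RewriteRule ^downloads/file\.exe$ /downloads/file-2.exe [L]

    This keeps the redirect logic in the server configuration, so no script has to masquerade as an exe.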

    Read the article

  • VMware vSphere cluster design for site redundancy

    - by Stefan Radovanovici
    I have a question about the best design for site redundancy when using vSphere clusters; first, a bit of background about our situation. We are a medium-sized company with two main offices, located in different countries. Our networks are linked by a Layer 2 150Mbps leased line which is currently underused. We have a variety of services running for internal use within the company, some on physical servers and some on existing vSphere clusters. In our department we also run several services (almost all under various forms of Linux) like NTP, syslog, jump servers, monitoring servers and so on. We now have the requirement that those servers be redundant within each location (which they are not at the moment) and also site redundant (which they are to some extent; the servers are duplicated in the 2nd location, with configurations kept in sync via various methods at the application layer).

    There is no SAN available for us, at least not something that we can use at the moment. Cost is also an issue: while we do have some budget available for this, we can't afford to buy SANs for both locations, for example. I looked at the VSA feature and it seems that it could be something for us, but I am unsure how to solve the site-redundancy requirement. At the moment, for testing purposes, I am setting up vSphere 5 with VSA on two ESXi hosts in a lab. I am currently using the Essentials Plus kit with a VSA license, which allows me to build a VSA cluster on up to 3 hosts, together with a vCenter license to manage them. The hosts each have two dual-port network cards and two 600GB drives running in RAID 1. Hardware-wise this will be enough for us to run all the services we need as VMs, and it will provide redundancy within the site.

    At the moment I see only two options for site redundancy:

    build an identical VSA cluster in the second location and keep the various services sync'ed at the application layer (database sync, rsync and so on).
    simply move one of the hosts from the existing cluster to the second location, basically having the VSA cluster span the 150Mbps link between the sites.

    I would very much prefer the second option, but I am unsure how well it'll work, if it can work at all. Technically it should: we can span the needed VLANs across the leased line and have them available in the second location. The advantage would be that we don't need to worry at all about syncing databases and the like. But I have the feeling that the bandwidth will not be enough. I have no way of knowing how much traffic the VSA cluster will generate between the hosts; I realize this will most likely depend on the individual usage of the VMs, but still, I have no idea how VSA replicates data between the ESXi hosts. Are these my only options, or can my goals be achieved in some other way? Is there perhaps a way to have some sort of "cold standby" cluster in the second location where the VMs would be sync'ed once per night from the main location? The idea is that if the first site becomes unavailable, we would be able to bring all those VMs online there; we would be OK with the data being 1 day old. Any answers are appreciated. Best regards, Stefan

    Read the article

  • running asp.net 3.5 and asp.net 2.0 in same site

    - by cori
    We're running ASP.Net 2.0 on our corporate web site, and I'd like to get it up to ASP.Net 3.5 as smoothly as possible. The project/solution architecture in VS 2005 is an ASP.Net 2.0 web project plus a .Net 2.0 data access layer project which is used by the site code. Upon opening the projects in a new VS 2008 solution they seemed to be converted to .Net 3.5 with a minimum of fuss: they built correctly out of the box, deployed successfully, and seem to work just fine, which is exactly what I would expect given that .Net 2.0 and 3.5 share a common runtime. The major difference after the conversion is that the web.config file's referenced dlls are now the 3.5 versions.

    What I would like to do is update the site piecemeal: as I modify a given page, I send the 3.5 version of that page over to our web server rather than updating the whole site at once. In testing on our dev box this approach seems to be working fine. The site code is interacting with the .Net 3.5 data access layer without difficulty, a handful of pages are running 3.5 code-behind (by this I mean that they're running assemblies built in VS 2008; the site is using single-page assemblies for code-behind), the 3.5 web.config is in place, and the bulk of the site is running code-behind assemblies built in VS 2005. Everything looks great. Which makes me worried that I'm missing something. Is this architecture workable, or is there a problem lying in wait for me that I haven't considered?
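
    For anyone comparing configs after the same conversion: the visible web.config change is typically the set of 3.5 assembly references the VS 2008 upgrade wizard adds. A representative fragment (quoted from memory, so treat the exact entries as an assumption to verify against your own converted file):

        <compilation debug="false">
          <assemblies>
            <!-- 3.5 additions made by the VS 2008 conversion -->
            <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
            <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
            <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </assemblies>
        </compilation>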

    Read the article

  • Cannot access a very specific site from my router

    - by DJDarkViper
    This is a problem for me because this site is important to me: it's MY website, and sadly my email is hosted on the site too, so I can't access that either. When I try to access my website while connected to my Linksys E3000 router, these days it simply doesn't go through. When I ping it, it's all Request Timed Out, and when I tracert:

        C:\Users\Kyle>tracert blackjaguarstudios.com

        Tracing route to blackjaguarstudios.com [199.188.204.228]
        over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  CISCO26565 [192.168.1.1]
          2    16 ms    15 ms    11 ms  11.4.64.1
          3    11 ms     9 ms    11 ms  rd1cs-ge1-2-1.ok.shawcable.net [64.59.169.2]
          4    20 ms    21 ms    22 ms  66.163.76.98
          5    37 ms    36 ms    35 ms  rc1nr-tge0-9-2-0.wp.shawcable.net [66.163.77.54]
          6   112 ms    84 ms    85 ms  rc2ch-pos9-0.il.shawcable.net [66.163.76.174]
          7    86 ms    89 ms    90 ms  rc4as-ge12-0-0.vx.shawcable.net [66.163.64.46]
          8    90 ms    84 ms    85 ms  eqix.xe-3-3-0.cr2.iad1.us.nlayer.net [206.223.115.61]
          9    97 ms    97 ms    99 ms  xe-3-3-0.cr1.atl1.us.nlayer.net [69.22.142.105]
         10   128 ms   128 ms   126 ms  ae1-40g.ar1.atl1.us.nlayer.net [69.31.135.130]
         11   101 ms    97 ms    96 ms  as16626.xe-2-0-5-102.ar1.atl1.us.nlayer.net [69.31.135.46]
         12   100 ms    97 ms   197 ms  6509-sc1.abstractdns.com [207.210.114.166]
         13     *        *        *     Request timed out.
         14     *        *        *     Request timed out.
         15     *        *        *     Request timed out.
         16     *        *        *     Request timed out.
         17     *        *        *     Request timed out.
         18     *        *        *     Request timed out.
         19     *        *        *     Request timed out.
         20     *        *        *     Request timed out.
         21     *        *        *     Request timed out.
         22     *        *        *     Request timed out.
         23     *        *        *     Request timed out.
         24     *        *        *     Request timed out.
         25     *        *        *     Request timed out.
         26     *        *        *     Request timed out.
         27     *        *        *     Request timed out.
         28     *        *        *     Request timed out.
         29     *        *        *     Request timed out.
         30     *        *        *     Request timed out.

        Trace complete.

        C:\Users\Kyle>

    Shaw Cable is my ISP. Figuring this had something to do with a setting on the router, I reset the thing back to factory defaults. Nope. So I'm at a bit of a loss what to do here, as NO device (computers, laptops, tablets, phones, PS3/360, etc.) can access my site or its features, so it's not just my computer either. Every other site is just fine. When I connect to my neighbour's router, the site comes up just fine, and she's with Shaw as well. What should I do?!

    Read the article

  • How do I sync a list on a site from a list on a subsite in SharePoint?

    - by mandroid
    Our group has a main site and a subsite dedicated to our projects. Each project has a project lead and a completion % associated with it. We have a list on the project site in which each item links to that project's dedicated subsite within the project site. I want to duplicate the list from the project site onto the main site, so that any changes or additions made to the project site list appear in the main site list as well, including the project lead and completion %. How do I accomplish this? Here is a simple example of the hierarchy:

    Main site
        Project list (derived from the Project list below)
        Project Site
            Project list
            Project 1 Site
            Project 2 Site
            Project 3 Site

    Read the article

  • Handling site not found and page not found with dynamic mass virtual hosting

    - by Rick Moynihan
    I have recently set up mass virtual hosting in Apache so that all we need to do to create a new vhost is create a directory. We're also using wildcard DNS to map all subdomains to the server running our Apache instance. This works excellently; however, I'm now having trouble configuring it to fail over to an appropriate default/error page when the vhost directory does not exist. The problem is complicated by my desire to handle two error conditions:

    vhost not found, i.e. there was no directory found matching the host supplied in the HTTP Host header. I'd like this to display an appropriate site-not-found error page.
    The ordinary 404 page-not-found condition within a vhost.

    Additionally, I have a specialised "api" vhost in its own vhost block. I've tried a number of variations and none exhibit the behaviour I want. Here's what I'm working with right now:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot /var/www/site-not-found
            ServerName sitenotfound.mydomain.org
            ErrorDocument 500 /500.html
            ErrorDocument 404 /500.html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName api.mydomain.org
            DocumentRoot /var/www/vhosts/api.mydomain.org/current
            # other directives, e.g. setting up passenger/rails etc...
        </VirtualHost>

        <VirtualHost *:80>
            # get the server name from the Host: header
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/vhosts/%0/current
            # other directives ... e.g proxy passing to api etc...
            ErrorDocument 404 /404.html
        </VirtualHost>

    My understanding is that the first vhost block is used as the default, so I have it as my catch-all site. Next I have my API vhost, and finally my mass-vhost block. So for a domain that doesn't match the first two ServerNames and has no corresponding directory in /var/www/vhosts/, I'd expect it to fall over to the first vhost. However, with this setup all domains resolve to my default site-not-found. Why is this? By putting the mass-vhost block first, I can get the mass vhosts to resolve properly, but not my site-not-found vhost... and in that case I can't seem to find a way to distinguish between a page-level 404 in a vhost and the case where the VirtualDocumentRoot fails to find a vhost directory (this appears to use the 404 as well). Any help out of this bind is much appreciated!
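
    One pattern that is sometimes used for the missing-directory case is a filesystem test in mod_rewrite inside the mass-vhost block, redirecting to the catch-all site when no document root exists for the requested host. A hedged sketch (the redirect target mirrors the config above; verify the path layout matches yours):

        # Inside the mass-vhost block: if no directory exists for this
        # Host header, bounce the visitor to the site-not-found vhost.
        RewriteEngine On
        RewriteCond /var/www/vhosts/%{HTTP_HOST}/current !-d
        RewriteRule ^ http://sitenotfound.mydomain.org/ [R=302,L]

    Since the rewrite runs before 404 handling, a missing vhost directory and a missing page inside an existing vhost then take different paths.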

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ THE UPDATES AT THE BOTTOM. THANKS! ;)

    Environment info (all Windows):

    2 sites
    30 servers in site #1 (3TB of backup data)
    5 servers in site #2 (1TB of backup data)
    MPLS backbone tunnel connecting site #1 and site #2

    Current backup process:

    Online backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.

    Off-site backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, and the tapes get picked up by our off-site storage company. Obviously we rotate two tape libraries: one is always here and one is always there.

    Requirements:

    Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
    A software-based solution, as hardware options have been too pricey (e.g. SonicWall, Arkeia).
    Agents for Exchange, SharePoint, and SQL.

    Some ideas so far:

    Storage: A DroboPro at each site with an initial 8TB of storage (expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.

    Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.

    Server: Because there is no longer any need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup; because of this, each of those huge files will go across the wire every week, because they change every day.

    And finally, the question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, or better?

    Thanks. Sorry so long.

    UPDATE 1: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. I'd prefer a GUI-based product, and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists, or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again, everyone.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires, so for now I'm going to let the bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

  • Google shows subdomain of main site instead of add-on domain URL

    - by Welsher
    I have my host (Lunarpages) set up with a few add-on domains on my main account. These show up as subdomains of my main account, but they can be reached by using the new domains I've created. So:

    subdomain1.domain.com -- www.mynewsite.com
    subdomain2.domain.com -- www.myothersite.com

    etc. The problem is, mynewsite.com shows up in Google under that domain, but myothersite.com shows up as subdomain2.domain.com. I don't have a clue what might be causing this to happen. If anyone has any advice or can point me in the right direction, I'd really appreciate it! Thanks.

    Read the article

  • Problems with cross forest authentication in SQL Reporting

    - by chunkyb2002
    We're currently running a SQL 2008 R2 cluster with Reporting Services, all for use with System Center Operations Manager 2007 R2 (RU3). Our users are on different domains to the SCOM and SQL servers (we have two domains, as we are in the process of a domain migration). We have no problems at all with users accessing reports via the SCOM console or the web interface if they are on the new domain, which runs at the 2008 R2 functional level. However, users on the old domain (which runs at the 2003 functional level) cannot access reports in SCOM or via the web interface (http://sqlserver/reports). The error we get is:

    An error occurred when invoking the authorization extension. (rsAuthorizationExtensionError) For more information about this error navigate to the report server on the local server machine, or enable remote errors

    Taking the error's advice, we logged on to the SQL server as a user on the old domain (which works fine!) and then tried to authenticate with Reporting via the web interface, which produces this most useful of errors:

    An error occurred when invoking the authorization extension. (rsAuthorizationExtensionError) The creator of this fault did not specify a Reason.

    Things we've tried:

    Recreating the trust between the domains
    Ensuring the SQL Reporting service account was a member of the Windows Authorization Access Group in the 2003 domain
    Adding users on the 2003 domain explicitly to the Reporting Users group on the SQL server

    Has anyone come across this issue before, perhaps in a different scenario? If so, how was it resolved? Thanks in advance for any help.

    Read the article

  • Creating site with wix.com or weebly.com [on hold]

    - by Edgar
    I have decided to create a web page, and for that purpose I found out that I can use the wix.com portal. My knowledge of HTML and CSS is at a basic level. So I want to ask: what are the pros and cons of creating web pages using Wix compared with making them on your own (writing the code yourself)? One of my questions is whether I can put custom advertisements on my page. I would also appreciate suggestions on which portal is better, wix.com or weebly.com, or whether for a good website I should do the coding myself. Finally, it would be nice to get any suggestions in this field.

    Read the article

  • Move exchange mailboxes cross forest

    - by Aceth
    Having a hard time migrating user mailboxes between two forests. I've set up ADMT 3.2, there are no DNS issues, the domains are fully routable to each other, etc. I have come to migrate the user mailboxes, and the Exchange shell just comes back with:

        [PS] C:\>New-MoveRequest -Identity "username" -TargetDatabase "maildb" -RemoteGlobalCatalog 'gdc.doman.local' -RemoteCredential (Get-Credential) -TargetDeliveryDomain 'sourcedomain.local'

        Parameter set cannot be resolved using the specified named parameters.
            + CategoryInfo : InvalidArgument: (:) [New-MoveRequest], ParameterBindingException
            + FullyQualifiedErrorId : AmbiguousParameterSet,New-MoveRequest

    We are running a mixed environment (Windows Server 2003 and up, with Exchange 2003 and Exchange 2010 on different servers, obviously) as the source domain, and all Server 2008 R2 servers in the target domain with only one Exchange 2010 server. We ran this command on the Exchange 2010 server in the target domain and, when asked, gave the credentials of an admin in the source domain in the format sourcedomain\source_administrator. Any help would be greatly appreciated. Thanks, Rhys
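
    A note for anyone seeing the same "ambiguous parameter set" error: in Exchange 2010, -RemoteGlobalCatalog belongs to the RemoteLegacy parameter set (used when pulling from a pre-2010 source forest without an MRS proxy), so the -RemoteLegacy switch usually has to be named explicitly. A hedged sketch, reusing the question's own values:

        # Cross-forest pull move from a legacy (pre-2010) source forest;
        # -RemoteLegacy selects the parameter set that -RemoteGlobalCatalog
        # belongs to. Verify against your Exchange 2010 SP level.
        New-MoveRequest -Identity "username" -RemoteLegacy -TargetDatabase "maildb" -RemoteGlobalCatalog 'gdc.doman.local' -RemoteCredential (Get-Credential) -TargetDeliveryDomain 'sourcedomain.local'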

    Read the article

  • A whole site for reviewing of SQL Server MVP Deep Dives

    - by Rob Farley
    This book just keeps amazing me, not only as I read through some chapters for the first time and others for the second and third times, but also as I read reviews of it written by other people. The guys over at http://sqlperspectives.wordpress.com are a prime example. They've been going through each chapter, each writing a review on it, and often getting a guest blogger to write something as well – and they're clearly getting a lot out of this brilliant book. Back when I first heard about them doing this, I offered to be involved, and recently did an interview with them about my chapters (chapter seven and chapter forty). That interview can be found at http://sqlperspectives.wordpress.com/2010/03/20/interview-with-rob-farley/ – it covers how I got into databases and how I think the database roles in the IT industry are changing. If you don't have a copy of SQL Server MVP Deep Dives yet, why not get one from http://www.sqlservermvpdeepdives.com (or persuade your local bookstore to get some copies in) and read through chapters with these guys? Treat it like a book club, discussing each chapter with others (guest blogging, perhaps?), and you'll probably end up getting even more out of it. Remember that the proceeds of the book go to charity instead of the authors (we get nothing), so you don't need to feel you're splashing out on a treat for yourself. Think of the kids helped by War Child instead.

    Read the article

  • How to mount a LOFS in Solaris that doesn’t cross mountpoints

    - by jcea
    I need to access my "root" ZFS dataset to delete a file under /var, but /var is overlaid by another ZFS dataset. Since these are system datasets, I can't umount them while the machine is running, and I want to avoid rebooting the system in "failsafe" mode, since this is a production machine. Theoretically ZFS would refuse to mount the /var dataset over the underlying /var, because it is not empty; but it works, possibly because they are system datasets mounted early in the boot process. Having the underlying /var non-empty prevents me from creating an ABE (Alternate Boot Environment), so patching is risky and I can't upgrade the system using Live Upgrade. The machine is remote. I have an IP KVM, but I'd still rather avoid booting this machine into failsafe mode if I can. I know there is a file under /var because I can snapshot the root dataset and check; but snapshots are read-only, so I can't get rid of the file that way. I tried "mkdir /tmp/zzz; mount -F lofs / /tmp/zzz", but when I go to /tmp/zzz/var I see the /var dataset, not the underlying root dataset. That is, the LOFS is crossing mountpoints. I would usually like that behaviour, but not this time! Any suggestion, besides rebooting the machine into failsafe mode and messing with it through the IP KVM?

    Read the article

  • 5 Important On-Site SEO Tweaks

    The first part of any successful search engine optimization (SEO) campaign is to fully optimize every part of your website to achieve a high keyword density for your chosen keywords. This article focuses on five important elements that should be optimized on every page of your website.

    Read the article
