Search Results

Search found 25514 results on 1021 pages for 'site blocking'.

Page 10/1021 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Can you cross-site ping another site using C# or JS/Ajax?

    - by Josh Harris
    In our web application I am trying to ping a 3rd-party site to see if it is up before redirecting our customers to it. So far I have not seen a way to do this other than from a desktop app or a system console. Is this possible? I have heard that there was an image trick in classic ASP. Currently we are using .NET MVC with JavaScript. Thank you, Josh

    Read the article

  • Modify site column(type text) to allow more than 255 characters

    - by alienavatar
    Hi friends, I have created a custom site column of type text and included it in one of my content types, but it only allows 255 characters. How can I extend that to, say, 1024 characters? I did this before by changing something in the web.config file, but I forgot how I did it. Can anyone please tell me how to achieve this? Thanks in advance.

    Read the article

  • using full domains in a multi-site application set up

    - by mike in africa
    Hi, I'm building a multi-site application where a client must be able to use his own domain (as opposed to just a subdomain). I'd like to know the different ways to go about it, and what configuration is needed on both ends when/if the client wishes to handle email hosting externally. Any reference to lxadmin/hypervm would be helpful too. Thanks. Edit: I'm running Apache; no SSL requirement.
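
    A minimal Apache sketch of the usual approach, with placeholder names and paths: each client domain gets its own vhost (or a ServerAlias on a shared one) pointing at the application. On the client's end, they add an A record (or CNAME) for the domain at their registrar; if they host email externally, their MX records simply stay pointed at their mail provider and are unaffected.

        # Placeholder domain and path; one such vhost per client domain.
        <VirtualHost *:80>
            ServerName clientdomain.com
            ServerAlias www.clientdomain.com
            DocumentRoot /var/www/multisite-app
        </VirtualHost>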

    Read the article

  • How do I generate site map from MySQL database

    - by mathew
    Hi all, I am new to PHP and working on a project which needs to generate sitemaps from URLs stored in a MySQL database. How can I do this? Does anyone have an idea? The sitemaps can have a .php extension too; the only requirement is that a list of sites is displayed on each page.
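
    A minimal PHP sketch of one way to do it, assuming a hypothetical urls table with a url column holding absolute URLs (table, column, and credentials are placeholders; adjust to your schema):

        <?php
        // Connect and stream a sitemap straight from the database.
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

        header('Content-Type: application/xml; charset=utf-8');
        echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
        echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";

        foreach ($pdo->query('SELECT url FROM urls') as $row) {
            echo '  <url><loc>' . htmlspecialchars($row['url']) . '</loc></url>' . "\n";
        }

        echo '</urlset>';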

    Read the article

  • Migrating a Large amount of data from old publishing site to new site

    - by tommizzle
    Hi, I am currently in the process of creating a new news/publishing site on the Movable Type platform. There are around 20 or so sites with 20,000+ rows of data to be moved/aggregated to ~8 sites (we have a number of location-specific sites and are going to aggregate the content from these into one single site for each niche). We have discussed how to do this and came to the conclusion that it would probably be better to hire somebody to do it (I could probably do it, but I'm limited on time and am sure that a specialist would be more efficient). So my questions to you guys are: 1) What kind of skill set should we look for in an applicant? 2) There will be a large amount of input from our side... is getting somebody to work remotely out of the question? 3) How long would a task like this traditionally take (I know this question is very subjective, but an estimation would be awesome)? 4) Do you have any recommendations for firms who would be able to take on a large task like this? Thanks in advance, Tom

    Read the article

  • UFW blocking random packets on 443

    - by s2jcpete
    All, I have UFW set up to allow traffic on port 443. It works as expected, though I see a large number of UFW BLOCK log entries.

        To         Action    From
        --         ------    ----
        80         ALLOW     Anywhere
        443        ALLOW     Anywhere
        22222      ALLOW     Anywhere
        80         ALLOW     Anywhere (v6)
        443        ALLOW     Anywhere (v6)
        22222      ALLOW     Anywhere (v6)

    However, in my syslog file I see this:

        [UFW BLOCK] IN=eth0 OUT= MAC=XXX SRC=<foreignip> DST=<serverip> LEN=40 TOS=0x00 PREC=0x00 TTL=116 ID=22025 DF PROTO=TCP SPT=49622 DPT=443 WINDOW=0 RES=0x00 ACK RST URGP=0

    About 30 or so seconds later, pound (which I'm using for SSL decryption and port redirection) throws a "connection timed out" message. I'm assuming this is because UFW is blocking the packet, but I'm at a loss for an explanation. Could the packet be malformed or something? Is this normal?

    Edit - I have since changed /etc/default/ufw and set IPV6=no, so the v6 rules are no longer in the mix. The server is still showing the block / connection-timed-out behavior, though. The new ufw status output is:

        Status: active
        Logging: on (low)
        Default: deny (incoming), allow (outgoing)
        New profiles: skip

        To         Action     From
        --         ------     ----
        80         ALLOW IN   Anywhere
        443        ALLOW IN   Anywhere
        22222      ALLOW IN   Anywhere

    Read the article

  • Jquery getJSON Not Working Cross Site

    - by CJ
    I have a piece of javascript that grabs JSON data. When executed locally everything seems to work fine. However, when I try accessing it from a different site, it doesn't work. Here's the script.

        $(function(){
            var aT = new AjaxTest();
            aT.getJson();
        });

        var AjaxTest = function() {
            this.ajaxUrl = "http://mydeveloperpage.com/sandbox/ajax_json_test/client_reciever.php";
            this.getJson = function(){
                $.getJSON(this.ajaxUrl, function(data){
                    $.each(data, function(i, piece){
                        alert(piece);
                    });
                });
            }
        }

    You can find a copy of the exact same file at "http://mydeveloperpage.com/sandbox/ajax_json_test/". Any help would be greatly appreciated. Thanks!
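
    The usual culprit here is the browser's same-origin policy: $.getJSON to another domain is silently blocked unless the endpoint supports JSONP or CORS. Below is a sketch of a JSONP/CORS-capable version of the PHP endpoint (the payload is a placeholder, not the original data); on the client side, appending callback=? to the URL makes jQuery switch to JSONP automatically.

        <?php
        // Hypothetical client_reciever.php that serves JSONP or plain JSON.
        $data = json_encode(array('alpha', 'beta', 'gamma')); // sample payload

        if (isset($_GET['callback']) &&
            preg_match('/^[a-zA-Z_][a-zA-Z0-9_]*$/', $_GET['callback'])) {
            // JSONP: wrap the JSON in the jQuery-supplied callback.
            header('Content-Type: application/javascript');
            echo $_GET['callback'] . '(' . $data . ');';
        } else {
            // Plain JSON with a permissive CORS header as an alternative.
            header('Content-Type: application/json');
            header('Access-Control-Allow-Origin: *');
            echo $data;
        }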

    Read the article

  • CAPTCHA blocking for my scraping script?

    - by Surabhil Sergy
    I am working on a scraping project which involves getting web data and parsing it for further use. I have been using PHP and cURL to write scraping scripts that crawl web data, and I use either PHP DOM or the Simple HTML DOM Parser library for these kinds of projects.

    On a recent project I encountered some challenges. Initially I found the target website had blocked my server IP, such that the server could not make any successful requests to the site. Understanding these issues to be common, I bought a set of private proxies and tried to make the request calls through them. Though this got a successful response, I noticed the script gets some kind of block after 2-3 consecutive requests. On printing and checking the response I could see a pop-up asking for CAPTCHA validation. There are no CAPTCHA characters to enter, and it also shows the error "input error: invalid referrer". On examining the source I could see some Google reCAPTCHA scripts within.

    I'm stuck at this point and not able to execute my script. The script is used for gathering data and needs to go through a large number of pages on the site periodically, but in the current scenario I cannot proceed. I can see there are options to overcome these CAPTCHA issues, and scraping these kinds of sites is common. I have been checking my script's performance and responses over the last two months: during the first month I was able to execute a very large number of requests from a single IP and get results; later I got an IP block and used private proxies, which got me some results; and now I am facing the CAPTCHA trouble.

    I would appreciate any help or suggestions in this regard. (Often with this kind of question the first comment is, "Have you asked for prior permission from the target?" I haven't, but I know there are many sites doing this to get details out of other sites, and target sites may not often grant access. I respect legality and scraping etiquette, but I would like to know where I'm stuck and how I could overcome it!) I can provide any supporting information if needed.
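
    A minimal sketch of the mitigations described above: rotate the private proxies, send a plausible Referer and User-Agent with cURL, and detect the CAPTCHA interstitial so the script backs off rather than hammering the site. The proxy addresses and target URL are placeholders, and none of this bypasses reCAPTCHA itself:

        <?php
        // Placeholder proxies and target URL.
        $proxies = array('1.2.3.4:8080', '5.6.7.8:8080');
        $url = 'http://example.com/page/1';

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_PROXY, $proxies[array_rand($proxies)]);
        curl_setopt($ch, CURLOPT_REFERER, 'http://example.com/'); // avoids "invalid referrer"
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; MyScraper/1.0)');
        $html = curl_exec($ch);
        curl_close($ch);

        if ($html === false || stripos($html, 'recaptcha') !== false) {
            sleep(60); // back off, then retry later through a different proxy
        }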

    Read the article

  • I was adding a WordPress plugin when I received the message "couldn't find constant VHOST", and now my site is gone

    - by jackie
    Can anyone help me get my site back? I was adding a sitemap plugin in WordPress and received these messages:

        Warning: constant() [function.constant]: Couldn't find constant VHOST in /home/content/xxxxxxxxxxx/html/wp-content/plugins/wordpress-mu-domain-mapping/domain_mapping.php on line 30
        Fatal error: Call to undefined function is_site_admin() in /home/content/xxxxxxxxxxxxxxxxx/html/wp-content/plugins/wordpress-mu-domain-mapping/domain_mapping.php on line 33

    Now I have no site. Can it be retrieved? Any advice would be greatly appreciated. Jackie
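
    A common recovery step, sketched here with the plugin path taken from the error message: deactivate the broken plugin by renaming its folder over FTP/SSH. WordPress then skips loading it, and the site usually comes back immediately:

        # Rename the plugin folder so WordPress can no longer find it.
        mv wp-content/plugins/wordpress-mu-domain-mapping \
           wp-content/plugins/wordpress-mu-domain-mapping.disabled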

    Read the article

  • How to remove duplicate illegal site in apache configuration?

    - by zladuric
    I recently found an unfamiliar referrer in the Apache log on my site. I opened it out of curiosity, since my site is live but I only just started development, so I didn't expect it. The site was a pure copy of mine, and after investigation I saw that it resolves to my IP. I'm on Ubuntu 12.04, Apache 2, Drupal 7; I don't know what other info I can provide. My question is: how can I tell Apache that it should not serve this site? Thanks. Edit: I forgot to say that I had some bots register on my fresh Drupal installation. Also, my domain is a TLD; this fake domain is a third-level one (i.e. sub.domain.de).
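
    One common fix, sketched below with placeholder names: make the first VirtualHost a catch-all default, so any request whose Host header you have not explicitly configured gets a 404 instead of falling through to the Drupal site:

        # Apache serves the first matching vhost to any unrecognized Host header.
        <VirtualHost *:80>
            ServerName default.invalid
            Redirect 404 /
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/drupal
        </VirtualHost>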

    Read the article

  • MODX based site has been compromised, and tagged by Google as malware

    - by JAG2007
    I'm the webmaster (I inherited the site from the developer) for a site called kenbrook.org. The site is currently being tagged as malware-infected by Google, which gives the following details: http://www.google.com/safebrowsing/diagnostic?site=kenbrook.org Sadly, this is the second time it has occurred. I originally posted the issue on Stack Overflow when it happened last year, shortly after I inherited the site. At the time the fix was a simple removal of a few lines of code from a .js file, but I never did discover or resolve the vulnerability. The site is built on MODX, which neither I, nor the original builder, have any familiarity with. I've tried to check for security updates from MODX, but updating that software has been a real pain as well. Sooo... what's my next step to getting this whole issue resolved? Or steps?

    Read the article

  • Transition to new site

    - by James Hill
    I'm almost finished rewriting the website for a non-profit organization. The existing site receives ~5,000 a month. The new site is being written in ASP.NET and the existing site is PHP. The current hosting provider does not support .NET hosting, so I'll be switching providers. My question revolves around the transition from the old site to the new one. I would really like to get the new site up at the new hosting provider and test it thoroughly before changing the DNS records for the domain. Question: how can I put the new site up, test it, and make any necessary changes/additions before updating the domain's DNS to point to the new IP, without Google indexing the content? Also, what SEO repercussions should I be aware of when making such a drastic change to the content that exists under the domain name?
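
    A standard way to do exactly this, assuming a placeholder IP and domain: point the domain at the new server in your local hosts file. Only machines with that entry see the new site; public DNS (and therefore Google) keeps resolving to the old host until you flip the records:

        # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) on your own machine
        203.0.113.10    example.org www.example.org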

    Read the article

  • Recommend an open source CMS for single page web site

    - by RedMan
    Hi, I want to create a single-page web site like http://kiskolabs.com/ or http://www.carat.se to display my portfolio. I want to add new products after launching the site without having to edit the entire site. I've looked at OpenCart (too much for a single-page site), Magento (more for ecommerce), and WordPress (couldn't find open source / free templates which I could start from). Can you suggest a CMS which will support the creation of a single-page site and allow insertion of new products without having to edit the entire page? I would prefer a CMS which also has open source / free templates which I can tweak for my use. I can do PHP and MySQL, XML. If it is an easier option I can do PSD to site (but I don't know much about this at all).

    Read the article

  • Creating SharePoint sites from xml using Powershell

    - by Norgean
    It is frequently useful to create / delete web applications in a development environment. If you need to create a structure, this can quickly become tedious. Enter Powershell, xml and recursive functions. Create the structure in xml. Something like:

        <Sites>
            <Site Name="Test 1" Url="Test1" />
            <Site Name="Test 2" Url="Test2" >
                <Site Name="Test 2 1" Url="Test21" >
                    <Site Name="Test 2 1 1" Url="Test211" />
                    <Site Name="Test 2 1 2" Url="Test212" />
                </Site>
            </Site>
            <Site Name="Test 3" Url="Test3" >
                <Site Name="Test 3 1" Url="Test31" />
                <Site Name="Test 3 2" Url="Test32" />
                <Site Name="Test 3 3" Url="Test33" >
                    <Site Name="Test 3 3 1" Url="Test331" />
                    <Site Name="Test 3 3 2" Url="Test332" />
                </Site>
                <Site Name="Test 3 4" Url="Test34" />
            </Site>
        </Sites>

    Read this structure in Powershell, and recursively create the sites. Oh, and have cool progress dialogs, too.

        $snap = Get-PSSnapin | Where-Object { $_.Name -eq "Microsoft.SharePoint.Powershell" }
        if ($snap -eq $null)
        {
            Add-PSSnapin "Microsoft.SharePoint.Powershell"
        }

        function CreateSites($baseUrl, $sites, [int]$progressid)
        {
            $sitecount = $sites.ChildNodes.Count
            $counter = 0
            foreach ($site in $sites.Site)
            {
                Write-Progress -ID $progressid -Activity "Creating sites" -status "Creating $($site.Name)" -percentComplete ($counter / $sitecount * 100)
                $counter = $counter + 1
                Write-Host "Creating $($site.Name) $($baseUrl)/$($site.Url)"
                New-SPWeb -Url "$($baseUrl)/$($site.Url)" -AddToQuickLaunch:$false -AddToTopNav:$false -Confirm:$false -Name "$($site.Name)" -Template "STS#0" -UseParentTopNav:$true
                if ($site.ChildNodes.Count -gt 0)
                {
                    CreateSites "$($baseUrl)/$($site.Url)" $site ($progressid + 1)
                }
                Write-Progress -ID $progressid -Activity "Creating sites" -status "Creating $($site.Name)" -Completed
            }
        }

        # read an xml file
        $xml = [xml](Get-Content "C:\Projects\Powershell\sites.xml")
        $xml.PreserveWhitespace = $false
        CreateSites "http://$($env:computername)" $xml.Sites 1

    Easy! Sensible real life implementations will also include templateid in the xml, will check for existence of a site before creating it, etc.
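
    As a sketch of that existence check (same PowerShell as the post, but not from the original): Get-SPWeb returns $null under -ErrorAction SilentlyContinue when the web is absent, so creation can be skipped when the site already exists.

        $existing = Get-SPWeb "$($baseUrl)/$($site.Url)" -ErrorAction SilentlyContinue
        if ($existing -eq $null)
        {
            New-SPWeb -Url "$($baseUrl)/$($site.Url)" -Name "$($site.Name)" -Template "STS#0"
        }
        else
        {
            # Already there; release the SPWeb object instead of creating it.
            $existing.Dispose()
        }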

    Read the article

  • Geolocation Blocking and SEO

    - by Mahesh
    I'm in the process of blocking a particular geolocation from accessing my site, but I am not sure if this is going to raise any flags on my website. Basically, I'm getting lots of spam from a particular state, and there are some webmasters there copying my content without any attribution or backlink. I feel that is enough, and I wish to save my bandwidth in that state or geolocation. So my question is: how do I block a particular geolocation? Do you have any suggestions for PHP scripts that help block a particular geolocation? And another question: are there any SEO disadvantages?
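
    A minimal PHP sketch, assuming the PECL geoip extension with a region-capable database; the country and region codes here are placeholders:

        <?php
        // geoip_record_by_name() returns an array with 'country_code' and
        // 'region' on success; @ suppresses the notice for unknown addresses.
        $record = @geoip_record_by_name($_SERVER['REMOTE_ADDR']);

        if ($record && $record['country_code'] === 'US' && $record['region'] === 'TX') {
            header('HTTP/1.1 403 Forbidden');
            exit('Not available in your region.');
        }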

    Read the article

  • URL Redirection in Multisite wordpress

    - by Toqeer
    We have a multi-site WordPress installation containing more than 50 blogs/sub-sites. The base URL of the WordPress site is www.example.com/base-site/ and we have other sub-sites under it like www.example.com/base-site/site1, site2, etc. My question is how to redirect the main site to one of the sub-sites; a simple Redirect 301 is not working. I tried some mod_rewrite solutions, but they are not working either. A solution is required to redirect www.example.com/base-site/ to www.example.com/base-site/site1. Solutions used so far that have not worked for me: solution1, solution2
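
    One mod_rewrite sketch for the base site's .htaccess (in multisite, placement matters: it has to come before WordPress's own rewrite rules). It redirects only the exact /base-site/ request, leaving every other sub-site untouched:

        RewriteEngine On
        RewriteCond %{REQUEST_URI} ^/base-site/?$
        RewriteRule ^ /base-site/site1/ [R=301,L]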

    Read the article

  • Subdomain not hitting new site?

    - by Abe Miessler
    I have an existing site at my.site.com and I would like to set up a staging subdomain (staging.my.site.com). I have my DNS set up and directing staging.my.site.com to my server, but for some reason the web site I created for it in IIS is not being hit. Instead, when I go to staging.my.site.com it takes me to the original site. The site I created in IIS has a home directory that is totally different from the regular site's home directory. I have added one host header with the following information: IP address: (All Unassigned); TCP port: 80; Host header value: staging.my.site.com. I was under the impression that with the setup described above, hitting staging.my.site.com in a web browser would bring up the staging site, but it does not. Can anyone see what I am doing wrong? UPDATE: One thing I noticed is that my A record maps to an IP address (let's say 1.1.1.1 for this example). The host headers for the main site (my.site.com) also have 1.1.1.1 as an entry. Is this normal? Could it cause the problem I am describing?

    Read the article

  • SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions

    - by pinaldave
    To learn any technology and move to a more advanced level, it is very important to understand the fundamentals of the subject first. Today, we will be talking about something that was introduced a long time ago but is still not properly explored: the Snapshot isolation level. Snapshot Isolation was introduced in SQL Server 2005. However, the reality is that there are still many software shops using SQL Server 2000, which cannot offer Snapshot Isolation. Many software shops have upgraded to a later version of SQL Server, but their developers have not spent enough time bringing themselves up to date with the latest technology. "It works!" is a very common answer when they are asked about utilizing the new technology instead of backward-compatibility commands. In a recent consultation project, I had the same experience: the developers had "heard about it" but had no idea about Snapshot Isolation. They were thinking it is the same as Snapshot Replication – which is plain wrong. This is the demo I created for them, included here.

    In Snapshot Isolation, the updated row versions for each transaction are maintained in TempDB. Once a transaction has begun, it ignores all the newer rows inserted or updated in the table. Let us examine this simple demonstration. This transaction works on an optimistic concurrency model: since a reading transaction does not block a writing transaction, and the writing transaction does not block the reading transaction, blocking is reduced.

    First, enable the database to work with Snapshot Isolation, and check the existing values in the HumanResources.Shift table:

        ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
        GO
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    Now we will need two different sessions for this example. In the first session, set the transaction isolation level to snapshot, begin the transaction, and update the column ModifiedDate to today's date:

        -- Session 1
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        UPDATE HumanResources.Shift
        SET ModifiedDate = GETDATE()
        GO

    Please note that we have not yet committed the transaction. Now open the second session and run the following SELECT statement, then check the values in the table. Pay attention to setting the isolation level of the second session to Snapshot as well, and to starting a transaction with BEGIN TRAN:

        -- Session 2
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    You will notice that the values in the table are still the original values; they have not been modified yet. Once again, go back to session 1 and commit the transaction:

        -- Session 1
        COMMIT

    After that, go back to session 2 and look at the values in the table:

        -- Session 2
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    You will notice that the values are still not changed; they are the same old values that were there at the beginning of the session. Now let us commit the transaction in session 2. Once committed, run the same SELECT statement once more and see the result:

        -- Session 2
        COMMIT
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    You will notice that it now reflects the newly updated value. I hope this example is clear enough to give you a good idea of how the Snapshot isolation level works.

    There is much more to write about an additional level, READ_COMMITTED_SNAPSHOT, which we will discuss in another post soon. If you use this transaction isolation level in your production database, I would appreciate your comments about its performance on your servers. I have included the complete script used in this example for your quick reference.

        ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
        GO
        SELECT ModifiedDate FROM HumanResources.Shift
        GO
        -- Session 1
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        UPDATE HumanResources.Shift
        SET ModifiedDate = GETDATE()
        GO
        -- Session 2
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        SELECT ModifiedDate FROM HumanResources.Shift
        GO
        -- Session 1
        COMMIT
        -- Session 2
        SELECT ModifiedDate FROM HumanResources.Shift
        GO
        -- Session 2
        COMMIT
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Transaction Isolation

    Read the article

  • A mechanism to include site title in every page, but not in <title> element

    - by Saeed Neamati
    Each site can have a name, for example "site x". Each page can also have a name (or a title) that should appear in the <title> tag in the header. However, many websites out there use the combination "site name - page name" as the value of the <title> tag, which I find a little short of semantic. On the other hand, if you only include the page title in the <title> tag, search engines won't find your site by its name. For example, if your site's name is Thought Results and you don't include it in page titles, then a search for Thought Results won't find your site in the SERPs. Thus I'm searching for a mechanism that both includes the site title (not the page title) on every page, and includes only the page title in the <title> tag, to get more semantic results. Is there any way to achieve this?
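
    One possible mechanism, sketched below: keep <title> purely the page name, and expose the site name through separate metadata that search engines read, such as the Open Graph site_name property (the names here are examples):

        <title>Page Name</title>
        <meta property="og:site_name" content="Thought Results">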

    Read the article

  • Subdomain takes the position of main site in Google search result

    - by user3578586
    We have one domain and one sub-domain. Until last week, both of them appeared on the first page of Google search results for a very important keyword. Unfortunately, Google then dropped our main domain from the search results; our main site had been on the first page for 5 years! About one year ago we built this sub-domain; it was simply redirected to one of the pages of the main domain. To solve the problem, we uploaded an independent site to the sub-domain, because we guessed that Google thought it was the main page of our site. But the problem was not solved. What should we do? Our main site offers our main services, and we want it to be on the first page. Shut down the sub-domain? Redirect it to the main site? Put a link to our main site on the sub-domain? (About one year ago we put a link to this sub-domain on our main site; Google indexed it and continuously brings it to the top.) Change something in robots.txt? ....
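
    If the sub-domain still duplicates content from the main site, one option (a sketch; the domain is a placeholder) is a canonical link in the sub-domain's pages, telling Google which URL you prefer to have ranked:

        <link rel="canonical" href="http://example.com/">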

    Read the article
