Search Results

Search found 30217 results on 1209 pages for 'website performance'.


  • Embedding a CMS in an MVC Web App

    - by Mr Snuffle
    I'm working on a website for searching for businesses and then displaying a listing page. We've been toying with the idea of letting clients manage their listing page using an external CMS. I'm not sure how often this is done, or if it's even best practice. Ideally, we want to be able to set up a listing on our website, then give the clients access to an external CMS where they can manage their listing page. We then want to embed this custom page within our website, possibly using an iframe (which will come along with its own set of complications). We'd like this integration to be as seamless as possible.

    I'd personally prefer it if we could directly inject the HTML into our own page and bypass an iframe altogether, but I don't know of any CMS hosting services that provide an interface for such a thing. We've experimented a little with Squarespace, and we can get a fairly clean version of someone's page which would be well suited for an iframe. I'm wondering if anyone else has looked at integrating an externally hosted CMS into a website (in this case, we're using ASP.NET MVC). We'd also want to automate the creation of accounts on this external CMS, so when a user signs up we could just point them to the website with some login details. I have no idea if anyone offers a service like this, but any recommendations would be greatly appreciated. We could host a service ourselves too, but the aim is to have an external system that clients can use to manage their pages. Cheers, James

    Read the article

  • Monitor SQL Server Replication Jobs

    - by Yaniv Etrogi
    The replication infrastructure in SQL Server is implemented using SQL Server Agent to execute the various components involved, in the form of a job (e.g. LogReader agent job, Distribution agent job, Merge agent job). SQL Server jobs execute a binary executable file which is basically C++ code. You can download all the scripts for this article here.

    SQL Server Job Schedules

    By default, each job has only one schedule, which is set to start automatically when SQL Server Agent starts. This schedule ensures that whenever the SQL Server Agent service is started, all the replication components are also put into action. This is OK and makes sense, but there is one problem with this default configuration that needs improvement: if for any reason one of the components fails, it remains down in a stopped state. Unless you monitor the status of each component, you will typically learn about such a failure from a customer complaint, as a result of missing data or data that is not up to date at the subscriber level. Furthermore, having any of these components in a stopped state can lead to more severe problems if not corrected within a short time.

    The action required to improve on these default settings is in fact very simple: adding a second schedule, set as a daily recurring schedule that runs every 1 minute, does the trick. SQL Server Agent's scheduler module knows how to handle overlapping schedules, so if the job is already being executed by another schedule it will not get executed again at the same time. So, in the event of a failure, the failed job remains down for at most 60 seconds. Many DBAs are not aware of this capability and so search for more complex solutions, such as an additional dedicated job running external code in VBScript or another scripting language that detects replication jobs in a stopped state and starts them - but there is no need to seek such external solutions when what is needed can be accomplished by T-SQL code.

    SQL Server Jobs Status

    In addition to the 1-minute schedule, we also want to ensure that the key components of the replication are enabled, so I search for those components by their category and set their status to enabled in case they are disabled, by executing the stored procedure MonitorEnableReplicationAgents. The job categories that I typically handle are listed below, but you may want to extend this; here is the query to return all jobs along with their category:

        SELECT category_id, name FROM msdb.dbo.syscategories ORDER BY category_id;

    Distribution Cleanup
    LogReader Agent
    Distribution Agent

    Snapshot Agent Jobs

    By default, when a publication is created, a snapshot agent job also gets created with a daily schedule. I see more organizations where the snapshot agent job does not need to be executed automatically by the SQL Server Agent scheduler than organizations that need a new snapshot generated automatically. To assure this setting is in place, I created the stored procedure MonitorSnapshotAgentsSchedules, which disables snapshot agent jobs and also deletes the job schedule. It is worth mentioning that when the publication property immediate_sync is turned off, the snapshot files are not created when the Snapshot agent is executed by the job. You control this property when the publication is created, with a parameter called @immediate_sync passed to sp_addpublication; for an existing publication you can use sp_changepublication.

    Implementation

    The scripts assume the existence of a database named PerfDB.
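
    For reference, the 1-minute schedule described above can also be created in T-SQL rather than through the SSMS UI. Below is a minimal sketch using msdb.dbo.sp_add_jobschedule; the job name is a placeholder for whichever replication agent job you are targeting, and the schedule name matches the 1_Minute schedule the verification script below checks for:

        EXEC msdb.dbo.sp_add_jobschedule
            @job_name = N'MyDistributionAgentJob', -- placeholder: your agent job's name
            @name = N'1_Minute',
            @enabled = 1,
            @freq_type = 4,            -- daily
            @freq_interval = 1,        -- every day
            @freq_subday_type = 4,     -- sub-day unit: minutes
            @freq_subday_interval = 1, -- every 1 minute
            @active_start_time = 0;    -- starting from midnight
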
    Steps:

    1. Run the scripts to create the stored procedures in the PerfDB database.
    2. Create a job that executes the stored procedures every hour.

        -- Verify that the 1_Minute schedule exists.
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 13; /* LogReader */

        -- Verify all replication agents are enabled.
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 13; /* LogReader */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 11; /* Distribution clean up */

        -- Verify that Snapshot agents are disabled and have no schedule.
        EXEC PerfDB.dbo.MonitorSnapshotAgentsSchedules;

    Want to read more about replication? Check out the replication posts on my blog.

    Read the article

  • [solved] PHP-called hyperlink stopped showing when CSS table implemented

    - by Luke
    EDIT: Solved - it was not Flutter's tag stripping; it should work as advertised.

    I'm using Flutter (which creates custom fields) in WordPress to display profile information entered as a post. Before I implemented the CSS tables, the link showed up and was clickable. Now I get nothing returned, even when I try to call the link outside the table. If you know anything about this, here's my code in the index.php file, and I remain available for any questions.

        <?php if (in_category('Profile')) { ?>
        <table id="mytable" cellspacing="0">
        -snip-
        <tr>
          <th class="row1" valign="top">Website </th>
          <td>Link: <a href="<?php echo get_post_meta($post->ID, 'FrWebsite', $single=true) ?>">
            <?php echo get_post_meta($post->ID, 'FrWebsite', $single=true) ?></a></td>
        </tr>
        -snip-
        </table>

    Thanks, L

    Edit: @Josh - there is a foreach looping construct in the table and it is reading and displaying the code correctly; I see what you're getting at now:

        <tr>
          <th class="row2" valign="top">Specialities </th>
          <td class="alt" valign="top"><?php
            $my_array = get('Expertise');
            $output = "";
            foreach ($my_array as $check) {
                $output .= "<span>$check</span><br/> ";
            }
            echo $output; ?></td>
        </tr>

    Edit - @Josh - here's the old code as far as I can remember it. There was no major difference, just a <td> tag where there now stands a <th>, there wasn't the class="", there was no "Link:", and FrWebsite was called Website - but it still didn't work when called Website, so I changed it to see if that was the error.

        <tr>
          <td width="200" valign="top">Website </td>
          <td><a href="<?php echo get_post_meta($post->ID, 'Website', $single=true) ?>"><?php echo get_post_meta($post->ID, 'Website', $single=true) ?></a></td>
        </tr>

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem

    When using transactional replication to replicate data in a one-way topology from a publisher to read-only subscriber(s), there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two-way topology - between two or more servers - there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation error, due to the replicated row being inserted having a value for the identity column which already exists at the destination database.

    Solution

    There are two ways to address this situation: assign a range of identity values to each server, or work with parallel identity values. The first method requires some maintenance while the second method does not, so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article.

    In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command, and when the ranges in this example are well maintained you meet the goal of preventing INSERT commands from failing on a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert will get the value of 2 and so on, while on server2 the first insert will get the identity value of 1,000,000,001, the second insert 1,000,000,002 and so on, thus avoiding a conflict. Be aware that when a row is inserted, the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column's value, so the data remains consistent across all servers, but you will be able to tell on which server the original insert took place due to the range that the identity value belongs to.

    In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only, during the CREATE TABLE command of each table. So a table on server1 looks like this:

        CREATE TABLE T1 (
          c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED
         ,c2 int NOT NULL
        );

    And a table on server2 looks like this:

        CREATE TABLE T1 (
          c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED
         ,c2 int NOT NULL
        );

    When rows are inserted into these two tables, the identity values look like this:

    Server1: 1, 6, 11, 16, 21, 26...
    Server2: 2, 7, 12, 17, 22, 27...

    This ensures no identity value conflicts while leaving room for 3 additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of the 5 I used in this example. Continues…
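
    For the first method, here is a minimal sketch of seeding the ranges from the example with DBCC CHECKIDENT, reusing the table name T1 from above. Keep in mind that DBCC CHECKIDENT only sets the current identity value; it does not enforce an upper bound, which is why this method requires the maintenance mentioned earlier:

        -- On server1: the next insert into T1 gets identity value 1.
        DBCC CHECKIDENT ('dbo.T1', RESEED, 0);

        -- On server2: the next insert gets identity value 1,000,000,001.
        DBCC CHECKIDENT ('dbo.T1', RESEED, 1000000000);

        -- Note: on a freshly created table that has never held rows, the next
        -- value is the reseed value itself rather than reseed + increment.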

    Read the article

  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
    Setting up a default column is a common task for developers. But are we naming those default constraints explicitly? In the table creation below, the column Sys_DateTime gets the default value getdate():

        CREATE TABLE SampleTable
        (ID int identity(1,1),
         Sys_DateTime Datetime DEFAULT getdate()
        )

    We can check the relevant information from the system catalogs with the following query:

        SELECT sc.name ColumnName,
               dc.name DefaultName,
               dc.definition,
               OBJECT_NAME(dc.parent_object_id) TableName,
               dc.is_system_named
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
           ON dc.parent_object_id = sc.object_id
          AND dc.parent_column_id = sc.column_id

    Most of the columns returned are self-explanatory. The last column, is_system_named, identifies whether the default name was given by the system. As you know, in the above case, since we didn't provide any default name, the system generates one for you. The problem with these names is that they can differ from environment to environment: if I create this same table in a different environment, the default name could be DF__SampleTab__Sys_D__7E6CC920.

    Now let us create another default and explicitly name it:

        CREATE TABLE SampleTable2
        (ID int identity(1,1),
         Sys_DateTime Datetime
        )

        ALTER TABLE SampleTable2
          ADD CONSTRAINT DF_sys_DateTime_Getdate DEFAULT (Getdate()) FOR Sys_DateTime

    If we run the previous query again, you can see that the newly created default has 0 for is_system_named.

    Now let us say I want to change the data type of the Sys_DateTime column to something else:

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

    This will generate the below error:

        Msg 5074, Level 16, State 1, Line 1
        The object 'DF_sys_DateTime_Getdate' is dependent on column 'Sys_DateTime'.
        Msg 4922, Level 16, State 9, Line 1
        ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column.

    This means you need to drop the default constraint before altering the column:

        ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate]

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

        ALTER TABLE [dbo].[SampleTable2]
          ADD CONSTRAINT [DF_sys_DateTime_Getdate] DEFAULT (getdate()) FOR [Sys_DateTime]

    If you have a system-named default constraint, whose name can differ from environment to environment, you cannot drop it by a fixed name as before; instead you can use the below code template:

        DECLARE @defaultname VARCHAR(255)
        DECLARE @executesql VARCHAR(1000)

        SELECT @defaultname = dc.name
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
           ON dc.parent_object_id = sc.object_id
          AND dc.parent_column_id = sc.column_id
        WHERE OBJECT_NAME(dc.parent_object_id) = 'SampleTable'
          AND sc.name = 'Sys_DateTime'

        SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname
        EXEC (@executesql)

        ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date

        ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (Getdate()) FOR [Sys_DateTime]
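
    One caveat on that template: the final ADD DEFAULT statement re-creates the default without an explicit constraint name, which brings back a system-generated name - the very thing we set out to avoid. To keep an explicit name, add the constraint like this (the constraint name below is my own choice for this sketch):

        ALTER TABLE [dbo].[SampleTable]
          ADD CONSTRAINT [DF_SampleTable_Sys_DateTime] -- explicit name chosen for this sketch
          DEFAULT (Getdate()) FOR [Sys_DateTime]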

    Read the article

  • Age verification forms and crawlers

    - by user333763
    I have created a website for a beer brand and had to include an age verification page. The verification script is written in PHP and uses sessions to store the verification variable. The script works in such a way that no matter from which link you try to enter the website, it will take you to the verification page first. The verification is very simple. There are two buttons: "I'm under 21" and "I'm over 21". If you click the latter, you can browse the website. After some time I discovered that web crawlers are not able to get past the verification page. I checked the website in Google Webmaster Tools and the only text content scanned was from the verification page. I read somewhere that crawlers are not able to submit form buttons - is that true? Considering the fact that age verification pages are useless anyway, maybe I should just leave it as a starting page but not forbid going around it, e.g. from links to the subpages?

    Read the article

  • How to make Facebook Authentication from Silverlight secure?

    - by SondreB
    I have the following scenario I want to complete: a website running some HTTP(S) services that return data for a user, and the same website additionally hosting a Silverlight 4 app which calls these services. The Silverlight app is integrating with Facebook using the Facebook Developer Toolkit (http://facebooktoolkit.codeplex.com/). I have not fully decided whether I want Facebook integration to be an "opt-in" option such as Spotify, or if I want to "lock" down my service with Facebook-only authentication. That's another discussion.

    How do I protect my API key and secret that I receive from Facebook in a Silverlight app? To me it seems obvious that this is impossible, as the code is running on the client, but is there a way I can make it harder, or should I just live with the fact that third parties could potentially "act" as my own app?

    Using the Facebook Developer Toolkit, there is the following C# method in Silverlight that is executed from JavaScript when the user has fully authenticated with Facebook using the Facebook Connect APIs:

        [ScriptableMember]
        public void LoggedIn(string sessionKey, string secret, int expires, long userId)
        {
            this.SessionKey = sessionKey;
            this.UserId = userId;

    The obvious problem here is that JavaScript is injecting the userId, which is nothing but a simple number. This means anyone could potentially inject a different userId in JavaScript and have my app think it's someone else, so someone could hijack the data within the services running on my website.

    The alternative that comes to mind is authenticating the users on my website; this way I'm never exposing any secrets, and I can return an auth cookie to the users after the initial authentication. Though this scenario doesn't work very well in an out-of-browser scenario where the user is running the Silverlight app locally and not from my website.

    Read the article

  • What should CSS names be?

    - by kc rajput
    I am making a style sheet for a website about web development. Should the CSS style names be related to the website's subject or to the content? Is it right to use style names like #web-development-header .web-development-company-london-content, or should I use #header .content? Can CSS style names help with SEO?

    Read the article

  • How to pass URL variables into a WordPress page

    - by mikemick
    This has been asked on here (over a year ago), but apparently not answered, and WordPress is always evolving, so maybe there's a good solution now. I want to pass variables to a WordPress page via the URL (similar to CodeIgniter URI helper segments). Currently I can do this: my profile page is http://website.com/profile, and I can pass a variable like this: http://website.com/profile?username=johndoe. I want to pass the variable in like this: http://website.com/profile/johndoe. There has to be some sort of helper function, right?

    Read the article

  • How do I pre-empt IIS StaticFileHandler with my own HttpHandler

    - by Hmobius
    Hi there, I'm building a website referencing an assembly that contains several JS scripts as embedded resources, some controls which use those scripts, and a 'HttpResourcesHandler' that knows how to retrieve those embedded scripts when asked for them. In web.config I have an entry in the httpHandlers section that reads:

        <add verb="*" path="/embedded/controlscripts/*" validate="false"
             type="CWeb.Controls.Web.HttpResourcesHandler, CWeb.Controls.EditBox" />

    This control and the website work absolutely fine when debugging with the Visual Studio web development server, but if I then switch the website to run under IIS (v7 - I'm running Vista, and there appears to be no problem with debugging the site using IIS 5 or 6), the control can no longer access the scripts. I get a HTTP 404.0 error screen indicating the StaticFileHandler cannot find the file. I know that - it's embedded. So the StaticFileHandler appears to be grabbing the request for the script before my own handler and returning a 404. How do I tell IIS to use my own resource handler for the embedded/controlscripts directory rather than the StaticFileHandler? I'm running the website in classic mode, by the way.

    Read the article

  • Reference 3.5 assembly from 4.0 winforms phail

    - by Dean Lunz
    So I have this utility library that is compiled as a DLL under .NET 3.5, and it is used by my ASP.NET 3.5 website. I created a .NET 4.0 WinForms app to push data onto the website, and I want to make use of the functionality in the utilities library from this WinForms app. The problem is that when I reference the utilities library and use the code in it, IntelliSense barks at me, saying that it can't find the objects in that library. I would switch the WinForms app to 3.5, which fixes the problem, but I am using Tasks, which require 4.0. And because my website and utilities library both run 3.5, and my website is hosted at GoDaddy, which currently only supports ASP.NET 3.5, compiling my utilities library under 4.0 for my WinForms app is not going to work because it breaks my website.

    I have tried the app.config trick a la useLegacyV2RuntimeActivationPolicy="true", but that did not help. Obviously I could start a new utilities project for 4.0 and copy the code files from the existing utilities library, then reference the new 4.0 utilities library in my WinForms app, but that strikes me as being rather overkill when all I want to do is reference the library and use its functionality. Not to mention that I would have two utility libraries both containing the exact same code, and if I update the code in one I will need to make sure that the other is also updated. I could use "add file as link", but you get the idea. So is there anything else I could try, or any other way to solve or get around this? Or am I just going to have to break down and create an identical clone of the utilities library for 4.0?

    Read the article

  • Downloading a file in ASP.NET (through the server) while streaming it to the user

    - by James Teare
    My ASP.NET website currently downloads a file from a remote server to a local drive, but when users access the site they have to wait for the server to finish downloading the file before they can then download the file from my ASP.NET website. Is it possible to stream the download from the remote website through my ASP.NET website directly to the user (a bit like a proxy)? My current code is as follows:

        using (var client = new WebClientEx())
        {
            client.DownloadFile(downloadURL, "outputfile.zip");
        }

    WebClient class:

        public class WebClientEx : WebClient
        {
            public CookieContainer CookieContainer { get; private set; }

            public WebClientEx()
            {
                CookieContainer = new CookieContainer();
            }

            protected override WebRequest GetWebRequest(Uri address)
            {
                var request = base.GetWebRequest(address);
                if (request is HttpWebRequest)
                {
                    (request as HttpWebRequest).CookieContainer = CookieContainer;
                }
                return request;
            }
        }

    Read the article

  • Upgrade SSIS 2005 Packages to SSIS 2008

    There are several enhancements in SSIS 2008 such as enhanced lookup transformation, the development environment for Script Task and Script Component changing from VSA to VSTA, etc. If you intend to upgrade your SSIS 2005 packages to SSIS 2008 ... [Read Full Article]

    Read the article

  • Software for building a sitemap

    - by UXdesigner
    I have to create a content inventory for a website that doesn't have a sitemap. I do not have access to modify the website, and the site is very large. How can I build a sitemap for that website without having to browse it entirely? I tried with Visio's sitemap builder, but it fails big time. Let's say, for example, I want to create a sitemap of Stackoverflow. Do you guys know of software to build it?

    Read the article

  • System.Web.Routing.UrlRoutingModule and Infragistics WebHtmlEditor - one or the other, not both

    - by Krishna
    Hi - I have a website created in VS 2008 using ASP.NET 3.5. My requirement is to add both routing and Infragistics controls to the website, but I noticed I can have only one of these working - not both. My Infragistics control works if I remove the UrlRoutingModule registration from web.config, but by removing it, routing no longer works. Is there a way I can have both Infragistics and routing working in a single website? Thanks so much.

    Read the article

  • What exactly does ~ at the beginning of a URL in ASP.NET do?

    - by MarceloRamires
    I am editing a website which previously used port 80 (the default), so the port was not required in the URL. But the port had (for technical reasons) to be changed, and now it has to be specified. I can access the main page through ip:port/page like this: 1.2.3.4:81/page.aspx. Every link in the website is composed like this:

        <asp:HyperLink runat="server" Text="random" NavigateUrl="~/fdr/whatever.aspx" />

    Whenever I click on a link, the page doesn't load, but the URL is composed in the URL bar of the browser; I then simply add ":81" after the IP in the URL and it works. Since querystrings exist (in other words, since the page already has access to the URL), I used to think that '~' at the beginning of a URL in a link simply meant "stay in the same website, just go to this page in this folder", but since the port vanishes, I now assume that the location of the current website is requested (probably from IIS). Instead of having to add the port to each link in my website, how do I set up whatever resolves the ~ in the link so that it adds the port somehow? How do I do that?

    Read the article

  • server caching problem on ASP.NET MVC page

    - by Rita
    Hi, I have a server caching problem on ASP.NET MVC pages. The scenario is like this: I have two applications, (1) an external Website and (2) an internal Adminsite, both pointing to the same database. There is a page called EditProfile on the external Website where a registered customer can update his profile information like Firstname, Lastname and Address, etc. There is similar functionality on the internal Adminsite, on a page called CustomerProfile, where the site admin can update all these fields. When the admin updates the profile information from the Adminsite, those updates are not reflected back on the Website. I tried restarting the Website on IIS, and that didn't help. When I tried both restarting the Website on IIS and opening a new browser, only then were the updates reflected. I am wondering how I can get out of this caching problem without restarting the site and opening a new browser window every time? Are there any IIS settings that could help? This caching is happening only on a couple of tables, and all the updates are showing up in the database. Appreciate your responses. Thanks

    Read the article

  • TFS Folders - Getting them to work like Subversion "Trunk/Tags/Branches"

    - by Sam Schutte
    I recently started using Team Foundation Server, and am having some trouble getting it to work the way I want it to. I've used Subversion for a couple of years now, and love the way it works. I always set up three folders under each project: Trunk, Tags, and Branches. When I'm working on a project, all my code lives under a folder called "C:\dev\projectname". This "projectname" folder can be made to point to either trunk or any of the branches or tags using Subversion (with the switch command).

    Now that I'm using TFS (my client's system), I'd like things to work the same way. I created a "Trunk" folder with my project in it, and mapped "Project/Trunk/Website" to "c:\dev\Website". Now I want to make a release under the "tags" folder (located in "Project/Tags/Version 1.0/Website"), and TFS gives me the following error when I execute the branch command:

        No appropriate mapping exists for $Project/tags/Version 1.0/Website

    From what I can find on the internet, TFS expects you to have a mapping to your hard drive at the root of the project (the "Project" folder in my case), and then have all the source code that lives in trunk, tags, and branches pulled down to your hard drive. This sucks because it requires way too much stuff on your hard drive, and even worse, when you are working in a solution in Visual Studio, you won't be able to pull down "Version 2.0" and have all your project references to other projects work, because they'll all be pointing to "trunk" folders under the main folder, not just the main folder itself.

    What I want to do is have the root "Project/Website" folder on my hard drive, and be able to have it point to (be mapped to) either tags, branches, or trunk, depending on what I'm doing, without having to screw around with fixing Visual Studio project references. Ideas?

    Read the article

  • Using The Data Mining Query Task in SSIS

    SQL Server Integration Services (SSIS) is a Business Intelligence tool which can be used by database developers or administrators to perform Extract, Transform & Load (ETL) operations. In my previous article Using Analysis Services Processing Task & Analysis Services ... [Read Full Article]

    Read the article
