Search Results

Search found 13788 results on 552 pages for 'instance'.


  • "SC.EXE config" and dollar sign in service name

    - by Joe Schmoe
    Say I want to make a Windows Service's startup dependent on SQL Server. In my case, the service name for SQL Server is MSSQL$SQL11 (SQL11 is the SQL Server instance name). However, when I issue this command: SC.EXE config MyService depend= MSSQL$SQL11, everything after the dollar sign is ignored. When I go to the "Dependencies" tab in "Services", SQL Server is not listed. When I check the matching registry key it becomes clear why: it only contains MSSQL. At this point I have to edit the registry by hand to change MSSQL to MSSQL$SQL11, and then everything works as expected. Putting quotes around MSSQL$SQL11 doesn't help. Is there a way to specify $ in the middle of the SC.EXE argument string?
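
    A hedged note on what may be happening (an assumption, since the question doesn't say which shell is used): cmd.exe passes $ through literally, but PowerShell expands $SQL11 as a (here undefined) variable, which would produce exactly the truncated MSSQL value described. A minimal sketch of the PowerShell workarounds:

        # In PowerShell, single quotes suppress variable expansion:
        sc.exe config MyService depend= 'MSSQL$SQL11'

        # Or escape the dollar sign with a backtick inside double quotes:
        sc.exe config MyService depend= "MSSQL`$SQL11"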

    Read the article

  • Questions, Knowledge Checks and Assessments

    - by ted.henson
    Questions should be used to reinforce concepts throughout the title. You have the option to include questions in the course, in assessments, in Knowledge Checks, or in any combination. Questions are required for creating Knowledge Checks and assessments. It is important to remember that questions that are not in assessments are not tracked. Be sure to structure your outline so that questions are added to the appropriate assignable unit. I usually recommend that questions appear directly below their related section. This serves two purposes. First, it helps ensure that the related content and question stay close to one another. Second, it ensures that when the "link to subject" option is used it will relate back to the relevant content.

    Knowledge Checks are created using the questions that have been added to the related assignable unit. Use Knowledge Checks to give users an additional opportunity to review what they have learned. A Knowledge Check allows users to check their own knowledge without being tracked or scored. Many users like having this self-check option, especially if they know they are going to be tested later. Each assignable unit can have its own Knowledge Check.

    Assessments provide a way to measure knowledge or understanding of the course material. The results of each assessment are scored and tracked. Assessments are created using the questions that have been added to the relevant assignable unit(s). Each assignable unit, including the Title AU, can have multiple assessments. Consider how your knowledge paths will be structured when planning your assessments. For instance, you can create a multiple-activity knowledge path, with multiple assessments from the same title or assignable unit. Also remember that in Manager an assessment can be either a pre- or post-assessment. Pre-assessments allow the student to discover what is already known about a specific topic or subject, and are important if the personal course feature is being used. Post-assessments allow you to test the student's knowledge or understanding after completing the material.

    Read the article

  • Amazon ec2 - WildCard Sub-Domain

    - by Sharanc25
    I'm running an EC2 instance on Ubuntu with a LAMP stack. I configured my httpd.conf file to support wildcard sub-domains but it didn't work. My httpd.conf file:

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot /www/example
            ServerName example.com
            ServerAlias *.example.com
        </VirtualHost>

    I tried all possible solutions but they didn't work. Finally I used Amazon Route 53 to set up a wildcard DNS record to redirect all *.example.com to example.com. My questions: Is it okay if I use Route 53 instead of the httpd.conf file for wildcard sub-domains? Is there an error in my httpd.conf file? (Note: I used the same httpd.conf settings with another hosting provider and it worked perfectly there.)

    Additional information (VirtualHost configuration):

        wildcard NameVirtualHosts and _default_ servers:
        *:80    is a NameVirtualHost
            default server example.com (/etc/apache2/httpd.conf:1)
            port 80 namevhost example.com (/etc/apache2/httpd.conf:1)
            port 80 namevhost ip-xx-xxx-xx-xxx.ec2.internal (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
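
    For context, a sketch of how the two wildcard layers fit together (the IP address is illustrative): a DNS wildcard record makes *.example.com resolve to the instance at all, and the ServerAlias above then selects which vhost answers the request. In BIND-style zone-file terms, the Route 53 record would look roughly like:

        *.example.com.   300   IN   A   203.0.113.10

    So the two are complementary rather than alternatives: DNS gets the request to the box, the vhost decides what serves it.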

    Read the article

  • Multiple VLANs, multiple subnets, single DHCP server?

    - by EightQuarterBit
    Hey guys! At my job we are prepping to transition from multiple LANs connected over slow VPN connections to a single MAN connected over fiber, and I've got a few questions. First of all, we are planning on making each physical site its own VLAN, but we would like to have a single DHCP server at the data center hand out IPs to each VLAN. We've pretty much got the VLAN tagging structure all worked out, but we would like to have our single DHCP server assign different subnets of IPs to each VLAN. For instance, VLAN 2 gets 10.0.2.x through 10.0.4.x, VLAN 3 gets 10.0.5.x through 10.0.7.x, etc. We are an Active Directory based shop and we have a Server 2003 box handling DHCP (though we aren't averse to upgrading it to Server 2008). Is this feasible, or am I pipe-dreaming? (See the sketch below for the mechanism that usually makes this work.)
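
    What makes this feasible is DHCP relay: the router or L3 switch interface on each VLAN forwards DHCP broadcasts to the central server, which then picks the scope matching the relay interface's subnet. A Cisco-style sketch (the question doesn't name the routing hardware, so the syntax is an assumption and the addresses are illustrative):

        ! SVI for VLAN 2; the helper address points at the central DHCP server
        interface Vlan2
         ip address 10.0.2.1 255.255.255.0
         ip helper-address 10.0.1.10

    On the Windows side, Server 2003's DHCP service then needs a scope (or superscope, for ranges spanning several /24s) per VLAN subnet.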

    Read the article

  • Unity – Part 5: Injecting Values

    - by Ricardo Peres
    Introduction

    This is the fifth post on Unity. You can find the introductory post here, the second post, on dependency injection here, a third one on Aspect Oriented Programming (AOP) here and the latest so far, on writing custom extensions, here. This time we will talk about injecting simple values.

    An Inversion of Control (IoC) / Dependency Injection (DI) container like Unity can be used for things other than injecting complex class dependencies. It can also be used for setting property values or method/constructor parameters whenever a class is built. The main difference is that these values do not have a lifetime manager associated with them and do not come from the regular IoC registration store. Unlike, for instance, MEF, Unity won't let you register a string or an integer as a dependency, so you have to take a different approach, which I will describe in this post.

    Scenario

    Let's imagine we have a base interface that describes a logger – the same as in previous examples:

        public interface ILogger
        {
            void Log(String message);
        }

    And a concrete implementation that writes to a file:

        public class FileLogger : ILogger
        {
            public String Filename { get; set; }

            #region ILogger Members

            public void Log(String message)
            {
                using (Stream file = File.OpenWrite(this.Filename))
                {
                    Byte[] data = Encoding.Default.GetBytes(message);

                    file.Write(data, 0, data.Length);
                }
            }

            #endregion
        }

    And let's say we want the Filename property to come from the application settings (appSettings) section of the Web/App.config file. As usual with Unity, there is an extensibility point that allows us to automatically do this, both with code configuration or statically on the configuration file.

    Extending Injection

    We start by implementing a class that will retrieve a value from the appSettings by inheriting from ValueElement:

        sealed class AppSettingsParameterValueElement : ValueElement, IDependencyResolverPolicy
        {
            #region Private methods
            private Object CreateInstance(Type parameterType)
            {
                Object configurationValue = ConfigurationManager.AppSettings[this.AppSettingsKey];

                if (parameterType != typeof(String))
                {
                    TypeConverter typeConverter = this.GetTypeConverter(parameterType);

                    configurationValue = typeConverter.ConvertFromInvariantString(configurationValue as String);
                }

                return (configurationValue);
            }

            private TypeConverter GetTypeConverter(Type parameterType)
            {
                if (String.IsNullOrEmpty(this.TypeConverterTypeName) == false)
                {
                    return (Activator.CreateInstance(TypeResolver.ResolveType(this.TypeConverterTypeName)) as TypeConverter);
                }
                else
                {
                    return (TypeDescriptor.GetConverter(parameterType));
                }
            }
            #endregion

            #region Public override methods
            public override InjectionParameterValue GetInjectionParameterValue(IUnityContainer container, Type parameterType)
            {
                Object value = this.CreateInstance(parameterType);
                return (new InjectionParameter(parameterType, value));
            }
            #endregion

            #region IDependencyResolverPolicy Members

            public Object Resolve(IBuilderContext context)
            {
                Type parameterType = null;

                if (context.CurrentOperation is ResolvingPropertyValueOperation)
                {
                    ResolvingPropertyValueOperation op = (context.CurrentOperation as ResolvingPropertyValueOperation);
                    PropertyInfo prop = op.TypeBeingConstructed.GetProperty(op.PropertyName);
                    parameterType = prop.PropertyType;
                }
                else if (context.CurrentOperation is ConstructorArgumentResolveOperation)
                {
                    ConstructorArgumentResolveOperation op = (context.CurrentOperation as ConstructorArgumentResolveOperation);
                    String args = op.ConstructorSignature.Split('(')[1].Split(')')[0];
                    Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray();
                    ConstructorInfo ctor = op.TypeBeingConstructed.GetConstructor(types);
                    parameterType = ctor.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType;
                }
                else if (context.CurrentOperation is MethodArgumentResolveOperation)
                {
                    MethodArgumentResolveOperation op = (context.CurrentOperation as MethodArgumentResolveOperation);
                    String methodName = op.MethodSignature.Split('(')[0].Split(' ')[1];
                    String args = op.MethodSignature.Split('(')[1].Split(')')[0];
                    Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray();
                    MethodInfo method = op.TypeBeingConstructed.GetMethod(methodName, types);
                    parameterType = method.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType;
                }

                return (this.CreateInstance(parameterType));
            }

            #endregion

            #region Public properties
            [ConfigurationProperty("appSettingsKey", IsRequired = true)]
            public String AppSettingsKey
            {
                get
                {
                    return ((String)base["appSettingsKey"]);
                }

                set
                {
                    base["appSettingsKey"] = value;
                }
            }
            #endregion
        }

    As you can see from the implementation of the IDependencyResolverPolicy.Resolve method, this will work in three different scenarios:

    - When it is applied to a property;
    - When it is applied to a constructor parameter;
    - When it is applied to an initialization method.

    The implementation will even try to convert the value to its declared destination type: for example, if the destination property is an Int32, it will try to convert the appSettings stored string to an Int32.

    Injection By Configuration

    If we want to configure injection by configuration, we need to implement a custom section extension by inheriting from SectionExtension, and registering our custom element with the name "appSettings":

        sealed class AppSettingsParameterInjectionElementExtension : SectionExtension
        {
            public override void AddExtensions(SectionExtensionContext context)
            {
                context.AddElement<AppSettingsParameterValueElement>("appSettings");
            }
        }

    And on the configuration file, for setting a property, we use it like this:

        <appSettings>
            <add key="LoggerFilename" value="Log.txt"/>
        </appSettings>
        <unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
            <sectionExtension type="MyNamespace.AppSettingsParameterInjectionElementExtension, MyAssembly" />
            <container>
                <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/>
                <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File">
                    <lifetime type="singleton"/>
                    <property name="Filename">
                        <appSettings appSettingsKey="LoggerFilename"/>
                    </property>
                </register>
            </container>
        </unity>

    If we would like to inject the value as a constructor parameter instead, it would be:

        <unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
            <sectionExtension type="MyNamespace.AppSettingsParameterInjectionElementExtension, MyAssembly" />
            <container>
                <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/>
                <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File">
                    <lifetime type="singleton"/>
                    <constructor>
                        <param name="filename" type="System.String">
                            <appSettings appSettingsKey="LoggerFilename"/>
                        </param>
                    </constructor>
                </register>
            </container>
        </unity>

    Notice the appSettings section, where we add a LoggerFilename entry, which is the same key referred to by the appSettings element that our AppSettingsParameterInjectionElementExtension extension registers. For more advanced behavior, you can add a TypeConverterName attribute to the appSettings declaration, where you can pass an assembly qualified name of a class that inherits from TypeConverter. This class will be responsible for converting the appSettings value to the destination type.

    Injection By Attribute

    If we would like to use attributes instead, we need to create a custom attribute by inheriting from DependencyResolutionAttribute:

        [Serializable]
        [AttributeUsage(AttributeTargets.Parameter | AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
        public sealed class AppSettingsDependencyResolutionAttribute : DependencyResolutionAttribute
        {
            public AppSettingsDependencyResolutionAttribute(String appSettingsKey)
            {
                this.AppSettingsKey = appSettingsKey;
            }

            public String TypeConverterTypeName { get; set; }

            public String AppSettingsKey { get; private set; }

            public override IDependencyResolverPolicy CreateResolver(Type typeToResolve)
            {
                return (new AppSettingsParameterValueElement() { AppSettingsKey = this.AppSettingsKey, TypeConverterTypeName = this.TypeConverterTypeName });
            }
        }

    As with the file configuration, there is a mandatory property for setting the appSettings key and an optional TypeConverterTypeName for setting the name of a TypeConverter. Both the custom attribute and the custom section return an instance of the injector AppSettingsParameterValueElement that we implemented in the first place.

    Now, the attribute needs to be placed before the injected class' Filename property:

        public class FileLogger : ILogger
        {
            [AppSettingsDependencyResolution("LoggerFilename")]
            public String Filename { get; set; }

            #region ILogger Members

            public void Log(String message)
            {
                using (Stream file = File.OpenWrite(this.Filename))
                {
                    Byte[] data = Encoding.Default.GetBytes(message);

                    file.Write(data, 0, data.Length);
                }
            }

            #endregion
        }

    Or, if we wanted to use constructor injection:

        public class FileLogger : ILogger
        {
            public String Filename { get; set; }

            public FileLogger([AppSettingsDependencyResolution("LoggerFilename")] String filename)
            {
                this.Filename = filename;
            }

            #region ILogger Members

            public void Log(String message)
            {
                using (Stream file = File.OpenWrite(this.Filename))
                {
                    Byte[] data = Encoding.Default.GetBytes(message);

                    file.Write(data, 0, data.Length);
                }
            }

            #endregion
        }

    Usage

    Just do:

        ILogger logger = ServiceLocator.Current.GetInstance<ILogger>("File");

    And off you go! A simple way to avoid hardcoded values in component registrations. Of course, this same concept can be applied to registry keys, environment values, XML attributes, etc.; just change the implementation of the AppSettingsParameterValueElement class. Next stop: custom lifetime managers.

    Read the article

  • Sharepoint 2007 Event ID 6482

    - by Dave M
    Our two-server SharePoint 2007 SP2 farm has an issue. Event ID 6482 appears in the Application log of the web front end many times a day, often many times a minute. The full error, from Office SharePoint Server:

        Event Type: Error
        Event Source: Office SharePoint Server
        Event Category: Office Server Shared Services
        Event ID: 6482
        Date: 11/12/2009
        Time: 3:05:22 PM
        User: N/A
        Computer: XXXXXX
        Description: Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance (36a9b7ef-59aa-4f94-8887-8bf7b56f2f91).
        Reason: Error during encryption or decryption. System error code 0.
        Techinal Support Details:
        System.ArgumentException: Error during encryption or decryption. System error code 0.
           at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.SynchronizeDefaultContentSource(IDictionary applications)
           at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.Synchronize()
           at Microsoft.Office.Server.Administration.ApplicationServerJob.ProvisionLocalSharedServiceInstances(Boolean isAdministrationServiceJob)
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    The SharePoint site appears to be functioning normally and Search returns expected results. Any suggestions would be appreciated.

    Read the article

  • Performance Testing through distributed jmeter instances and bamboo

    - by user1617754
    I'm working on performance tests for several services running in an Amazon network. Our architecture is:

    - a Continuous Integration server running in our facilities (Bamboo);
    - a JMeter server instance in the same network as the services to test;
    - a JMeter client connected to the JMeter server (over ssh tunnels) in our facilities.

    I want to start the execution of tests from Bamboo, and see the different results on it too.

        Bamboo with      <--------->  JMeter server  <-------->  WebService
        JMeter client                   on Amazon                 on Amazon

    Has anybody tried something like this?
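
    For what it's worth, the usual shape of such a setup is sketched below (file and host names are made up): the Amazon-side box runs the JMeter server daemon, and Bamboo executes a script task that drives the run in non-GUI mode through the tunnelled connection:

        # On the JMeter server instance (Amazon side):
        jmeter-server

        # From Bamboo, as a script task on the client side:
        jmeter -n -t perf-test.jmx -R localhost -l results.jtl

    One caveat: JMeter's RMI-based remoting uses several ports, so tunnelling only the main server port over ssh is often not enough on its own.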

    Read the article

  • open_basedir vs sessions

    - by liquorvicar
    On a virtual hosting server I have the open_basedir set to .:/path/to/vhost/web:/tmp:/usr/share/pear for each virtual host. I have a client who's running WordPress and he's complaining about open_basedir errors thus: PHP WARNING: file_exists() [function.file-exists]: open_basedir restriction in effect. File(/var/lib/php/session/sess_42k7jn3vjenj43g3njorrnrmf2) is not within the allowed path(s): (.:/path/to/vhost/web:/tmp:/usr/share/pear) So the PHP session save_path isn't included in open_basedir, yet sessions across all sites on the server seem to be working fine apart from this intermittent instance. I thought that perhaps the default session handler ignored open_basedir and this warning was caused by WP accessing the session file directly. However, from what I can see, PHP 5.2.4 introduced open_basedir checking for session.save_path: http://www.php.net/ChangeLog-5.php#5.2.4 (I am on PHP 5.2.13). Any ideas?
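
    One plausible remedy (a sketch, not a confirmed diagnosis): include the session save path in each vhost's open_basedir, so that both the session handler and any direct file_exists() checks from WordPress pass the restriction. In the vhost config, under mod_php, that would be:

        php_admin_value open_basedir ".:/path/to/vhost/web:/tmp:/usr/share/pear:/var/lib/php/session"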

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to assist the EM Administrator in monitoring the health of the EM infrastructure. Taking feedback delivered from customers directly and through customer advisory boards, some nice enhancements have been made to the "Manage Cloud Control" sections of the UI, commonly known in the EM community as "the MTM pages" (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators.

    In this post we'll highlight some of the new information that's on display in these redesigned pages and explain how the information they present can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure.

    The first page we'll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you'll see the new layout that includes 3 tabs containing more drill-down information.

    The Repository Tab

    The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information that the EM Administrator needs to review from time to time to ensure that their infrastructure is in good health. Rather than go through every panel, let's call out a few and let you explore the others later yourself on your own EM site. Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important elements of information relating to availability and reliability:

    - Is the database in Archive Log mode?
    - Is the database using Flashback?
    - When was the last database backup taken?

    In this test environment the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates there have been no recorded backups taken.

    The next region of interest on this page shows key information around the Repository configuration, specifically the initialisation parameters (from the spfile). If you're storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you'll note there is now a check performed on the active configuration to ensure that you're using, at the very least, Oracle minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally, the run-time values if the parameter allows dynamic changes).

    The last region to call out on this page before moving on is the new look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality has been added that customers have requested. First up: restarting Repository jobs.

    As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation. You can now take care of this directly from within the UI.

    Next, you'll see that a feature has been added to allow the EM Administrator to customise the run-time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don't clash with Production backups, etc., is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid such clashes.

    Moving on, let's select the Metrics tab.

    The Metrics Tab

    There are some big changes here: this page contains new information regions that help the Administrator understand the direct impact the in-bound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets or large numbers of new hosts or new target types into EM, and the impact this has on the Repository. This page helps the EM Administrator get to grips with this. Let's take a quick look at two regions on this page.

    First up there's a bubble chart showing a comprehensive view of the top resource consumers of metric data, over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates a relative volume. You can see from this example that a quick glance shows Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely following behind this, though, are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7Mb of data, while the total information collected for WebLogic Server and Application Deployments is 0.38Mb and 0.37Mb respectively.

    If you want this breakdown of the volume of data collected, simply hover over a bubble in the chart and you'll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost, in terms of number of rows, number of collections and storage cost of data, for each metric type.

    Looking at another panel on this page we can see a different view of this data. This view shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) and sorts them by volume of data. In the case above we can see the largest metric collection (by volume) over the last 30 days is the information about OS Registered Software on a Host target.

    Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It's a great tool for identifying the cause of a sudden increase in Repository storage consumption or redo log and archive log generation.

    Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing most load, if appropriate. The last tab we'll look at on this page is the Schema tab.

    The Schema Tab

    Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users. Not least because ensuring storage is managed well ensures continued availability of EM for monitoring purposes.

    The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend, showing that storage in the EM Repository has been consumed at an increasing rate over the last few days. This is normal, as this EM installation is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you're likely to see a trend of space allocation that plateaus.

    The table below the trend chart shows the Top 20 Tables/Indexes sorted descending by order of space consumed. You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region.

    The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information illustrates to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository. Of course, it's also been a long-requested feature to have the ability to modify these default retention periods, and you can do that from this screen too. As there are interdependencies between some data elements, you can't modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups; retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository's storage footprint and its performance. For now, we're just highlighting the feature's visibility on these new pages.

    As a user of EM12c, we hope the new features you see here address some of the feedback that's been given on these pages over the past few releases. We'll look out for any comments or feedback you have on these pages!

    Read the article

  • Installing ZendServer on IIS6

    - by Wayne M
    I need to run ASP and PHP applications side by side, so I installed ZendServer Community Edition on our existing Server 2003 platform. I specified it to use the existing IIS instance so I don't have to deal with multiple ports (we use ZoneEdit to manage our DNS and they don't seem to allow anything other than port 80 without forwarding). The install went smoothly, but when I try to configure and manage the install by going to http://localhost/ZendServer (or even just localhost), I get a Bad Request (Invalid Hostname) error. I haven't done anything except install the server, and this is my first time working with ZendServer. How do I fix this so I can get things set up?

    Read the article

  • Best Practices for MVC Architecture

    - by Mystere Man
    There are a number of questions on Stack Overflow regarding MVC best practices, but most of those seem to revolve around things like using Dependency Injection, or creating helper functions, or do's and don'ts of what to do in views and controllers. My question is more about how to architect an MVC application. For example, we are encouraged to use DI with the Repository pattern to decouple data access from the controller, however very little is said on HOW to do that specifically for MVC. Where would we place the Repository classes, for instance? They don't seem to be model related specifically, since the model should likewise be relatively decoupled from the actual data access technologies. A second question involves how to structure the layers or tiers. Most example applications (Nerd Dinner, Music Store, etc.) all seem to use a single-tier, 2-layer approach (not counting tests) that typically has controllers directly calling L2S or EF code. If I want to create a multi-tier/layer application, what are some of the best practices there in regards to MVC? This question is one part standard Q&A, but another part best practices, so it could go either here or on programmers.se; I am marking it CW. If you feel it would be better suited to programmers.se then it can be migrated. EDIT: What happened to the Community Wiki option? It seems to be gone.
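
    As an illustration of the pattern under discussion, here is a minimal sketch of one common layering (all names are hypothetical, not taken from the sample apps): repository interfaces live in a domain/core project, concrete L2S/EF implementations live in a data project, and controllers depend only on the interface, which the DI container resolves at runtime:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class Dinner { public string Title { get; set; } }

        // Domain/Core project: no data-access technology in sight
        public interface IDinnerRepository
        {
            IEnumerable<Dinner> GetUpcoming();
        }

        // Web project: the container injects the EF/L2S-backed implementation
        public class DinnersController : Controller
        {
            private readonly IDinnerRepository _repository;

            public DinnersController(IDinnerRepository repository)
            {
                _repository = repository;
            }

            public ActionResult Index()
            {
                return View(_repository.GetUpcoming());
            }
        }

    The point of the split is that swapping L2S for EF (or a test fake) touches only the data project and the container registration, never the controllers.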

    Read the article

  • Dissecting ASP.NET Routing

    The ASP.NET Routing framework allows developers to decouple the URL of a resource from the physical file on the web server. Specifically, the developer defines routing rules, which map URL patterns to a class or ASP.NET page that generates the content. For instance, you could create a URL pattern of the form Categories/CategoryName and map it to the ASP.NET page ShowCategoryDetails.aspx; the ShowCategoryDetails.aspx page would display details about the category CategoryName. With such a mapping, users could view details about the Beverages category by visiting www.yoursite.com/Categories/Beverages. In short, ASP.NET Routing allows for readable, SEO-friendly URLs. ASP.NET Routing was first introduced in ASP.NET 3.5 SP1 and was enhanced further in ASP.NET 4.0. ASP.NET Routing is a key component of ASP.NET MVC, but can also be used with Web Forms. Two previous articles here on 4Guys showed how to get started using ASP.NET Routing: Using ASP.NET Routing Without ASP.NET MVC and URL Routing in ASP.NET 4.0. This article aims to explore ASP.NET Routing in greater depth. We'll explore how ASP.NET Routing works underneath the covers to decode a URL pattern and hand it off to the appropriate class or ASP.NET page. Read on to learn more! Read More >
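
    As a concrete taste of the Web Forms flavour mentioned above, here is a short sketch using ASP.NET 4.0's MapPageRoute (the route and page names follow the article's Categories example; the Global.asax wiring is assumed):

        using System;
        using System.Web.Routing;

        public class Global : System.Web.HttpApplication
        {
            void Application_Start(object sender, EventArgs e)
            {
                // Maps Categories/Beverages (etc.) onto the physical page
                RouteTable.Routes.MapPageRoute(
                    "category-details",
                    "Categories/{categoryName}",
                    "~/ShowCategoryDetails.aspx");
            }
        }

    Inside ShowCategoryDetails.aspx, the page would then read the value via Page.RouteData.Values["categoryName"].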

    Read the article

  • Dedicated server configuration - some tips -Xenserver/Debian

    - by Sanjay S
    I am migrating from VPS-based hosting to dedicated hosting (8GB RAM/1TB HD). I need to run multiple Drupal and Ruby based applications. What would be the recommended configuration? I was thinking of two options. 1) Install multiple Debian OS instances on top of Xen (like a VPS setup), each with maybe 2GB of memory, and run Drupal, Ruby and MySQL on separate partitions. 2) Install one instance of Debian, and install Drupal (Apache, PHP), Ruby (lighttpd, Ruby) and MySQL all in the same partition. I was a little worried that option 2 could lead to some performance issues later.
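
    If option 1 is chosen, each guest is described by a small Xen domain config; a hypothetical sketch (the volume names and bridge are made up for illustration):

        # /etc/xen/drupal-vm.cfg -- one 2GB PV guest for the Drupal stack
        name   = "drupal-vm"
        memory = 2048
        vcpus  = 2
        disk   = ['phy:/dev/vg0/drupal-disk,xvda,w']
        vif    = ['bridge=xenbr0']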

    Read the article

  • Apache2 mod_proxy and post-multipart size

    - by Pietro
    Hi, I have Apache2 configured to proxy all traffic directed to a specific virtual host to a local Tomcat instance. All is good and fine except for multipart posts larger than ~100kb. Such posts fail on the Tomcat end with an exception like SocketTimeoutException. If I connect directly to Tomcat (which listens on a port != 80) then all posts are handled just fine. The Apache virtual host config goes like this:

        NameVirtualHost *
        SetOutputFilter DEFLATE
        <VirtualHost *>
            ServerName foo.bar.com
            ErrorLog c:/wamp/logs/foo_error.log
            CustomLog c:/wamp/logs/foo_access.log combined
            ProxyTimeout 60
            ProxyPass / http://localhost:10080/foo/
            ProxyPassReverse / http://localhost:10080/foo/
            ProxyPassReverseCookieDomain localhost bar.com
            ProxyPassReverseCookiePath /foo /
        </VirtualHost>

    I tried browsing the Apache2 and mod_proxy docs but found nothing useful. Any idea why Apache2 refuses to proxy requests bigger than X bytes? Thanks!
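
    Two knobs that are sometimes involved in this symptom (offered as a guess, not a confirmed fix): mod_proxy_http can be told to spool the request body and send a Content-Length to the backend instead of streaming it chunked, and the proxy timeout can be raised for slow multipart uploads:

        # Force mod_proxy_http to send a Content-Length to the backend
        SetEnv proxy-sendcl 1
        # Allow more than 60 seconds for large uploads
        ProxyTimeout 300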

    Read the article

  • What advantage to I have if I use 64bit libraries?

    - by RadiantHex
    Hi folks, I see many people go crazy about 64-bit libraries, preferring them in general to their 32-bit counterparts. I realise there is a lot of talk that gets lost in translation, and that 64-bit can often be over-valued. The setting is libraries that are called by a web application; I'm aware that a new instance of the web app is generated for each hit. Therefore I'm thinking that 64-bit is not necessary, as the instances in no way surpass 2GB of RAM usage. Help would be much appreciated! :)

    Read the article

  • What influences the kind of RAM a desktop or laptop can support?

    - by Albert Iordache
    What exactly influences the kind of RAM a desktop or laptop can support, apart from the clock speed, the maximum amount of RAM the motherboard can handle, the DDR type (1/2/3) and the shape of the module (DIMM for desktops, SO-DIMM for laptops)? I see that in certain cases, such as with the Kingston 4GB DDR3 1333MHz CL9 (on the Kingston KTD-L3B/4G page), the page displays a set of laptop product numbers. Does the actual model of the laptop also influence the models of RAM it can support? Could, for instance, an Asus K52 work with that particular RAM module, even if the page specifies Dell models?

    Read the article

  • Sign of a Good Game

    - by Matt Christian
    (Warning: This post contains spoilers about SILENT HILL 2. If you haven't played this game, you are dumb)

    What is one sign of a great game? One of the signs I realized recently is when a game continues to stun and surprise you years and years after you've played and beaten it. As a major Silent Hill fan, I was recently reminded of Silent Hill 2, and even though I see it as one of my favorite Silent Hill games, there are still things I'm learning about it that are neat little additions that add to the atmosphere (atmosphere also makes a great game!).

    For instance, when you start the game you are given a letter by your wife, who has been deceased for years and years. You are directed to Silent Hill and start trekking through hell all by your lonesome (with the exception of a few psychos). As you continue through the game, pieces of the letter begin to fade and disappear until eventually it is completely non-existent, thus implying the letter was never real and was a delusion you created.

    Another example is the game's use of imagery the player knows about but might not notice at first. For me, the most apparent of these was the dress you find near the start when you find the flashlight, which is the same dress you see Mary (your wife) wearing in the flashback sequences. However, one thing I didn't know was that several of the deceased bodies you encounter lying around Silent Hill are actually the body of the main character (James), which evokes the idea that you've seen that body before but can't pinpoint where...

    It's amazing to see a game go to such unique lengths to provide a psychological horror experience. Sure, all the dead bodies could be randomly modelled and the dress could be any ol' dress, but the idea of your brain knowing something deep down that you can't quite pinpoint is a really unique one. In my opinion, it ties less into the subconscious and more into natural tendencies; it taps into the fear hidden inside us all.

    Read the article

  • Why do we need URIs for XML namespaces?

    - by Patryk
    I am trying to figure out why we need URIs for XML namespaces and I cannot find a purpose for them. Can anyone enlighten me a little by showing their use with a concrete example?

    EDIT: Ok, so for instance, I have this from w3schools:

        <root xmlns:h="http://www.w3.org/TR/html4/"
              xmlns:f="http://www.w3schools.com/furniture">
            <h:table>
                <h:tr>
                    <h:td>Apples</h:td>
                    <h:td>Bananas</h:td>
                </h:tr>
            </h:table>
            <f:table>
                <f:name>African Coffee Table</f:name>
                <f:width>80</f:width>
                <f:length>120</f:length>
            </f:table>
        </root>

    So what should http://www.w3schools.com/furniture hold?

    Read the article

  • Tunnel only one program (UDP & TCP) through another server

    - by user136036
    I have a Windows machine at home and a server with Debian installed. I want to tunnel the UDP traffic from one (and only this) program on my Windows machine through my server. For TCP traffic this was easy, using PuTTY as a SOCKS5 proxy and then connecting via ssh to my server, but this does not seem to work for UDP. Then I set up Dante as a SOCKS5 proxy, but it seems to create a new instance/thread per connection, which leads to a huge RAM usage on my server, so this was no option either. Most people recommend OpenVPN, so my question: Can I use OpenVPN to tunnel just this one program through my server? Is there a way to maybe create a local SOCKS5 proxy on my Windows machine, set it as a proxy in my program, and have only this proxy use OpenVPN? Thank you for your ideas

    Read the article

  • What is the simplest way to build your own .deb package?

    - by Calvin Fisher
    Having used Ubuntu for several years now, I've assembled a short list of scripts and packages that I always install on my computers. I would like to pack them up into a .deb to make it easier to get set up on a fresh OS installation. I'm imagining, for instance, one package that would install all of my custom BASH scripts that I've made for common tasks, and another one that would depend on other packages (like w64codecs) that I always install but forget that I need to until I go to do something and it's not there. It doesn't even have to be by-the-book; I'm not looking to deploy these publicly. I'm just looking to roll up all these tasks into one sudo dpkg --install. To quantify "simple" or "easy," I mean to say that I'm looking for the method with the fewest steps requiring the least technical knowledge and, most importantly, taking the least time.
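
    A minimal sketch of the dpkg-deb route (the package name, version and paths are made up for illustration): lay out a staging directory with a DEBIAN/control file, drop the scripts into the tree, and build:

        mkdir -p my-setup/DEBIAN my-setup/usr/local/bin
        cat > my-setup/DEBIAN/control <<'EOF'
        Package: my-setup
        Version: 1.0
        Architecture: all
        Maintainer: Calvin <calvin@example.com>
        Depends: w64codecs
        Description: Personal scripts and always-installed dependencies
        EOF
        cp ~/scripts/*.sh my-setup/usr/local/bin/
        dpkg-deb --build my-setup my-setup_1.0_all.deb
        sudo dpkg --install my-setup_1.0_all.deb

    The Depends: line is what pulls in the packages you always forget about; dpkg --install will then flag them for resolution (or use apt/gdebi to install them automatically).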

    Read the article

  • DLP projector has odd colors

    - by torbengb
    At my place of work, we have several different video projectors, but they all use DLP technology, and the colors are wrong: for instance, yellow looks more like green, and all other colors are similarly distorted. Any kind of presentation or collaborative work is hindered by these wrong colors. On the laptop screen, the colors are fine but on the projector (hooked up via normal short VGA cable, and showing the same image at the same time), the colors look wrong. This is not about one specific projector or one specific laptop; it seems that any combination of projector + laptop has the exact same problem. Someone said that DLP is poor technology, but that's not true. I'm using a DLP projector at home (regular PC connected via HDMI) and the colors are excellent. Still, there's some kind of curse on the machinery at work. How can we get decent colors?

    Read the article

  • Tuning Red Gate: #5 of Multiple

    - by Grant Fritchey
    In the Tuning Red Gate series I've shown you how to look at the current load on the system and how to drill down into historical analysis of the system. I've also shown how you can see the top queries and other information from the current status of the system. I have one more thing to show you before we start fixing things and seeing how that affects the data collected: historical moments in time. For example, back in Post #3 I was looking at some spikes in some of the monitored resources that were taking place a couple of weeks back in time. Once I identify a moment in time that I'm interested in, I can go back to the first page of Monitor, Global Overview, and click on the icon: From this you can select the date and time you're interested in. For example, I saw some serious CPU queues last week: This then rolls back the time for all the information that's available to the Global Overview and the drill-down to the server and the SQL Server instance there. This then allows me to look at the Top Queries running at this point, sort them by CPU, and identify what was potentially the query causing the problem right when I saw the CPU queuing. This ability to correlate a moment in time with the information available to you in the Analysis window makes for an excellent tool to investigate your systems going backwards in time. It really makes a huge difference in your knowledge. It's not enough to know that something happened at a particular time; you need to know what it was that was occurring. Remember, the key to tuning your systems is having enough knowledge about them. I'll post more on Tuning Red Gate as soon as I can get some queries rewritten. I'm working on that.

    Read the article

  • Oracle Storage Implementation Boot Camp: ZFS Storage Appliance and Flash

    - by mseika
    Oracle Storage Implementation Boot Camp: ZFS Storage Appliance and Flash
    Thursday 20th September, 9.30 – 16.30

    This 1-day, face-to-face training is designed for your Storage Implementation Specialists and will help them on their path to Specialisation, as they prepare for the Storage Implementation Assessments for ZFSSA. Please read carefully the notes below on the required equipment for attendees.

    Agenda

    - Module 1: Product Overview
    - Module 2: Installation and Configuration
    - ZFS Lab 1: Installation
    - Module 3: Clustering
    - Module 4: File and Data Services
    - ZFS Lab 2: Creating Projects
    - ZFS Lab 3: Creating a Share
    - ZFS Lab 4: Snapshots and Clones
    - ZFS Lab 5: CLI Overview
    - Module 5: Maintenance
    - ZFS Lab 6: Dashboard Overview
    - Module 6: Analytics
    - ZFS Lab 7: Analytics

    Prerequisites for attendees

    - Provide basic administration support for the Solaris OS and/or Windows Desktop/Server OS
    - Understand the fundamentals of data storage administration
    - Understand the fundamentals of Transmission Control Protocol/Internet Protocol (TCP/IP) networking and administration
    - Troubleshoot server and network system software and hardware

    IMPORTANT: Equipment that attendees will have to bring to the class

    The attendees must bring their own laptops and must have successfully installed the VirtualBox instance and the 7000 Series Simulator. To download VirtualBox and the Simulator, click here. Attendees must have the Simulator running in advance of the class. For technical support on the download/installation of the Simulator, please send email to [email protected]

    Please register here

    Read the article

  • Sql Server 2005 cluster - unable to rename to old server name

    - by Paul2020
    We have a SQL 2005 cluster on a W2K8 cluster. It is a named instance, say SRV1\A. Then I built a new W2K8 cluster (with a different cluster service name) using the same service account, and installed a new SQL 2005 cluster on it, say SRV2\A. Now when I bring down the SQL Server resources on SRV1 and try to rename SRV2\A to SRV1\A through the cluster admin, I get the error that the network name already exists. I have tried bringing down an old cluster and installing a new cluster with the same name before, and that works. Why am I not able to rename the name? Any advice would be very helpful.

    Read the article
