Search Results

Search found 12947 results on 518 pages for 'domain registrar'.


  • outgoing DNS flood targeted to non-ISP hosts

    - by radudani
    Below is the specific traffic monitored at the network perimeter, originating from a user PC on the Vista platform. My question is not about the effects of the flood but about the nature of its source: is this a kind of known infection, or just an application that went out of control? A standard NOD32 scan didn't find anything, as the user told me. Thank you for any hint, Danny.

        14:40:10.115876 IP 192.168.7.42.4122 > 67.228.0.181.53: S 2742536765:2742536765(0) win 16384
        14:40:10.115943 IP 192.168.7.42.4124 > 67.228.181.207.53: S 3071079888:3071079888(0) win 16384
        14:40:10.116015 IP 192.168.7.42.4126 > 67.228.0.181.53: S 3445199428:3445199428(0) win 16384
        14:40:10.116086 IP 192.168.7.42.4128 > 67.228.181.207.53: S 2053198691:2053198691(0) win 16384
        14:40:10.116154 IP 192.168.7.42.4130 > 67.228.0.181.53: S 2841660872:2841660872(0) win 16384
        14:40:10.116222 IP 192.168.7.42.4132 > 67.228.181.207.53: S 3150822465:3150822465(0) win 16384
        14:40:10.116290 IP 192.168.7.42.4134 > 67.228.0.181.53: S 1692515021:1692515021(0) win 16384
        14:40:10.116358 IP 192.168.7.42.4136 > 67.228.181.207.53: S 3358275919:3358275919(0) win 16384
        14:40:10.116430 IP 192.168.7.42.4138 > 67.228.0.181.53: S 930184999:930184999(0) win 16384
        14:40:10.116498 IP 192.168.7.42.4140 > 67.228.181.207.53: S 1504984630:1504984630(0) win 16384
        14:40:10.116566 IP 192.168.7.42.4142 > 67.228.0.181.53: S 546074424:546074424(0) win 16384
        14:40:10.116634 IP 192.168.7.42.4144 > 67.228.181.207.53: S 4241828590:4241828590(0) win 16384
        14:40:10.116702 IP 192.168.7.42.4146 > 67.228.0.181.53: S 668634627:668634627(0) win 16384
        14:40:10.116769 IP 192.168.7.42.4148 > 67.228.181.207.53: S 3768119461:3768119461(0) win 16384
        14:40:10.117360 IP 192.168.7.42.4111 > 67.228.0.181.53: 12676 op8 Resp12*- [2128q][|domain]
        14:40:10.117932 IP 192.168.7.42.4112 > 67.228.181.207.53: 44190 op7 NotAuth*|$ [29103q],[|domain]
        14:40:10.118726 IP 192.168.7.42.4113 > 67.228.0.181.53: 49196 inv_q [b2&3=0xeea] [64081q] [28317a] [43054n] [23433au] Type63482 (Class 5889)? M-_^OSM-JM-m^_M-i.[|domain]
        14:40:10.119934 IP 192.168.7.42.4114 > 67.228.181.207.53: 48131 updateMA Resp12$ [43850q],[|domain]
        14:40:10.121164 IP 192.168.7.42.4115 > 67.228.0.181.53: 46330 updateM% [b2&3=0x665b] [23691a] [998q] [32406n] [11452au][|domain]
        14:40:10.121866 IP 192.168.7.42.4116 > 67.228.181.207.53: 34425 op7 YXRRSet* [39927q][|domain]
        14:40:10.123107 IP 192.168.7.42.4117 > 67.228.0.181.53: 56536 notify+ [b2&3=0x27e6] [59761a] [23005q] [33341n] [29705au][|domain]
        14:40:10.123961 IP 192.168.7.42.4118 > 67.228.181.207.53: 19323 stat% [b2&3=0x14bb] [32491a] [41925q] [2038n] [5857au][|domain]
        14:40:10.132499 IP 192.168.7.42.4119 > 67.228.0.181.53: 50432 updateMA+ [b2&3=0x6bc2] [10733a] [9775q] [46984n] [15261au][|domain]
        14:40:10.133394 IP 192.168.7.42.4120 > 67.228.181.207.53: 2171 notify Refused$ [26027q][|domain]
        14:40:10.134421 IP 192.168.7.42.4121 > 67.228.0.181.53: 25802 updateM NXDomain*-$ [28641q][|domain]
        14:40:10.135392 IP 192.168.7.42.4122 > 67.228.181.207.53: 2073 updateMA+ [b2&3=0x6d0b] [43177a] [54332q] [17736n] [43636au][|domain]
        14:40:10.136638 IP 192.168.7.42.4123 > 67.228.0.181.53: 15346 updateD+% [b2&3=0x577a] [61686a] [19106q] [15824n] [37833au] Type28590 (Class 64856)? [|domain]
        14:40:10.137265 IP 192.168.7.42.4124 > 67.228.181.207.53: 60761 update+ [b2&3=0x2b66] [43293a] [53922q] [23115n] [11349au][|domain]
        14:40:10.148122 IP 192.168.7.42.4125 > 67.228.0.181.53: 3418 op3% [b2&3=0x1a92] [51107a] [60368q] [47777n] [56081au][|domain]
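    If the goal is to pin down which process on 192.168.7.42 is generating this, one hedged starting point (assuming local admin access to the Vista box; the PID below is a placeholder):

        C:\> netstat -ano | findstr "67.228."
        C:\> REM take the owning PID from the last column, e.g. 1234
        C:\> tasklist /FI "PID eq 1234"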

  • DNS Server Behind NAT

    - by Bryan
    I've got a BIND 9 DNS server sitting behind a NAT firewall; assume the Internet-facing IP is 1.2.3.4. There are no restrictions on outgoing traffic, and port 53 (TCP/UDP) is forwarded from 1.2.3.4 to the internal DNS server (10.0.0.1). There are no iptables rules on either the VPS or the internal BIND 9 server. From a remote Linux VPS located elsewhere on the internet, nslookup works fine:

        # nslookup foo.example.com 1.2.3.4
        Server:  1.2.3.4
        Address: 1.2.3.4#53
        Name:    foo.example.com
        Address: 9.9.9.9

    However, when using the host command on the remote VPS, I receive the following output:

        # host foo.example.com 1.2.3.4
        ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53
        ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53
        ;; connection timed out; no servers could be reached

    From the VPS, I can establish a connection (using telnet) to 1.2.3.4:53. From the internal DNS server (10.0.0.1), the host command appears to be fine:

        # host foo.example.com 127.0.0.1
        Using domain server:
        Name: 127.0.0.1
        Address: 127.0.0.1#53
        Aliases:
        foo.example.com has address 9.9.9.9

    Any suggestions as to why the host command on my VPS is complaining about the reply coming back from another port, and what I can do to fix this?

    Further info: from a Windows host external to the network:

        > nslookup foo.example.com 1.2.3.4
        DNS request timed out.
            timeout was 2 seconds.
        Server:  UnKnown
        Address: 1.2.3.4
        DNS request timed out. (timeout was 2 seconds)
        DNS request timed out. (timeout was 2 seconds)
        DNS request timed out. (timeout was 2 seconds)
        *** Request to UnKnown timed-out

    This is a default install of BIND from Ubuntu 12.04 LTS, with around 11 zones configured.

        $ named -v
        BIND 9.8.1-P1

    TCP dump (filtered) from the internal DNS server:

        20:36:29.175701 IP pc.external.com.57226 > dns.example.com.domain: 1+ PTR? 4.3.2.1.in-addr.arpa. (45)
        20:36:29.175948 IP dns.example.com.domain > pc.external.com.57226: 1 Refused- 0/0/0 (45)
        20:36:31.179786 IP pc.external.com.57227 > dns.example.com.domain: 2+[|domain]
        20:36:31.179960 IP dns.example.com.domain > pc.external.com.57227: 2 Refused-[|domain]
        20:36:33.180653 IP pc.external.com.57228 > dns.example.com.domain: 3+[|domain]
        20:36:33.180906 IP dns.example.com.domain > pc.external.com.57228: 3 Refused-[|domain]
        20:36:35.185182 IP pc.external.com.57229 > dns.example.com.domain: 4+ A? foo.example.com. (45)
        20:36:35.185362 IP dns.example.com.domain > pc.external.com.57229: 4*- 1/1/1 (95)
        20:36:37.182844 IP pc.external.com.57230 > dns.example.com.domain: 5+ AAAA? foo.example.com. (45)
        20:36:37.182991 IP dns.example.com.domain > pc.external.com.57230: 5*- 0/1/0 (119)

    TCP dump from the client during a query:

        21:24:52.054374 IP pc.external.com.43845 > dns.example.com.53: 6142+ A? foo.example.com. (45)
        21:24:52.104694 IP dns.example.com.29242 > pc.external.com.43845: UDP, length 95
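    If it helps narrow things down: the last trace shows the answer leaving dns.example.com with source port 29242 instead of 53, which is exactly what host and dig reject as a "reply from unexpected source". That points at the NAT device rewriting the source port of outbound UDP replies. Purely as a hedged sketch, assuming the NAT box is Linux/iptables and its external interface is eth0 (both assumptions):

        # keep source port 53 intact on DNS replies leaving the NAT box
        iptables -t nat -A POSTROUTING -o eth0 -p udp -s 10.0.0.1 --sport 53 -j SNAT --to-source 1.2.3.4:53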

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains).

    Introduction

    SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 (or, on SPARC T3, 16) CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and the emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain and to avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments.

    Why split cores can matter

    Split-core allocations can silently reduce performance because multiple domains with different address spaces and memory contents share the core's Level 1 cache (L1$). This is called false cache sharing, since even identical memory addresses from different domains must point to different locations in RAM. The effect is increased contention for the cache and higher memory latency for each domain using that core. The degree of performance impact can vary widely. For applications with very small memory working sets, and with I/O-bound or low-CPU-utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down.

    What to do about it

    Split-core situations are easily avoided. In most cases the logical domains constraint manager will avoid them without any administrative action, but they can be entirely prevented by one of the following actions:

    - Assign virtual CPUs in multiples of 8 (the number of threads per core). For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.
    - Use the whole-core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain (a short example follows the Summary). Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole-core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.
    - Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. If you have low-utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity.

    In any case, make sure the most important domains have their own CPU cores: in particular the control domain and any I/O or service domain, and of course any important guests.

    Summary

    Split-core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.
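    To make the whole-core commands above concrete, a short illustrative session (the domain name and core count are placeholders; if memory serves, ldm list -o core shows which cores a domain owns):

        # allocate whole cores so the domain sits on core boundaries
        ldm set-core 2 mydomain
        # confirm which cores the domain now owns
        ldm list -o core mydomain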

  • How do I get my dependencies injected using @Configurable in conjunction with readResolve()?

    - by bmatthews68
    The framework I am developing for my application relies very heavily on dynamically generated domain objects. I recently started using Spring WebFlow and now need to be able to serialize my domain objects that will be kept in flow scope. I have done a bit of research and figured out that I can use writeReplace() and readResolve(). The only catch is that I need to look up a factory in the Spring context. I tried to use @Configurable(preConstruction = true) in conjunction with the BeanFactoryAware marker interface, but beanFactory is always null when I try to use it in my createEntity() method. Neither the default constructor nor the setBeanFactory() injector is called. Has anybody tried this or something similar? I have included the relevant class below. Thanks in advance, Brian

        /*
         * Copyright 2008 Brian Thomas Matthews Limited.
         * All rights reserved, worldwide.
         *
         * This software and all information contained herein is the property of
         * Brian Thomas Matthews Limited. Any dissemination, disclosure, use, or
         * reproduction of this material for any reason inconsistent with the
         * express purpose for which it has been disclosed is strictly forbidden.
         */
        package com.btmatthews.dmf.domain.impl.cglib;

        import java.io.InvalidObjectException;
        import java.io.ObjectStreamException;
        import java.io.Serializable;
        import java.lang.reflect.InvocationTargetException;
        import java.util.HashMap;
        import java.util.Map;

        import org.apache.commons.beanutils.PropertyUtils;
        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;
        import org.springframework.beans.factory.BeanFactory;
        import org.springframework.beans.factory.BeanFactoryAware;
        import org.springframework.beans.factory.annotation.Configurable;
        import org.springframework.util.StringUtils;

        import com.btmatthews.dmf.domain.IEntity;
        import com.btmatthews.dmf.domain.IEntityFactory;
        import com.btmatthews.dmf.domain.IEntityID;
        import com.btmatthews.dmf.spring.IEntityDefinitionBean;

        /**
         * This class represents the serialized form of a domain object implemented
         * using CGLib. The readResolve() method recreates the actual domain object
         * after it has been deserialized into Serializable. You must define
         * <spring-configured/> in the application context.
         *
         * @param <S> The interface that defines the properties of the base domain object.
         * @param <T> The interface that defines the properties of the derived domain object.
         * @author <a href="mailto:[email protected]">Brian Matthews</a>
         * @version 1.0
         */
        @Configurable(preConstruction = true)
        public final class SerializedCGLibEntity<S extends IEntity<S>, T extends S>
                implements Serializable, BeanFactoryAware {

            /** Used for logging. */
            private static final Logger LOG = LoggerFactory.getLogger(SerializedCGLibEntity.class);

            /** The serialization version number. */
            private static final long serialVersionUID = 3830830321957878319L;

            /** The application context. Note this is not serialized. */
            private transient BeanFactory beanFactory;

            /** The domain object name. */
            private String entityName;

            /** The domain object identifier. */
            private IEntityID<S> entityId;

            /** The domain object version number. */
            private long entityVersion;

            /** The attributes of the domain object. */
            private HashMap<?, ?> entityAttributes;

            /** The default constructor. */
            public SerializedCGLibEntity() {
                SerializedCGLibEntity.LOG.debug("Initializing with default constructor");
            }

            /**
             * Initialise with the attributes to be serialised.
             *
             * @param name       The entity name.
             * @param id         The domain object identifier.
             * @param version    The entity version.
             * @param attributes The entity attributes.
             */
            public SerializedCGLibEntity(final String name, final IEntityID<S> id,
                    final long version, final HashMap<?, ?> attributes) {
                SerializedCGLibEntity.LOG.debug("Initializing with parameterized constructor");
                this.entityName = name;
                this.entityId = id;
                this.entityVersion = version;
                this.entityAttributes = attributes;
            }

            /**
             * Inject the bean factory.
             *
             * @param factory The bean factory.
             */
            public void setBeanFactory(final BeanFactory factory) {
                SerializedCGLibEntity.LOG.debug("Injected bean factory");
                this.beanFactory = factory;
            }

            /**
             * Called after deserialisation. The corresponding entity factory is
             * retrieved from the bean application context and BeanUtils methods are
             * used to initialise the object.
             *
             * @return The initialised domain object.
             * @throws ObjectStreamException If there was a problem creating or
             *         initialising the domain object.
             */
            public Object readResolve() throws ObjectStreamException {
                SerializedCGLibEntity.LOG.debug("Transforming deserialized object");
                final T entity = this.createEntity();
                entity.setId(this.entityId);
                try {
                    PropertyUtils.setSimpleProperty(entity, "version", this.entityVersion);
                    for (Map.Entry<?, ?> entry : this.entityAttributes.entrySet()) {
                        PropertyUtils.setSimpleProperty(entity, entry.getKey().toString(), entry.getValue());
                    }
                } catch (IllegalAccessException e) {
                    throw new InvalidObjectException(e.getMessage());
                } catch (InvocationTargetException e) {
                    throw new InvalidObjectException(e.getMessage());
                } catch (NoSuchMethodException e) {
                    throw new InvalidObjectException(e.getMessage());
                }
                return entity;
            }

            /**
             * Look up the entity factory in the application context and create an
             * instance of the entity. The entity factory is located by getting the
             * entity definition bean and using the factory registered with it, or by
             * getting the entity factory directly. The name used for the definition
             * bean lookup is ${entityName}Definition, while ${entityName}Factory is
             * used for the factory lookup.
             *
             * @return The domain object instance.
             * @throws ObjectStreamException If the entity definition bean or entity
             *         factory were not available.
             */
            @SuppressWarnings("unchecked")
            private T createEntity() throws ObjectStreamException {
                SerializedCGLibEntity.LOG.debug("Getting domain object factory");

                // Try to use the entity definition bean
                final IEntityDefinitionBean<S, T> entityDefinition =
                        (IEntityDefinitionBean<S, T>) this.beanFactory.getBean(
                                StringUtils.uncapitalize(this.entityName) + "Definition",
                                IEntityDefinitionBean.class);
                if (entityDefinition != null) {
                    final IEntityFactory<S, T> entityFactory = entityDefinition.getFactory();
                    if (entityFactory != null) {
                        SerializedCGLibEntity.LOG.debug("Domain object factory obtained via entity definition bean");
                        return entityFactory.create();
                    }
                }

                // Try to use the entity factory
                final IEntityFactory<S, T> entityFactory =
                        (IEntityFactory<S, T>) this.beanFactory.getBean(
                                StringUtils.uncapitalize(this.entityName) + "Factory",
                                IEntityFactory.class);
                if (entityFactory != null) {
                    SerializedCGLibEntity.LOG.debug("Domain object factory obtained via direct look-up");
                    return entityFactory.create();
                }

                // Neither worked!
                SerializedCGLibEntity.LOG.warn("Cannot find domain object factory");
                throw new InvalidObjectException(
                        "No entity definition or factory found for " + this.entityName);
            }
        }
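    One thing worth checking (an assumption about the setup, not a confirmed diagnosis): @Configurable only takes effect when Spring's AspectJ support is actually weaving the class at the moment the object is instantiated by deserialization. A minimal sketch of the context entries involved (Spring 2.5+ context namespace; load-time weaving is assumed, compile-time weaving with the AspectJ compiler is the alternative):

        <!-- activates the bean-configurer aspect behind @Configurable -->
        <context:spring-configured/>
        <!-- assumption: load-time weaving is acceptable in your container -->
        <context:load-time-weaver aspectj-weaving="on"/>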

  • Same domain only: smtp; 5.1.0 - Unknown address error 530 'SMTP authentication is required'

    - by user124672
    I have a very strange problem after moving my netwave.be domain from WebHost4Life to Arvixe. I configured several email addresses, like [email protected] and [email protected]. For POP3 I can use mail.netwave.be, a mail server hosted by Arvixe. However, for SMTP I have to use relay.skynet.be: Skynet (Belgacom) is one of the biggest internet providers in Belgium and blocks SMTP requests to external mail servers. So for years I've been using relay.skynet.be to send my messages, using [email protected] as the sender. That worked perfectly. After moving my domain to Arvixe, this is no longer the case. I can send emails to people, no problem, and I have received emails too, so I suspect that's OK as well. But I can't send emails from one user of my domain to another user. For example, if I send a mail from [email protected] to [email protected], relay.skynet.be picks up the mail just fine, but a few seconds later I get a 'Delivery Status Notification (Failure)' mail that contains:

        Reporting-MTA: dns; mailrelay012.isp.belgacom.be
        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.0.0 (permanent failure)
        Remote-MTA: dns; [69.72.141.4]
        Diagnostic-Code: smtp; 5.1.0 - Unknown address error 530-'SMTP authentication is required.' (delivery attempts: 0)

    Like I said, this only seems to happen when both the sender and the recipient are addresses of a domain hosted by Arvixe. I have several accounts not related to Arvixe at all; I can use relay.skynet.be to send mail to [email protected] from these accounts, and likewise I can use relay.skynet.be to send mail from [email protected] to these accounts, but not from one Arvixe account to another. I hope I have clearly outlined the problem and someone will be able to help me.

  • How to re-join an AD2003 domain with Samba after deleting the machine account?

    - by Guss
    During some troubleshooting I deleted the machine account for a Linux server running Samba from our AD 2003 domain. We are using Kerberos for authentication, and after I deleted the machine account I tried to join the domain again using:

        net ads join -U Administrator

    But I keep getting Kerberos errors like these:

        [2009/08/18 16:14:36, 0] libads/kerberos.c:ads_kinit_password(228)
        kerberos_kinit_password MACHINE$@MY.DOMAIN.COM failed: Client not found in Kerberos database
        Failed to join domain: Improperly formed account name

    It appears as if Samba remembers that it once had an account with the AD and keeps trying to reconnect to it, but I want to create a new account from scratch. I tried to delete all the .tdb files I could find, as well as everything under /var/cache/samba, but to no avail; it still behaves the same. I also tried to create the machine account on the AD side, but then I get a similar error when I try to join, about failure to authenticate with the machine account. It looks like Samba tries the previous machine account password, and I don't know how to reset it, or, even if I could figure out what Samba uses, how to set it in the AD. Any help would be greatly appreciated, as at this point the only thing I can think about is to reformat and reinstall the machine, and I would really REALLY love to not do that. Thanks in advance.
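    A hedged sequence for forcing a genuinely fresh join (this assumes Samba keeps the old machine password in secrets.tdb; the path varies by distribution, so locate yours first):

        kdestroy                                  # drop any cached Kerberos tickets
        net ads leave -U Administrator            # may fail since the account is already gone; that's fine
        rm /var/lib/samba/private/secrets.tdb     # assumed location of Samba's machine-password store
        kinit Administrator                       # fresh TGT as a domain admin
        net ads join -U Administrator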

  • How to serve Rails application with Passenger/Apache without domain name?

    - by grifaton
    I am trying to serve a Rails application using Passenger and Apache on an Ubuntu server. The Passenger installation instructions say I should add the following to my Apache configuration file, which I assume is /etc/apache2/httpd.conf:

        <VirtualHost *:80>
            ServerName www.yourhost.com
            DocumentRoot /somewhere/public   # <-- be sure to point to 'public'!
            <Directory /somewhere/public>
                AllowOverride all            # <-- relax Apache security settings
                Options -MultiViews          # <-- MultiViews must be turned off
            </Directory>
        </VirtualHost>

    However, I do not yet have a domain pointing at my server, so I'm not sure what I should put for the ServerName parameter. I have tried the IP address, but when I do that, restarting Apache gives:

        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sun Jan 17 12:49:26 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sun Jan 17 12:49:36 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    and pointing the browser at the IP address gives a 500 Internal Server Error. The closest I have got to something sensible is with:

        <VirtualHost efate:80>
            ServerName efate
            DocumentRoot /root/jpf/public
            <Directory /root/jpf/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    where "efate" is my server's host name. But now pointing my browser at the server's IP address just gives a page saying "It works!" - presumably this is a default page, but I'm not sure where it is being served from. I might be wrong in thinking that the reason I have been unable to get this to work is related to not having a domain name. This is the first time I have used Apache directly - any help would be most gratefully received!
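    Since name-based virtual hosts are only selected by the Host header, one hedged workaround while there is no DNS name is to make the Rails app the catch-all default vhost; on Ubuntu the stock "It works!" page comes from the bundled default site, which can be disabled. Illustrative only (the site file name and IP are placeholders):

        sudo a2dissite default    # assumption: the stock Ubuntu 'default' site is what currently answers

        # /etc/apache2/sites-available/railsapp (hypothetical file)
        <VirtualHost _default_:80>
            ServerName 1.2.3.4    # placeholder: the server's public IP
            DocumentRoot /somewhere/public
            <Directory /somewhere/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>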

  • Can I get a domain controller not to act as DNS for the members?

    - by rsw
    Hi, let me try to explain my current setup. I have one Linux machine acting as DHCP and DNS server (dhcpd3 and BIND) on my network. This works fine; all computers I hook up to the network get an IP address and the proper DNS servers set. Let's call it 10.12.0.10. However, we also have a Windows Server 2003 domain controller in our network, to which we add our Windows computers (running XP); let's call it 10.12.0.20. I noticed that when I run 'nslookup' on one of the Windows machines, it says that the primary DNS is 10.12.0.20. This has not been much of a problem, since:

    - The Windows clients are stationary
    - The Windows server itself points to my real DHCP/DNS, so I can reach everything specified in it

    However, this turns out to be a problem when we use laptops. They connect to the domain here and get a DNS server, but when the user travels or connects the computer from home, we hit a problem: they are connected to their internet, but their DNS is 10.12.0.20, which they can't reach since they're at home and not on the office network. I solved this by removing the registry key called "NameServer" with the value 10.12.0.20, but it gets set again whenever they log on to the domain the next time (when they get back to the office). Can I somehow make the computers take whatever DNS server they are handed when connecting to the internet or a home network, instead of always trying to reach the domain controller?
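    To tell whether that NameServer value is statically set on the adapter or pushed back at logon, and to flip the adapter to DHCP-assigned DNS, something along these lines can be tried on one of the XP laptops (the connection name is a placeholder; illustrative, and it won't hold if a login script rewrites the value):

        netsh interface ip show dns
        netsh interface ip set dns name="Local Area Connection" source=dhcp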

  • Running HTTP and HTTPS connections for a single domain (say, www.example.com) through a Cisco ACE SS

    - by Paddu
    My web application config has a Cisco ACE load balancing across a server farm, and I want to use the ACE as an SSL endpoint as well. To make this work, the network architect has come up with a design where all secure pages have to be served from secure.my-domain.com, while non-secure pages are served up from www.my-domain.com. The reason for this is apparently that configuring the Cisco ACE to accept HTTPS requests on port 443 for a particular public IP prevents the simultaneous acceptance of HTTP requests on port 80 for the same IP. While I'm not a networking (or Cisco) expert, this seems intuitively wrong, as it would prevent any website using the Cisco ACE from serving pages on http://www.my-domain.com and https://www.my-domain.com simultaneously. In this situation, my questions are:

    - Is this truly a limitation of the Cisco ACE when used as an SSL endpoint?
    - If not, can I assume that we can set up the ACE to accept connections for a particular IP on ports 80 and 443, and function as an SSL endpoint for the incoming requests on 443? Links to appropriate documentation are most welcome here.
    - Assuming the setup in the previous question, can I then redirect both sets of requests to the same server farm on the same port?
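    For whatever it's worth, the usual ACE pattern is one class-map per L4 port against the same virtual address, with both classes applied under a single multi-match policy on the interface; a from-memory sketch only (the VIP is a placeholder, and the syntax should be verified against your ACE software version):

        class-map match-all VIP-HTTP
          2 match virtual-address 203.0.113.10 tcp eq www
        class-map match-all VIP-HTTPS
          2 match virtual-address 203.0.113.10 tcp eq https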

  • What is the correct authentication mechanism when there are users inside and outside the domain?

    - by Gary Barrett
    We have a Windows 7 enterprise desktop data-entry app for mobile (laptop) users, with a local SQL Server 2008 R2 Express db that syncs data with a SQL Server 2008 R2 db on a server. Authentication is required before syncing the data. The existing group of users are part of the organisation's domain, so that's the normal scenario and they connect to the SQL Server directly. But there are plans for a second group of app users who belong to various partner organisations, so they are outside our domain and have their own various separate domains/accounts. The aim is to deploy the desktop app to them so they can periodically sync data to our SQL Server. What I am uncertain of:

    - Is it possible to authenticate users from another domain?
    - Can permissions be managed via Active Directory etc.?
    - Which authentication protocol should be used in this scenario? Windows, Forms, SQL, etc.?

    The IT people are requesting that users of the system be managed via Active Directory. Is it possible to manage the external domain users' access via Active Directory?
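    If a domain trust turns out to be off the table, the usual fallback for the partner group is SQL authentication on the sync endpoint; a purely hypothetical connection string for the partner clients (server, database and login are all placeholders):

        Server=sync.example.com;Database=FieldData;User ID=partner_user;Password=********;Encrypt=True;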

  • If I ssh to a domain provided by dyndns, does my password go through them?

    - by D Connors
    I'm running Ubuntu on my work PC, and my workplace provides me with a static IP address but not with a domain. It's sometimes useful for me to connect to that PC through ssh, but it's not common enough for me to instantly remember the IP number. So I set up a DynDNS account and associated a short and intuitive domain name with that IP. Here's my question: when I try to ssh to the domain, it asks me

        $ ssh [email protected]
        The authenticity of host 'something.there.foo (xx.xx.xx.xx)' can't be established.
        RSA key fingerprint is 'ALPHANUMERIC STRING'.
        Are you sure you want to continue connecting (yes/no)?

    That surprised me a little bit. I have already registered the RSA fingerprint by connecting directly to the IP address. I thought the domain name was simply a convenient way of pointing me in the right direction (i.e. the IP address), but that message makes me think my data is actually going through their servers or something. Which one is it? Am I sending my password through someone else's server? Or is ssh just really, really careful, warning me even if the final destination is a known host? The ssh server I'm using is the openssh-server package.
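    For what it's worth: OpenSSH keeps known_hosts entries per name as well as per IP, so a first connect via a brand-new hostname re-prompts even though the key (and the endpoint) are unchanged; DNS only translates the name, and the SSH session itself runs directly between the client and the PC. A quick hedged check along those lines:

        $ dig +short something.there.foo      # should print xx.xx.xx.xx, the office IP
        $ ssh -v [email protected]   # the 'Connecting to ... [xx.xx.xx.xx] port 22' line shows the direct endpoint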

  • Single django instance with subdomains for each app in the django project

    - by jwesonga
    I have a django project (django + apache + mod_wsgi + nginx) with multiple apps, and I'd like to map each app to a subdomain:

        project/
            app1 (domain.com)
            app2 (sub1.domain.com)
            app3 (sub3.domain.com)

    I have a single .wsgi script serving the project, which is stored in a folder /apache. Below is my vhost file; I'm using a single vhost file instead of separate ones for each sub-domain:

        <VirtualHost *:8080>
            ServerAdmin [email protected]
            ServerName www.domain.com
            ServerAlias domain.com
            DocumentRoot /home/path/to/app/
            Alias /admin_media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media
            <Directory /home/path/to/wsgi/apache/>
                Order deny,allow
                Allow from all
            </Directory>
            LogLevel warn
            ErrorLog /home/path/to/logs/apache_error.log
            CustomLog /home/path/to/logs/apache_access.log combined
            WSGIDaemonProcess domain.com user=www-data group=www-data threads=25
            WSGIProcessGroup domain.com
            WSGIScriptAlias / /home/path/to/apache/kcdf.wsgi
        </VirtualHost>

        <VirtualHost *:8081>
            ServerAdmin [email protected]
            ServerName sub1.domain.com
            ServerAlias sub1.domain.com
            DocumentRoot /home/path/to/app
            Alias /admin_media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media
            <Directory /home/path/to/wsgi/apache/>
                Order deny,allow
                Allow from all
            </Directory>
            LogLevel warn
            ErrorLog /home/path/to/logs/apache_error.log
            CustomLog /home/path/to/logs/apache_access.log combined
            WSGIDaemonProcess sub1.domain.com user=www-data group=www-data threads=25
            WSGIProcessGroup sub1.domain.com
            WSGIScriptAlias / /home/path/to/apache/kcdf.wsgi
        </VirtualHost>

    My nginx configuration for domain.com:

        server {
            listen 80;
            server_name domain.com;
            access_log off;
            error_log off;
            # proxy to Apache 2 and mod_wsgi
            location / {
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    Configuration for sub.domain.com:

        server {
            listen 80;
            server_name sub.domain.com;
            access_log off;
            error_log off;
            # proxy to Apache 2 and mod_wsgi
            location / {
                proxy_pass http://127.0.0.1:8081/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    This setup doesn't seem to work; everything seems to point to the main domain. I've tried http://effbot.org/zone/django-multihost.htm, which kind of worked but seems to have issues with loading my css, images and js files.

  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
    What is ADDS?

    Every Microsoft-oriented infrastructure in today's enterprises depends largely on the Active Directory version built by Microsoft. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well-designed and well-implemented Active Directory makes life for IT personnel and users alike a lot easier: centralised management and the abilities it opens up are ample.

    But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all objects your infrastructure runs on in one way or another. Since it is a Microsoft product you'll obviously not be seeing Linux or Mac clients listed in here (exceptions exist), but in general we can say it contains everything your company has in place in one form or another.

    The domain name services

    The domain naming service (DNS for short) is a service which translates readable, easy-to-understand names into IP addresses (the identifiers for each computer in your domain). This service is a prerequisite for ADDS to work, and a wrong record in a DNS server will make any ADDS service fail. Generally speaking, the DNS service will run on the same server as the ADDS service, but it is worthwhile remembering that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box...

    Where to start?

    If the aim is to put in place a first-time implementation of ADDS in your enterprise, there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as a wrong setup will cause headaches down the line. It is for that reason that I like to start building from the bottom up: begin with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services that can hook into the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure...

    Whenever starting an ADDS deployment I ask the client the following questions:

    - What are your long-term plans and goals?
    - How flexible do you want it?
    - Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design?

    Those three questions should give some sort of indication of what direction can be taken, and whether the client has thought about some things themselves :).

    The technical side of things

    Next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services. Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows:

    - Network (WAN/LAN links and physical sites)
    - DNS namespacing
    - All in one domain, or split up in different domains/forests?
    - Security (both for ADDS and physical sites)

    The network side of things

    Looking at how the network is currently set up can potentially teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites? Etc. Depending on this information we will design our site links (which control replication) in future stages.

    DNS namespacing

    Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single-label name (contoso instead of contoso.com), but it is highly recommended never to do this. If you do end up with a domain like that, there will be a lot of services that are going to give you good grief in the future (Exchange being one of them). So one of the best practices is always to use a two-part name (contoso.com or contoso.lan, for example).

    Internal namespace: a single namespace is just what it sounds like. You have a DNS domain which is different internally from what the client has as an external namespace, e.g. contoso.com as the external name (out on the internet) and contoso.lan on the internal network. This setup has its advantages in that you gain more obscurity from the internet on the DNS side, but it will require additional work to publish services to the web.

    External namespace: quite like the internal namespace, only here you do not differentiate your internal namespace from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network; in other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking, this setup is a bad idea from the security side of things.

    Split DNS: whilst using an external namespace design is fairly easy, it involves a lot of security risks. Opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. And that is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would be using different DNS servers for lookups on the external network, and they would have no records of your internal resources unless you explicitly publish them.

    All in one or not?

    In determining your Active Directory design you can look at the following possibilities:

    - Single forest, single domain
    - Single forest, multiple domains
    - Multiple forests, multiple domains

    I've listed the design possibilities in increasing order of administrative magnitude. Microsoft recommends using a single forest with a single domain in as many situations as possible. It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single-forest design to avoid complexity, unless there are strict requirements for multiple forests.

    Security

    What kind of security is required on the domain, and does this reflect the physical security at the sites? Not every client can afford to have a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality: in essence it will only cache the data you explicitly tell it to cache, so in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected.

    Th- Th- Th- That's all folks! Well, at least for now. In future editions of this series we'll be walking through the different tasks that need to be done and the thought that needs to be put into them. For all editions we'll be working from the concept of running a single forest, single domain, with a split DNS setup... See you next time!
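    To make that target design concrete (placeholder names; assumes Windows Server 2012 PowerShell, and only as an illustration of where the series is headed), promoting the first DC of a single-forest, single-domain deployment could start like this:

        # add the role, then create the forest root domain
        Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
        Install-ADDSForest -DomainName corp.contoso.com -DomainNetbiosName CORP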

  • Wildcard subdomain .htaccess and Codeigniter

    - by Gautam
    Hi all, I am trying to create the proper .htaccess that would allow me to map as such:

        http://domain.com/               -->  http://domain.com/home
        http://domain.com/whatever       -->  http://domain.com/home/whatever
        http://user.domain.com/          -->  http://domain.com/user
        http://user.domain.com/whatever  -->  http://domain.com/user/whatever/

    Here, someone would type in the URLs on the left, but internally the request would be handled as if it were the URL on the right. The subdomain is dynamic (that is, http://user.domain.com isn't an actual subdomain but a .htaccess rewrite). Also, /home is my default controller, so with no subdomain the request is internally forced to the /home controller, and any paths following it (as shown in example #2 above) become the (catch-all) function within that controller. Likewise, if a subdomain is present, it is passed as a (catch-all) controller along with any (catch-all) functions for it (as shown in example #4 above). Hopefully I'm not asking much here, but I can't seem to figure out the proper .htaccess or routing rules (in CodeIgniter) for this. httpd.conf and hosts are set up just fine.

    EDIT #1: Here's my .htaccess, which is coming close but messes up at some point:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{HTTP_HOST} ^([a-z0-9-]+).domain [NC]
        RewriteRule (.*) index.php/%1/$1 [QSA]
        RewriteCond $1 !^(index\.php|images|robots\.txt)
        RewriteRule ^(.*)$ /index.php/$1 [L,QSA]

    With the above, when I visit http://test.domain/abc/123, this is what I notice in the $_SERVER var (I've removed some of the fields):

        Array
        (
            [REDIRECT_STATUS] => 200
            [SERVER_NAME] => test.domain
            [REDIRECT_URL] => /abc/123
            [QUERY_STRING] =>
            [REQUEST_URI] => /abc/123
            [SCRIPT_NAME] => /index.php
            [PATH_INFO] => /test/abc/123
            [PATH_TRANSLATED] => redirect:\index.php\test\test\abc\123\abc\123
            [PHP_SELF] => /index.php/test/abc/123
        )

    You can see PATH_TRANSLATED is not being formed properly, and I think that may be what's screwing things up.
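    One observation rather than a verified fix: the first RewriteRule has no [L] flag and no guard against already-rewritten URLs, so a subdomain request can pass through the rule set again and pick up the index.php/subdomain prefix twice, which matches the doubled path visible in PATH_TRANSLATED. A hedged variant of the subdomain rule (the www exclusion is an addition, not from the original):

        RewriteCond %{HTTP_HOST} ^([a-z0-9-]+)\.domain [NC]
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteCond %{REQUEST_URI} !^/index\.php
        RewriteRule (.*) index.php/%1/$1 [L,QSA]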

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable when configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for user 'NT Authority\Anonymous' ("double-hop") error.

    By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however, this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when:

    - Clients/servers that are involved in the authentication process cannot use Kerberos.
    - The client does not provide the information necessary to use Kerberos.

    An in-depth discussion of Kerberos authentication is beyond the scope of this post; however, when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principal Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller, impersonate the client when passing the request to the database server, and then restrict the request based on the user's permissions. Each time a server is required to pass the request to another server, the same process must be used.

    Kerberos authentication is supported in both native and SharePoint integrated mode, but I'll focus on native mode for the purpose of this post (I'll explain configuring SharePoint integrated mode with Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a user's Windows domain credentials can't be passed to another server to complete the user's request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs because NTLM credentials are valid for only one network hop; subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However, Computer 2 can't use the same credentials to access Computer 3 (so we get the Anonymous login error).

    To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to work around the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established.

    In the illustration above, the tiers represent the following:

    - Client tier (computer 1): the client computer from which an application makes a request.
    - Middle tier (computer 2): the Web server or farm where the client's request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we're only concentrating on native deployments just now).
    - Back-end tier (computer 3): the database/Analysis Services server/cluster where the requested data is stored.

    In order to enable Kerberos authentication for Reporting Services it's necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation and configure the authentication types for Reporting Services.

    Service Principal Names (SPNs) are unique identifiers for services and identify the account's type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered:

        -- SQL Server service
        SETSPN -S mssqlsvc/servername:1433 Domain\SQL

    For named instances, or if the default instance is running under a different port, the specific port number should be used.

        -- Reporting Services service
        SETSPN -S http/servername Domain\SSRS
        SETSPN -S http/servername.domain.com Domain\SSRS

    The SPN should be set for both the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, that should also be registered:

        SETSPN -S http/www.reports.com Domain\SSRS

        -- Analysis Services service
        SETSPN -S msolapsvc.3/servername Domain\SSAS

    Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer:

    Client:
    1. The requesting application must support the Kerberos authentication protocol.
    2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: Account is sensitive and cannot be delegated.

    Servers:
    1. The service accounts must be trusted for delegation on the domain controller.
    2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs.

    In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the 'Account is sensitive and cannot be delegated' option should not be selected). We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation, and do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports).

    Finally, and this is the part that sometimes gets overlooked, we need to configure the authentication type correctly for Reporting Services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server:

        <Authentication>
            <AuthenticationTypes>
                <RSWindowsNegotiate/>
            </AuthenticationTypes>
            <EnableAuthPersistence>true</EnableAuthPersistence>
        </Authentication>

    This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that's left to do is test that Kerberos authentication is working properly by running a report from Report Manager that is configured to use Windows Integrated authentication (connecting to either an Analysis Services or SQL Server back end).

    Resources:

    Manage Kerberos Authentication Issues in a Reporting Services Environment
    http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx
    Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products
    http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176
    How to: Configure Windows Authentication in Reporting Services
    http://msdn.microsoft.com/en-us/library/cc281253.aspx
    RSReportServer Configuration File
    http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication
    Planning for Browser Support
    http://msdn.microsoft.com/en-us/library/ms156511.aspx
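    To sanity-check the registrations afterwards, setspn's list switch shows what ended up on each account (the account names are the placeholders used above):

        SETSPN -L Domain\SSRS
        SETSPN -L Domain\SQL
        SETSPN -L Domain\SSAS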

  • Should I install an AV product on my domain controller?

    - by mhud
    Should I run a server-specific antivirus, regular antivirus, or no antivirus at all on my servers, particularly my domain controllers? Here's some background on why I'm asking this question: I've never questioned that antivirus software should be running on all Windows machines, period. Lately I've had some obscure Active Directory related issues that I have tracked down to antivirus software running on our domain controllers. The specific issue was that Symantec Endpoint Protection was running on all domain controllers. Occasionally, our Exchange server triggered a false positive in Symantec's "Network Threat Protection" on each DC in sequence. Once it had been locked out of every DC, Exchange began refusing requests, presumably because it could not communicate with any Global Catalog servers or perform any authentication. Outages would last about ten minutes at a time and would occur once every few days. It took a long time to isolate the problem because it was not easily reproducible, and generally investigation was done after the issue had resolved itself.

  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation

    Earlier this evening I gave an answer to a question on StackOverflow. The question: "Editing of an existing object should be done in the repository layer or in a service? For example, if I have a User that has debt and I want to change his debt, should I do it in UserRepository or in a service, for example BuyingService, by getting the object, editing it and saving it?"

    My answer: you should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation:

        class User {
            private int debt; // debt in cents
            private string name;

            // getters

            public void makePayment(int cents) {
                debt -= cents;
            }
        }

        class UserRepository {
            public User GetUserByName(string name) {
                // Get appropriate user from database
            }
        }

    A comment I received: "Business logic should really be in a service. Not in a model."

    What does the internet say?

    So, this got me searching, since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit of Work pattern, but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model:

    "There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it."

    To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However, I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is considered more as a façade that delegates work to the underlying model than as an actual work-intensive layer.

    "Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program."

    Which is reinforced here:

    "Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers."

    And here:

    "The service layer should be devoid of any application or business logic and should focus primarily on a few concerns. It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client."

    This is a serious contrast to other resources that talk about the Service Layer:

    "The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction."

    Or the second answer to a question I've already linked:

    "At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer."

    "Solution"?

    Following the guidelines in this answer, I came up with the following approach that uses a Service Layer:

        class UserController : Controller {
            private UserService _userService;

            public UserController(UserService userService) {
                _userService = userService;
            }

            public ActionResult MakeHimPay(string username, int amount) {
                _userService.MakeHimPay(username, amount);
                return RedirectToAction("ShowUserOverview");
            }

            public ActionResult ShowUserOverview() {
                return View();
            }
        }

        class UserService {
            private IUserRepository _userRepository;

            public UserService(IUserRepository userRepository) {
                _userRepository = userRepository;
            }

            public void MakeHimPay(string username, int amount) {
                _userRepository.GetUserByName(username).makePayment(amount);
            }
        }

        class UserRepository {
            public User GetUserByName(string name) {
                // Get appropriate user from database
            }
        }

        class User {
            private int debt; // debt in cents
            private string name;

            // getters

            public void makePayment(int cents) {
                debt -= cents;
            }
        }

    Conclusion

    All together, not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However, this doesn't look like it has anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded.

    - Is it a means to simply extract logic from the controller and put it inside a service instead?
    - Is it supposed to form a contract between the controller and the domain?
    - Should there be a layer between the domain and the service layer?

    And, last but not least, following the original comment, "Business logic should really be in a service. Not in a model.": is this correct? How would I introduce my business logic in a service instead of the model?

    Read the article

  • What are the consequences of giving an AD domain differing NetBIOS and DNS names?

    - by Newt
    In the past, when creating AD domains, I've used the common convention of using a sub-domain of the company's publicly registered domain name, e.g. "corp.mycompany.com" or "int.mycompany.com". I've always accepted the default NetBIOS name when running DCPromo, for fear that creating a NetBIOS name that differs from the sub-domain may cause complications. I've recently been doing a bit of research on the consequences of providing an alternate NetBIOS name. The main reasons behind this are:

    - The NetBIOS name isn't particularly descriptive or unique to the company.
    - Apparently, generic NetBIOS names such as "CORP" or "INT" can cause issues when merging IT systems (although I've not had experience with this myself).
    - Providing something "before the slash" that means more to users (less important).

    In looking at the possible downsides, the only one I can come up with is the disjointed-namespace issue when configuring Exchange. Can anybody with more experience elaborate on my findings? Many thanks

    Read the article

  • Disable "Windows Firewall with Advanced Security" for all profiles(Domain,Public,Standard) in local GP using script help! Windows 7 Clients

    - by JoBo
    We need Windows 7 with the Windows Firewall turned off, so our GOLD image has the firewall turned off for all profiles (Domain, Public, Standard) and the Windows Firewall service disabled. However, the same GOLD image deployed with MDT ("Apply local GPO" as part of the task sequence) ends up with Windows Firewall enabled under "Windows Firewall with Advanced Security", and now we need to remove that setting. These machines are now on a domain where we have no rights/control over the domain-level GPOs, but we do have local admin rights on them. The requirement is to set "Windows Firewall with Advanced Security" to "Not Configured" or "Off" on these machines. In gpedit.msc we can clear the settings manually (after enabling the Windows Firewall service), but doing that by hand on every machine is extra effort. Changing values in the registry gets reverted on restart, since the setting is re-applied from the local GPO. Using GPMC we can connect to each remote computer and set it to Not Configured manually or via a .wfw file, but we are looking for a script or some other lower-effort method to accomplish this. Please suggest. NB: CIA has already reported a similar issue ("How do I turn off Windows 7 Firewall via script or through automation?"), but running netsh advfirewall set allprofiles state off on the already-deployed machines did not make the change (the firewall service on all machines is disabled in the GOLD image). Thanks and regards, Jose
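    One lower-effort approach (a sketch under stated assumptions, not a supported tool) is to clear the machine half of the local GPO by deleting Registry.pol, refreshing policy, and then reasserting the firewall state with netsh. Be aware that deleting Registry.pol clears all registry-based settings in the local machine GPO, not just the firewall node, so this only suits images where the local GPO carries nothing else worth keeping; it must also run elevated.

        using System;
        using System.Diagnostics;
        using System.IO;

        class ClearLocalGpoFirewall
        {
            static void Main()
            {
                // Registry.pol holds the registry-based settings of the local machine GPO.
                // Deleting it clears ALL such settings, not only the firewall policy.
                string pol = Environment.ExpandEnvironmentVariables(
                    @"%SystemRoot%\System32\GroupPolicy\Machine\Registry.pol");
                if (File.Exists(pol))
                    File.Delete(pol);

                // Re-apply policy so the now-empty local GPO takes effect immediately.
                Process.Start("gpupdate.exe", "/force").WaitForExit();

                // With the local GPO no longer enforcing the setting,
                // netsh can turn the firewall off for all profiles.
                Process.Start("netsh.exe",
                    "advfirewall set allprofiles state off").WaitForExit();
            }
        }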

    Read the article

  • Why would someone want to take over control of my domain name?

    - by mike jones
    I was approached by a person wanting to help me set up a website. In order to do this he has requested that I allow him to transfer my domain name to his account, for easier management. I would retain the right of usage and he would pay the bill for maintaining the name. This sounds fishy, but I can't figure out what he hopes to gain if this is a scam. Is this a common practice among 'Administrative Contacts'?

    Read the article

  • Where can I download a list of all .com domains registered in the world?

    - by John
    I just need registered .com domain names. I know this list is available at:

    - http://www.verisigninc.com/en_US/products-and-services/domain-name-services/grow-your-domain-name-business/tld-zone-access/index.xhtml (looks like it could take 4 weeks for approval)
    - http://www.premiumdrops.com/zones.html

    I can also extract domain names using the domain search API at domaintools.com. Is there any other source where I can find this list?
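    Whichever source you use, what you get is a standard DNS zone file: one resource record per line, with delegated domains appearing as NS records. A minimal sketch for extracting unique names, assuming the common bare "NAME NS rdata" layout ("com.zone" is a placeholder file name; files that include TTL/class columns need the field index adjusted):

        using System;
        using System.Collections.Generic;
        using System.IO;

        class ComZoneExtract
        {
            static void Main()
            {
                var names = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

                foreach (string line in File.ReadLines("com.zone"))
                {
                    // Skip comments and directives such as $ORIGIN / $TTL.
                    if (line.StartsWith(";") || line.StartsWith("$"))
                        continue;

                    string[] fields = line.Split(new[] { ' ', '\t' },
                                                 StringSplitOptions.RemoveEmptyEntries);

                    // Delegations look like: "EXAMPLE NS NS1.HOST.COM."
                    if (fields.Length >= 3 &&
                        fields[1].Equals("NS", StringComparison.OrdinalIgnoreCase))
                        names.Add(fields[0].TrimEnd('.') + ".com");
                }

                Console.WriteLine(names.Count + " unique .com domains");
            }
        }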

    Read the article

  • How can I find out if my domain has been added to email blacklists?

    - by Rob Sobers
    We do a lot of mass emailing of our contacts to promote events, send out newsletters, etc. Some people read and react, some people unsubscribe, but I fear that some might actually mark the email as spam. Is there any way to figure out whether my domain has been added to email blacklists or spam registries? Also, if I use a service like MailChimp to send the emails, how would this work? If one unscrupulous customer was using MailChimp for evil, wouldn't it affect all of their customers?
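    As a programmatic complement to the lookup sites, DNS-based blacklists can be queried directly: you append your domain (or reversed IP) to the list's zone and ask for an A record; an answer means "listed", NXDOMAIN means "not listed". A minimal sketch, using the Spamhaus DBL zone as the example list (substitute whichever lists matter to you; note that some large public resolvers are refused by Spamhaus, so query via your own resolver):

        using System;
        using System.Net;
        using System.Net.Sockets;

        class DnsblCheck
        {
            // Returns true if the domain resolves inside the blacklist zone.
            static bool IsListed(string domain, string zone)
            {
                try
                {
                    // Listed entries answer with a 127.0.x.x return code.
                    IPHostEntry entry = Dns.GetHostEntry(domain + "." + zone);
                    return entry.AddressList.Length > 0;
                }
                catch (SocketException)
                {
                    // NXDOMAIN: not on this blacklist.
                    return false;
                }
            }

            static void Main()
            {
                Console.WriteLine(IsListed("example.com", "dbl.spamhaus.org")
                    ? "Listed" : "Not listed");
            }
        }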

    Read the article

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    We will be putting a new Windows Server 2008 Standard Edition server into a single-domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to keep the current Win2k operations master DC as a backup DC, since the other member servers (running 2003) host either accounting/SQL or Exchange. Eventually all the Win2k servers will be decommissioned, but until then I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operations master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?

    Read the article

  • Migrating from .co.uk to .uk [on hold]

    - by DD.
    I currently run the site https://www.example.co.uk and I'm considering migrating to https://www.example.uk to take advantage of the new shorter domain. When migrating the domain in Google Webmaster Tools...will all the authority from the old domain pass if I use 301 redirects? Does anyone know how all the reviews I have collected will get transferred to the new domain when using the AdWords seller ratings? Any other important issues to be wary of when migrating to the new domain?
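    On the mechanics: authority passes only if every old URL answers with a 301 to its exact counterpart on the new host. A minimal sketch of a host-wide permanent redirect, assuming the site runs on ASP.NET (placed in Global.asax.cs); other stacks have equivalent server-level rewrite rules:

        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                Uri url = Request.Url;
                if (url.Host.EndsWith("example.co.uk", StringComparison.OrdinalIgnoreCase))
                {
                    // Preserve path and query so every old URL maps 1:1
                    // onto the new domain before issuing the 301.
                    Response.RedirectPermanent("https://www.example.uk" + url.PathAndQuery);
                }
            }
        }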

    Read the article

  • What are the things to look out for when adding a domain user to your IIS?

    - by Jack
    I have a Windows Server 2008 R2 machine called Jack11 that has joined a domain called Watson.org. It has IIS 7 installed. From my understanding, we need to add the following to the web.config file:

        <system.web>
            <identity impersonate="true" />
        </system.web>

    We also need to ensure that the server Jack11 can ping the domain Watson.org. What other settings do we need to configure so that a user of the domain Watson.org (e.g. WATSON\User1) can access the application in the server's IIS? I ask because currently there is a problem:

        Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'WATSON\User1'.

    The error message is displayed when User1 tries to access one of the web applications in Jack11's IIS; that web application also retrieves records from a database hosted on SQL Server 2008 Enterprise on the same server, Jack11.
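    On the SqlException itself: with <identity impersonate="true" /> and integrated security, the web application presents the browsing user's Windows token to SQL Server, so WATSON\User1 (or a domain group containing that user) needs a login on the instance and a user in the database. A minimal sketch of the connection the application would be making; "MyAppDb" is a placeholder for the real database name:

        using System;
        using System.Data.SqlClient;

        class ConnectionProbe
        {
            static void Main()
            {
                // Integrated Security=SSPI sends the caller's Windows token to SQL Server.
                // Under impersonation in IIS, that caller is the browsing domain user,
                // which is why that account must be granted a SQL Server login.
                string cs = "Server=Jack11;Database=MyAppDb;Integrated Security=SSPI";
                using (var conn = new SqlConnection(cs))
                {
                    conn.Open();
                    Console.WriteLine("Connected as " + Environment.UserName);
                }
            }
        }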

    Read the article
