Search Results

Search found 245 results on 10 pages for 'authoritative'.

Page 8 of 10

  • Specific DNS sometimes resolves to wildcard, incorrectly

    - by Mojo
    I have an intermittent problem, and I'm not sure where to start troubleshooting it. In our dev environment, we have two visible IP addresses on load balancers: one for the front-end, and one for a number of back-end service machines. The front-end is configured to take a wildcard DNS name to support generic "portals":

        dev.example.com      A      10.1.1.1
        *.dev.example.com    CNAME  dev.example.com

    The back-end servers are all specific names within the same space:

        core.dev.example.com    A      10.1.1.2
        cms.dev.example.com     CNAME  core.dev.example.com
        search.dev.example.com  CNAME  core.dev.example.com

    Here's the problem. Periodically a developer or a program trying to reach, say, cms.dev.example.com will get a result that points to the front-end instead of the back-end load balancer:

        cms.dev.example.com is an alias to core.dev.example.com
        core.dev.example.com is an alias to dev.example.com     (WRONG!)
        dev.example.com 10.1.1.1

    The developers are all on Mac OS X machines, though I've seen the problem occur on an Ubuntu machine as well, using a local cloud host DNS resolver. Sometimes the developer is using a VPN, which directs DNS to its own resolver, and sometimes he's on the local net using a DNS resolver assigned by the NAT router. Sometimes clearing the Mac OS X DNS cache, then logging into and back out of the VPN, makes the problem go away. The authoritative origin server is on Zerigo, and a dig directly against their name servers always seems to give the correct answer. The published cache time (TTL) for these records is 15 minutes, but the problem has been intermittent for about a week. Any troubleshooting suggestions?
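
    One way to pin down which resolver hands back the collapsed chain is to query each candidate resolver directly and log what it returns. This is a minimal sketch, assuming the third-party dnspython package (2.x API, where resolve() replaced query()); the resolver addresses are placeholders for the VPN resolver, the NAT router, and a Zerigo name server.

        # Probe the same name against several resolvers and print the full answer
        # section, so a bad CNAME chain can be caught with a timestamp.
        import datetime
        import dns.resolver   # third-party: pip install dnspython

        RESOLVERS = {"vpn": "10.0.0.53", "router": "192.168.1.1", "zerigo": "203.0.113.10"}

        def check(name="cms.dev.example.com"):
            for label, server in RESOLVERS.items():
                r = dns.resolver.Resolver(configure=False)
                r.nameservers = [server]
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                try:
                    answer = r.resolve(name, "A")
                except Exception as exc:
                    print(f"{stamp} [{label}] query failed: {exc}")
                    continue
                print(f"{stamp} [{label}]")
                for rrset in answer.response.answer:   # includes any CNAME records
                    print(f"    {rrset}")

        if __name__ == "__main__":
            check()

    Run from cron or in a loop on an affected machine, this records exactly when a given resolver starts returning the wildcard answer.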

    Read the article

  • Fixed and dynamic IPs in ISC DHCPD lead to double lease

    - by GorillaPatch
    I would like to have a small dynamic address range, while most clients are assigned a fixed IP address. My dhcpd.conf looks like this:

        use-host-decl-names on;
        authoritative;
        allow client-updates;
        ddns-updates on;

        # settings for DHCP leases
        default-lease-time 3600;
        max-lease-time 86400;
        lease-file-name "/var/lib/dhcpd/dhcpd.leases";

        subnet 192.168.11.0 netmask 255.255.255.0 {
            ddns-updates on;
            pool {
                # IP range which will be assigned statically
                range 192.168.11.1 192.168.11.240;
                deny all clients;
            }
            pool {
                # small dynamic range, used for temporary devices
                range 192.168.11.241 192.168.11.254;
            }
        }

        group {
            host pc1 {
                hardware ethernet xx:xx:xx:xx:xx:xx;
                fixed-address 192.168.11.11;
            }
        }

    The motivation for the pool declaration with "deny all clients" comes from the ISC DHCPD page at http://www.isc.org/files/auth.html: it allows hosts to be added to the network first, receive a temporary IP from the 241-254 address range, and have an explicit host declaration written for them later; upon the next connect they receive the right configuration. The problem is that I am getting error messages that 192.168.11.13 has both a dynamic and a static lease. I am a bit confused, as I expected the pool declaration with "deny all clients" would not count as dynamic:

        Dynamic and static leases present for 192.168.11.13.
        Remove host declaration pc1 or remove 192.168.11.13
        from the dynamic address pool for 192.168.11.0/24

    Is there a way to have the DHCP server send a DHCPNAK to clients if they have a host statement, and still retain this dynamic range?

    Read the article

  • Creating Active Directory on an EC2 box

    - by Chiggins
    So I have Active Directory set up on a Windows Server 2008 Amazon EC2 server. It's set up correctly, I think; I never got any errors with it. Just to test that I got it all set up correctly, I have a Windows 7 Professional virtual machine on my network to join to AD. I set the VM to use the Active Directory box as its DNS server. I type in my domain to join it, but I get the following error:

        DNS was successfully queried for the service location (SRV) resource record
        used to locate a domain controller for domain "ad.win.chigs.me":

        The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.win.chigs.me

        The following domain controllers were identified by the query:
        ip-0af92ac4.ad.win.chigs.me

        However no domain controllers could be contacted.

        Common causes of this error include:
        - Host (A) or (AAAA) records that map the names of the domain controllers to
          their IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or
          are not running.

    It seems that I can talk to Active Directory, but when I'm trying to contact the Domain Controller, it's giving a private IP to connect to - at least that's what I can make out of it. Here are some nslookup results:

        > win.chigs.me
        Server:   ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150

        Non-authoritative answer:
        Name:     ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  10.249.42.196
        Aliases:  win.chigs.me

        > ad.win.chigs.me
        Server:   ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150

        Name:     ad.win.chigs.me
        Address:  10.249.42.196

    win.chigs.me and ad.win.chigs.me are CNAMEs pointing to my EC2 box. Any idea what I need to do so that I can join my virtual machine to the EC2 Active Directory setup I have? Thanks!

    Read the article

  • DNS stops working occasionally

    - by Andrey
    I have tried using Google DNS and the one provided with DHCP. At some point my PC (Windows 7) stops resolving domain names, but the DNS server is perfectly pingable. What can be the reason? Thanks!

    Edit: It is really weird. It can stop and start working within a few seconds. DNS requests are timing out while the DNS server is pingable at the same time. I can't understand how this could be possible and what the issue might be.

        C:\Windows\System32\drivers\etc>nslookup google.com
        DNS request timed out.
            timeout was 2 seconds.
        Server:  UnKnown
        Address:  8.8.8.8

        Non-authoritative answer:
        Name:    google.com
        Addresses:  173.194.78.102
                    173.194.78.101
                    173.194.78.139
                    173.194.78.113
                    173.194.78.100
                    173.194.78.138

        C:\Windows\System32\drivers\etc>nslookup google.com
        DNS request timed out.
            timeout was 2 seconds.
        Server:  UnKnown
        Address:  8.8.8.8

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to UnKnown timed-out
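
    Because the outages come and go within seconds, a small probe that resolves a name on a fixed interval and timestamps each failure can help correlate the drops with VPN, NIC, firewall or antivirus events. This is a minimal standard-library sketch; the host name and interval are arbitrary choices.

        # Resolve a name in a loop and log timestamped successes and failures,
        # to see exactly when and for how long resolution drops out.
        import socket
        import time
        from datetime import datetime

        HOST = "google.com"     # arbitrary test name
        INTERVAL = 5            # seconds between probes

        while True:
            stamp = datetime.now().isoformat(timespec="seconds")
            try:
                addrs = {info[4][0] for info in socket.getaddrinfo(HOST, 80)}
                print(f"{stamp} OK   {HOST} -> {sorted(addrs)}")
            except socket.gaierror as exc:
                print(f"{stamp} FAIL {HOST}: {exc}")
            time.sleep(INTERVAL)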

    Read the article

  • Is USB supported in safe mode on XP?

    - by Hugh Allen
    According to Microsoft, "Universal Serial Bus Devices Do Not Work in Safe Mode" under XP. However, in my testing this is incorrect: USB keyboards, mice and flash drives seem to work fine in safe mode (I made sure the BIOS was not providing support). This makes sense, because a failure of a standard input device would be, in Microsoft parlance, a "bad user experience". So, is USB supported in safe mode on XP?

    If your answer is no (agreeing with Microsoft), please provide a test case, preferably in a virtual machine, where a standard HID keyboard or mouse fails, and state the hardware / BIOS / OS configuration. Note that you will need a PS/2 keyboard attached in addition to your USB device(s) in order to use the boot menu; virtual machine software usually emulates a PS/2 keyboard. Alternatively, you could add the /safeboot switch to boot.ini. If your answer is yes, please provide a link to some supporting documentation (either from Microsoft or someone authoritative). Your answer might be "devices X, Y and Z are supported but nothing else", in which case also give a link.

    Read the article

  • DNS issue on Fedora 12? wget wordpress.org fails where wget www.google.com works

    - by Tom Auger
    I'm administering a Fedora 12 box, but am quite new to networking specifics. Recently one of our WordPress apps hosted on the server has stopped being able to perform its auto-update or auto-download of plugins. Investigating further, I have tried the following:

        $ wget wordpress.org
        --2010-12-17 11:26:50--  http://wordpress.org/
        Resolving wordpress.org... failed: Temporary failure in name resolution.
        wget: unable to resolve host address 'wordpress.org'

    Whereas:

        $ wget www.google.com
        --2010-12-17 11:27:26--  http://www.google.com/
        Resolving www.google.com... 74.125.226.82, 74.125.226.84, 74.125.226.80, ...
        Connecting to www.google.com|74.125.226.82|:80... connected.
        HTTP request sent, awaiting response... 302 Found
        Location: http://www.google.ca/ [following]
        --2010-12-17 11:27:26--  http://www.google.ca/
        Resolving www.google.ca... 173.194.32.104
        Connecting to www.google.ca|173.194.32.104|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: 'index.html.4'

            [ <=> ] 9,079  --.-K/s  in 0.02s

        2010-12-17 11:27:26 (462 KB/s) - 'index.html.4'

    Interestingly:

        $ ping wordpress.org
        PING wordpress.org (72.233.56.138) 56(84) bytes of data.
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=1 ttl=50 time=81.5 ms
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=2 ttl=50 time=67.3 ms
        ^C
        --- wordpress.org ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1783ms
        rtt min/avg/max/mdev = 67.361/74.448/81.536/7.092 ms

    and:

        $ nslookup wordpress.org
        Server:         192.168.2.1
        Address:        192.168.2.1#53

        Non-authoritative answer:
        Name:    wordpress.org
        Address: 72.233.56.138
        Name:    wordpress.org
        Address: 72.233.56.139

    nscd has been stopped and flushed. iptables appears to be clean. At this point I have exhausted my limited abilities to diagnose the issue. Can anyone suggest a resolution path?
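
    One way to narrow this down is to compare what the C library's resolution path returns (the same path wget uses) with a query sent straight at the configured DNS server. This is a minimal sketch; it assumes the third-party dnspython package and reuses the 192.168.2.1 server shown in the nslookup output above.

        # Compare libc-style resolution (getaddrinfo, as used by wget) with a
        # direct query to the DNS server, to see which layer is failing.
        import socket
        import dns.resolver   # third-party: pip install dnspython

        NAME = "wordpress.org"
        SERVER = "192.168.2.1"   # the server nslookup reported

        try:
            infos = socket.getaddrinfo(NAME, 80, proto=socket.IPPROTO_TCP)
            print("getaddrinfo :", sorted({i[4][0] for i in infos}))
        except socket.gaierror as exc:
            print("getaddrinfo failed:", exc)

        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [SERVER]
        try:
            print("direct query:", [rr.to_text() for rr in r.resolve(NAME, "A")])
        except Exception as exc:
            print("direct query failed:", exc)

    If the direct query succeeds while getaddrinfo fails, the problem is likely in the local resolution path (/etc/resolv.conf, /etc/nsswitch.conf or similar) rather than in the DNS server itself.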

    Read the article

  • PTR record not valid for all domains

    - by charnley
    We have an issue sending emails to certain domains, namely Time Warner and Cox. Last week, we decommissioned our Exchange 2003 server and now our Exchange 2010 server is doing all of the transport for our domain. We run our own authoritative name servers, so we are in charge of the DNS and have modified our PTR record to reflect the new server. All mailflow is working except for these 2 domains. When I telnet on port 25 to the mail servers for Cox and Time Warner I am receiving errors.

    For Cox the error is:

        554... rejected - no rDNS

    And when I telnet on port 25 to the Time Warner mail server we get this:

        554 5.7.1 - Connection refused. IP name lookup failed for x.x.x.x

    I have run through the outbound SMTP test on Microsoft Remote Connectivity Analyzer and get 100% completely successful results. MXToolbox comes up with all successful tests on SMTP as well, showing correct reverse banner check, and no blacklisting. DNSQueries.com shows a valid reverse DNS entry as well for us. Outbound emails to these 2 domains continue to sit in the queue. Any ideas or advice would be greatly appreciated. Thanks!
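
    Since both rejections point at reverse DNS, it is worth confirming, from a network outside your own, that the sending IP's PTR record resolves and that the name it returns resolves back to the same IP (forward-confirmed reverse DNS). This is a minimal standard-library sketch; the address is a placeholder for the Exchange server's public IP.

        # Forward-confirmed reverse DNS check for an outbound mail IP.
        # 203.0.113.25 is a placeholder; substitute the Exchange server's public address.
        import socket

        ip = "203.0.113.25"
        try:
            ptr_name, _, _ = socket.gethostbyaddr(ip)            # PTR lookup
            print("PTR:", ptr_name)
            forward = socket.gethostbyname_ex(ptr_name)[2]       # A records for that name
            print("A  :", forward)
            if ip in forward:
                print("forward-confirmed reverse DNS looks OK")
            else:
                print("mismatch: PTR name does not resolve back to", ip)
        except (socket.herror, socket.gaierror) as exc:
            print("lookup failed:", exc)

    Some large receivers also expect the PTR name to line up with the HELO/EHLO name the send connector announces, so comparing that value against the PTR result can be worthwhile.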

    Read the article

  • BIND master/slave does not respond for queries for its slave

    - by Savas
    The systems are all CentOS 6.2. Let's say I have a masterdns with IP 10.2.1.2, authoritative for the 10.2.1.X subnet, and let's say its domain is example.com. I have another two subnets, 10.2.2.X and 10.1.2.X. Each one has its own DNS server, dns2 and dns1 respectively, and let's say these serve the domains dom2.example.com and dom1.example.com respectively. The masterdns server has slave zones for dns1 and dns2 and responds to requests OK. dns1 and dns2 have the masterdns zones as slaves too, and respond to requests OK. So masterdns has as slave zones all the subordinate domains of example.com. Each of dns1 and dns2 uses masterdns as a forwarder (which in turn uses another DNS cache/proxy server) for resolution of public internet domain names. That works OK too.

    The problem is, and I cannot figure it out: why do queries at dns1 for hostnames in dom2.example.com not resolve? If I use nslookup - masterdns on the dns1 server (that is, querying masterdns directly), they resolve. If I use nslookup locally, meaning queries are sent to dns1, hosts in dom2.example.com do not resolve. Everything else works OK.

    Read the article

  • Microsoft Licensing Scenario/Questions [closed]

    - by user17455
    Possible Duplicate: Can you help me with my software licensing question?

    I am a member of a team developing a third-party application (APP) that listens for and services connections from remote devices via TCP. Some of these remote devices allow 1 or more users to interact with the remote device; on others, it is impossible for a user to interact with the device. The user/remote device makes no use of any Windows Server service - not DHCP, not IIS, not File Server, not Print Server, not AD. The remote device's only connection to the Windows Server machine is through the APP's TCP ports.

    Our company has no interaction with Microsoft. We do not have a Microsoft sales team. Past inquiries have determined that it is cheaper for us to buy Microsoft software (and CALs) retail than to enter into any kind of "arrangement" with Microsoft. I have many questions about SQL Server CALs and Windows Server 2008 CALs. How can I obtain authoritative/legally binding answers? I am not looking for FREE legal advice. I AM looking for FREE advice about who/what/where I can responsibly spend my money to get meaningful information. I fear that passing this on to the local company law firm will just mean that I will be paying them to educate themselves on Microsoft licensing - and if that's like writing code to a new Microsoft API, they are not going to get it right the first time. Going to Microsoft for answers sounds like swimming up to a hungry shark and asking "One leg or two?" I am hoping someone has been down this road before and knows a law firm/lawyer that is experienced in these matters. Any help/suggestion welcome. Thanks.

    Read the article

  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cPanel 1.0 (RC1). I am using the WordPress plugin Backup Buddy, which requires HTTP loopback connections to monitor / complete backups. This works fine on memelab.com.au, but doesn't work on any subdomain (e.g. staging.memelab.com.au). Is it possible to set up an A record or some such to remedy this? I'm aware of a workaround (setting WP_ALTERNATE_CRON) but I find it unsatisfactory due to the messy URLs (see BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled).

    Here is the reply from my host:

        …as the main domain has its own separate DNS entry, it has a localhost entry which
        helps for loopback connections, whereas subdomains don't have a separate DNS zone,
        so it is not possible to create loopback connections for them.

    I have cPanel access to the 'advanced zone editor' - is there anything tricky I can do there? Maybe 127.0.0.2? (I remember reading that there were at least 8 local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au, which points to 127.0.0.1. I've just tried entering a new A record: localhost.itours.memelab.com.au pointing to 127.0.0.2. I still get the warning in Backup Buddy that loopback is not active, and cPanel won't let me enter 127.0.0.1 (guess it doesn't work like that!).

        nslookup itours.memelab.com.au
        Server:         203.88.112.33
        Address:        203.88.112.33#53

        Non-authoritative answer:
        Name:    itours.memelab.com.au
        Address: 117.55.224.177
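
    A quick way to test loopback behaviour independently of the plugin is to request 127.0.0.1 directly while sending the subdomain in the Host header, which is essentially what a loopback connection to a name-based virtual host amounts to. This is a minimal standard-library sketch, assuming Python is available on the hosting account (e.g. over SSH); the host names are the site's own.

        # Send an HTTP request to 127.0.0.1 with the subdomain's Host header,
        # mimicking the loopback connection the backup plugin needs.
        import urllib.request

        def loopback_check(hostname, path="/"):
            req = urllib.request.Request(f"http://127.0.0.1{path}",
                                         headers={"Host": hostname})
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    print(hostname, "->", resp.status, resp.reason)
            except Exception as exc:
                print(hostname, "-> failed:", exc)

        for name in ("memelab.com.au", "staging.memelab.com.au"):
            loopback_check(name)

    Comparing the two results shows whether the difference really is DNS, as the host suggests, or something in the web server's handling of the two virtual hosts.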

    Read the article

  • Ubuntu 10.04 bind9 local zone include files and apparmor

    - by Gilgongo
    Rather than putting all my zones in one named.conf.local file, I'd like to have them in groups that I can manage as separate files. So, I've tried putting the following into named.conf.local:

        include "/home/zones/group1.conf";
        include "/home/zones/group2.conf";
        include "/home/zones/group3.conf";

    However, when I restart named, I see "permission denied" errors in the logs. Ubuntu uses AppArmor for bind, so I also added the following in /etc/apparmor.d/usr.sbin.named:

        /home/zones/group1.conf r,
        /home/zones/group1.conf r,
        /home/zones/group1.conf r,

    Now, when I restart named, all appears to be well. Zones are loaded (I think). However, a day or two later, I see my secondary name server complaining that the primary is telling it that it's not authoritative for those domains. I then have to put all the domains back into the named.conf.local file again. How can I get bind9 to use include files in this way? I don't know much about AppArmor, so that may or may not be the issue here, but I've used include files in this way on Debian OK.

    Read the article

  • PCs on domain can not resolve external IP addresses using the DC's DNS Server

    - by Ben
    I currently have a domain controller which handles all DHCP and DNS. The DHCP works just fine and the domain controller itself can use the internet with no issues. However, PCs that are part of the domain are not able to reach external websites, only internal ones. Does anyone have any way I can solve this issue? Thank you

    Server: Windows Server 2008 R2
    PC: Win7 Enterprise x64

    Edit (on the domain controller):

        C:\Users\bcollyer>nslookup google.com
        Server:  localhost
        Address:  127.0.0.1

        Non-authoritative answer:
        Name:    google.com
        Addresses:  2a00:1450:4009:809::100e
                    173.194.41.166
                    173.194.41.165
                    173.194.41.169
                    173.194.41.162
                    173.194.41.161
                    173.194.41.160
                    173.194.41.168
                    173.194.41.167
                    173.194.41.164
                    173.194.41.163
                    173.194.41.174

    Edit 2:

        C:\Users\bcollyer>netstat -rn

        Interface List
         12...30 85 a9 f7 8a 21 ......Atheros AR8161/8165 PCI-E Gigabit Ethernet Controller (NDIS 6.20)
          1...........................Software Loopback Interface 1
         13...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter
         11...00 00 00 00 00 00 00 e0  Microsoft Teredo Tunneling Adapter

        IPv4 Route Table
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0      172.16.0.67     172.16.0.202     20
                127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1    306
               172.16.0.0      255.255.0.0         On-link      172.16.0.202    276
             172.16.0.202  255.255.255.255         On-link      172.16.0.202    276
           172.16.255.255  255.255.255.255         On-link      172.16.0.202    276
                224.0.0.0        240.0.0.0         On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0         On-link      172.16.0.202    276
          255.255.255.255  255.255.255.255         On-link         127.0.0.1    306
          255.255.255.255  255.255.255.255         On-link      172.16.0.202    276
        Persistent Routes:
          None

        IPv6 Route Table
        Active Routes:
         If Metric Network Destination      Gateway
          1    306 ::1/128                  On-link
          1    306 ff00::/8                 On-link
        Persistent Routes:
          None

    BTW I have no JavaScript on the server so I can't reply to individual answers... Sorry!
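
    To separate "the DC's DNS service is not answering clients" from "the DC is not forwarding external names", it can help to query the DC's DNS directly from an affected client and compare with a public resolver. This is a minimal sketch assuming the third-party dnspython package; 172.16.0.202 is taken from the route table above and may need adjusting.

        # From an affected client, query the DC's DNS service directly for an external
        # name, then query a public resolver, to see whether the DC is forwarding.
        import dns.resolver   # third-party: pip install dnspython

        TARGET = "google.com"
        SERVERS = {"domain controller": "172.16.0.202", "public resolver": "8.8.8.8"}

        for label, server in SERVERS.items():
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            r.lifetime = 5
            try:
                answers = [rr.to_text() for rr in r.resolve(TARGET, "A")]
                print(f"{label} ({server}): {answers}")
            except Exception as exc:
                print(f"{label} ({server}): failed - {exc}")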

    Read the article

  • Is domain-transfer inherently safe for downtime when the name servers remain the same?

    - by jlmt
    I've been reading around this topic towards understanding whether there's some or no chance of downtime during an upcoming domain transfer for 15 live and very critical domains. In our case there are three companies involved: CompanyA is the original registrar and DNS host, CompanyB is the new DNS host, and CompanyC is the new registrar. I've already changed the nameservers for all domains to those of CompanyB. We suffered some downtime because CompanyA deleted their hosted DNS for our domains directly after the change, but the changes propagated and we're now able to configure our DNS with CompanyB.

    From what I understand (please correct where wrong!):

    - There exists an SOA record that points oneofourdomains.com to ns.companyb.com. That record is maintained and authoritatively hosted by the ccTLD registry for the domain (e.g. Verisign for .com).
    - CompanyA currently has the ability to change the SOA record because they're the registrar.
    - There exist NS records for oneofourdomains.com, which are also related to the link from domain name to nameserver, are similarly hosted by the ccTLD, and which CompanyA are also able to change while acting as registrar.
    - Neither CompanyB nor CompanyC currently have any control over the SOA or NS records.
    - CompanyA are unable to cause us (DNS) problems during the transfer by dropping service early, because they are not the authoritative source for the SOA and NS records.
    - When we transfer the domains, it's administrative control of the SOA and NS records that will be transferred to CompanyC.
    - As long as we advise CompanyC that the SOA and NS records must not change (as regards pointing to CompanyB's nameservers), there's no need for any kind of DNS change, and therefore no possibility of downtime.

    Is my understanding of this correct? My fear is that CompanyA will somehow cut us off again, and their support dept hasn't given me much confidence in their understanding of the topic.

    Read the article

  • Exchange 2010: Send emails via SMTP with custom From address to outside the domain

    - by marsze
    The requirements: (1) connect to Exchange via SMTP with (2) basic authentication, and send emails with a (3) custom From address to (4) recipients outside the domain.

    I was able to get (1) - (3) working. I created a dedicated receive connector for this task and configured it like this:

    Permissions:

        ms-Exch-SMTP-Accept-Any-Recipient (for authenticated users)
        ms-Exch-SMTP-Accept-Authoritative-Domain-Sender (for authenticated users)
        ms-Exch-SMTP-Accept-Any-Sender (for authenticated users)

    Authentication:

        TLS
        Basic Authentication (without TLS)
        Exchange Server Authentication

    However, I'm still struggling with (4): I can send with "fake" From addresses to recipients inside the domain. Also, I can send with the original From address to recipients outside the domain. Can you tell me what I'm missing to configure Exchange to send emails with changed From addresses to recipients outside the domain? (Or is this even possible at all?) Thanks.

    UPDATE: I have to correct myself - it seems to be working after all. There must have been some issue with the mailbox I used for testing; it turned out it's working with other external mailboxes. However, I still have no idea what was different there... Anyway, you can take this as documentation on how to configure Exchange in such a way ;)
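
    For reference, the client side of requirements (1)-(4) looks roughly like the sketch below. Host, port, credentials and addresses are placeholders, and STARTTLS on port 587 is an assumption; adjust to however the dedicated receive connector is actually exposed.

        # Authenticated SMTP submission with a custom From address to an external
        # recipient. Host, port, credentials and addresses are placeholders.
        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "custom-sender@example.org"      # not the authenticated mailbox
        msg["To"] = "someone@external-domain.example"
        msg["Subject"] = "Connector test"
        msg.set_content("Test message sent through the dedicated receive connector.")

        with smtplib.SMTP("mail.example.org", 587, timeout=30) as smtp:
            smtp.starttls()                            # drop this line if the connector is plain
            smtp.login("EXAMPLE\\serviceaccount", "secret")
            smtp.send_message(msg)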

    Read the article

  • Custom attributes in Active Directory - determining usage/function and possible removal options?

    - by HopelessN00b
    I've bumped into a highly-customized Active Directory environment (2003 FL) that's got me wondering if there's any particularly easy way to figure out what a custom attribute's function is, and what, if anything, is "using" that particular attribute. And then what some good options for potentially removing custom attributes from the schema might be - aside from a restore or starting from scratch, if such an option exists.

    For example, I think I can be fairly certain what the "isDumbass" attribute with a value of TRUE means, but not so much with "IRPextCONST", containing a value of 393684. Likewise, I'd think it should be pretty safe to delete the "isDumbass" attribute, but would like to a) be sure and b) find out what's querying or updating that value anyway, because I suspect that anything using that attribute might be next on the list of things to remove. Ideally, without having to run a search on the contents of every custom script and bit of source code I can get my hands on, of course.

    And finally, aside from rebuilding from scratch, or doing an authoritative AD restore from backups that don't exist... is there a way to delete a given custom attribute? (Not blank the value, but actually delete the attribute from the schema - some folks would rather not have attributes like "FaggotMeter" and "DouchebagCounter" hanging around.) I've been able to find and successfully test a method on Windows 2k, but it seems like Microsoft disabled this option in SP4, and the domain in question is at the 2003 functional level.

    Read the article

  • Big Data – Beginning Big Data – Day 1 of 21

    - by Pinal Dave
    What is Big Data? I want to learn Big Data. I have no clue where and how to start learning about it. Does Big Data really mean the data is big? What are the tools and software I need to know to learn Big Data?

    I often receive the questions mentioned above. They are good questions, and honestly, when we search online it is hard to find authoritative and authentic answers. I have been working with Big Data and NoSQL for a while, and I have decided to discuss this subject here on the blog. Over the next 21 days we will understand what is so big about Big Data.

    Big Data – Big Thing!

    Big Data is becoming one of the most talked about technology trends nowadays. The real challenge for a big organization is to get maximum value out of the data already available and to predict what kind of data to collect in the future. How to take the existing data and make it meaningful so that it provides accurate insight into past data is one of the key discussion points in many executive meetings in organizations. With the explosion of data the challenge has gone to the next level, and now Big Data is becoming a reality in many organizations.

    Big Data – A Rubik's Cube

    I like to compare Big Data with the Rubik's cube. I believe they have many similarities. Just like a Rubik's cube, it has many different solutions. Let us visualize a Rubik's cube solving challenge where many experts are participating. If you take five Rubik's cubes, mix them up the same way, and give them to five different experts to solve, it is quite possible that all five will solve the cube in a matter of seconds. But if you pay close attention, you will notice that even though the final outcome is the same, the route taken to solve the cube is not. Every expert will start at a different place and will try to solve it with different methods. Some will solve one color first and others will solve another color first. Even though they follow the same kind of algorithm to solve the puzzle, they will start and end at different places and their moves will differ on many occasions. It is nearly impossible for two experts to take exactly the same route.

    Big Market and Multiple Solutions

    Big Data is exactly like a Rubik's cube - even though the goal of every organization and expert is the same, to get maximum value out of the data, the route and the starting point are different for each organization and expert. As organizations evaluate and architect Big Data solutions they are also learning the ways and opportunities which are related to Big Data. There is not a single solution to Big Data, just as there is not a single vendor which can claim to know all about Big Data. Honestly, Big Data is too big a concept and there are many players - different architectures, different vendors and different technology.

    What is Next?

    In this 21 day series we will be exploring many essential topics related to Big Data. I do not claim that you will be a master of the subject after 21 days, but I claim that I will be covering the following topics in easy to understand language:

    - Architecture of Big Data
    - Big Data Management and Implementation
    - Different Technologies – Hadoop, MapReduce
    - Real World Conversations
    - Best Practices

    Tomorrow

    In tomorrow's blog post we will try to answer one of the very essential questions – What is Big Data?
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the risk of ERP consolidation starts first and foremost with your data. This is nothing new; companies with multiple misaligned ERP systems are often putting inordinate risk on their business. It can translate to too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don't forget the finance side! When goods are shipped and promises are kept or not kept, there's the issue of accounts: no single chart of accounts translates to no accountability. So - I've decided. I need to consolidate!

    Well, you can't consolidate ERP applications [for that matter, any of your applications] without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, and what information is being shared, or not being shared. Most importantly, it means making sure that the data is mastered. What is the best way to do this? In the recent webcast "Reduce ERP Consolidation Risks with Oracle Master Data Management" we outlined 3 key guidelines:

    - #1: Consolidate your Product Data
    - #2: Consolidate your Customer, Supplier (Party Data)
    - #3: Consolidate your Financial Data

    Together these help customers achieve reduced risk, better customer intimacy, reduced inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data. Ultimately this gave them 60% cost savings for the year on IT spend.

    Oracle's Solution for ERP Consolidation: Master Data Management

    Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It's optimized to work with any application source (not just Oracle's) and can integrate using technology from Oracle Fusion Middleware (i.e. GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it's important to leverage the AIA and SOA capabilities of Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle's MDM strategy is a unique offering in the industry, one that has common elements across the top and bottom in Middleware, BI/DW, and Engineered Systems, combined with Enterprise Data Quality to enable comprehensive data governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product, and financial data.

    But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it's a specific ERP process like we talked about or something that is custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to our Oracle MDM website at www.oracle.com/goto/mdm

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background

    I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to the possibility of code sharing between server and client: I'll just put the deterministic simulator in the /shared folder, which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter.

    Problem

    Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, for each class, some methods are shared and some are client-specific. For instance, the Unit class has an addHp method which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client, like adding announcements) and isMyUnit (quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call:

        if (!this.game.isServer()) { this.onDeathInClient(); }

    This approach has worked pretty well so far, in terms of functionality. But as the codebase grew bigger, this style of coding began to seem a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite) and can run into problems when the server cannot instantiate the variable (it doesn't have an Image class like browsers do). So my question is: is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems.

    Possible workaround (with problems)

    I could use Javascript mixins that are only added when in a browser. Thus, in the /shared/unit.js file, I would have this code at the end:

        if (typeof exports !== 'undefined') module.exports = Unit;
        else mixin(Unit, LocalUnit);

    Then /client/localunit.js would store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and do this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists on the server and the client different. Now if I want to clear all subscribed events only from the shared simulator, I can't. Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do:

        if (!this.game.isServer()) { this.publishOnClient(event); }

    All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.

    Read the article

  • Reference for proper handling of PID file on Unix

    - by bignose
    Where can I find a well-respected reference that details the proper handling of PID files on Unix?

    On Unix operating systems, it is common practice to “lock” a program (often a daemon) by use of a special lock file: the PID file. This is a file in a predictable location, often ‘/var/run/foo.pid’. The program is supposed to check when it starts up whether the PID file exists and, if the file does exist, exit with an error. So it's a kind of advisory, collaborative locking mechanism. The file contains a single line of text, being the numeric process ID (hence the name “PID file”) of the process that currently holds the lock; this allows an easy way to automate sending a signal to the process that holds the lock.

    What I can't find is a good reference on expected or “best practice” behaviour for handling PID files. There are various nuances: how to actually lock the file (don't bother? use the kernel? what about platform incompatibilities?), handling stale locks (silently delete them? when to check?), when exactly to acquire and release the lock, and so forth. Where can I find a respected, most-authoritative reference (ideally on the level of W. Richard Stevens) for this small topic?
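
    Not the authoritative reference being asked for, but a minimal sketch of one common approach may make the nuances concrete: create the file atomically, write the PID, and treat an existing file as stale only if its recorded PID no longer maps to a live process. Error handling and the race between removal and re-creation are simplified for illustration.

        # One common PID-file pattern: atomic create plus stale detection via signal 0.
        # Simplified sketch; /var/run/foo.pid is the conventional location mentioned above.
        import os

        PIDFILE = "/var/run/foo.pid"

        def acquire_pidfile(path=PIDFILE):
            while True:
                try:
                    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
                except FileExistsError:
                    with open(path) as f:
                        other = int(f.read().strip() or "0")
                    alive = other > 0
                    if alive:
                        try:
                            os.kill(other, 0)        # signal 0: existence probe only
                        except ProcessLookupError:
                            alive = False
                        except PermissionError:
                            pass                     # exists but owned by another user
                    if alive:
                        raise SystemExit(f"already running (pid {other})")
                    os.unlink(path)                  # stale lock from a dead process
                    continue                         # retry the atomic create
                with os.fdopen(fd, "w") as f:
                    f.write(f"{os.getpid()}\n")
                return path

        def release_pidfile(path=PIDFILE):
            try:
                os.unlink(path)
            except FileNotFoundError:
                pass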

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    need help with logging all activities on a site as well as database changes. requirements: * should be in database * should be easily searchable by initiator (user name / session id), event (activity type) and event parameters i can think of a database design but either it involves a lot of tables (one per event) so i can log each of the parameters of an event in a separate field OR it involves one table with generic fields (7 int numeric and 7 text types) and log everything in one table with event type field determining what parameter got written where (and hoping that i don't need more than 7 fields of a certain type, or 8 or 9 or whatever number i choose)... example of entries (the usual things): [username] login failed @datetime [username] login successful @datetime [username] changed password @datetime, estimated security of password [low/ok/high/perfect] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] changed profile name from [old name] to [new name] @datetime [username] verified name with [credit card type] credit card @datetime datbase table [table name] purged of old entries @datetime via automated process etc... so anyone dealt with this before? any best practices / links you can share? i've seen it done with the generic solution mentioned above, but somehow that goes against what i learned from database design, but as you can see the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT i do LOVE the one event per table solution more than the generic one. any thoughts? edit: also, is there maybe an authoritative list of such (likely) events somewhere? thnx stack overflow says: the question you're asking appears subjective and is likely to be closed. my answer: probably is subjective, but it is directly related to my issue i have with designing a database / writing my code, so i'd welcome any help. also i tried narrowing down the ideas to 2 so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    need help with logging all activities on a site as well as database changes. requirements: * should be in database * should be easily searchable by initiator (user name / session id), event (activity type) and event parameters i can think of a database design but either it involves a lot of tables (one per event) so i can log each of the parameters of an event in a separate field OR it involves one table with generic fields (7 int numeric and 7 text types) and log everything in one table with event type field determining what parameter got written where (and hoping that i don't need more than 7 fields of a certain type, or 8 or 9 or whatever number i choose)... example of entries (the usual things): [username] login failed @datetime [username] login successful @datetime [username] changed password @datetime, estimated security of password [low/ok/high/perfect] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] changed profile name from [old name] to [new name] @datetime [username] verified name with [credit card type] credit card @datetime datbase table [table name] purged of old entries @datetime etc... so anyone dealt with this before? any best practices / links you can share? i've seen it done with the generic solution mentioned above, but somehow that goes against what i learned from database design, but as you can see the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT i do LOVE the one event per table solution more than the generic one. any thoughts? edit: also, is there maybe an authoritative list of such (likely) events somewhere? thnx stack overflow says: the question you're asking appears subjective and is likely to be closed. my answer: probably is subjective, but it is directly related to my issue i have with designing a database / writing my code, so i'd welcome any help. also i tried narrowing down the ideas to 2 so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.

    Read the article

  • C headers: compiler specific vs library specific?

    - by leonbloy
    Is there some clear-cut distinction between standard C *.h header files that are provided by the C compiler, as opposed to those which are provided by a standard C library? Is there some list, or some standard locations?

    Motivation: in this answer I got a while ago, regarding a missing unistd.h in the latest TinyC compiler, the author argued that unistd.h (contrary to sys/unistd.h) should not be provided by the compiler but by your C library. I could not make much sense of that response (for one thing, shouldn't that also apply to, say, stdio.h?) but I'm still wondering about it. Is that correct? Where is some authoritative reference for this?

    Looking at other compilers, I see that other "self contained" POSIX C compilers that are hosted on Windows (like the GCC toolchain that comes with MinGW, in several incarnations, or the Digital Mars compiler) include all header files. And in a standard Linux distribution (say, CentOS 5.10) I see that the gcc package provides a few header files (e.g. stdbool.h, syslimits.h) in /usr/lib/gcc/i386-redhat-linux/4.1.1/include/, and the glibc-headers package provides the majority of the headers in /usr/include/ (including stdio.h, /usr/include/unistd.h and /usr/include/sys/unistd.h). So in neither case do I see support for the above claim.

    Read the article

  • Git rebase and semi-tracked per-developer config files.

    - by dougkiwi
    This is my first SO question and I'm new-ish to Git as well.

    Background: I am supposed to be the version control guru for Git in my group of about 8 developers. As I don't have a lot of Git experience, this is exciting. I decided we need a shared repository that would be the authoritative master for the production code and the main meeting-point for the development code. As we work for a corporation, we really do need to show an authoritative source for the production code at least. I have instructed the developers to pull with rebase when pulling from the shared repository, then push the commits that they want to share.

    We have been running into problems with a particular type of file. One of these files, which I currently assume is typical of the problem, is called web.config. We want a version-controlled master web.config for devs to clone, but each dev may make minor edits to this file that they wish to save locally but not share. The problem is this: how do I tell Git not to consider local changes or commits to this file to be relevant for rebasing and pushing? Gitignore does not seem to solve the problem, but maybe that's because I put web.config into .gitignore too late?

    In some simple situations we have stashed local changes, rebased, pushed, and popped the stash, but that doesn't seem to work all of the time, and I haven't picked up the pattern quite yet. The published documentation on pull --rebase tends to deal with simpler situations. Or do I have the wrong idea entirely? Are we misusing Git?

    Dougkiwi

    Read the article

  • What's wrong with output parameters?

    - by Chris McCall
    Both in SQL and C#, I've never really liked output parameters. I never passed parameters ByRef in VB6, either. Something about counting on side effects to get something done just bothers me. I know they're a way around not being able to return multiple results from a function, but a rowset in SQL or a complex datatype in C# and VB works just as well, and seems more self-documenting to me. Is there something wrong with my thinking, or are there resources from authoritative sources that back me up? What's your personal take on this and why? What can I say to colleagues who want to design with output parameters that might convince them to use different structures?

    EDIT: An interesting turn - the output parameter I was asking this question about was used in place of a return value. When the return value is "ERROR", the caller is supposed to handle it as an exception. I was doing that, but was not pleased with the idea. A coworker wasn't informed of the need to handle this condition and, as a result, a great deal of money was lost as the procedure failed silently!

    Read the article

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure:

        $ node --eval 'require("http");'
        undefined:1

        ^
        ReferenceError: require is not defined
            at eval at <anonymous> (node.js:762:36)
            at eval (native)
            at node.js:762:36
        $

    Here's a working example of using require() typed into the REPL:

        $ node
        > require("http");
        { STATUS_CODES: { '100': 'Continue', '101': 'Switching Protocols', '102': 'Processing',
            '200': 'OK', '201': 'Created', '202': 'Accepted', '203': 'Non-Authoritative Information',
            '204': 'No Content', '205': 'Reset Content', '206': 'Partial Content', '207': 'Multi-Status',
            '300': 'Multiple Choices', '301': 'Moved Permanently', '302': 'Moved Temporarily',
            '303': 'See Other', '304': 'Not Modified', '305': 'Use Proxy', '307': 'Temporary Redirect',
            '400': 'Bad Request', '401': 'Unauthorized', '402': 'Payment Required', '403': 'Forbidden',
            '404': 'Not Found', '405': 'Method Not Allowed', '406': 'Not Acceptable',
            '407': 'Proxy Authentication Required', '408': 'Request Time-out', '409': 'Conflict',
            '410': 'Gone', '411': 'Length Required', '412': 'Precondition Failed',
            '413': 'Request Entity Too Large', '414': 'Request-URI Too Large',
            '415': 'Unsupported Media Type', '416': 'Requested Range Not Satisfiable',
            '417': 'Expectation Failed', '418': 'I\'m a teapot', '422': 'Unprocessable Entity',
            '423': 'Locked', '424': 'Failed Dependency', '425': 'Unordered Collection',
            '426': 'Upgrade Required', '500': 'Internal Server Error', '501': 'Not Implemented',
            '502': 'Bad Gateway', '503': 'Service Unavailable', '504': 'Gateway Time-out',
            '505': 'HTTP Version not supported', '506': 'Variant Also Negotiates',
            '507': 'Insufficient Storage', '509': 'Bandwidth Limit Exceeded', '510': 'Not Extended' },
          IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] },
          OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] },
          ServerResponse: { [Function: ServerResponse] super_: [Circular] },
          ClientRequest: { [Function: ClientRequest] super_: [Circular] },
          Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } },
          createServer: [Function],
          Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } },
          createClient: [Function],
          cat: [Function] }
        >

    Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.

    Read the article
