Search Results

Search found 5211 results on 209 pages for 'named'.

Page 21/209

  • Is there a “P” programming language? [closed]

    - by Synetech
    I'm wondering if anybody has made a programming language based on BCPL, named P. There was a language named B that was based on BCPL, followed of course by C, also based on BCPL. I've seen plenty of whimsically named programming languages, so I can't help but be surprised if nobody has made one called P. I checked Wikipedia's (not exactly comprehensive) list of programming languages, and while there are three languages named L (none of which are related to BCPL), there are none called P; in fact, it is one of the only letters not used as a name. (Google is useless for one-letter query terms.) Does anybody know if a P has been made, even as a lark? (Yes, I know about P#, but that is based on Prolog, not BCPL; there is one called P, but it is also not related to BCPL.)

  • SQLBrowser will not start

    - by Oliver
    SQL Server 2005 x64 on Windows Server 2003 x64, with multiple instances (default + 2 named). Engineers moved the server to a different domain. Since then, cannot get SQLBrowser to start. Still able to query the default instance, and can access named instances by port (TCP:hostname,port#). When on the server, can use SSMS to connect to the instances, so all is well from that perspective. No errors in the SQL Server logs. As SQLBrowser is starting, an entry in the Application event log says that one of the named instances has an invalid configuration, but I haven't been able to figure out what is invalid. Startup continues, and the next message says "The SQLBrowser service was unable to establish SQL instance and connectivity discovery." Next, it enables instance and connectivity discovery support; next comes another message about that same named instance having an invalid configuration; then an event says that SQLBrowser has started; last, an event shows the SQLBrowser service has shut down. I got SQLBrowser past the issue with the first named instance by temporarily renaming a registry entry, and now the second named instance can be accessed by name rather than port. Still cannot access the first named instance by name. Advice?

  • Exchange Server 2010: move mailboxes from recovered and mounted EDB to user's mailbox [closed]

    - by Cook
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "apex" and one Exchange 2010 server named "2008Enterprise". The Exchange 2010 server named "2008Enterprise" crashed, so I created a new Exchange 2010 server named "Providence". I ran this command on Providence: New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147" This command executed and finished without error. I then ran eseutil /p E00 from the directory c:\data\Exchange\Mailbox\Mailbox Database 0579285147, and mounted JBCMail with the mount command (note: I do not have the full command I typed). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail. The JBCMail database is shown as mounted on the Exchange server named Providence, and I can see the crashed Exchange server named 2008Enterprise. In the EMC, the crashed Exchange server's Copy Status under Server Configuration > Mailbox is ServiceDown. From here I need to recover three mailboxes. The mailboxes are on the apex server. How do I move the mailboxes from apex to Providence? And how do I restore the mailboxes from the mounted JBCMail database to the users' mailboxes? I do not fully understand how to use the Restore-Mailbox command, because when I use it, it tries to restore the mailbox to the dead apex server: Restore-Mailbox -ID 'Jason Young' -RecoveryDatabase JBCMail
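
    For the recovery-database half of this, a sketch of what usually works on Exchange 2010 SP1 or later (the target alias "jyoung" is hypothetical; substitute the real mailbox):

        # Restore Jason Young's data out of the mounted recovery database
        New-MailboxRestoreRequest -SourceDatabase JBCMail `
            -SourceStoreMailbox "Jason Young" -TargetMailbox jyoung

        # Watch the restore progress
        Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics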

  • How do I make my internal DNS forward requests to a given server

    - by ankimal
    We have a DNS server internally that looks up IP addresses for all internal hosts and connects to the root DNS servers for all other domains (the rest of the internet). Here is my config: options { listen-on port 53 { 127.0.0.1;any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query {192.168.1.0/24; 127.0.0.1; }; recursion yes; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; view "internal" { // What the home network will see match-clients { 127.0.0.1;any; }; match-destinations { 127.0.0.1;any; }; recursion yes; zone "." IN { type hint; file "named.ca"; }; include "internal_zones.conf"; }; We need to tweak this to go to our ISP's DNS, x.y.z.w, instead of the root DNS servers if the host cannot be resolved internally. Config: Fedora 10 / BIND 9.5.2
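
    A minimal sketch of the usual approach, keeping the x.y.z.w placeholder from above: add forwarders to the options (or view) block so that anything not answered from the internal zones is sent to the ISP first instead of straight to the roots:

        options {
            ...
            recursion yes;
            forwarders { x.y.z.w; };  // the ISP's DNS server
            forward first;            // fall back to the root hints if the forwarder fails
        };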

  • DNS Server on Fedora 11

    - by Funky Si
    I recently upgraded my Fedora 10 server to Fedora 11 and am getting the following error from named: named[27685]: not insecure resolving 'fedoraproject.org/A/IN': 212.104.130.65#53 This only shows for certain addresses; some are resolved fine and I can ping and browse to them, while others produce the error above. This is my named.conf file: acl trusted-servers { 192.168.1.10; }; options { directory "/var/named"; forwarders {212.104.130.9 ; 212.104.130.65; }; forward only; allow-transfer { 127.0.0.1; }; # dnssec-enable yes; # dnssec-validation yes; # dnssec-lookaside . trust-anchor dlv.isc.org.; }; # Forward Zone for hughes.lan domain zone "funkygoth" IN { type master; file "funkygoth.zone"; allow-transfer { trusted-servers; }; }; # Reverse Zone for hughes.lan domain zone "1.168.192.in-addr.arpa" IN { type master; file "1.168.192.zone"; }; include "/etc/named.dnssec.keys"; include "/etc/pki/dnssec-keys/dlv/dlv.isc.org.conf"; include "/etc/pki/dnssec-keys//named.dnssec.keys"; include "/etc/pki/dnssec-keys//dlv/dlv.isc.org.conf"; Anyone know what I have set wrong here?
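
    Two things stand out in the config as posted (a guess, not a confirmed diagnosis): the dnssec-* options are commented out while the DLV key includes are still active, and each key file is included twice (once with a doubled slash). A consistent sketch would be:

        options {
            ...
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside . trust-anchor dlv.isc.org.;
        };
        // and keep exactly one copy of each include:
        include "/etc/named.dnssec.keys";
        include "/etc/pki/dnssec-keys/dlv/dlv.isc.org.conf";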

  • BIND returns SERVFAIL when querying its authoritative domain

    - by estol
    Hi there Serverfault folks! First of all: sorry about the title, I had some problem coming up with the proper title. I have a little home server set up, for internet sharing, samba, basic http, dlna mediaserver and what not, and I happend to have a domain at hand, so I thought why not direct it to this computer? I have a BIND 9.8.0 installed, and - afaik - configured it properly. For a few days, the public view did not worked, and I really did not cared, since the local view worked. But now suddenly, even the local view fails. If I try to query the nameserver for anything in my domain, it returns the following error: $ nslookup andromeda.dafaces.com ;; Got SERVFAIL reply from ::1, trying next server ;; Got SERVFAIL reply from ::1, trying next server Server: 127.0.0.1 Address: 127.0.0.1#53 ** server can't find andromeda.dafaces.com.dafaces.com: SERVFAIL Also, the public view points to the old ip address of the domain, probably because of the same error. Some information about the system: $ uname -a Linux tressis 2.6.37-ARCH #1 SMP PREEMPT Tue Mar 15 09:21:17 CET 2011 x86_64 AMD Athlon(tm) 64 X2 Dual Core Processor 5000+ AuthenticAMD GNU/Linux $ named -v BIND 9.8.0 And the named.conf file: # cat /etc/named.conf // // /etc/named.conf // include "/etc/rndc.key"; #controls { # inet 127.0.0.1 allow {localhost; } keys { "dnskulcs"; }; #}; options { directory "/var/named"; pid-file "/var/run/named/named.pid"; auth-nxdomain yes; datasize default; // Uncomment these to enable IPv6 connections support // IPv4 will still work: listen-on-v6 { any; }; listen-on { any; }; // Add this for no IPv4: // listen-on { none; }; // Default security settings. // allow-recursion { 127.0.0.1; ::1; 192.168.1.0/24; }; // allow-recursion { any; }; allow-query { any; }; allow-transfer { 127.0.0.1; ::1; 92.243.14.172; 87.98.164.164; 88.191.64.64; }; allow-update { key "dnskulcs"; }; version none; hostname none; server-id none; zone-statistics yes; forwarders { 213.46.246.53; 213.26.246.54; 8.8.8.8; 8.8.4.4; 192.188.242.65; 193.227.196.3; 2001:470:20::2; }; }; view "local" { match-clients { 192.168.1.0/24; 127.0.0.1; ::1; fec0:0:0:ffff::/64; }; recursion yes; zone "localhost" IN { type master; file "localhost.zone"; allow-transfer { any; }; }; zone "0.0.127.in-addr.arpa" IN { type master; file "127.0.0.zone"; allow-transfer { any; }; }; zone "." IN { type hint; file "root.hint"; }; zone "dafaces.com" IN { type master; file "internal/dafaces.com.fw"; allow-update { key "dnskulcs"; }; }; zone "1.168.192.in-addr.arpa" IN { type master; file "internal/dafaces.com.rev"; allow-update { key "dnskulcs"; }; }; }; view "public" { match-clients { any;}; recursion no; zone "dafaces.com" IN { type master; file "external/dafaces.com.fw"; allow-transfer { 87.98.164.164; 195.234.42.1; 88.191.64.64; }; }; }; //zone "example.org" IN { // type slave; // file "example.zone"; // masters { // 192.168.1.100; // }; // allow-query { any; }; // allow-transfer { any; }; //}; logging { channel xfer-log { file "/var/log/named.log"; print-category yes; print-severity yes; print-time yes; severity info; }; category xfer-in { xfer-log; }; category xfer-out { xfer-log; }; category notify { xfer-log; }; }; All help would be highly appreciated! EDIT: Zone files: # cat /var/named/internal/dafaces.com.fw $ORIGIN . $TTL 3600 ; 1 hour dafaces.com IN SOA tressis.dafaces.com. postmaster.dafaces.com. ( 2011032201 ; serial 28800 ; refresh (8 hours) 7200 ; retry (2 hours) 2419200 ; expire (4 weeks) 3600 ; minimum (1 hour) ) NS tressis.dafaces.com. 
A 192.168.1.1 MX 10 mail.dafaces.com. $ORIGIN _tcp.dafaces.com. _http SRV 0 5 80 www.dafaces.com. _ssh SRV 0 5 22 tressis.dafaces.com. $ORIGIN dafaces.com. acrisius A 192.168.1.230 andromeda A 192.168.1.7 andromeda-win7 CNAME andromeda aspasia A 192.168.1.233 athena A 192.168.1.232 callisto A 192.168.1.102 db A 192.168.1.1 management A 192.168.1.1 ; web management for the router functions haley A 192.168.1.5 hoth A 192.168.1.101 mail A 192.168.1.1 satelite A 192.168.1.20 sony-player A 192.168.1.103 TXT "310f16de2d2712dfc4ae6e5c54f60f828e" torrent A 192.168.1.1 tracker A 192.168.1.1 tressis A 192.168.1.1 www A 192.168.1.1 zeus A 192.168.1.231 and # cat /var/named/external/dafaces.com.fw $ORIGIN . $TTL 3600 dafaces.com IN SOA ns.dafaces.com. postmaster.dafaces.com. ( 2011032405; serial 28800; refresh 7200; retry 2419200; expire 3600; minimum ) NS ns.dafaces.com. NS ns0.xname.org. NS ns1.xname.org. NS ns2.xname.org. A 89.135.129.37 MX 10 mail.dafaces.com. $ORIGIN dafaces.com. ;Szolgaltatasok _ssh._tcp SRV 0 5 22 tressis _http._tcp SRV 0 5 80 www ns A 89.135.129.37 hoth A 89.135.129.37 www A 89.135.129.37 mail A 89.135.129.37 db A 89.135.129.37 torrent A 89.135.129.37 tracker A 89.135.129.37 Edit: Ohh, hell I almost forgot. Since the node is connected to the internet via a residential connection, there is a possibility, that the public ipv4 address will change(but thank god, it is a very rare case), so I daily update the external IP address in the zone file with a shellscript: # cat /etc/cron.daily/dnsupdate #!/bin/sh FILE="/var/named/external/dafaces.com.fw" SERIAL=$(date +%Y%m%d05) PUBLIC_IP=$(ifconfig internet |sed -n "/inet addr:.*255.255.255.255/{s/.*inet addr://; s/ .*//; p}") cat $FILE | sed --posix 's/^.* serial$/\t\t\t\t\t'$SERIAL'; serial/' | sed --posix 's/[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*/'$PUBLIC_IP'/' > /tmp/ujzona mv /tmp/ujzona $FILE /etc/rc.d/named reload
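
    Before digging deeper, it may be worth letting BIND's own checkers rule out config and zone-file syntax problems (a sketch, using the paths from the config above):

        named-checkconf /etc/named.conf
        named-checkzone dafaces.com /var/named/internal/dafaces.com.fw
        named-checkzone dafaces.com /var/named/external/dafaces.com.fw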

  • Ubuntu 10.04 bind9 local zone include files and apparmor

    - by Gilgongo
    Rather than putting all my zones in one named.conf.local file, I'd like to have them in groups that I can manage as separate files. So, I've tried putting the following into named.conf.local: include "/home/zones/group1.conf"; include "/home/zones/group2.conf"; include "/home/zones/group3.conf"; However, when I restart named, I see "permission denied" errors in the logs. Ubuntu uses AppArmor for bind, so I also added the following in /etc/apparmor.d/usr.sbin.named: /home/zones/group1.conf r, /home/zones/group2.conf r, /home/zones/group3.conf r, Now, when I restart named, all appears to be well. Zones are loaded (I think). However, a day or two later, I see my secondary name server complaining that the primary is telling it that it's not authoritative for those domains. I then have to put all the domains back into the named.conf.local file again. How can I get bind9 to use include files in this way? I don't know much about AppArmor, so that may or may not be the issue here, but I've used include files in this way on Debian without problems.
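
    If the zone files themselves also live under /home/zones, a single glob rule is easier to keep in sync than per-file entries (a sketch; reload the profile afterwards):

        # in /etc/apparmor.d/usr.sbin.named
        /home/zones/ r,
        /home/zones/** r,

        # then reload the profile and restart bind9
        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named
        sudo service bind9 restart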

  • bind would not work unless allow-query is "any"

    - by adrianTNT
    I have this in /etc/named.conf; I commented out the default values and set my own under them. My domain would not load in a browser unless I set allow-query to "any". Is this OK? What should I edit? If it is localhost or 127.0.0.1; 10.0.1.0/24; the domain would not load. I tried the 127.. thing because it is mentioned here: http://wiki.mandriva.com/en/Testing:Bind The Bind version is 9.7.0-P2-RedHat-9.7.0-5.P2.el6_0.1 and the OS is CentOS 6.0. options { // listen-on port 53 { 127.0.0.1; }; listen-on port 53 { any; }; //listen-on-v6 port 53 { ::1; }; listen-on-v6 port 53 { any; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; //allow-query { localhost; }; allow-query { any; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; };
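
    For what it's worth, allow-query { any; } is the normal setting for a server that must answer public queries for its zones; the directive usually worth restricting instead is recursion. A sketch:

        acl "trusted" { 127.0.0.1; 10.0.1.0/24; };
        options {
            ...
            allow-query     { any; };       // anyone may query your authoritative zones
            allow-recursion { "trusted"; }; // but only local clients may recurse
        };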

  • [Python] Tips for making fraction calculator code more optimized (faster and using less memory)

    - by Logic Named Joe
    Hello Everyone, Basically, what I need the program to do is act as a simple fraction calculator (for addition, subtraction, multiplication and division) for a single line of input, for example: -input: 1/7 + 3/5 -output: 26/35 My initial code: import sys def euclid(numA, numB): while numB != 0: numRem = numA % numB numA = numB numB = numRem return numA for wejscie in sys.stdin: wyjscie = wejscie.split(' ') a, b = [int(x) for x in wyjscie[0].split("/")] c, d = [int(x) for x in wyjscie[2].split("/")] if wyjscie[1] == '+': licz = a * d + b * c mian= b * d nwd = euclid(licz, mian) konA = licz/nwd konB = mian/nwd wynik = str(konA) + '/' + str(konB) print(wynik) elif wyjscie[1] == '-': licz= a * d - b * c mian= b * d nwd = euclid(licz, mian) konA = licz/nwd konB = mian/nwd wynik = str(konA) + '/' + str(konB) print(wynik) elif wyjscie[1] == '*': licz= a * c mian= b * d nwd = euclid(licz, mian) konA = licz/nwd konB = mian/nwd wynik = str(konA) + '/' + str(konB) print(wynik) else: licz= a * d mian= b * c nwd = euclid(licz, mian) konA = licz/nwd konB = mian/nwd wynik = str(konA) + '/' + str(konB) print(wynik) Which I reduced to: import sys def euclid(numA, numB): while numB != 0: numRem = numA % numB numA = numB numB = numRem return numA for wejscie in sys.stdin: wyjscie = wejscie.split(' ') a, b = [int(x) for x in wyjscie[0].split("/")] c, d = [int(x) for x in wyjscie[2].split("/")] if wyjscie[1] == '+': print("/".join([str((a * d + b * c)/euclid(a * d + b * c, b * d)),str((b * d)/euclid(a * d + b * c, b * d))])) elif wyjscie[1] == '-': print("/".join([str((a * d - b * c)/euclid(a * d - b * c, b * d)),str((b * d)/euclid(a * d - b * c, b * d))])) elif wyjscie[1] == '*': print("/".join([str((a * c)/euclid(a * c, b * d)),str((b * d)/euclid(a * c, b * d))])) else: print("/".join([str((a * d)/euclid(a * d, b * c)),str((b * c)/euclid(a * d, b * c))])) Any advice on how to improve this further is welcome. Edit: one more thing that I forgot to mention - the code cannot make use of any libraries apart from sys.
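
    One further reduction (a sketch, still importing only sys): compute the numerator and denominator once per operator, then reduce with a single gcd call; floor division (//) keeps the results integral on Python 3:

        import sys

        def euclid(a, b):
            while b:
                a, b = b, a % b
            return a

        # numerator/denominator of the result for each operator
        OPS = {
            '+': lambda a, b, c, d: (a * d + b * c, b * d),
            '-': lambda a, b, c, d: (a * d - b * c, b * d),
            '*': lambda a, b, c, d: (a * c, b * d),
            '/': lambda a, b, c, d: (a * d, b * c),
        }

        for line in sys.stdin:
            left, op, right = line.split()
            a, b = (int(x) for x in left.split('/'))
            c, d = (int(x) for x in right.split('/'))
            num, den = OPS[op](a, b, c, d)
            g = euclid(num, den)
            print("%d/%d" % (num // g, den // g))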

  • Using Treelist Values to Query a Sitecore Item

    - by kirk.burleson
    I have an item named All Recipes that contains recipes named R1, R2, and R3. I have another item named My Recipes that has a treelist field named Recipes and it contains selected values R2 and R3 from the All Recipes item. The query I'm trying to write is for the Items field of an RSS Feed. What is the query syntax to show the items in All Recipes that appear in the Recipes field of My Recipes?

  • `gem install mongrel` fails with ruby 1.9.1

    - by atlantis
    I initiated myself into rails development yesterday. I installed ruby 1.9.1, rubygems and rails. Running gem install mongrel worked fine and ostensibly installed mongrel too. I am slightly puzzled because: script/server starts webrick by default which mongrel returns nothing locate mongrel returns lots of entries like /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1 /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel . . . /usr/local/bin/mongrel_rails /usr/local/lib/ruby/gems/1.9.1/cache/mongrel-1.1.5.gem /usr/local/lib/ruby/gems/1.9.1/doc/actionpack-2.3.2/rdoc/files/lib/action_controller/vendor/rack-1_0/rack/handler/evented_mongrel_rb.html /usr/local/lib/ruby/gems/1.9.1/doc/actionpack-2.3.2/rdoc/files/lib/action_controller/vendor/rack-1_0/rack/handler/mongrel_rb.html /usr/local/lib/ruby/gems/1.9.1/doc/actionpack-2.3.2/rdoc/files/lib/action_controller/vendor/rack-1_0/rack/handler/swiftiplied_mongrel_rb.html /usr/local/lib/ruby/gems/1.9.1/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/handler/evented_mongrel.rb /usr/local/lib/ruby/gems/1.9.1/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/handler/mongrel.rb /usr/local/lib/ruby/gems/1.9.1/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/handler/swiftiplied_mongrel.rb /usr/local/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5 . . . It does look like I have mongrel installed (both the default installation and my custom install). So why doesn't which mongrel return anything? Also, trying to reinstall mongrel using gem install mongrel throws its own set of exceptions: Building native extensions. This could take a while... ERROR: Error installing mongrel: ERROR: Failed to build gem native extension. /usr/local/bin/ruby extconf.rb install mongrel checking for main() in -lc... yes creating Makefile make gcc -I. -I/usr/local/include/ruby-1.9.1/i386-darwin9.7.0 -I/usr/local/include/ruby-1.9.1/ruby/backward -I/usr/local/include/ruby-1.9.1 -I. -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -fno-common -D_XOPEN_SOURCE=1 -O2 -g -Wall -Wno-parentheses -fno-common -pipe -fno-common -o http11.o -c http11.c http11.c: In function 'http_field': http11.c:77: error: 'struct RString' has no member named 'ptr' http11.c:77: error: 'struct RString' has no member named 'len' http11.c:77: warning: left-hand operand of comma expression has no effect http11.c:77: warning: statement with no effect http11.c: In function 'header_done': http11.c:172: error: 'struct RString' has no member named 'ptr' http11.c:174: error: 'struct RString' has no member named 'ptr' http11.c:176: error: 'struct RString' has no member named 'ptr' http11.c:177: error: 'struct RString' has no member named 'len' http11.c: In function 'HttpParser_execute': http11.c:298: error: 'struct RString' has no member named 'ptr' http11.c:299: error: 'struct RString' has no member named 'len' make: *** [http11.o] Error 1
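
    Two quick checks (a sketch): the gem ships an executable called mongrel_rails rather than one called mongrel, and the compile errors are the known mongrel 1.1.5 incompatibility with ruby 1.9, where struct RString lost its ptr/len members:

        # the binary is mongrel_rails, not mongrel
        which mongrel_rails
        gem list mongrel

        # confirm which ruby and gem the shell actually resolves to
        which ruby gem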

  • BIND9 / DNS Zone / Dedicated Server / Unique Reverse DNS

    - by user2832131
    I have a dedicated server in a datacenter with no DNS zone set up. The datacenter panel has only one text field, in which you can set a single reverse DNS entry. According to the datacenter instructions (http://www.wiki.hetzner.de/index.php/DNS-Reverse-DNS/en#How_can_I_assign_several_names_to_my_IP_address.2C_if_different_domains_are_hosted_on_my_server.3F), I need to install BIND9 in order to configure other records like CNAME and MX. OK, I've installed BIND9 and created a master zone. Following the example at http://wiki.hetzner.de/index.php/DNS_Zonendatei/en, I put this in the zone file: $ttl 86400 @ IN SOA ns1.first-ns.de. postmaster.robot.first-ns.de. ( 1383411730 14400 1800 604800 86400 ) @ IN NS ns1.first-ns.de. @ IN NS robotns2.second-ns.de. @ IN NS robotns3.second-ns.com. localhost IN A 127.0.0.1 @ IN A 144.86.786.651 www IN A 144.86.786.651 loopback IN CNAME localhost But when I point my domain to ns1.first-ns.de, the DNS registrar says "time out". Am I missing something? I created a master zone. Should it be a slave zone? named.conf: include "/etc/bind/named.conf.options"; include "/etc/bind/named.conf.local"; include "/etc/bind/named.conf.default-zones"; named.conf.options: options { directory "/var/cache/bind"; dnssec-validation auto; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; named.conf.local: zone "mydomain.com" { type master; file "/var/lib/bind/mydomain.com.hosts"; allow-update {any;}; allow-transfer {any;}; allow-query {any;}; }; named.conf.default-zones: zone "." { type hint; file "/etc/bind/db.root"; }; zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; The problem is that I'm moving my site and can't update the new NS entries due to a 'timeout' message when filling in the new datacenter NS form. I'm filling in: MASTER: ns1.first-ns.de SLAVE1: robotns2.second-ns.de SLAVE2: robotns3.second-ns.com
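
    Before touching the registrar again, a quick sanity check that the zone is actually being served (a sketch; <public-ip> is a placeholder for the server's real address):

        # does the local BIND answer authoritatively for the zone?
        dig @localhost mydomain.com SOA

        # is port 53 reachable from the outside?
        dig @<public-ip> mydomain.com NS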

  • Setting Up My Server to Do DNS On OpenSuse 11.3

    - by adaykin
    Hello, I am attempting to use my server as a DNS server. I am having trouble getting the domain set up. Here is what I have so far: /var/lib/named/master/andydaykin.com: $TTL 2d @ IN SOA andydaykin.com. root.andydaykin.com. ( 2011011000 ; serial 0 ; refresh 0 ; retry 0 ; expiry 0 ) ; minimum andydaykin.com. IN NS ns1.andydaykin.com. andydaykin.com. IN SOA ns1.andydaykin.com. hostmaster.andydaykin.com. ( @.andydaykin.com. IN NS ns1.andydaykin.com. ns1.andydaykin.com. IN A 204.12.227.33 www.andydaykin.com. IN A 204.12.227.33 /etc/resolve.conf: search andydaykin.com nameserver 204.12.227.33 /etc/named.conf: options { # The directory statement defines the name server's working directory directory "/var/lib/named"; dump-file "/var/log/named_dump.db"; statistics-file "/var/log/named.stats"; listen-on port 53 { 127.0.0.1; }; listen-on-v6 { any; }; notify no; disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; include "/etc/named.d/forwarders.conf"; }; zone "." in { type hint; file "root.hint"; }; zone "localhost" in { type master; file "localhost.zone"; }; zone "0.0.127.in-addr.arpa" in { type master; file "127.0.0.zone"; }; # Include the meta include file generated by createNamedConfInclude. This includes all files as configured in NAMED_CONF_INCLUDE_FILES from /etc/sysconfig/named include "/etc/named.conf.include"; zone "andydaykin.com" in { file "master/andydaykin.com"; type master; allow-transfer { any; }; }; logging { category default { log_syslog; }; channel log_syslog { syslog; }; }; What am I doing wrong?
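
    Two things stand out in the config as posted: the zone file has two SOA records and an unclosed parenthesis, and the options block only listens on 127.0.0.1, so no external host can reach the server as configured. A minimal well-formed sketch of the zone file:

        $TTL 2d
        @    IN SOA ns1.andydaykin.com. root.andydaykin.com. (
                 2011011001 ; serial
                 3h         ; refresh
                 1h         ; retry
                 1w         ; expire
                 1d )       ; minimum
             IN NS ns1.andydaykin.com.
        ns1  IN A  204.12.227.33
        www  IN A  204.12.227.33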

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described on how to host a Node.js application on Windows Azure, one of questions might be raised about how to consume the vary Windows Azure services, such as the storage, service bus, access control, etc.. Interact with windows azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe on how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let’s firstly have a look on how to consume WAS through Node.js. As we know in the previous post we can host Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate for hosting in WACS worker role. The Node.js code can be used when consuming WAS when hosted on WAWS. But since there’s no roles in WAWS, the code for consuming service runtime mentioned in the next section cannot be used for WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new windows azure project in Visual Studio with a worker role, add the “node.exe” and “index.js” and install “express” and “node-sqlserver” modules, make all files as “Copy always”. In order to use windows azure services we need to have Windows Azure Node.js SDK, as knows as a module named “azure” which can be installed through NPM. Once we downloaded and installed, we need to include them in our worker role project and make them as “Copy always”. You can use my “Copy all always” tool mentioned in my last post to update the currently worker role project file. You can also find the source code of this tool here. The source code of Windows Azure SDK for Node.js can be found in its GitHub page. It contains two parts. One is a CLI tool which provides a cross platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming vary windows azure services includes tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate on how to use tables and service runtime information in this post. You can find the full document of this SDK here. Back to Visual Studio and open the “index.js”, let’s continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should looks like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
And if we publish our application to Azure, it works with WASD and the storage service through the cloud configuration.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry-point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it is simple, easy to learn and deploy, and utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, while "azure" does support creating the storage client from a storage connection string. Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • When should ThreadLocal be used instead of Thread.SetData/Thread.GetData?

    - by Jon Ediger
    Prior to .net 4.0, I implemented a solution using named data slots in System.Threading.Thread. Now, in .net 4.0, there is the idea of ThreadLocal. How does ThreadLocal usage compare to named data slots? Does the ThreadLocal value get inherited by children threads? Is the idea that ThreadLocal is a simplified version of using named data slots? An example of some stuff using named data slots follows. Could this be simplified through use of ThreadLocal, and would it retain the same properties as the named data slots? public static void SetSliceName(string slice) { System.Threading.Thread.SetData(System.Threading.Thread.GetNamedDataSlot(SliceVariable), slice); } public static string GetSliceName(bool errorIfNotFound) { var slice = System.Threading.Thread.GetData(System.Threading.Thread.GetNamedDataSlot(SliceVariable)) as string; if (errorIfNotFound && string.IsNullOrEmpty(slice)) {throw new ConfigurationErrorsException("Server slice name not configured.");} return slice; }
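
    A rough ThreadLocal<T> equivalent of the slot-based pair above (a sketch; as with named data slots, the value is per-thread and is not inherited by child threads):

        using System;
        using System.Configuration;
        using System.Threading;

        public static class Slice
        {
            // one independent value per thread
            private static readonly ThreadLocal<string> _slice = new ThreadLocal<string>();

            public static void SetSliceName(string slice)
            {
                _slice.Value = slice;
            }

            public static string GetSliceName(bool errorIfNotFound)
            {
                var slice = _slice.Value;
                if (errorIfNotFound && string.IsNullOrEmpty(slice))
                {
                    throw new ConfigurationErrorsException("Server slice name not configured.");
                }
                return slice;
            }
        }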

  • Problem compiling x264 with MP4 support

    - by johnas
    I'm trying to compile x264 with MP4 output support. I downloaded the latest version from their git by typing git clone git://git.videolan.org/x264.git When I run ./configure, it configures and I'm able to make it. But when I try to configure it with ./configure --enable-mp4-output and then try to make it, it returns a strange error, indicating a compilation problem. The error message looks like: ... Lots of similar errors ... output/mp4.c:297: warning: implicit declaration of function ‘gf_isom_add_sample’ output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘p_file’ output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘i_track’ output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘i_descidx’ output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘p_sample’ output/mp4.c:299: error: ‘mp4_hnd_t’ has no member named ‘p_sample’ output/mp4.c:300: error: ‘mp4_hnd_t’ has no member named ‘i_numframe’ I've tried different releases. I've installed gpac and ffmpeg, and tried numerous tips from the net, but still can't get it to work. The reason I want MP4 output is that I want to use ffmpeg to create MP4 files encoded with x264. I'm running Ubuntu Server 9.10, 32-bit.
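
    The implicit-declaration warning for gf_isom_add_sample suggests a mismatch between output/mp4.c and the gpac headers it found, possibly combined with stale objects from the earlier non-MP4 build. A sketch of one common remedy, assuming the packaged gpac headers were missing or too old:

        # clean any objects left over from the previous configure
        cd x264
        make distclean

        # make sure the gpac development files are present (or build gpac from source)
        sudo apt-get install libgpac-dev

        # then reconfigure and rebuild
        ./configure --enable-mp4-output
        make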

  • Binding a value to one of two possibilities in Guice

    - by Kelvin Chung
    Suppose I have a value for which I have a default, which can be overridden if System.getProperty("foo") is set. I have one module for which I have bindConstant().annotatedWith(Names.named("Default foo")).to(defaultValue); I'm wondering what the best way of implementing a module for which I want to bind something annotated with "foo" to System.getProperty("foo"), or, if it does not exist, the "Default foo" binding. I've thought of a simple module like so: public class SimpleIfBlockModule extends AbstractModule { @Override public void configure() { requireBinding(Key.get(String.class, Names.named("Default foo"))); if (System.getProperties().containsKey("foo")) { bindConstant().annotatedWith(Names.named("foo")).to(System.getProperty("foo")); } else { bind(String.class).annotatedWith(Names.named("foo")).to(Key.get(String.class, Names.named("Default foo"))); } } } I've also considered creating a "system property module" like so: public class SystemPropertyModule extends PrivateModule { @Override public void configure() { Names.bindProperties(binder(), System.getProperties()); if (System.getProperties().contains("foo")) { expose(String.class).annotatedWith(Names.named("foo")); } } } And using SystemPropertyModule to create an injector that a third module, which does the binding of "foo". Both of these seem to have their downsides, so I'm wondering if there is anything I should be doing differently. I was hoping for something that's both injector-free and reasonably generalizable to multiple "foo" attributes. Any ideas?
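
    One injector-free sketch (an alternative, not necessarily the canonical Guice answer): resolve the fallback in a @Provides method, since System.getProperty already accepts a default value:

        import com.google.inject.AbstractModule;
        import com.google.inject.Provides;
        import com.google.inject.name.Named;

        public class FooModule extends AbstractModule {
            @Override
            protected void configure() {
                // "Default foo" is still bound elsewhere, as in the question
            }

            // uses the system property when present, the default binding otherwise
            @Provides
            @Named("foo")
            String provideFoo(@Named("Default foo") String defaultFoo) {
                return System.getProperty("foo", defaultFoo);
            }
        }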

  • bind9 "error sending response: host unreachable"

    - by wolfgangsz
    I have a number of DNS servers, all running bind9 (9.5.1, to be specific) under Fedora. Four of them are slaves, fed by a common master for our public DNS. These are all located on the public gateways of our various offices. One of them has tons of messages in its log files similar to these: Jul 21 17:26:18 gateway named[3487]: client 10.171.3.8#52500: view internal: error sending response: host unreachable I wonder where that comes from. The firewall is open on port 53 between the two machines (10.171.3.8 is an internal DNS server located on a Windows domain controller). The internal domains do NOT list the gateway as a name server (so there should not be any attempts at replicating the domains), and the gateway does not handle any internal DNS. The clients in these messages vary between the two domain controllers on the internal network and a third internal name server (running bind9 on Debian in a different segment of the network). Any pointers are highly welcome. In response to the first reply: The issue with this really is that tcpdump doesn't show any problems. Here is an extract from "tcpdump -i any port 53": 09:13:38.283308 IP valine.aminocom.com.61815 > ns-pri.ripe.net.domain: 14075 PTR? 166.225.58.95.in-addr.arpa. (44) 09:13:42.007410 IP gateway-eng.aminocom.com.37047 > alanine.aminocom.com.domain: 35410+ PTR? 12.3.172.10.in-addr.arpa. (42) At the same time, the DNS log shows: Jul 22 09:13:38 gateway named[3487]: client 10.171.3.6#61300: view internal: error sending response: host unreachable Jul 22 09:13:40 gateway named[3487]: client 10.172.3.12#56230: view internal: error sending response: host unreachable Jul 22 09:13:40 gateway named[3487]: client 10.171.3.8#55221: view internal: error sending response: host unreachable Jul 22 09:13:49 gateway named[3487]: client 10.171.3.8#51342: view internal: error sending response: host unreachable So clearly at 09:13:40 there were two unsuccessful attempts to respond to internal machines (10.172.3.12 and 10.171.3.8, both of which are DNS servers), but nothing in the tcpdump output.
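
    Since the log complains about sending responses rather than receiving queries, it may help to watch for the ICMP errors themselves rather than port 53 traffic, and to check for local rules that would generate them (a sketch):

        # watch for host-unreachable/port-unreachable arriving at the gateway
        tcpdump -i any -n icmp

        # look for firewall rules that could produce such errors
        iptables -L -n -v | grep -i -E 'reject|unreachable'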

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by user38556
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message: A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233) Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer; perhaps this is making a difference in some way (as some of the discussion in this post about a "SuperSocketNetLib\Lpc" registry key seems to suggest). I have tried restarting the SQL Server service, by the way, and also had someone log onto the machine with an admin account to restart the SQL Server service. Still no luck.
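
    One way to isolate the protocol is to force it explicitly with a prefix when connecting (lpc: = shared memory, np: = named pipes, tcp: = TCP/IP). A sketch with sqlcmd, assuming the default Express instance name SQLEXPRESS (adjust to the real instance):

        REM force shared memory
        sqlcmd -S lpc:.\SQLEXPRESS -E
        REM force named pipes
        sqlcmd -S np:.\SQLEXPRESS -E
        REM force TCP/IP
        sqlcmd -S tcp:localhost\SQLEXPRESS -E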

  • Dynamically updating DNS records with NSUPDATE fails

    - by Thuy
    I've got my own nameserver, ns3.epnddns.com, and the domain epnddns.com. I wanted to try updating the records dynamically from home using nsupdate, but when I run nsupdate -k Kwww.epnddns.com.+157+17183.key I get the following errors: Kwww.epnddns.com.+157+17183.key:1: unknown option 'www.epnddns.com.' Kwww.epnddns.com.+157+17183.key:2: unexpected token near end of the file Kwww.epnddns.com.+157+17183.{private,key}: unexpected token Not sure why I get these errors, so I'll post my complete setup. I generated the keys on my home PC using dnssec-keygen -a HMAC-MD5 -b 128 -n HOST www.epnddns.com. then created /var/named/, put the keys there, and chmodded them to 600. I transferred the keys to my nameserver ns3.epnddns.com, created /var/named/ there too, put the keys in it, and chmodded them to 600. I made dnskeys.conf in /var/named and added key www.epnddns.com. { algorithm hmac-md5; secret "my secret from the keys=="; }; and chmodded it to 600. Then in /etc/bind/named.conf.local: include "/var/named/dnskeys.conf"; zone "epnddns.com" { type master; file "/etc/bind/zones/epnddns.com.zone"; allow-transfer { myhomeip; }; //it's my home IP, so not on the same network allow-update { key www.epnddns.com.; }; }; I restarted bind without any error messages, so it seems to be working on the nameserver at least. But on my home PC, when I try to run the nsupdate, I get those error messages. Thanks in advance for any help or insightful advice.
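
    As a cross-check that the TSIG secret itself is accepted, nsupdate can take the key inline with -y instead of -k (a sketch; paste the real base64 secret from the .private file, and the record added here is just an example):

        nsupdate -y hmac-md5:www.epnddns.com:PASTE_SECRET_HERE <<'EOF'
        server ns3.epnddns.com
        zone epnddns.com
        update add test.epnddns.com. 300 A 203.0.113.10
        send
        EOF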

  • DNS on Red Hat - rndc: no server specified and no default

    - by Syahmul Aziz
    Hi all. The error is as shown in the two pictures below. The configurations for named.conf and the zone files are as shown below. After applying the suggestion from "alveso" below, I think there is no longer an error, but I still can't ping my own domain www.p0864868.com (10.0.0.1), nor can I do host or nslookup, as shown in the previous pictures. Please assist. Thank you in advance. I also attached the changes that I made to my named.conf, as well as my resolv.conf config, as shown below. Progress 2: turned on logging by typing "rndc querylog". The output is as below when I pinged p0864868.com. Progress 3: changed the permissions of 10-0-0.zone and p0864868.zone to 644, owner named:named. Still can't ping www.p0864868.com or execute the host command. It says something like "network unreachable", and I don't understand what address it is referring to.
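
    "no server specified and no default" usually means rndc cannot find a key/controls pair to talk to named with. A sketch of the standard setup on Red Hat-style systems:

        # generate /etc/rndc.key (named picks it up automatically on most builds)
        rndc-confgen -a
        chown root:named /etc/rndc.key
        chmod 640 /etc/rndc.key
        service named restart
        rndc status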

  • What is the meaning of these BIND log messages?

    - by javano
    Please clarify for me the meaning of the following BIND messages in syslog; these are from a DNS resolver. Whilst I think I understand them, I don't know what all four mean, so I think it's best if someone will clarify for me: 1. Oct 14 18:36:34 resolver1 named[14958]: lame server resolving 'arrivatn.co.uk' (in 'arrivatn.co.uk'?): 212.103.224.56#53 2. Oct 14 18:36:36 resolver1 named[14958]: unexpected RCODE (SERVFAIL) resolving '148.128.183.212.in-addr.arpa/PTR/IN': 212.183.136.42#53 3. Oct 14 18:38:49 resolver1 named[14958]: unexpected RCODE (REFUSED) resolving 'internal-server.ournetwork.com/AAAA/IN': auth.dns.server.ip#53 4. Oct 14 18:39:05 resolver1 named[14958]: client 89.187.127.110#42034: query (cache) 'image.sinajs.cn/A/IN' denied Thank you.

  • Lots of DNS requests from China, should I worry?

    - by nn4l
    I have turned on DNS query logs, and when running "tail -f /var/log/syslog" I see that I get hundreds of identical requests from a single IP address: Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#10856: query: mydomain.de IN ANY + Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#44334: query: mydomain.de IN ANY + Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#15268: query: mydomain.de IN ANY + Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#59597: query: mydomain.de IN ANY + The frequency is about 5 - 10 requests per second, going on for about a minute. After that the same effect repeats from a different IP address. I have now logged about 10000 requests from about 25 IP addresses within just a couple of hours, all of them coming from China according to "whois [ipaddr]". What is going on here? Is my name server under attack? Can I do something about this?
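
    Repeated "IN ANY" queries like these are the classic signature of spoofed-source amplification probes. If the server does not need to recurse for the world, a sketch of the usual BIND-side mitigation:

        options {
            ...
            // authoritative-only server: refuse recursion outright
            recursion no;
            // or, if local clients still need recursion:
            // allow-recursion { localhost; localnets; };
        };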
