Search Results

Search found 1229 results on 50 pages for 'origin'.

  • Angular throws "Error: Invalid argument." in IE

    - by przno
    I have a directive which takes an element's text and places wbr elements after every 10th character. I use it, for example, on table cells with long text (e.g. URLs), so the text does not overflow the table. Code of the directive: myApp.directive('myWbr', function ($interpolate) { return { restrict: 'A', link: function (scope, element, attrs) { // get the interpolated text of HTML element var expression = $interpolate(element.text()); // get new text, which has <wbr> element on every 10th position var addWbr = function (inputText) { var newText = ''; for (var i = 0; i < inputText.length; i++) { if ((i !== 0) && (i % 10 === 0)) newText += '<wbr>'; // no end tag newText += inputText[i]; } return newText; }; scope.$watch(function (scope) { // replace element's content with the new one, which contains <wbr>s element.html(addWbr(expression(scope))); }); } }; }); It works fine except in IE (I have tried IE8 and IE9), where it throws an error to the console: Error: Invalid argument. Here is a jsFiddle; when clicking the button you can see the error in the console. So the obvious questions: why is the error there, what is its source, and why only in IE? (Bonus question: how can I make the IE dev tools tell me more about the error, like the line in the source code? It took me some time to locate it, and Error: Invalid argument. does not say much about the origin.) P.S.: I know IE does not support wbr at all, but that is not the issue. Edit: in my real application I have rewritten the directive to not look at the element's text and modify that, but rather to pass the input text via an attribute, and it now works fine in all browsers. But I'm still curious why the original solution was giving that error in IE, hence the bounty.
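
    The rewrite the asker describes (reading the text from an attribute instead of from the element) could look roughly like this. A minimal sketch, assuming an attribute such as my-wbr="{{cell.url}}"; the attribute name and isolate-scope binding are illustrative, not from the original post:

        myApp.directive('myWbr', function () {
            return {
                restrict: 'A',
                scope: { myWbr: '@' }, // the value arrives already interpolated
                link: function (scope, element) {
                    scope.$watch('myWbr', function (text) {
                        if (!text) { return; }
                        var newText = '';
                        for (var i = 0; i < text.length; i++) {
                            if (i !== 0 && i % 10 === 0) { newText += '<wbr>'; }
                            newText += text[i];
                        }
                        element.html(newText);
                    });
                }
            };
        });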

  • Adding property:value pairs from the Google Maps API to an existing JSON/JavaScript object

    - by Rockinelle
    I am trying to create a script that finds the closest store/location to a customer using the Google Maps API. I've got a JSON object which is a collection of store objects. Using jQuery's .each to iterate through that object, I grab the driving time from the customer to each store. If it finds the directions, it copies the duration of the drive, which is an object with the time in seconds and a human-readable value. That appears to work; however, when I try to sort through all of those store objects with the drive time added, I cannot read the object copied from Google. If I console.log the whole object, closest_store, it shows the values I'm looking for. When I try to read the values directly via closest_store.driveTime, Firebug outputs undefined in the console. What am I missing here? $.getJSON('<?php echo Url::base() ?>oldservices/get_locations', {}, function(data){ $.each(data, function(index, value) { var end = data[index].address; var request = { origin: start, destination: end, travelMode: google.maps.DirectionsTravelMode.DRIVING }; directionsService.route(request, function(response, status) { //console.log(response); if (status == google.maps.DirectionsStatus.OK) { value.driveTime = response.trips[0].routes[0].duration.value; //console.log(response.trips[0].routes[0].duration.value); }; }); }); var closest_store; $.each(data, function(index, current_store) { if (index == 0) { closest_store = current_store; } else if (current_store.driveTime.value < closest_store.driveTime.value) { closest_store = value }; console.log(current_store.driveTime); } ) });
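
    A likely culprit, judging from the code shown: directionsService.route() is asynchronous, so the sorting $.each runs before any callback has assigned driveTime (console.log of the whole object appears to work because Firebug inspects it lazily, after the callbacks have filled it in). A sketch of deferring the comparison until every response is in; findClosest is a hypothetical helper, and the response shape is copied from the asker's code:

        var pending = data.length;
        $.each(data, function (index, store) {
            directionsService.route({
                origin: start,
                destination: store.address,
                travelMode: google.maps.DirectionsTravelMode.DRIVING
            }, function (response, status) {
                if (status == google.maps.DirectionsStatus.OK) {
                    // keep the whole {value, text} duration object
                    store.driveTime = response.trips[0].routes[0].duration;
                }
                if (--pending === 0) { findClosest(); } // run only after the last callback
            });
        });

        function findClosest() {
            var closest = null;
            $.each(data, function (index, store) {
                if (store.driveTime && (!closest || store.driveTime.value < closest.driveTime.value)) {
                    closest = store;
                }
            });
            console.log(closest);
        }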

  • How to speed up drawing of scaled image? Audio playback chokes during window resize.

    - by Paperflyer
    I am writing an audio player for OS X. One view is a custom view that displays a waveform. The waveform is stored as an instance variable of type NSImage with an NSBitmapImageRep. The view also displays a progress indicator (a thick red line) and is therefore updated/redrawn every 30 milliseconds. Since it takes a rather long time to recalculate the image, I do that in a background thread after every window resize and update the displayed image once the new image is ready. In the meantime, the original image is scaled to fit the view like this: // The drawing rectangle is slightly smaller than the view, defined by // the two margins. NSRect drawingRect; drawingRect.origin = NSMakePoint(sideEdgeMarginWidth, topEdgeMarginHeight); drawingRect.size = NSMakeSize([self bounds].size.width-2*sideEdgeMarginWidth, [self bounds].size.height-2*topEdgeMarginHeight); [waveform drawInRect:drawingRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1]; The view makes up the biggest part of the window. During live resize, audio starts choking. Selecting the "big" graphics card on my MacBook Pro makes it less bad, but not by much. CPU utilization is somewhere around 20-40% during live resizes. Instruments suggests that rescaling/redrawing of the image is the problem. Once I stop resizing the window, CPU utilization goes down and audio stops glitching. I have already tried disabling image interpolation to speed up the drawing, like this: [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone]; That helps, but audio still chokes during live resizes. Do you have an idea how to improve this? The main thing is to prevent the audio from choking.
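
    One approach worth trying (a sketch, not from the post): skip the expensive scaled draw entirely while the drag is in progress and show a cheap placeholder; -inLiveResize is standard NSView API, drawProgressLine is a hypothetical helper standing in for whatever draws the red line, and since the view already redraws every 30 ms, the full-quality draw follows as soon as the drag ends.

        - (void)drawRect:(NSRect)dirtyRect
        {
            if ([self inLiveResize]) {
                // Cheap path while the user drags: flat background plus progress line only.
                [[NSColor darkGrayColor] setFill];
                NSRectFill([self bounds]);
                [self drawProgressLine]; // hypothetical helper
                return;
            }
            // Normal path: the scaled waveform, with drawingRect computed as in the question.
            [waveform drawInRect:drawingRect
                        fromRect:NSZeroRect
                       operation:NSCompositeSourceOver
                        fraction:1.0];
        }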

  • Changing iframe visibility on Mozilla not working

    - by Honski
    I've got a simple iframe that I want hidden until an event from that iframe is fired via easyXDM. It works fine in Chrome, but somehow not in Mozilla. The event is fired in both browsers, but the iframe is only shown in Chrome. The only way to show the iframe in Mozilla is to call the same code from an onclick event. HTML: <a href="#" onclick="shower(); return false;">show frame (always working)</a> <br> <iframe src="testsite.php" width="600" height="424" scrolling="no" border="0" id="my-iframe3" style="visibility: hidden;" onload="scrollIFrame();" name="my-iframe3"> Javascript: function shower() { document.getElementById("my-iframe3").style.visibility="visible"; } var socket = new easyXDM.Socket( { onMessage: function(message, origin) { if (message == "hi") { socket.postMessage("do something"); } else if (message == "ready") { shower(); } } }); Both browsers receive the "ready" message.
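
    Two untested guesses, offered only as a sketch: defer the style change out of the easyXDM callback with setTimeout (cross-document callbacks have shown timing quirks in Firefox), and toggle display as well as visibility in case the style is being reset:

        function shower() {
            setTimeout(function () {
                var frame = document.getElementById("my-iframe3");
                frame.style.visibility = "visible"; // as in the original
                frame.style.display = "block";      // belt and braces
            }, 0);
        }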

  • I don't know how to solve this error

    - by wide
    Locally it works; when I deploy to the server, I get this error: Using themed css files requires a header control on the page. (e.g. <head runat="server" />). Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />). Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).] System.Web.UI.PageTheme.SetStyleSheet() +2458406 System.Web.UI.Page.OnInit(EventArgs e) +8699420 System.Web.UI.Control.InitRecursive(Control namingContainer) +333 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +378
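
    The exception itself names the fix: a themed page needs a server-side header control, so the <head> of the page (or of its master page) must carry runat="server". One hedged explanation for the local/server difference: the server's web.config may apply a theme via <pages theme="..."> or styleSheetTheme="..." that is absent locally, hitting pages whose head is plain HTML. A minimal sketch:

        <%-- in the .aspx, or in the master page it uses --%>
        <head runat="server">
            <title>Page title</title>
        </head>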

  • Rails - Searching multiple textboxes and fields

    - by ChrisWesAllen
    I have a model of events that has various information, such as the date, location, and a description of what's going on. I would like my users to be able to search through the events list through a set of different textboxes, but I'm having a hard time getting the syntax just right. In my view I have... <% form_tag users_path, :method => 'get' do %> (<%= text_field_tag :search_keyword, params[:search_keyword] %>) + (<%= text_field_tag :search_zip, params[:search_zip] %>) <%= submit_tag "Find Events!", :name => nil %> <% end %> and in the controller I'm trying to query the results... if params[:search_keyword] @events = Event.find(:all, :conditions => [' name LIKE ? ', "%#{params[:search_keyword]}%"]) elsif params[:search_zip] @events = Event.find(:all, :origin=> params[:search_zip], :within=>50 ) else @events = Event.find(:all) end How do I code it so that it performs a search only if the textbox isn't empty? Also, if both textboxes are filled in, @events should be the product of BOTH queries. I have no idea if something like @events = @events + Event.find(...) would even work.
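
    One way to combine the filters (a sketch, assuming the :origin/:within finder the asker already uses, e.g. from geokit-rails, also accepts :conditions in the same call):

        # Build the finder options only from the textboxes that were filled in.
        options = {}
        unless params[:search_keyword].blank?
          options[:conditions] = ['name LIKE ?', "%#{params[:search_keyword]}%"]
        end
        unless params[:search_zip].blank?
          options[:origin] = params[:search_zip]
          options[:within] = 50
        end
        @events = Event.find(:all, options)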

  • PHP and use of the Num_Of_Rows() function?

    - by Michael Smith
    Below is some PHP code that I have written. The problem occurs when it gets to the use of num_of_rows(); it just does not seem to work and I can't figure out why. <?php try { $divMon_ID = array(); $divMon_Position = array(); $divMon_Width = array(); $divMon_Div = array(); $db = new PDO('sqlite:db/EVENTS.sqlite'); $result_mon = $db->query('SELECT * FROM Monday'); $totalRows = mysql_num_rows($result_mon); //for($counter=1; $counter<=10; $counter+=1) //{ //<div id="event_1" style="position:absolute; left: 0px; top:-39px; width:100px; font-family:Arial, Helvetica, sans-serif; font-size:small; border:2px blue solid; height:93px"> //$divMon_ID[]=$row['Id']; //$divMon_Position[]=$row['Origin']; //$divMon_P[]=$row['Position']; //} } catch(PDOException $e) { print 'Exception : '.$e->getMessage(); } ?> I know that it is the "$totalRows = mysql_num_rows($result_mon);" statement, because when I comment it out, the page loads. Am I using the function in the wrong way? Thanks.
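
    The failure comes from mixing APIs: mysql_num_rows() belongs to the old mysql_* extension and cannot read a PDO statement, let alone one from SQLite (and PDOStatement::rowCount() is not reliable for SELECTs on every driver). A sketch of counting rows the PDO way:

        <?php
        $db = new PDO('sqlite:db/EVENTS.sqlite');
        $result_mon = $db->query('SELECT * FROM Monday');

        // Fetch everything once, then count the array.
        $rows = $result_mon->fetchAll(PDO::FETCH_ASSOC);
        $totalRows = count($rows);

        foreach ($rows as $row) {
            // $row['Id'], $row['Origin'], ... as in the commented-out loop
        }
        ?>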

  • Can't push to GitHub

    - by John
    I just completed chapter one of the Ruby on Rails Tutorial by Hartl (I posted about one minor hitch previously). Now I have started chapter two. I swear I did everything by the book, but when I try: git push -u origin master I get the following messages after entering my passphrase: ERROR: repository not found fatal: could not read from remote repository Please make sure you have the correct access rights and that the repository exists. When I downloaded the Heroku tools, I think it installed a second version of Ruby on my machine; in any case, I now have two versions listed under All Programs: Ruby 1.9.2-p290 and 1.9.3-p327. Could this have screwed things up? Also, when I open the command prompt using 1.9.2, there is a weird message at the top before I do anything: 'C:\Program' is not recognized as an internal or external command, operable program or batch file. This is then followed by the normal prompt on the next line. I'm wondering if my public keys have somehow gotten screwed up. Any help would be appreciated.
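
    "ERROR: repository not found" normally points at the remote URL rather than at the keys or the Ruby installs; and the 'C:\Program' message is usually just an unquoted "Program Files" path in a shortcut or startup script, a separate nuisance. Worth checking what origin actually points at (the repository name below is a placeholder):

        git remote -v                                                   # show the configured URLs
        git remote set-url origin [email protected]:yourname/yourrepo.git   # correct it if wrong
        git push -u origin master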

  • LVM mirror attempt results in "Insufficient free space"

    - by MattK
    Attempting to add a disk to mirror an LVM volume on CentOS 7 always fails with "Insufficient free space: 1 extents needed, but only 0 available". Having searched for a solution, I have tried specifying disks, multiple logging options, and adding a 3rd log partition, but have not found a fix. Not sure if I am making a rookie mistake or there is something more subtle wrong (I am more familiar with ZFS and new to using LVM): # lvconvert -m1 centos_bi/home Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --corelog centos_bi/home Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --corelog --alloc anywhere centos_bi/home Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --mirrorlog mirrored --alloc anywhere centos_bi/home /dev/sda2 Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --corelog --alloc anywhere centos_bi/home /dev/sdi2 /dev/sda2 Insufficient free space: 1 extents needed, but only 0 available The two disks are the same size and have identical partition layouts via "sfdisk -d /dev/sdi part_table; sfdisk /dev/sda < part_table". The current configuration is detailed below. # pvs PV VG Fmt Attr PSize PFree /dev/sda1 centos_bi lvm2 a-- 496.00m 496.00m /dev/sda2 centos_bi lvm2 a-- 465.27g 465.27g /dev/sdi2 centos_bi lvm2 a-- 465.27g 0 # vgs VG #PV #LV #SN Attr VSize VFree centos_bi 3 3 0 wz--n- 931.02g 465.75g # lvs -a -o +devices LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices home centos_bi -wi-ao---- 391.64g /dev/sdi2(6050) root centos_bi -wi-ao---- 50.00g /dev/sdi2(106309) swap centos_bi -wi-ao---- 23.63g /dev/sdi2(0) # pvdisplay --- Physical volume --- PV Name /dev/sdi2 VG Name centos_bi PV Size 465.27 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 119109 Free PE 0 Allocated PE 119109 --- Physical volume --- PV Name /dev/sda2 VG Name centos_bi PV Size 465.27 GiB / not usable 3.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 119109 Free PE 119109 Allocated PE 0 --- Physical volume --- PV Name /dev/sda1 VG Name centos_bi PV Size 500.00 MiB / not usable 4.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 124 Free PE 124 Allocated PE 0 # vgdisplay --- Volume group --- VG Name centos_bi System ID Format lvm2 Metadata Areas 3 Metadata Sequence No 10 VG Access read/write VG Status resizable MAX LV 0 Cur LV 3 Open LV 3 Max PV 0 Cur PV 3 Act PV 3 VG Size 931.02 GiB PE Size 4.00 MiB Total PE 238342 Alloc PE / Size 119109 / 465.27 GiB Free PE / Size 119233 / 465.75 GiB # lvdisplay --- Logical volume --- LV Path /dev/centos_bi/swap LV Name swap VG Name centos_bi LV Write Access read/write LV Creation host, time localhost, 2014-08-07 16:34:34 -0400 LV Status available # open 2 LV Size 23.63 GiB Current LE 6050 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Path /dev/centos_bi/home LV Name home VG Name centos_bi LV Write Access read/write LV Creation host, time localhost, 2014-08-07 16:34:35 -0400 LV Status available # open 1 LV Size 391.64 GiB Current LE 100259 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:2 --- Logical volume --- LV Path /dev/centos_bi/root LV Name root VG Name centos_bi LV Write Access read/write LV Creation host, time localhost, 2014-08-07 16:34:37 -0400 LV Status available # open 1 LV Size 50.00 GiB Current LE 12800 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0
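
    If it is the mirror-log extent that the allocator cannot place, one avenue worth trying on CentOS 7 (a sketch, not a verified fix for this exact box) is building the mirror as a raid1-type LV, which keeps its metadata alongside each image instead of needing a separate log extent:

        lvconvert --type raid1 -m 1 centos_bi/home /dev/sda2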

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it is run. I am trying to track down the source of the error, but have not had any luck; running the same commands manually does not produce an error. The error changes depending on what was done in the commit that is being pushed to the central repository. For instance, if 'git rm <file>' was committed and pushed to the central repo, the error message will be "remote: hooks/post-receive: line 16: Removed: command not found", and if 'git add <file>' was committed and pushed, the error message will be "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message. Here is the post-receive script: #!/bin/bash # # This script is triggered by a push to the local git repository. It will # ssh into a remote server and perform a git pull. # # The SSH_USER must be able to log into the remote server with a # passphrase-less SSH key *AND* be able to do a git pull without a passphrase. # # The command to actually perform the pull request on the remote server comes # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered # by the ssh login. SSH_USER="remoteuser" REMOTE_HOST="staging.server.com" `ssh $SSH_USER@$REMOTE_HOST` # This is line 16 echo "Done!" The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is: command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key) This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo: ben@tamarack:~/thejibe/testing/web$ git rm ./testing rm 'testing' ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file" [master bb96e13] Remove testing file 1 files changed, 0 insertions(+), 5 deletions(-) delete mode 100644 testing ben@tamarack:~/thejibe/testing/web$ git push Counting objects: 3, done. Delta compression using up to 2 threads. Compressing objects: 100% (2/2), done. Writing objects: 100% (2/2), 221 bytes, done. Total 2 (delta 1), reused 0 (delta 0) remote: From [email protected]:testing remote: aa72ad9..bb96e13 master -> origin/master remote: hooks/post-receive: line 16: Removed: command not found # The error msg remote: Done! To [email protected]:testing aa72ad9..bb96e13 master -> master ben@tamarack:~/thejibe/testing/web$ As you can see, the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has been run successfully, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
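
    The backticks on line 16 are the likely source: `ssh ...` is command substitution, so bash takes ssh's output (git's "Removed ..." or "Merge ..." status line) and tries to execute its first word as a command; the pull itself has already finished by then, which is why everything else works. A sketch of the fixed tail of the script:

        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"
        ssh "$SSH_USER@$REMOTE_HOST"   # line 16: plain invocation, no backticks
        echo "Done!"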


  • Secure ldap problem

    - by neverland
    I have tried to configure my OpenLDAP to use a secure connection via OpenSSL on Debian 5, but I run into trouble with the command below. ldap:/etc/ldap# slapd -h 'ldap:// ldaps://' -d1 >>> slap_listener(ldaps://) connection_get(15): got connid=7 connection_read(15): checking for input on id=7 connection_get(15): got connid=7 connection_read(15): checking for input on id=7 connection_get(15): got connid=7 connection_read(15): checking for input on id=7 connection_get(15): got connid=7 connection_read(15): checking for input on id=7 connection_read(15): unable to get TLS client DN, error=49 id=7 connection_get(15): got connid=7 connection_read(15): checking for input on id=7 ber_get_next ber_get_next on fd 15 failed errno=0 (Success) connection_closing: readying conn=7 sd=15 for close connection_close: conn=7 sd=15 I then searched for "unable to get TLS client DN, error=49 id=7", but it seems nobody has a good solution to this yet. Please help. Thanks. # Well, I tried to fix some things to get it to work, but now I get this: ldap:~# slapd -d 256 -f /etc/openldap/slapd.conf @(#) $OpenLDAP: slapd 2.4.11 (Nov 26 2009 09:17:06) $ root@SD6-Casa:/tmp/buildd/openldap-2.4.11/debian/build/servers/slapd could not stat config file "/etc/openldap/slapd.conf": No such file or directory (2) slapd stopped. connections_destroy: nothing to destroy. What should I do now? log : ldap:~# /etc/init.d/slapd start Starting OpenLDAP: slapd - failed. The operation failed but no output was produced. For hints on what went wrong please refer to the system's logfiles (e.g. /var/log/syslog) or try running the daemon in Debug mode like via "slapd -d 16383" (warning: this will create copious output). Below, you can find the command line options used by this script to run slapd. Do not forget to specify those options if you want to look to debugging output: slapd -h 'ldaps:///' -g openldap -u openldap -f /etc/ldap/slapd.conf ldap:~# tail /var/log/messages Feb 8 16:53:27 ldap kernel: [ 123.582757] intel8x0_measure_ac97_clock: measured 57614 usecs Feb 8 16:53:27 ldap kernel: [ 123.582801] intel8x0: measured clock 172041 rejected Feb 8 16:53:27 ldap kernel: [ 123.582825] intel8x0: clocking to 48000 Feb 8 16:53:27 ldap kernel: [ 131.469687] Adding 240932k swap on /dev/hda5. Priority:-1 extents:1 across:240932k Feb 8 16:53:27 ldap kernel: [ 133.432131] EXT3 FS on hda1, internal journal Feb 8 16:53:27 ldap kernel: [ 135.478218] loop: module loaded Feb 8 16:53:27 ldap kernel: [ 141.348104] eth0: link up, 100Mbps, full-duplex Feb 8 16:53:27 ldap rsyslogd: [origin software="rsyslogd" swVersion="3.18.6" x-pid="1705" x-info="http://www.rsyslog.com"] restart Feb 8 16:53:34 ldap kernel: [ 159.217171] NET: Registered protocol family 10 Feb 8 16:53:34 ldap kernel: [ 159.220083] lo: Disabled Privacy Extensions
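
    The second failure looks like a simple path mix-up: on Debian the file lives at /etc/ldap/slapd.conf, not /etc/openldap/slapd.conf, as the init script's own hint at the end shows. Re-running the debug invocation against that path (a sketch):

        slapd -h 'ldaps:///' -g openldap -u openldap -d 16383 -f /etc/ldap/slapd.conf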

  • gmail dkim=neutral (no signature)

    - by Bretticus
    After much testing and retracing my steps, I still cannot get Google Mail to validate. My mail server is Debian 5.0 with Exim: Exim version 4.72 #1 built 31-Jul-2010 08:12:17 Copyright (c) University of Cambridge, 1995 - 2007 Berkeley DB: Berkeley DB 4.8.24: (August 14, 2009) Support for: crypteq iconv() IPv6 PAM Perl Expand_dlfunc GnuTLS move_frozen_messages Content_Scanning DKIM Old_Demime Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz dnsdb dsearch ldap ldapdn ldapm mysql nis nis0 passwd pgsql sqlite Authenticators: cram_md5 cyrus_sasl dovecot plaintext spa Routers: accept dnslookup ipliteral iplookup manualroute queryprogram redirect Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp Fixed never_users: 0 Size of off_t: 8 GnuTLS compile-time version: 2.4.2 GnuTLS runtime version: 2.4.2 Configuration file is /var/lib/exim4/config.autogenerated My remote smtp transport configuration: remote_smtp: debug_print = "T: remote_smtp for $local_part@$domain" driver = smtp helo_data = mailer.mydomain.com dkim_domain = mydomain.com dkim_selector = mailer dkim_private_key = /etc/exim4/dkim/mailer.mydomain.com.key dkim_canon = relaxed .ifdef REMOTE_SMTP_HOSTS_AVOID_TLS hosts_avoid_tls = REMOTE_SMTP_HOSTS_AVOID_TLS .endif .ifdef REMOTE_SMTP_HEADERS_REWRITE headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE .endif .ifdef REMOTE_SMTP_RETURN_PATH return_path = REMOTE_SMTP_RETURN_PATH .endif .ifdef REMOTE_SMTP_HELO_FROM_DNS helo_data=REMOTE_SMTP_HELO_DATA .endif The path to my private key is correct. I see a DKIM header in my messages as they arrive in my Gmail account: DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mydomain.com; s=mailer; h=Content-Type:MIME-Version:Message-ID:Date:Subject:Reply-To:To:From; bh=nKgQAFyGv<snip>tg=; b=m84lyYvX6<snip>RBBqmW52m1ce2g=; However, Gmail headers always report dkim=neutral (no signature): dkim=neutral (no signature) [email protected] My DNS results: dig +short txt mailer._domainkey.mydomain.com mailer._domainkey. mydomain.com descriptive text "v=DKIM1\; k=rsa\; t=y\; p=LS0tLS1CRUdJ<snip>M0RRRUJBUVV" "BQTRHTkFEQ0J<snip>GdLamdaaG" "JwaFZkai93b3<snip>laSCtCYmdsYlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21LNFhaR0NjeENMR0FmOWI4Z<snip>tLQo=" Note that the base64 public key is 364 characters long, so I had to split the key up using BIND 9. $ORIGIN _domainkey. mydomain.com. mailer TXT ("v=DKIM1; k=rsa; t=y; p=LS0tLS1CRUdJTiBQVUJM<snip>U0liM0RRRUJBUVV" "BQTRHTkFEQ0JpUUtCZ1<snip>15MGdLamdaaG" "JwaFZkai93b3lDK21MR<snip>YlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21L<snip>Ci0tLS0tRU5E" "IFBVQkxJQyBLRVktLS0tLQo=") Can anyone point me in the right direction? I would really appreciate it.
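
    One thing stands out in the published record: the p= value begins with "LS0tLS1CRUdJTiBQVUJM...", which is base64 for "-----BEGIN PUBL...", so the whole PEM file (BEGIN/END lines included) appears to have been base64-encoded a second time. The p= tag must carry only the inner base64 of the public key, the part between the BEGIN and END lines. A sketch of the corrected record (key material shortened; a 1024-bit RSA key's inner base64 typically starts as shown):

        mailer._domainkey IN TXT ("v=DKIM1; k=rsa; t=y; "
                                  "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKB...")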

  • W2k8, Sybase Driver, Permissions

    - by Clustermagnet
    Trying to get a .NET (32-bit) app running on a Windows 2008 server. My experience in the Windows world is quite limited. Is this related to the Full/Medium trust settings? I have been Googling for quite some time. I appreciate your feedback! Seeing the following error: Required permissions cannot be acquired. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Security.Policy.PolicyException: Required permissions cannot be acquired. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [PolicyException: Required permissions cannot be acquired.] System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Boolean checkExecutionPermission) +7606467 System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Int32& securitySpecialFlags, Boolean checkExecutionPermission) +57 [FileLoadException: Could not load file or assembly 'Sybase.Data.AseClient, Version=1.155.1000.0, Culture=neutral, PublicKeyToken=26e0f1529304f4a7' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417)] System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0 System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +43 System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +127 System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +142 System.Reflection.Assembly.Load(String assemblyString) +28 System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +46 [ConfigurationErrorsException: Could not load file or assembly 'Sybase.Data.AseClient, Version=1.155.1000.0, Culture=neutral, PublicKeyToken=26e0f1529304f4a7' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417)] System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +613 System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +203 System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) +105 System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +178 System.Web.Compilation.WebDirectoryBatchCompiler..ctor(VirtualDirectory vdir) +163 System.Web.Compilation.BuildManager.BatchCompileWebDirectoryInternal(VirtualDirectory vdir, Boolean ignoreErrors) +53 System.Web.Compilation.BuildManager.BatchCompileWebDirectory(VirtualDirectory vdir, VirtualPath virtualDir, Boolean ignoreErrors) +175 System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath) +86 System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) +261 System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) +101 System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean noAssert) +126 System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath(VirtualPath virtualPath, Type requiredBaseType, HttpContext context, Boolean allowCrossApp, Boolean noAssert) +62 System.Web.UI.PageHandlerFactory.GetHandlerHelper(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath) +33 System.Web.UI.PageHandlerFactory.GetHandler(HttpContext context, String requestType, String virtualPath, String path) +37 System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +307 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Version Information: Microsoft .NET Framework Version:2.0.50727.4959; ASP.NET Version:2.0.50727.4955
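
    The PolicyException plus the "Failed to grant minimum permission requests" FileLoadException do point at trust levels: the Sybase assembly demands permissions the current (likely Medium) trust level refuses. If you control the server, a sketch of granting the application Full trust in its web.config (adjusting the machine-level trust policy is the alternative):

        <configuration>
          <system.web>
            <!-- Medium trust cannot satisfy the assembly's minimum permission request -->
            <trust level="Full" />
          </system.web>
        </configuration>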


  • Server freezes at XX:25

    - by Karevan
    We've ordered a 50 euro/month server on hetzner.de running Debian. The problem is that the server freezes at a random time of day and nothing appears in the logs; only a hardware reboot helps. Part of the log file from around a freeze: Aug 17 22:38:26 Debian-60-squeeze-64-minimal pure-ftpd: ([email protected]) [INFO] New connection from 95.211.120.220 Aug 17 22:38:26 Debian-60-squeeze-64-minimal pure-ftpd: ([email protected]) [INFO] Logout. Aug 17 22:39:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22828]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$ Aug 17 23:09:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22835]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$ Aug 17 23:17:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22842]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Aug 17 23:39:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22847]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$ Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: imklog 4.6.4, log source = /proc/kmsg started. Aug 18 09:47:47 Debian-60-squeeze-64-minimal rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="1229" x-info="http://www.rsyslog.com"] (re)start Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Initializing cgroup subsys cpuset Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Initializing cgroup subsys cpu Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Linux version 2.6.32-5-amd64 (Debian 2.6.32-45) ([email protected]) (gcc version 4.3.5 (Debian 4.3.5$ Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-2.6.32-5-amd64 root=/dev/md2 ro As you can see, only the restart itself shows up. Sadly there's no way to look at the server's console right after it freezes, and the datacenter's support staff do not really want to help with that. The server was installed on 30 July; the freezes occurred at: 6 August, 0:25; 18 August, 2:27; 21 August, 1:25; 26 August, 23:26. We decided that freezing around ??:25 wasn't a hardware fault and decided to reinstall the OS. On 31 August our admin backed up all files, reinstalled Debian, and restored the backup. But then, on 7 September, the server went down again, at 5:05. We thought it was related to "Anyone else experiencing high rates of Linux server crashes during a leap second day?" and turned ntp off. But then the server went down twice again: 21 September, 17:29, and 24 September, 20:27. I called every Linux admin I know to help solve this, and they said the OS configuration is fine and it can only be hardware. But they don't know why it almost always freezes at XX:25-30. Maybe some of you know about something related to that?
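
    Since nothing reaches the on-disk log before the hang, one standard trick (a sketch; the IPs and MAC are placeholders for a second box running something like "nc -u -l 6666") is to stream kernel messages off the machine as they happen, so a panic that never hits disk is still captured:

        modprobe netconsole netconsole=6665@192.168.0.2/eth0,[email protected]/00:11:22:33:44:55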

  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning I was alerted to the fact that both Apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said Apache was not running, so I started Apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding. /var/log/unattended-upgrades/unattended-upgrades.log 2013-07-02 06:30:51,875 INFO Starting unattended upgrades script 2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security'] 2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor apport apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils gnupg gpgv isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdns81 libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libexpat1 libfreetype6 libgc1c2 libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27 libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8 libx11-6 libx11-data libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual linux-libc-dev linux-virtual multiarch-support openssl perl perl-base perl-modules python-apport python-crypto python-keyring python-problem-report python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata update-manager-core 2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log' 2013-07-02 06:36:10,584 INFO All upgrades installed I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows: /etc/apt/apt.conf.d/50unattended-upgrades // Automatically upgrade packages from these (origin:archive) pairs Unattended-Upgrade::Allowed-Origins { "${distro_id}:${distro_codename}-security"; // "${distro_id}:${distro_codename}-updates"; // "${distro_id}:${distro_codename}-proposed"; // "${distro_id}:${distro_codename}-backports"; }; // List of packages to not update Unattended-Upgrade::Package-Blacklist { }; /etc/apt/apt.conf.d/20auto-upgrades APT::Periodic::Update-Package-Lists "1"; APT::Periodic::Unattended-Upgrade "1"; I've struggled to find documentation about what happens to running processes during an upgrade. Is this expected behaviour, or should unattended-upgrades restart Apache after upgrading it? What can I do to ensure Apache is restarted correctly? Should I just blacklist the apache package?
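
    On the last question, the blacklist the asker mentions is the supported knob, and the entry goes into the file already quoted above; the trade-off is that apache2 security updates then have to be applied by hand:

        // in /etc/apt/apt.conf.d/50unattended-upgrades
        Unattended-Upgrade::Package-Blacklist {
            "apache2";
        };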

  • How do I combine static and dynamic DHCP leases on a Cisco router?

    - by Brad
    Basically, what I need is very similar to the unanswered Cisco forum question below: https://supportforums.cisco.com/message/3139749#3139749 I have a Cisco 850 Series router. I have configured a DHCP pool for the 10.0.0.0/24 network and excluded 10.0.0.1 - 10.0.0.99 from the DHCP pool. I want to add a static DHCP pool for certain devices, and I want DHCP to statically assign them the addresses of my choice below 100. Actually, I don't care which addresses I statically assign; they can be anything in the pool for all I care, I just want it to work. Why am I doing this instead of just statically assigning the IPs on the devices? Because I have some laptop users, and a static IP would obviously only work in this one location. This wouldn't be a problem if they could be bothered to change a location setting or something; they can't. So it HAS to be DHCP. It also has to be static IPs because I need to forward ports to them. I know this is weird, but it's an apartment LAN/WLAN, so this isn't exactly a typical use case. Relevant sections of the config below: ip dhcp excluded-address 10.0.0.1 10.0.0.99 ! ip dhcp pool Internal-net import all network 10.0.0.0 255.255.255.0 default-router 10.0.0.1 domain-name 1770.local lease 7 ! ip dhcp pool static-pool import all origin file flash://staticmap default-router 10.0.0.1 domain-name 1770.local Contents of staticmap: *time* Aug 5 2010 09:00 AM *version* 2 !IP address Type Hardware address Lease expiration 10.0.0.100/24 1 001f.5b3e.d50a Infinite *end* You can see here I was trying addresses outside the excluded-address range to see if that would make any difference. My testing machine's MAC: mainframe:~ brad$ ifconfig en1 en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:1f:5b:3e:d5:0a What shows up in the DHCP binding table: basestar#show ip dhcp binding Bindings from all pools not associated with VRF: IP address Client-ID/ Lease expiration Type Hardware address/ User name 10.0.0.112 0100.1f5b.3ed5.0a Aug 12 2010 10:06 AM Automatic What's up with the funny-looking MAC in the DHCP binding table? Is what I'm trying to accomplish basically impossible? Am I going about this the wrong way? All I want is to be able to forward some ports to specific devices. The way I would do this with a consumer router is what I'm trying here: assign static DHCP leases to those devices, then configure PAT for ports on those addresses.
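
    The "funny looking MAC" is the DHCP client identifier: the Ethernet media type byte 01 prepended to the MAC address. Manual bindings keyed on that identifier are the usual IOS answer here, one small pool per host; a sketch (the pool name is illustrative):

        ip dhcp pool STATIC-MAINFRAME
           host 10.0.0.100 255.255.255.0
           client-identifier 0100.1f5b.3ed5.0a
           default-router 10.0.0.1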

  • Why does my mail get marked as spam?

    - by schoen
    I have the server "afspraakmanager.be". It meets every requirement for not being a spam server (and it isn't one, by the way): it has reverse DNS, SPF, DKIM, ... But Hotmail marks it as spam. I think the problem is the SPF/DKIM records. When I send an email to my Gmail account, the headers say: "Received-SPF: neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=2a02:348:8e:6048::1; Authentication-Results: mx.google.com; spf=neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]; dkim=neutral (bad format) [email protected]" So I guess my SPF and DKIM records aren't set up right, but I also don't have a clue what is wrong with them. This is the zone file: ; zone file for afspraakmanager.be $ORIGIN afspraakmanager.be. $TTL 3600 @ 86400 IN SOA ns1.eurodns.com. hostmaster.eurodns.com. ( 2013102003 ; serial 86400 ; refresh 7200 ; retry 604800 ; expire 86400 ; minimum ) @ 86400 IN NS ns1.eurodns.com. @ 86400 IN NS ns2.eurodns.com. @ 86400 IN NS ns3.eurodns.com. @ 86400 IN NS ns4.eurodns.com. ; Mail Exchanger definition @ 600 IN MX 10 smtp ; IPv4 Address definition @ IN A 37.230.96.72 afspraakmanager.be 600 IN A 37.230.96.72 localhost 86400 IN A 127.0.0.1 smtp 600 IN A 37.230.96.72 www 600 IN A 37.230.96.72 ; Text definition default._domainkey 600 IN TXT "v=DKIM1\\; k=rsa\\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC6pvlZKnbSVXg1Bf3MF2l8xRrKPmqIw2i9Rn1yZ3HEny9qH1vyGXUjdv2O0aQbd5YShSGjtg5H/GedRMLpB0Qb+hBj1yGofOQTdcVtZZfj8qBY5Z7vEkhvtdaogQ0vLjgcwhg0BBuTewEkLxrl9IIzkPMZ1SCtM2Y0RtiUhg2cjQIDAQAB" ; Sender Policy Framework definition afspraakmanager.be 600 IN SPF "v=spf1 a mx ptr +all" The DKIM signature in the header: DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=afspraakmanager.be; s=mail; t=1382361029; bh=4pDpXBY8rCbX8+MfrklZzpQxaUsa3vSPUYjcDR3KAnU=; h=Date:From:To:Subject:From; b=SoBBaAlrueD8qID8txl2SBSqnZgN2lkPCdSPI/m7/YLezIcBedkgIX1NswYiZFl6Z AmF8dES73WUaaJjItVHSrdCJK2mJ/Az+vrgNsyk+GqZZ1YPiIlH3gqRrsguhoofXUX /gqLlqsLxqxkKKd9EbSzKRHuDGlJCLm5SlL8wnL0=
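
    Two mismatches are visible in the data shown: the signature is made with s=mail, but the zone only publishes default._domainkey, so verifiers never find the key; and the SPF policy is published as an SPF RR ending in "+all", which hides it from the many receivers that only query TXT and, where it is read, declares that every sender passes. Note also that a record name written as afspraakmanager.be without a trailing dot is relative to $ORIGIN and expands to afspraakmanager.be.afspraakmanager.be. A sketch of corrected lines (key material shortened):

        mail._domainkey 600 IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GN..."
        @               600 IN TXT "v=spf1 a mx ~all"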

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this or how to debug it further, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as a KVM hypervisor. The server is running Debian Wheezy Beta 4, and KVM virtualisation is utilized through libvirt. The server has two different 3TB hard drives, as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices: one for the unencrypted boot partition and one as a container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split across three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack: sda (Physical HDD) - md0 (RAID1) - md1 (RAID1) sdb (Physical HDD) - md0 (RAID1) - md1 (RAID1) md0 (Boot RAID) - ext4 (/boot) md1 (Data RAID) - LUKS container - LVM Physical volume - LVM volume hypervisor-root - LVM volume hypervisor-home - LVM volume hypervisor-swap - … (Virtual machine volumes) The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too; we have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guests is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but also affects services running on the hypervisor itself. The setup seems complex, but I'm sure the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup without any of the performance problems. On the old setup the following things were different: Debian Lenny was the OS for both hypervisor and almost all guests; Xen software virtualisation (therefore no Virtio, also); no libvirt management; different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore); we had no IPv6 connectivity; neither in the hypervisor nor in the guests did we have noticeable I/O latency problems. According to the datasheets, the current hard drives and those of the old machine have an average latency of 4.12 ms.
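
    A first step toward localizing which layer introduces the latency (a sketch of standard tools, not a diagnosis): watch per-device wait times across every member of the stack at once while the workload runs; whichever device's await jumps first, relative to the layers beneath it, is the one to investigate.

        iostat -x 1   # await/%util for sda, sdb, the md* arrays and the dm-* devices
        dmsetup ls    # translates the dm-N names iostat prints into LUKS/LVM volumes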

  • Hidden DNS master only sending notify to one slave

    - by Rob
    My hidden DNS master is only sending notifies to one of the name servers for a zone. I have 3 named servers, ns0, ns1 & ns2, all running BIND 9.7.3.dfsg-1ubuntu4.1. When an update is processed, the master (ns0) seems to behave normally. ns0 (192.168.2.50) zone domain.org/IN: sending notifies (serial 2012060703) client 192.168.2.52#42892: transfer of 'domain.org/IN': AXFR-style IXFR started: TSIG rndc-key client 192.168.2.52#42892: transfer of 'domain.org/IN': AXFR-style IXFR ended ns2 (192.168.2.52) client 192.168.2.50#3762: received notify for zone 'domain.org': TSIG 'rndc-key' zone domain.org/IN: Transfer started. transfer of 'domain.org/IN' from 192.168.2.50#53: connected using 192.168.2.52#55747 zone domain.org/IN: transferred serial 2012060704: TSIG 'rndc-key' transfer of 'domain.org/IN' from 192.168.2.50#53: Transfer completed: 1 messages, 34 records, 1028 bytes, 0.001 secs (1028000 bytes/sec) Nothing happens on ns1. I've turned up the logging level, but there's no information in syslog about which name servers BIND actually sent notifications to, so I guess this is something it doesn't log. I've also tried watching tcpdump; it never makes any attempt to notify ns1, only ns2: 192.168.2.50.56278 > 192.168.2.52.53: [udp sum ok] 56418 notify [b2&3=0x2400] [1a] [1au] ? SOA? domain.org. domain.org. [0s] SOA ns1.domain.net. dnsmaster.domain.net. ? 2012060801 10800 3600 604800 3600 ar: rndc-key. ANY [0s] TSIG hmac-md5.sig-alg.reg.int. fudge=300 maclen=16 origid=56418 error=0 otherlen=0 (174) The authoritative zone has both ns1 and ns2 records: $ORIGIN domain.org. $TTL 3h @ IN SOA ns1.domain.net. dnsmaster.domain.net. ( 2012060801 ; Serial yyyymmddnn 3h ; Refresh After 3 hours 1h ; Retry Retry after 1 hour 1w ; Expire after 1 week 1h ) ; Minimum negative caching of 1 hour @ 3600 IN NS ns1.domain.net. @ 3600 IN NS ns2.domain.net. // Edit I have added also-notify {192.168.2.51;192.168.2.52;}; explicitly to the zone and it all works fine; both ns1 and ns2 get notify messages and transfers succeed. I was under the impression BIND would automatically send notifies to all NS records in a zone; maybe it's bugged?
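
    This is documented behaviour rather than a bug: when BIND builds the automatic notify list from a zone's NS records, it skips the server named in the SOA MNAME field, assuming that host is the master itself. Here the MNAME is ns1.domain.net, so ns1 never receives a notify. The also-notify the asker added is the standard remedy for hidden-master setups; a sketch of the zone clause (the file path is illustrative):

        zone "domain.org" {
            type master;
            file "/etc/bind/db.domain.org";
            notify explicit;                              // skip the NS-record scan
            also-notify { 192.168.2.51; 192.168.2.52; };
        };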

  • Setting font size of Closed Captions on iPhone using ffmpeg or mencoder

    - by forthrin
    Does anyone know how to either (1) make ffmpeg set the subtitle font size in the output video file, or (2) make mencoder produce an iPhone-compatible video file (with subtitles)? I finally found out how to get Closed Captions video on the iPhone, with mkv and srt files as source material. The secret was using the mov_text subtitle codec in ffmpeg (and turning on Closed Captions in the iPhone settings, of course): ffmpeg -y -i in.mkv -i in.srt -map 0:0 -map 0:1 -map 1:0 -vcodec copy -acodec aac -ab 256k -scodec mov_text -strict -2 -metadata title="Title" -metadata:s:s:0 language=eng out.mp4 However, the font size appears very small on the iPhone, and I can't find out how to set it with ffmpeg (the iPhone has no option for this). I found out that mencoder has a -subfont-text-scale option, but I don't have a lot of experience with this program. The following, my best attempt so far, produces an output file which is not playable on the iPhone: sudo port install mplayer +mencoder_extras +osd mencoder in.mkv -sub in.srt -o out.mp4 -ovc copy -oac faac -faacopts br=256:mpeg=4:object=2 -channels 2 -srate 48000 -subfont-text-scale 10 -of lavf -lavfopts format=mp4 PS! As requested, here is the output from mencoder: 192 audio & 400 video codecs success: format: 0 data: 0x0 - 0xb64b9d2f libavformat version 54.6.101 (internal) libavformat file format detected. [matroska,webm @ 0x1015c9a50]Unknown entry 0x80 [lavf] stream 0: video (h264), -vid 0 [lavf] stream 1: audio (ac3), -aid 0, -alang eng VIDEO: [H264] 1280x544 0bpp 49.894 fps 0.0 kbps ( 0.0 kbyte/s) [V] filefmt:44 fourcc:0x34363248 size:1280x544 fps:49.894 ftime:=0.0200 ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.23.100 (internal) AUDIO: 48000 Hz, 2 ch, s16le, 448.0 kbit/29.17% (ratio: 56000->192000) Selected audio codec: [ffac3] afm: ffmpeg (FFmpeg AC-3) ========================================================================== ** MUXER_LAVF ***************************************************************** REMEMBER: MEncoder's libavformat muxing is presently broken and can generate INCORRECT files in the presence of B-frames. Moreover, due to bugs MPlayer will play these INCORRECT files as if nothing were wrong! ******************************************************************************* OK, exit. videocodec: framecopy (1280x544 0bpp fourcc=34363248) VIDEO CODEC ID: 28 AUDIO CODEC ID: 15002, TAG: 0 Writing header... [mp4 @ 0x1015c9a50]Codec for stream 0 does not use global headers but container format requires global headers [mp4 @ 0x1015c9a50]Codec for stream 1 does not use global headers but container format requires global headers Then the following repeats itself for every frame: Pos: 0.0s 1f ( 2%) 0.00fps Trem: 0min 0mb A-V:0.000 [0:0] [mp4 @ 0x1015c9a50]malformated aac bitstream, use -absf aac_adtstoasc Error while writing frame. I recognize -absf aac_adtstoasc as an ffmpeg option (does mencoder spawn ffmpeg?), but I don't know how to pass this option on (my hunch is that this is not even the origin of the problem).
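
    Two hedged observations: mencoder only renders -sub subtitles into the picture when it re-encodes the video, so the -ovc copy in the attempt above defeats -subfont-text-scale; and the -absf hint cannot be passed through mencoder, but the finished file can be remuxed with ffmpeg, which owns that option. A sketch combining both (the bitrate is a placeholder):

        mencoder in.mkv -sub in.srt -subfont-text-scale 10 \
            -ovc x264 -x264encopts bitrate=1500:threads=auto \
            -oac faac -faacopts br=256:mpeg=4:object=2 -channels 2 -srate 48000 \
            -o temp.avi
        ffmpeg -i temp.avi -vcodec copy -acodec copy -absf aac_adtstoasc out.mp4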

  • How to set up a GIT repo on a server that needs a working dir (non-bare)

    - by OrangeTux
    I want to configure a Git repo for a website. Multiple users will have a clone of the repo on their local machines, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository: the idea is that the working dir of the repository will be the root folder of the website, so at the end of each day all changes become visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files: git push origin master [email protected]'s password: Counting objects: 3, done. Writing objects: 100% (3/3), 227 bytes, done. Total 3 (delta 0), reused 0 (delta 0) remote: error: refusing to update checked out branch: refs/heads/master remote: error: By default, updating the current branch in a non-bare repository remote: error: is denied, because it will make the index and work tree inconsistent remote: error: with what you pushed, and will require 'git reset --hard' to match remote: error: the work tree to HEAD. remote: error: remote: error: You can set 'receive.denyCurrentBranch' configuration variable to remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into remote: error: its current branch; however, this is not recommended unless you remote: error: arranged to update its work tree to match what you pushed in some remote: error: other way. remote: error: remote: error: To squelch this message and still keep the default behaviour, set remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'. To ssh://[email protected]/home/orangetux/www/ ! [remote rejected] master -> master (branch is currently checked out) error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/' So I'm wondering if this is the right way to set up a Git repo for a website. If so, how do I do it? If not, what is a better way to set up a Git repo for the development of a website? EDIT: you can't push to a non-bare repository. OK, clear. But what's the way to solve my problem? Create a bare repository on the server and keep a clone of it in the htdocs folder on the same server? That looks a bit clumsy to me: to see the result of a commit I'd have to pull into that clone every time.
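
    The usual pattern for this: push to a bare repository and let its post-receive hook check the pushed branch out into the web root, so the site updates on every push without repeated cloning or pulling (paths are examples):

        # on the server, once:
        git init --bare /home/orangetux/site.git

        # /home/orangetux/site.git/hooks/post-receive (made executable):
        #!/bin/sh
        GIT_WORK_TREE=/home/orangetux/www git checkout -f master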

  • Group traffic shaping with traffic control?

    - by mmcbro
    I'm trying to limit the output bandwidth generated by an application with Linux tc. This application sends me the source port of each request, which I use as a filter to limit each user to a given download speed. I feel that my setup could be managed much better if I had better knowledge of Linux tc. At the application level, users are categorized as members of a group, and each group has a limited bandwidth. Example: Members of group A: 512kbit/s; Members of group B: 1Mbit/s; Members of group C: 2Mbit/s. When a user connects to the application, it retrieves the source port at the origin of the user's request and sends me the source port and the bandwidth at which the user must be limited, depending on the group he belongs to. With this information I must add the appropriate rules so that the user (the source port, in reality) is limited to the right bandwidth. If the user that connects isn't a member of any group, he should be limited to a default speed. I'm currently managing this with a self-made daemon that adds or removes rules when it receives a request from the application. With my limited knowledge of tc, I'm not able to limit all other users (those that aren't in a group) to a default speed, and my configuration seems awful to me. Here is the base of my tc qdisc and classes: tc qdisc add dev eth0 root handle 1: htb tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps To classify a user at a given speed I have to add one subclass and then associate one filter with it: # a member of group A tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbps ceil 512kbps # its associated filter to match his source port tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 flowid 1:11 # a member of group A again tc class add dev eth0 parent 1:1 classid 1:12 htb rate 512kbps ceil 512kbps # its associated filter to match his source port tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 flowid 1:12 # a member of group B again tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1000kbps ceil 1000kbps # its associated filter to match his source port tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 flowid 1:13 I already know that a source port could be the same if it comes from a different IP address; the thing is, the application is behind a proxy, so I don't have to manage any IP addresses in this situation. I would like to know how to limit all other users (request/source port, whatever you name it) to a given speed each. I mean that each connection should be able to use at most 100kbit/s, for example, not a shared 100kbit/s. I also would like to know if there is a way to simplify my rules. I don't know if it is possible to use only one class per group and associate multiple filters with the same class, so that each user could be handled by one class and not one class per user. I appreciate any advice, thanks.
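
    Two of the questions have direct answers in htb and u32 themselves, sketched below: the root qdisc accepts a default class that catches everything no filter matched, and several filters may point at one classid, so a single class per group is enough. A hard per-connection 100kbit/s is not what one shared class gives; hanging sfq off the default class at least divides its rate fairly between flows. (Also note that u32 sport matches take a mask, and that in tc "kbps" means kilobytes per second; kbit is the unit for the rates quoted above.)

        tc qdisc add dev eth0 root handle 1: htb default 99
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 125mbit
        # everything without a matching filter lands here ...
        tc class add dev eth0 parent 1:1 classid 1:99 htb rate 1mbit ceil 1mbit
        tc qdisc add dev eth0 parent 1:99 handle 99: sfq perturb 10   # per-flow fairness
        # ... and one class per group, one filter per member port:
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbit ceil 512kbit
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:11
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 0xffff flowid 1:11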

  • How to configure apache's mod_proxy_html to work as an ajax proxy?

    - by dcerecedo
    I'm trying to build a web site that lets you view and manipulate data from any page on any other website. To do that, I have to bypass 'Allow Origin' problems: I'm loading the other domain's content in an iframe, and I have to manipulate its content with JavaScript downloaded from my domain. My first attempt was to write a simple proxy myself, requesting the other domain's pages through a server proxy coded in Java that not only serves the content but also rewrites the links (src's and href's) in it, so that the content referenced by these links also gets downloaded through my handmade proxy. The result is not bad, but it has problems with URLs in CSS and scripts. It was then that I realized that mod_proxy_html is supposed to do exactly this job. The problem is that I cannot figure out how to make it work as expected. Let's suppose my server runs at my-domain.com; to proxy and transform content from another domain I'd make a request like this: my-domain.com/proxy?url=http://another-domain.com/some/content I'd want mod_proxy_html to serve the content and rewrite the URLs in http://another-domain.com/some/content in the following ways: Absolute URLs not from another-domain.com: no rewriting. Root-relative URLs: /other/content - /proxy?url=http://another-domain.com/other/content. Relative URLs: other/content - /proxy?url=http://another-domain.com/some/content/other/content. Relative-to-parent URLs: ../other/content - /proxy?url=http://another-domain.com/some/other/content. The URL should be specified at runtime, not at configuration time. Can this be achieved with mod_proxy_html? Could anyone provide a simple working configuration to start with? EDIT 1 - First approach: The following site config will work fine with sites that use absolute URLs everywhere, like http://www.huffingtonpost.es/. You can try this config on localhost: http://localhost/asset/http://www.huffingtonpost.es/ <VirtualHost *:80> ServerName localhost LogLevel debug ProxyRequests off RewriteEngine On RewriteRule ^/asset/(.*) $1 [P] ProxyHTMLURLMap $1 /asset/ <Location /asset/> ProxyPassReverse / ProxyHTMLURLMap / /asset/ </Location> </VirtualHost> But as explained in the documentation, if I hit a site using relative URLs, I'd like to have these rewritten in the HTML via mod_proxy_html. So I should change the Location block as follows: <Location /asset/> ProxyPassReverse / #Depending on your system use one line or the other #Ubuntu: #SetOutputFilter proxy-html #any other system: ProxyHTMLEnable On ProxyHTMLURLMap / /asset/ </Location> ...which doesn't seem to work. Comments, hints and ideas welcome!
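
    A hedged partial answer: mod_proxy_html's URL maps are fixed at configuration time, so a target chosen per request, as in /proxy?url=..., cannot be expressed directly; what does work is one map block per proxied site, with ProxyHTMLExtended enabled so that URLs inside CSS and scripts (the problem reported with the handmade proxy) are rewritten as well. A sketch for one fixed site:

        <Location /asset/>
            ProxyPassReverse /
            ProxyHTMLEnable On
            ProxyHTMLExtended On    # also rewrite URLs found in CSS and JavaScript
            ProxyHTMLURLMap / /asset/http://another-domain.com/
        </Location>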
