Search Results

Search found 5157 results on 207 pages for 'checking'.


  • Weird XPath behavior in libxml2

    - by Josh K
    I have this XML tree that looks like this (I've changed the tag names but if you're really clever you may figure out what I'm actually doing.) <ListOfThings> <Thing foo:action="add"> <Bar>doStuff --slowly</Bar> <Index>1</Index> </Thing> <Thing foo:action="add"> <Bar>ping yourMother.net</Bar> <Index>2</Index> </Thing> </ListOfThings> With libxml2, I want to programmatically insert a new Thing tag into the ListOfThings with the Index being the highest current index, plus one. I do it like this (sanity checking removed for brevity): xpath = "//urn:myformat[@foo='bar']/" "urn:mysection[@name='baz']/" "urn:ListOfThings/urn:Thing/urn:Index"; xpathObj = xmlXPathEvalExpression(xpath, xpathCtx); nodes = xpathObj->nodesetval; /* Find last value and snarf the value of the tag */ highest = atoi(nodes->nodeTab[nodes->nodeNr - 1]->children->content); snprintf(order, sizeof(order), "%d", highest + 1); /* highest index plus one */ /* now move up two levels.. */ cmdRoot = nodes->nodeTab[nodes->nodeNr - 1]; ASSERT(cmdRoot->parent && cmdRoot->parent->parent); cmdRoot = cmdRoot->parent->parent; /* build the child tag */ newTag = xmlNewNode(NULL, "Thing"); xmlSetProp(newTag, "foo:action", "add"); /* set new node values */ xmlNewTextChild(newTag, NULL, "Bar", command); xmlNewChild(newTag, NULL, "Index", order); /* append this to cmdRoot */ xmlAddChild(cmdRoot, newTag); But if I call this function twice (to add two Things), the XPath expression doesn't catch the new entry I made. Is there a function I need to call to kick XPath in the shins and get it to make sure it really looks over the whole xmlDocPtr again? It clearly does get added to the document, because when I save it, I get the new tags I added. To be clear, the output looks like this: <ListOfThings> <Thing foo:action="add"> <Bar>doStuff --slowly</Bar> <Index>1</Index> </Thing> <Thing foo:action="add"> <Bar>ping yourMother.net</Bar> <Index>2</Index> </Thing> <Thing foo:action="add"> <Bar>newCommand1</Bar> <Index>3</Index> </Thing> <Thing foo:action="add"> <Bar>newCommand2</Bar> <Index>3</Index> <!-- this is WRONG! --> </Thing> </ListOfThings> I used a debugger to check what happened after xmlXPathEvalExpression got called and I saw that nodes->nodeNr was the same each time. Help me, lazyweb, you're my only hope!
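
    One detail worth noticing in the snippet above: the new nodes are created with xmlNewNode(NULL, "Thing"), i.e. without a namespace, while the XPath expression only matches namespaced urn:Thing/urn:Index elements, so a re-run of the query can legitimately skip every node added that way. A minimal sketch of that effect in Python/lxml (urn:example is a hypothetical stand-in, since the post's real namespace isn't shown):

        from lxml import etree

        NS = "urn:example"  # hypothetical namespace

        doc = etree.fromstring(
            f'<ListOfThings xmlns="{NS}"><Thing><Index>1</Index></Thing></ListOfThings>'
        )

        def indexes():
            return doc.xpath("//u:Thing/u:Index", namespaces={"u": NS})

        print(len(indexes()))  # 1

        # Like xmlNewNode(NULL, "Thing"), this adds a node with NO namespace:
        thing = etree.SubElement(doc, "Thing")
        etree.SubElement(thing, "Index").text = "2"
        print(len(indexes()))  # still 1: the namespaced query skips the new node

        # Creating the node in the document's namespace makes the query see it:
        thing = etree.SubElement(doc, f"{{{NS}}}Thing")
        etree.SubElement(thing, f"{{{NS}}}Index").text = "3"
        print(len(indexes()))  # 2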


  • .NET Declarative Security: Why is SecurityAction.Deny impossible to work with?

    - by rally25rs
    I've been messing with this for about a day and a half now, sifting through .NET Reflector and MSDN docs, and can't figure anything out... As it stands in the .NET framework, you can demand that the current Principal belong to a role to be able to execute a method by marking a method like this: [PrincipalPermission(SecurityAction.Demand, Role = "CanEdit")] public void Save() { ... } I am working with an existing security model that already has a "ReadOnly" role defined, so I need to do exactly the opposite of the above... block the Save() method if a user is in the "ReadOnly" role. No problem, right? Just flip the SecurityAction to .Deny: [PrincipalPermission(SecurityAction.Deny, Role = "ReadOnly")] public void Save() { ... } Well, it turns out that this does nothing at all. The method still runs fine. It seems that the PrincipalPermissionAttribute defines: public override IPermission CreatePermission() But when the attribute is set to SecurityAction.Deny, this method is never called, so no IPermission object is ever created. Does anyone know of a way to get .Deny to work? I've been trying to make a custom security attribute, but even that doesn't work. I tried to get tricky and do: public class MyPermissionAttribute : CodeAccessSecurityAttribute { private SecurityAction securityAction; public MyPermissionAttribute(SecurityAction action) : base(SecurityAction.Demand) { if (action != SecurityAction.Demand && action != SecurityAction.Deny) throw new ArgumentException("Unsupported SecurityAction. Only Demand and Deny are supported."); this.securityAction = action; } public override IPermission CreatePermission() { // do something based on the SecurityAction... } } Notice my attribute constructor always passes SecurityAction.Demand, which is the one action that would work previously. However, even in this case, the CreatePermission() method is still only called when the attribute is set to .Demand, and not .Deny! Maybe the runtime is actually checking the attribute instead of the SecurityAction passed to the CodeAccessSecurityAttribute constructor? I'm not sure what else to try here... anyone have any ideas? You wouldn't think it would be that hard to deny method access based on a role, instead of only demanding it. It really disturbed me that the default PrincipalPermission appears from within an IDE like it would be just fine doing a .Deny, and there is only a one-liner in the MSDN docs that hints that it won't work. You would think the PrincipalPermissionAttribute constructor would throw an exception immediately if anything other than .Demand is specified, since that could create a big security hole! I never would have realized that .Deny does nothing at all if I hadn't been unit testing! Again, all this stems from having to deal with an existing security model that has a "ReadOnly" role that needs to be denied access, instead of doing it the other way around, where I can just grant access to a role. Thanks for any help! Quick followup: I can actually make my custom attribute work by doing this: public class MyPermissionAttribute : CodeAccessSecurityAttribute { public SecurityAction SecurityAction { get; set; } public MyPermissionAttribute(SecurityAction action) : base(action) { } public override IPermission CreatePermission() { switch(this.SecurityAction) { ... } // check Demand or Deny } } And decorating the method: [MyPermission(SecurityAction.Demand, SecurityAction = SecurityAction.Deny, Role = "ReadOnly")] public void Save() { ... } But that is terribly ugly, since I'm specifying both Demand and Deny in the same attribute. But it does work... Another interesting note: my custom class extends CodeAccessSecurityAttribute, which in turn only extends SecurityAttribute. If I change my custom class to directly extend SecurityAttribute, then nothing at all works. So it seems the runtime is definitely looking only for CodeAccessSecurityAttribute instances in the metadata, and does something funny with the SecurityAction specified, even if a custom constructor overrides it.
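
    The deny-when-in-role check described above is easy to state imperatively; here is a minimal Python sketch of that inverse gate (the Principal class and role names are illustrative stand-ins, not the .NET types):

        from dataclasses import dataclass
        import functools

        @dataclass
        class Principal:                  # illustrative stand-in for IPrincipal
            roles: frozenset = frozenset()

        def deny_role(role):
            # Refuse the call when the caller IS in `role` (inverse of a Demand).
            def decorator(fn):
                @functools.wraps(fn)
                def wrapper(principal, *args, **kwargs):
                    if role in principal.roles:
                        raise PermissionError(f"role '{role}' is denied access")
                    return fn(principal, *args, **kwargs)
                return wrapper
            return decorator

        @deny_role("ReadOnly")
        def save(principal):
            return "saved"

        print(save(Principal(frozenset({"CanEdit"}))))    # saved
        # save(Principal(frozenset({"ReadOnly"})))        # raises PermissionError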


  • LINQ Generic Query with inherited base class?

    - by sah302
    I am trying to write some generic LINQ queries for my entities, but am having issues with the more complex things. Right now I am using an EntityDao class that has all my generics, and each of my object class Daos (such as AccomplishmentDao) inherits it; an example: using LCFVB.ObjectsNS; using LCFVB.EntityNS; namespace AccomplishmentNS { public class AccomplishmentDao : EntityDao<Accomplishment>{} } Now my entityDao has the following code: using LCFVB.ObjectsNS; using LCFVB.LinqDataContextNS; namespace EntityNS { public abstract class EntityDao<ImplementationType> where ImplementationType : Entity { public ImplementationType getOneByValueOfProperty(string getProperty, object getValue) { ImplementationType entity = null; if (getProperty != null && getValue != null) { //Nhibernate Example: //ImplementationType entity = default(ImplementationType); //entity = Me.session.CreateCriteria(Of ImplementationType)().Add(Expression.Eq(getProperty, getValue)).UniqueResult(Of InterfaceType)() LCFDataContext lcfdatacontext = new LCFDataContext(); //Generic LINQ Query Here lcfdatacontext.GetTable<ImplementationType>(); lcfdatacontext.SubmitChanges(); lcfdatacontext.Dispose(); } return entity; } public bool insertRow(ImplementationType entity) { if (entity != null) { //Nhibernate Example: //Me.session.Save(entity, entity.Id) //Me.session.Flush() LCFDataContext lcfdatacontext = new LCFDataContext(); //Generic LINQ Query Here lcfdatacontext.GetTable<ImplementationType>().InsertOnSubmit(entity); lcfdatacontext.SubmitChanges(); lcfdatacontext.Dispose(); return true; } else { return false; } } } } I have gotten the insertRow function working; however, I am not even sure how to go about doing getOneByValueOfProperty. The closest thing I could find on this site was: http://stackoverflow.com/questions/2157560/generic-linq-to-sql-query How can I pass in the column name and the value I am checking against generically using my current set-up? From that link it seems impossible to use a where predicate, because the entity class doesn't know what any of the properties are until I pass them in. Lastly, I need some way of creating a new object with the return type set to the implementation type. In NHibernate (what I am trying to convert from) it was simply this line that did it: ImplementationType entity = default(ImplementationType); However, default is an NHibernate command; how would I do this for LINQ? EDIT: getOne doesn't seem to work even when just going off the base class (this is a partial class of the auto-generated LINQ classes). I even removed the generics.
I tried: namespace ObjectsNS { public partial class Accomplishment { public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause) { Accomplishment entity = new Accomplishment(); if (singleOrDefaultClause != null) { LCFDataContext lcfdatacontext = new LCFDataContext(); //Generic LINQ Query Here entity = lcfdatacontext.Accomplishments.SingleOrDefault(singleOrDefaultClause); lcfdatacontext.Dispose(); } return entity; } } } Get the following error: Error 1 Overload resolution failed because no accessible 'SingleOrDefault' can be called with these arguments: Extension method 'Public Function SingleOrDefault(predicate As System.Linq.Expressions.Expression(Of System.Func(Of Accomplishment, Boolean))) As Accomplishment' defined in 'System.Linq.Queryable': Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))'. Extension method 'Public Function SingleOrDefault(predicate As System.Func(Of Accomplishment, Boolean)) As Accomplishment' defined in 'System.Linq.Enumerable': Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean)'. 14 LCF Okay, no problem, I changed: public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause) to: public Accomplishment getOneByWhereClause(Expression<Func<Accomplishment, bool>> singleOrDefaultClause) The error goes away. Alright, but now when I try to call the method via: Accomplishment accomplishment = new Accomplishment(); var result = accomplishment.getOneByWhereClause(x=>x.Id = 4) It doesn't work; it says x is not declared. I also tried getOne, and various other Expressions. =(
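
    The property-by-name lookup itself is simple with runtime reflection; here is a minimal Python sketch of the pattern (illustrative only, not LINQ-to-SQL, and the Accomplishment stand-in is hypothetical). Note also that if the calling code above is C#, the comparison in the lambda would be written with ==, not the single = that appears in the failing call:

        def get_one_by_property(rows, prop, value):
            # Generic "first row whose <prop> equals <value>" lookup; the
            # property name is resolved at runtime, so no compile-time
            # knowledge of the entity's members is needed.
            return next((row for row in rows if getattr(row, prop) == value), None)

        class Accomplishment:              # minimal illustrative stand-in
            def __init__(self, id):
                self.id = id

        rows = [Accomplishment(3), Accomplishment(4)]
        print(get_one_by_property(rows, "id", 4).id)   # 4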


  • JavaScript issues

    - by Michael
    My JavaScript is not working right. It is a simple pre-validation form and I cannot get the script to work. It is supposed to validate each field, but I cannot get it to validate past the first name. I stripped out all of the other garbage so the code would not be confusing. It should be a copy-paste into Notepad. A little help please: <?xml version="1.0" encoding="iso-8859-1"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <script language="JavaScript" type="text/javascript"> <!-- function validateForm(theForm) { var name = theForm.firstname.value; var name = theForm.lastname.value; var email = theForm.email.value; if (name == "") { alert("Please fill in your First Name."); theForm.firstname.focus(); return false; } if (name == "") { alert("Please fill in your Last Name."); theForm.lastname.focus(); return false; } if (email == "") { alert("Please fill in your email address."); theForm.email.focus(); return false; } return true; } //--> </script> if (!theForm.myCheckbox1.checked { alert("Please check the honor box."); return false; } </head> <body> </script> <fieldset> <legend>Fun in the Sun With JavaScirpt</legend> <ul> <form action="blah.cgi" method="post" onSubmit="return validateForm(this);"> First name: <input type="text" name="firstname"> <font color="#FF0000" size="1" face="Arial, Helvetica, sans-serif"><strong>*</strong></font> <br><br> Last name: <input type="text" name="lastname"> <font color="#FF0000" size="1" face="Arial, Helvetica, sans-serif"><strong>*</strong></font> <br><br> Email address: <input type="text" name="email"> <font color="#FF0000" size="1" face="Arial, Helvetica, sans-serif"><strong>*</strong></font> <br><br> Phone Number: <input type="text" name="phone"><br><br> <input type="submit" name="submit" value="Submit"> </form> <input type="checkbox" name="myCheckbox" value="someValue"><font color="#FF0000" size="1" face="Arial, Helvetica, sans-serif"><strong>*</strong></font> <P>By checking this Box you are confirming the data is accurate</p> <p>(* indicates a required field)</p> </body> </html>


  • HTML / PHP form

    - by user317128
    I am trying to code an all-in-one HTML/PHP contact form with error checking. When I load this file in my browser there is no HTML. I am a newbie PHP programmer, so I most likely forgot something pretty obvious. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>All In One Feedback Form</title> </head> <body> <? $form_block = " <input type=\"hidden\" name=\"op\" value=\"ds\"> <form method=\"post\" action=\"$server[PHP_SELF]\"> <p>Your name<br /> <input name=\"sender_name\" type=\"text\" size=30 value=\"$_POST[sender_name]\" /></p> <p>Email<br /> <input name=\"sender_email\" type=\"text\" size=30 value=\"$_POST[sender_email]\"/></p> <p>Message<br /> <textarea name=\"message\" cols=30 rows=5 value=\"$_POST[message]\"></textarea></p> <input name=\"submit\" type=\"submit\" value=\"Send This Form\" /> </form>"; if ($_POST[op] != "ds") { //they see this form echo "$form_block"; } else if ($_POST[op] == "ds") { if ($_POST[sender_name] == "") { $name_err = "Please enter your name<br>"; $send = "no"; } if ($_POST[sender_email] == "ds") { $email_err = "Please enter your email<br>"; $send = "no"; } if ($_POST[message] == "ds") { $message_err = "please enter message<br>"; $send = "no"; } if ($send != "no") { //its ok to send $to = "[email protected]"; $subject = "All in one web site feed back"; $mailheaders = "From: website <some email [email protected]> \n"; $mailheaders .= "Reply-To: $_POST[sender_email]\n"; $msg = "Email sent from this site www.ccccc.com\n"; $msg .= "Senders name: $_POST[senders_name]\n"; $msg .= "Sender's E-Mail: $_POST[sender_email]\n"; $msg .= "Message: $_POST[message]\n\n"; mail($to, $subject, $msg, $mailheaders); echo "<p>Mail has been sent</p>"; } else if ($send == "no") { echo "$name_err"; echo "$email_err"; echo "$message_err"; echo "$form_block"; } } ?> </body> </html> FYI, I am trying the example from a book named PHP 6 Fast and Easy Web Development


  • Running 32 bit assembly code on a 64 bit Linux & 64 bit processor: Explain the anomaly.

    - by claws
    Hello, I've run into an interesting problem. I forgot I'm using a 64-bit machine & OS and wrote 32-bit assembly code. I don't know how to write 64-bit code. This is the x86 32-bit assembly code for the GNU Assembler (AT&T syntax) on Linux. #include <asm/unistd.h> #include <syscall.h> #define STDOUT 1 .data hellostr: .ascii "hello wolrd\n"; helloend: .text .globl _start _start: movl $(SYS_write) , %eax //ssize_t write(int fd, const void *buf, size_t count); movl $(STDOUT) , %ebx movl $hellostr , %ecx movl $(helloend-hellostr) , %edx int $0x80 movl $(SYS_exit), %eax //void _exit(int status); xorl %ebx, %ebx int $0x80 ret Now, this code should run fine on a 32-bit processor & 32-bit OS, right? As we know, 64-bit processors are backward compatible with 32-bit processors, so that also wouldn't be a problem. The problem arises because of differences in the system calls & call mechanism between a 64-bit OS & a 32-bit OS. I don't know why, but they changed the system call numbers between 32-bit Linux & 64-bit Linux. asm/unistd_32.h defines: #define __NR_write 4 #define __NR_exit 1 asm/unistd_64.h defines: #define __NR_write 1 #define __NR_exit 60 Anyway, using macros instead of direct numbers paid off; it ensures correct system call numbers. When I assemble, link & run the program, it's not printing hello world. In gdb it's showing: Program exited with code 01. I don't know how to debug in gdb. Using a tutorial, I tried to debug it and execute instruction by instruction, checking registers at each step; it always shows me "program exited with 01". It would be great if someone could show me how to debug this. (gdb) break _start Note: breakpoint -10 also set at pc 0x4000b0. Breakpoint 8 at 0x4000b0 (gdb) start Function "main" not defined. Make breakpoint pending on future shared library load? (y or [n]) y Temporary breakpoint 9 (main) pending. Starting program: /home/claws/helloworld Program exited with code 01. (gdb) info breakpoints Num Type Disp Enb Address What 8 breakpoint keep y 0x00000000004000b0 <_start> 9 breakpoint del y <PENDING> main I tried running strace. This is its output: execve("./helloworld", ["./helloworld"], [/* 39 vars */]) = 0 write(0, NULL, 12 <unfinished ... exit status 1> Can someone explain the parameters of the write(0, NULL, 12) system call in the strace output? What exactly is happening? I want to know why exactly it's exiting with exit status 1. Can someone please show me how to debug this program using gdb? Why did they change the system call numbers? Please change this program appropriately so that it can run correctly on this machine.
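
    A likely explanation, consistent with all the evidence above: on a 64-bit box the included headers define SYS_write as 1, yet int $0x80 dispatches through the 32-bit syscall table, where 1 is exit. So the program calls exit with ebx = 1 (STDOUT), which matches "Program exited with code 01", and strace decodes syscall number 1 with 64-bit conventions as write(rdi, rsi, rdx) = write(0, NULL, 12). A tiny check of the 64-bit numbering (a Python sketch; assumes a 64-bit Linux host and glibc's syscall wrapper):

        import ctypes

        libc = ctypes.CDLL(None, use_errno=True)
        SYS_WRITE_64 = 1          # asm/unistd_64.h; the 32-bit table uses 4
        msg = b"hello world\n"
        libc.syscall(SYS_WRITE_64, 1, msg, len(msg))   # fd 1 = stdout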



  • Does anyone know how to appropriately deal with user timezones in rails 2.3?

    - by Amazing Jay
    We're building a Rails app that needs to display dates (and more importantly, calculate them) in multiple timezones. Can anyone point me towards how to work with user timezones in Rails 2.3 (.5 or .8)? The most inclusive article I've seen detailing how user time zones are supposed to work is here: http://wiki.rubyonrails.org/howtos/time-zones... although it is unclear when this was written or for what version of Rails. Specifically it states that: "Time.zone - The time zone that is actually used for display purposes. This may be set manually to override config.time_zone on a per-request basis." Key terms being "display purposes" and "per-request basis". Locally on my machine, this is true. However, on production, neither is true. Setting Time.zone persists past the end of the request (to all subsequent requests) and also affects the way AR saves to the DB (basically treating any date as if it were already in UTC even when it's not), thus saving completely inappropriate values. We run Ruby Enterprise Edition on production with Passenger. If this is my problem, do we need to switch to JRuby or something else? To illustrate the problem I put the following actions in my ApplicationController right now: def test p_time = Time.now.utc s_time = Time.utc(p_time.year, p_time.month, p_time.day, p_time.hour) logger.error "TIME.ZONE" + Time.zone.inspect logger.error ENV['TZ'].inspect logger.error p_time.inspect logger.error s_time.inspect jl = JunkLead.create! jl.date_at = s_time logger.error s_time.inspect logger.error jl.date_at.inspect jl.save! logger.error s_time.inspect logger.error jl.date_at.inspect render :nothing => true, :status => 200 end def test2 Time.zone = 'Mountain Time (US & Canada)' logger.error "TIME.ZONE" + Time.zone.inspect logger.error ENV['TZ'].inspect render :nothing => true, :status => 200 end def test3 Time.zone = 'UTC' logger.error "TIME.ZONE" + Time.zone.inspect logger.error ENV['TZ'].inspect render :nothing => true, :status => 200 end and they yield the following: Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:50) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0> nil Fri Dec 24 22:15:50 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Completed in 21ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test] Processing ApplicationController#test2 (for 98.202.196.203 at 2010-12-24 22:15:53) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200> nil Completed in 143ms (View: 1, DB: 3) | 200 OK [http://www.dealsthatmatter.com/test2] Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:59) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200> nil Fri Dec 24 22:15:59 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 15:00:00 MST -07:00 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 15:00:00 MST -07:00 Completed in 20ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test] Processing ApplicationController#test3 (for 98.202.196.203 at 2010-12-24 22:16:03) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0> nil
Completed in 17ms (View: 0, DB: 2) | 200 OK [http://www.dealsthatmatter.com/test3] Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:16:04) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0> nil Fri Dec 24 22:16:05 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Completed in 151ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test] It should be clear above that the 2nd call to /test shows Time.zone set to Mountain, even though it shouldn't. Additionally, checking the database reveals that the test action when run after test2 saved a JunkLead record with a date of 2010-12-22 15:00:00, which is clearly wrong.
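
    What the persistence across requests suggests is the classic leaked-global pattern in a long-lived worker process; the usual defense is resetting the zone around every request (Rails 2.3 offers around_filter hooks for this). A minimal sketch of both the leak and the reset, in Python for neutrality rather than Rails internals:

        DEFAULT_ZONE = "UTC"
        current_zone = DEFAULT_ZONE

        def handle_request(requested_zone=None):
            # Without the finally-reset, a zone set during one request
            # would still be in effect when the next request arrives.
            global current_zone
            try:
                if requested_zone:
                    current_zone = requested_zone
                return f"rendered with zone {current_zone}"
            finally:
                current_zone = DEFAULT_ZONE   # restore after every request

        print(handle_request("Mountain Time (US & Canada)"))
        print(handle_request())   # UTC again, not Mountain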


  • mdadm raid5 recover double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0, not having a 100% backup. I know. I have a mdadm RAID5 system of 4x3TB. Drives /dev/sd[b-e], all with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway. Recent events: The RAID became degraded after a two-drive failure. One drive [/dev/sdc] is really gone; the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4-device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1, not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2, not making a backup of the superblock and mdadm -E of the remaining drives. Recovery attempt: I reassembled the RAID in degraded mode with mdadm --assemble --force /dev/md0, using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare, empty, identical drive. I removed the old /dev/sdc1 from the RAID: mdadm --fail /dev/md0 /dev/sdc1 Mistake #3, not doing this before replacing the drive. I then partitioned the new /dev/sdc and added it to the RAID: mdadm --add /dev/md0 /dev/sdc1 It then began to restore the RAID. ETA 300 mins. I followed the process via /proc/mdstat to 2% and then went to do other stuff. Checking the result: Several hours (but less than 300 mins) later, I checked the process. It had stopped due to a read error on /dev/sde1. Here is where the trouble really starts. I then removed /dev/sde1 from the RAID and re-added it. I can't remember why I did this; it was late. mdadm --manage /dev/md0 --remove /dev/sde1 mdadm --manage /dev/md0 --add /dev/sde1 However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean, using what I thought was the right order, and with /dev/sdc1 missing. mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1 That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4). Device order: I then checked a recent backup I had of /proc/mdstat, and I found the drive order. md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1] 8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU] I then remembered this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit... so there was no drive [3] but only [0], [1], [2], and [4]. I tried to find the drive order with the Permute_array script: https://raid.wiki.kernel.org/index.php/Permute_array.pl but that did not find the right order. Questions: I now have two main questions: I screwed up all the superblocks on the drives, but only gave mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1). Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is ok] if I just find the right device order? Is it important that /dev/sde1 be given the device number [4] in the RAID? When I create it with mdadm --create /dev/md0 --assume-clean -l5 -n4 \ /dev/sdb1 missing /dev/sdd1 /dev/sde1 it is assigned the number [3]. I wonder if that is relevant to the calculation of the parity blocks.
If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work I could start it in degraded mode and add the new drive /dev/sdc1 and let it resync again. It's OK if you would like to point out to me that this may not have been the best course of action, but you'll find that I realized this. It would be great if anyone has any suggestions.
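
    For question 1, a tiny generator in the spirit of the Permute_array.pl script mentioned above: it prints every candidate device order for the --assume-clean recreation (4 slots, one of them "missing"). Caution: each mdadm --create rewrites superblocks, so only run candidates against dd images or with that risk accepted.

        from itertools import permutations

        devices = ["/dev/sdb1", "missing", "/dev/sdd1", "/dev/sde1"]
        for order in permutations(devices):
            print("mdadm --create /dev/md0 --assume-clean -l5 -n4 " + " ".join(order))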


  • CrossPage PostBack in a series of web pages

    - by Vishnu Reddy
    I had a requirement to pass data between pages in my application. I have a Page A where the user will input some data; on submit the user will be redirected to Page B, where again the user will enter some more data; and on submitting, the user will be shown a confirmation in Page C after some calculations and data saving are done. Following is the idea I was trying to use: PageA.aspx: <form id="frmPageA" runat="server"> <p>Name: <asp:TextBox ID="txtName" runat="server"></asp:TextBox></p> <p>Age: <asp:TextBox ID="txtAge" runat="server"></asp:TextBox></p> <p><asp:Button ID="btnPostToPageB" runat="server" Text="Post To PageB" PostBackUrl="~/PageB.aspx" /></p> </form> In the Page A code-behind file I am creating the following public properties of the inputs to access in Page B: public string Name { get { return txtName.Text.ToString(); } } public int Age { get { return Convert.ToInt32(txtAge.Text); } } In PageB.aspx, using the PreviousPageType directive to access Page A public properties: <%@ PreviousPageType VirtualPath="~/PageA.aspx" %> <form id="frmPageB" runat="server"> <asp:HiddenField ID="hfName" runat="server" /> <asp:HiddenField ID="hfAge" runat="server" /> <p><asp:RadioButtonList ID="rblGender" runat="server" TextAlign="Right" RepeatDirection="Horizontal"> <asp:ListItem Value="Female">Female</asp:ListItem> <asp:ListItem Value="Male">Male</asp:ListItem> </asp:RadioButtonList></p> <p><asp:Button ID="btnPostToPageC" runat="server" Text="Post To PageC" PostBackUrl="~/PageC.aspx" /></p> </form> In the Page B code-behind file I am creating the following public properties for the inputs to access in Page C: public RadioButtonList Gender { get { return rblGender; } } public string Name { get { return _name; } } public int Age { get { return _age; } } //checking if data is posted from Page A otherwise redirecting User to Page A protected void Page_Load(object sender, EventArgs e) { if (PreviousPage != null && PreviousPage.IsCrossPagePostBack && PreviousPage.IsValid) { hfName.Value = PreviousPage.Name; hfAge.Value = PreviousPage.Age.ToString(); } else Response.Redirect("PageA.aspx"); } In PageC.aspx, using the PreviousPageType directive to access Page B public properties: <%@ PreviousPageType VirtualPath="~/PageB.aspx" %> <form id="frmPageC" runat="server"> <p>Name: <asp:Label ID="lblName" runat="server"></asp:Label></p> <p>Age: <asp:Label ID="lblAge" runat="server"></asp:Label></p> <p>Gender: <asp:Label ID="lblGender" runat="server"></asp:Label></p> <p><asp:Button ID="btnBack" runat="server" Text="Back" PostBackUrl="~/PageA.aspx" /></p> </form> Page C code-behind file: if (PreviousPage != null && PreviousPage.IsCrossPagePostBack && PreviousPage.IsValid) { lblName.Text = PreviousPage.Name.ToString(); lblAge.Text = PreviousPage.Age.ToString(); lblGender.Text = PreviousPage.Gender.SelectedValue.ToString(); } else Response.Redirect("PageA.aspx");


  • Compiling ruby1.9.1 hangs and fills swap!

    - by nfm
    I'm compiling Ruby 1.9.1-p376 under Ubuntu 8.04 server LTS (64-bit), by doing the following: $ ./configure $ make $ sudo make install ./configure works without complaints. make hangs indefinitely until all my RAM and swap is gone. It gets stuck after the following output: compiling ripper make[1]: Entering directory `/tmp/ruby1.9.1/ruby-1.9.1-p376/ext/ripper' gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O2 -g -Wall -Wno-parentheses -o ripper.o -c ripper.c If I run the gcc command by hand, with the -v argument to get verbose output, it hangs after the following: Using built-in specs. Target: x86_64-linux-gnu Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.2.4 (Ubuntu 4.2.4-1ubuntu4) /usr/lib/gcc/x86_64-linux-gnu/4.2.4/cc1 -quiet -v -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H="extconf.h" ripper.c -quiet -dumpbase ripper.c -mtune=generic -auxbase-strip ripper.o -g -O2 -Wall -Wno-parentheses -version -fPIC -fstack-protector -fstack-protector -o /tmp/ccRzHvYH.s ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu" ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/4.2.4/../../../../x86_64-linux-gnu/include" ignoring nonexistent directory "/usr/include/x86_64-linux-gnu" ignoring duplicate directory "../.././ext/ripper" ignoring duplicate directory "../../." #include "..." search starts here: #include <...> search starts here: . ../../.ext/include/x86_64-linux ../.././include ../.. /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/4.2.4/include /usr/include End of search list. GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4) (x86_64-linux-gnu) compiled by GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4). GGC heuristics: --param ggc-min-expand=47 --param ggc-min-heapsize=32795 Compiler executable checksum: 6e11fa7ca85fc28646173a91f2be2ea3 I just compiled ruby on another computer for reference, and it took about 10 seconds to print the following output (after the above Compiler executable checksum line): COLLECT_GCC_OPTIONS='-v' '-I.' '-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486' as -V -Qy -o ripper.o /tmp/cca4fa7R.s GNU assembler version 2.20 (i486-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.20 COMPILER_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/ LIBRARY_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../:/lib/:/usr/lib/ COLLECT_GCC_OPTIONS='-v' '-I.'
'-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486' I have absolutely no clue what could be going wrong here - any ideas where I should start?


  • WCF via SSL connectivity problems

    - by Brett Widmeier
    Hello, I am hosting a WCF service from inside a Windows service using WAS. When I set the service to listen on 127.0.0.1, I have connectivity from my local machine as well as from my network. However, when I set it to listen on my outbound interface port 443, I can no longer even see the wsdl by connecting with a browser. Strangely, I can connect to the service by using telnet. The cert I am using was generated for my interface by a CA, and I have successfully used this exact cert with this service before. When checking the application log, I see that the service starts without error and is listening on the correct interface. From this information, it seems to me that the config file is in a valid state, but somehow misconfigured for what I want. I have, however, previously deployed this same setup on other sites using this config file. In case it is helpful, below is my config file. Any thoughts? <!--<system.diagnostics> <sources> <source name="System.ServiceModel" switchValue="Warning, ActivityTracing" propagateActivity="true"> <listeners> <add type="System.Diagnostics.DefaultTraceListener" name="Default"> <filter type="" /> </add> <add name="ServiceModelTraceListener"> <filter type="" /> </add> </listeners> </source> </sources> <sharedListeners> <add initializeData="app_tracelog.svclog" type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ServiceModelTraceListener" traceOutputOptions="Timestamp"> <filter type="" /> </add> </sharedListeners> </system.diagnostics>--> <appSettings/> <connectionStrings/> <system.serviceModel> <!--<diagnostics> <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog ="1000" maxSizeOfMessageToLog="524288"/> </diagnostics>--> <bindings> <basicHttpBinding> <binding name="basicHttps"> <security mode="Transport"> <transport clientCredentialType="None"/> <message /> </security> </binding> </basicHttpBinding> </bindings> <services> <service behaviorConfiguration="ServiceBehavior" name="<fully qualified name of service>"> <endpoint address="" binding="basicHttpBinding" name="OrdersSoap" contract="<fully qualified name of contract>" bindingNamespace="http://emr.orders.com/WebServices" bindingConfiguration="basicHttps" /> <endpoint binding="mexHttpsBinding" address="mex" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="https://<external IP>/<name of service>>/" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="ServiceBehavior"> <serviceMetadata httpsGetEnabled="False"/> <serviceDebug includeExceptionDetailInFaults="True" /> <dataContractSerializer maxItemsInObjectGraph="2147483646"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel>


  • Select number of rows for each group where two column values make one group

    - by Fábio Antunes
    I have two select statements joined by UNION ALL. In the first statement, a where clause gathers only rows that have been shown previously to the user. The second statement gathers all rows that haven't been shown to the user; therefore I end up with the viewed results first and the non-viewed results after. Of course this could simply be achieved with a single select statement using a simple ORDER BY; however, the reason for two separate selects is simple once you realize what I hope to accomplish. Consider the following structure and data.

        +----+------+-----+--------+------+
        | id | from | to  | viewed | data |
        +----+------+-----+--------+------+
        |  1 |    1 |  10 | true   | .... |
        |  2 |   10 |   1 | true   | .... |
        |  3 |    1 |  10 | true   | .... |
        |  4 |    6 |   8 | true   | .... |
        |  5 |    1 |  10 | true   | .... |
        |  6 |   10 |   1 | true   | .... |
        |  7 |    8 |   6 | true   | .... |
        |  8 |   10 |   1 | true   | .... |
        |  9 |    6 |   8 | true   | .... |
        | 10 |    2 |   3 | true   | .... |
        | 11 |    1 |  10 | true   | .... |
        | 12 |    8 |   6 | true   | .... |
        | 13 |   10 |   1 | false  | .... |
        | 14 |    1 |  10 | false  | .... |
        | 15 |    6 |   8 | false  | .... |
        | 16 |   10 |   1 | false  | .... |
        | 17 |    8 |   6 | false  | .... |
        | 18 |    3 |   2 | false  | .... |
        +----+------+-----+--------+------+

    Basically I wish all non-viewed rows to be selected by the statement; that is accomplished by checking whether the viewed column is true or false, pretty simple and straightforward, nothing to worry about here. However, when it comes to the rows already viewed, meaning the column viewed is TRUE, I only want 3 rows to be returned for each group. The appropriate result in this instance should be the 3 most recent rows of each group.

        +----+------+-----+--------+------+
        | id | from | to  | viewed | data |
        +----+------+-----+--------+------+
        |  6 |   10 |   1 | true   | .... |
        |  7 |    8 |   6 | true   | .... |
        |  8 |   10 |   1 | true   | .... |
        |  9 |    6 |   8 | true   | .... |
        | 10 |    2 |   3 | true   | .... |
        | 11 |    1 |  10 | true   | .... |
        | 12 |    8 |   6 | true   | .... |
        +----+------+-----+--------+------+

    As you see from the ideal result set, we have three groups. The desired query for the viewed results should therefore show a maximum of 3 rows for each grouping it finds. In this case the groupings were 10 with 1 and 8 with 6, both of which had three rows to be shown, while the other group, 2 with 3, only had one row to be shown. Please note that from = x and to = y makes the same grouping as from = y and to = x. Therefore, considering the first grouping (10 with 1), from = 10 and to = 1 is the same group as from = 1 and to = 10. There are plenty of such groups in the whole table, and I only wish the 3 most recent of each to be returned by the select statement; that's my problem, and I'm not sure how it can be accomplished in the most efficient way possible, considering the table will have hundreds if not thousands of records at some point. Thanks for your help. Note: the columns id, from, to and viewed are indexed; that should help with performance. PS: I'm unsure how to name this question exactly; if you have a better idea, be my guest and edit the title.
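
    The selection rule itself (treat (from, to) and (to, from) as one group, cap viewed rows at the 3 newest per group, pass every non-viewed row through) can be pinned down in a few lines; here is a plain-Python sketch of that logic over a slice of the sample data, handy as a reference result when testing a SQL formulation:

        from collections import defaultdict

        # (id, frm, to, viewed) -- a slice of the table above
        rows = [(1, 1, 10, True), (3, 1, 10, True), (5, 1, 10, True),
                (6, 10, 1, True), (8, 10, 1, True), (11, 1, 10, True),
                (10, 2, 3, True), (13, 10, 1, False)]

        kept, per_group = [], defaultdict(int)
        for row in sorted(rows, reverse=True):     # newest id first
            rid, frm, to, viewed = row
            key = frozenset((frm, to))             # (1,10) and (10,1): one group
            if not viewed:
                kept.append(row)                   # non-viewed rows always pass
            elif per_group[key] < 3:
                per_group[key] += 1
                kept.append(row)

        print(sorted(kept))   # keeps ids 6, 8, 10, 11, 13; drops 1, 3, 5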


  • Linux server apache httpd processes take i/o wait to close to 100% and lock down server

    - by user3682065
    For about 5 days now, and seemingly out of the blue, my linux server has started locking up from time to time. The pattern is always the same as far as I can tell from top and iotop commands around the time it starts happening: One or more httpd processes (usually one) hang and start using up 100% of CPU power, the %wa goes close to 100% and in the iotop I see several httpd processes with 99.99% in the IO column. I'm also running an SVN server on this machine through apache and the one way that I've been consistently able to reproduce this is to do an SVN commit of new files or an SVN update from the repository on this server (I am the only one using this SVN repository). This will always reproduce this scenario successfully, but until very recently I had no problems at all checking in/out of SVN. But sometimes it just happens for no detectable reason at all it seems. So it seems like there is some issue with my Apache that leads it to have processes use up a lot of read/write upon certain triggers. I was wondering if anyone could help me uncover that issue. EDIT: OK now it's happening again: This is top: [root@server ~]# top top - 10:56:54 up 2:59, 5 users, load average: 171.46, 70.35, 27.01 Tasks: 328 total, 2 running, 326 sleeping, 0 stopped, 0 zombie Cpu(s): 1.9%us, 2.0%sy, 0.0%ni, 0.0%id, 96.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 2021144k total, 1968192k used, 52952k free, 2500k buffers Swap: 4194288k total, 2938584k used, 1255704k free, 39008k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 10390 apache 20 0 2774m 936m 6200 D 2.0 47.4 1:52.27 httpd 2149 root 20 0 927m 13m 1040 S 0.7 0.7 1:50.46 namecoind 11 root 20 0 0 0 0 R 0.3 0.0 0:30.10 events/0 23 root 20 0 0 0 0 S 0.3 0.0 0:17.88 kblockd/1 2049 root 20 0 382m 4932 2880 D 0.3 0.2 0:03.67 httpd 2144 root 20 0 1702m 69m 1164 S 0.3 3.5 5:19.68 bitcoind 6325 root 20 0 15164 1100 656 R 0.3 0.1 0:11.09 top 10311 apache 20 0 387m 9496 7320 D 0.3 0.5 0:01.89 httpd 10313 apache 20 0 391m 10m 7364 D 0.3 0.5 0:02.40 httpd 10466 apache 20 0 399m 12m 7392 D 0.3 0.7 0:02.41 httpd 10599 apache 20 0 391m 9324 7340 D 0.3 0.5 0:00.15 httpd 10628 apache 20 0 384m 7620 4052 D 0.3 0.4 0:00.01 httpd 10633 apache 20 0 384m 7048 3504 D 0.3 0.3 0:00.01 httpd 10634 apache 20 0 384m 8012 4048 D 0.3 0.4 0:00.02 httpd 10638 apache 20 0 400m 22m 9.8m D 0.3 1.1 0:01.93 httpd 10640 apache 20 0 385m 8288 4028 D 0.3 0.4 0:00.03 httpd 10641 apache 20 0 401m 21m 6376 D 0.3 1.1 0:01.45 httpd 10759 apache 20 0 385m 8816 3480 D 0.3 0.4 0:01.45 httpd 10773 apache 20 0 384m 8044 3464 D 0.3 0.4 0:00.02 httpd This is an iotop snapshot: Total DISK READ: 5.93 M/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 10732 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 58.48 % httpd 876 be/3 root 0.00 B/s 52.68 K/s 0.00 % 52.98 % [jbd2/dm-1-8] 10906 be/4 root 124.17 K/s 0.00 B/s 0.00 % 23.03 % sh -c [ -x /usr/local/psa/admin/sbin/backupmng ] && /usr/local/psa/admin/sbin/backupmng >/dev/null 2>&1 2156 be/4 root 206.94 K/s 0.00 B/s 0.00 % 21.15 % bitcoind 10904 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 18.94 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock 10773 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 14.77 % httpd 10641 be/4 apache 15.05 K/s 0.00 B/s 0.00 % 11.57 % httpd 10399 be/4 apache 1057.29 K/s 0.00 B/s 43.16 % 10.56 % httpd 10682 be/4 sw-cp-se 158.03 K/s 0.00 B/s 0.00 % 7.45 % sw-engine-cgi -c /usr/local/psa/admin/conf/php.ini -d 
auto_prepend_file=auth.php3 -u psaadm 10774 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 6.53 % httpd 10624 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 5.53 % httpd 10356 be/4 apache 899.26 K/s 0.00 B/s 35.52 % 4.01 % httpd 10795 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 3.93 % httpd 10804 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 3.08 % httpd 4379 be/4 root 2.89 M/s 0.00 B/s 99.99 % 0.00 % namecoind 10619 be/4 apache 462.80 K/s 0.00 B/s 7.80 % 0.00 % httpd 10636 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 0.00 % httpd 10716 be/4 mysql 105.35 K/s 0.00 B/s 5.92 % 0.00 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock 1988 be/4 root 18.81 K/s 0.00 B/s 0.00 % 0.00 % spamd_full.sock I also ran lsof -p for pid 10390 which was way up top under the top command and this is the bottom line where I can sort of see what request this was and it says CLOSE_WAIT: httpd 10390 apache 34u IPv6 315879 0t0 TCP default-domain.com:https->crawl-66-249-65-91.googlebot.com:42907 (CLOSE_WAIT) I'm still not sure what exactly is causing this all to happen though? I killed that service but %wa and load average remain high, I also stopped mysqld and other services. It really only goes down once I stop httpd altogether, and even then I can't start it without finding remaining hanging httpd processes via "netstat -tulpn", killing those or doing "killall -9 httpd" and after waiting a while for it to cycle through all those then doing /etc/init.d/httpd start


  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23 Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output: # dmesg | tail [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null) [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000] [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null) [4982666.631444] VFS: file-max limit 1231582 reached [4982666.764240] VFS: file-max limit 1231582 reached [4982767.360574] VFS: file-max limit 1231582 reached [4982901.904628] VFS: file-max limit 1231582 reached [4982964.930556] VFS: file-max limit 1231582 reached [4982966.352170] VFS: file-max limit 1231582 reached [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000] Obviously, the file-max errors look suspicious, being clustered together and recent. # cat /proc/sys/fs/file-max 1231582 # cat /proc/sys/fs/file-nr 1231712 0 1231582 That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network. # lsof | wc 16046 148253 1882901 # ps -ef | wc 574 6104 44260 I saw some documentation saying: file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file- handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached". My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back) Thanks in advance for any help. And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. 
I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything. All interactive shells, except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again. And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
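
    A hands-off version of the measurement described above, which was done by hand to find the roughly 900-handles-per-hour growth: sample /proc/sys/fs/file-nr periodically and print the delta (plain Python, Linux only):

        import time

        def allocated_handles():
            with open("/proc/sys/fs/file-nr") as f:
                return int(f.read().split()[0])   # first field: allocated

        previous = allocated_handles()
        for _ in range(60):                       # one sample per minute
            time.sleep(60)
            current = allocated_handles()
            print(f"handles: {current} ({current - previous:+d}/min)")
            previous = current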


  • Remote host: can tracert, can telnet, can*not* browse: what gives?

    - by MacThePenguin
    One of my company's customers has made a change to their Internet connection, and now we can't connect to them any more from our LAN. To help me troubleshoot this issue, the network guy on the customer's site has configured their firewall so that an HTTPS connection to their public IP address is open to any IP. I should put https://<customer's IP> in my browser and get a web page. Well, it works from any network I've tried (even from my smartphone), just not from my company's LAN. I thought it may be an issue with our firewall (though I checked its rules and it allows outbound TCP port 443 to anywhere), so I just connected a PC directly to the network connection of our provider, bypassing our firewall completely, and still it didn't work (everything else worked). So I asked our Internet provider's customer service for help, and they asked me to do a tracert to our customer's IP. The tracert is successful, as the final hop shown in the output is the host I want to reach. So they said there's no problem. :( I also tried telnet <customer's IP> 443 and that works as well: I get a blank page with the cursor blinking (I've tried using another random port and that gives me an error message, as it should). Still, from any browser of any PC in my LAN I can't open that URL. I tried checking the network traffic with Wireshark: I see the packets going through and answers coming back, though the packets I see passing are far fewer than when I successfully connect to another HTTPS website. See the attached screenshot: I had to blur the IPs; anyway, the longer string is my PC's local IP address, the shorter one is the customer's public IP. I don't know what else to try. This is the only IP doing this... Any idea what I could try to find a solution to this issue? Thanks, let me know if you need further details. Edit: when I say "it doesn't work" I mean: the page doesn't open, the browser keeps loading for a long time and eventually shows an error saying that the page cannot be opened. I'm not in my office now so I can't paste the exact message, but it's the usual message you get when the browser reaches its timeout. When I say "it works", I mean the browser loads and shows a webpage (it's the logon page for the customer's firewall admin interface: so there's the firewall brand's logo and there are fields to enter a user id and a password). Update 13/09/2012: tried again to connect to the customer's network through our Internet connection without a firewall. This is what I did: Run a Kubuntu 12.04 live distro on a spare laptop; Updated all the packages I could and installed WireShark; Attached it to my LAN and verified that I couldn't open https://<customer's IP>. Verified that the Wireshark trace for this attempt was the same as the one I've already posted; Verified that I could connect to another customer's host using rdesktop (it worked); Tried to rdesktop to <customer's IP>, here's the output: kubuntu@kubuntu:/etc$ rdesktop <customer's IP> Autoselected keyboard map en-us ERROR: recv: Connection reset by peer Disconnected the laptop from the LAN; Disconnected the firewall from the Extranet connection, connected the laptop instead. 
Set its network configuration so that I could access the Internet; Verified that I could connect to other websites in http and https and in RDP to other customers' hosts - it all worked as expected; Verified that I could still traceroute to <customer's IP>: I could; Verified that I still couldn't open https://<customer's IP> (same exact result as before); Checked the WireShark trace for this attempt and noticed a different behaviour: I could see packets going out to the customer's IP, but no replies at all; Tried to run rdesktop again, with a slightly different result: kubuntu@kubuntu:/etc/network$ rdesktop <customer's IP> Autoselected keyboard map en-us ERROR: <customer's IP>: unable to connect Finally gave up, put everything back as it was before, turned off the laptop and lost the WireShark traces I had saved. :( I still remember them very well though. :) Can you get anything out of it? Thank you very much. Update 12/09/2012 n.2: I followed the suggestion by MadHatter in the comments. From inside the firewall, this is what I get: user@ubuntu-mantis:~$ openssl s_client -connect <customer's IP>:443 CONNECTED(00000003) If I now type GET / the output pauses for several seconds and then I get: write:errno=104 I'm going to try the same, but bypassing the firewall, as soon as I can. Thanks. Update 12/09/2012 n.3: So, I think ISA Server is altering the results of my tests... I tried installing Wireshark directly on the firewall and monitoring the packets on the Extranet network card. When the destination is the customer's IP, whatever service I try to connect to (HTTPS, RDP or SAProuter), I can only see outbound packets and no response packets whatsoever from their side. It looks like ISA Server is "faking" the remote server's replies, that's why I get a connection using telnet or the openSSL client. This is the wireshark trace from inside our LAN: But this is the trace on the Extranet network card: This makes a bit more sense... I'll send this info to the customer's tech and see if he can make anything out of it. Thanks to all that took the time to read my question and post suggestions. I'll update this post again.

    Read the article

  • How to write specs with MSpec for code that changes Thread.CurrentPrincipal?

    - by Dan Jensen
    I've been converting some old specs to MSpec (they were using NUnit/SpecUnit). The specs are for a view model, and the view model in question does some custom security checking. We have a helper method in our specs which will set up fake security credentials for the Thread.CurrentPrincipal. This worked fine in the old unit tests, but fails in MSpec. Specifically, I'm getting this exception:

        System.Runtime.Serialization.SerializationException: Type is not resolved for member

    It happens when part of the SUT tries to read the app config file. If I comment out the line which sets the CurrentPrincipal (or simply call it after the part that checks the config file), the error goes away, but the tests fail due to lack of credentials. Similarly, if I set the CurrentPrincipal to null, the error goes away, but again the tests fail because the credentials aren't set.

    I've googled this, and found some posts about making sure the custom principal is serializable when it crosses AppDomain boundaries (usually in reference to web apps). In our case, this is not a web app, and I'm not crossing any AppDomains. Our principal object is also serializable. I downloaded the source for MSpec, and found that the ConsoleRunner calls a class named AppDomainRunner. I haven't debugged into it, but it looks like it's running the specs in different app domains.

    So does anyone have any ideas on how I can overcome this? I really like MSpec, and would love to use it exclusively. But I need to be able to supply fake security credentials while running the tests. Thanks!

    Update: here's the spec class:

        [Subject(typeof(CountryPickerViewModel))]
        public class When_the_user_makes_a_selection : PickerViewModelSpecsBase
        {
            protected static CountryPickerViewModel picker;

            Establish context = () =>
            {
                SetupFakeSecurityCredentials();
                CreateFactoryStubs();
                StubLookupServicer<ICountryLookupServicer>()
                    .WithData(BuildActiveItems(new [] { "USA", "UK" }));
                picker = new CountryPickerViewModel(ViewFactory, ViewModelFactory,
                    BusinessLogicFactory, CacheFactory);
            };

            Because of = () => picker.SelectedItem = picker.Items[0];

            Behaves_like<Picker_that_has_a_selected_item> a_picker_with_a_selection;
        }

    We have a number of these "picker" view models, all of which exhibit some common behavior, so I'm using the Behaviors feature of MSpec. This particular class is simulating the user selecting something from the (WPF) control which is bound to this VM. The SetupFakeSecurityCredentials() method is simply setting Thread.CurrentPrincipal to an instance of our custom principal, where the principal has been populated with full-access rights.

    Here's a fake CountryPickerViewModel which is enough to cause the error:

        public class CountryPickerViewModel
        {
            public CountryPickerViewModel(IViewFactory viewFactory, IViewModelFactory viewModelFactory,
                ICoreBusinessLogicFactory businessLogicFactory, ICacheFactory cacheFactory)
            {
                Items = new Collection<int>();
                var validator = ValidationFactory.CreateValidator<object>();
            }

            public int SelectedItem { get; set; }
            public Collection<int> Items { get; private set; }
        }

    It's the ValidationFactory call which blows up. ValidationFactory is an Enterprise Library object, which tries to access the config.

    Read the article

  • How to find the problem with PHP XSLTProcessor when the return from transformToXML is false and libxml_get_errors() is empty

    - by John
    I'm working on the code below to allow HTTP user agents that cannot perform XSL transformations to view the resources on my server. I'm mystified because the result of transformToXML is false, but the result of libxml_get_errors() is an empty array. As you can see, the code outputs the LibXSLT version ID, and I'm getting the problem on WinVista with version 1.1.24. Is libxml_get_errors() not the right function to get the errors from the XSLTProcessor object? If you're interested in the XML documents, you can get them from http://bobberinteractive.com/index.xhtml and .../stylesheets/layout.xsl

        <?php
        //redirect browsers that can handle the source files.
        if (strpos ( $_SERVER ['HTTP_ACCEPT'], 'application/xhtml+xml' )) {
            header ( "HTTP/1.1 301 Moved Permanently" );
            header ( "Location: http://" . $_SERVER ['SERVER_NAME'] . "/index.xhtml" );
            header ( "Content-Type: text/text" );
            echo "\nYour browser is capable of processing the <a href='/index.xhtml'>site contents</a> on its own.";
            die ();
        }

        //start by checking the template
        $baseDir = dirname ( __FILE__ );
        $xslDoc = new DOMDocument ();
        if (! $xslDoc->load ( $baseDir . '/stylesheets/layout.xsl' )) {
            header ( "HTTP/1.1 500 Server Error" );
            header ( "Content-Type: text/plain" );
            echo "\n Can't load " . $baseDir . '/stylesheets/layout.xsl';
            die ();
        }

        //resolve the requested resource (browsers that need transformation will request the resource without the suffix)
        $uri = $_SERVER ['REQUEST_URI'];
        $len = strlen ( $uri );
        if (1 == $len || '/' == substr ( $uri, $len - 1 )) {
            $fileName = $baseDir . "/index.xhtml"; // use 'default' document if pathname ends in '/'
        } else {
            $fileName = $baseDir . $uri;
        }
        $xmlDoc = new DOMDocument ();
        if (! $xmlDoc->load ( $fileName )) {
            header ( "HTTP/1.1 500 Server Error" );
            echo "\n Can't load " . $fileName;
            die ();
        }

        // now start the XSL template processing
        $proc = new XSLTProcessor ();
        $proc->importStylesheet ( $xslDoc );
        $doc = $proc->transformToXML ( $xmlDoc );
        if (false === $doc) {
            header ( "HTTP/1.1 500 Server Error" );
            header ( "Content-Type: text/plain" );
            echo "\n";
            // HERE is where it gets strange: the value of $doc is false and libxml_get_errors returns 0 entries.
            display_xml_errors ( libxml_get_errors () );
            die ();
        }
        header ( "Content-Type: text/html" );
        echo "\n";
        echo $doc;

        function display_xml_errors($errors) {
            echo count ( $errors ) . " Error(s) from LibXSLT " . LIBXSLT_DOTTED_VERSION;
            for ($i = 0; $i < count ( $errors ); $i ++) {
                $error = $errors [$i];
                $return = "";
                switch ($error->level) {
                    case LIBXML_ERR_WARNING :
                        $return .= "Warning $error->code: ";
                        break;
                    case LIBXML_ERR_ERROR :
                        $return .= "Error $error->code: ";
                        break;
                    case LIBXML_ERR_FATAL :
                        $return .= "Fatal Error $error->code: ";
                        break;
                }
                $return .= trim ( $error->message ) . "\n Line: $error->line" . "\n Column: $error->column";
                if ($error->file) {
                    $return .= "\n File: $error->file";
                }
                echo "$return\n\n--------------------------------------------\n\n";
            }
        }
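    For what it's worth, libxml error collection in PHP is off by default: calling libxml_use_internal_errors(true) before running the transform is what makes libxml_get_errors() return anything at all. As a point of comparison, here is a rough sketch of the same transform in Python's lxml, where each XSLT object carries its own error log; the file names are assumptions, not taken from the post:

        from lxml import etree

        # Stand-ins for the stylesheet and document used in the post.
        xsl_doc = etree.parse("stylesheets/layout.xsl")
        transform = etree.XSLT(xsl_doc)

        xml_doc = etree.parse("index.xhtml")
        try:
            result = transform(xml_doc)
            print(str(result))
        except etree.XSLTApplyError:
            # The transform object accumulates its own error log, so no
            # separate switch is needed to start collecting errors.
            for entry in transform.error_log:
                print(entry.level_name, entry.line, entry.message)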

    Read the article

  • Openfire and LDAP issues

    - by clsmith
    Thanks in advance for the help. Has anyone seen this issue with openfire? Currently I run Openfire on Fedora, authenticating against Windows 2003, and I also use MySQL for the database. When I bring up two clients and they talk to each other, the time between messages is slow. Sometimes it can take between 5-15 minutes for something sent to get to the person (this is with only two people on the openfire server).

    I ran a tcpdump on port 389 and see that the machine is running thousands of queries against LDAP. When I plug it into Wireshark I notice that it is transferring the entire contact list, or checking on the status of the entire contact list? When I run debug on openfire itself I am presented with only this small message in the log:

        2010.06.08 07:01:17 LdapManager: Starting LDAP search...
        2010.06.08 07:01:17 LdapManager: ... search finished
        2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()...
        2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context...
        2010.06.08 07:01:17 LdapManager: ... context created successfully, returning.
        2010.06.08 07:01:17 LdapManager: Trying to find a groups's DN based on it's groupname. cn: Spark agents CLT, Base DN: OU="Hidden",DC="Hidden",DC="net"...
        2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()...
        2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context...
        2010.06.08 07:01:17 LdapManager: ... context created successfully, returning.
        2010.06.08 07:01:17 LdapManager: Starting LDAP search...
        2010.06.08 07:01:17 LdapManager: ... search finished
        2010.06.08 07:01:17 LdapManager: Trying to find a groups's DN based on it's groupname. cn: Spark agents CLT, Base DN: OU="Hidden",DC="Hidden",DC="net"...
        2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()...
        2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context...
        2010.06.08 07:01:17 LdapManager: ... context created successfully, returning.
        2010.06.08 07:01:17 LdapManager: Starting LDAP search...
        2010.06.08 07:01:17 LdapManager: ... search finished

    I thought this was a configuration issue on my end and started to look into the cache settings on the openfire web pages. I tweaked the settings as recommended by the pages and still get the same issues. It doesn't seem to cache the contact list, or this might be a feature never fixed or implemented. Has anyone gone through this before? I have searched online and I see others have great experiences with openfire and no issues like mine - or is it because no one checked the queries?

    For the time being I created a new Domain Controller and moved openfire to that computer so it can run local queries. This seems to reduce the delay a lot, but when I run the server performance manager tool I see that, with only two people using that openfire server, I run 593.7 requests per second.

    Thanks for your help; if I didn't provide enough data please let me know what you need and I can find it.

    Read the article

  • Input not cleared.

    - by SoulBeaver
    As the question says, for some reason my program is not flushing the input, or is using my variables in ways that I cannot identify at the moment. This is for a homework project that I've gone beyond what I had to do for it; now I just want the program to actually work :P

    Details to make the finding easier: the program executes flawlessly on the first run through. All throws work, only the proper values (n > 0) are accepted and turned into binary. As soon as I enter my terminate input, the program goes into a loop and only asks for the terminate value again. When I run this program in NetBeans on my Linux laptop, the program crashes after I input the terminate value. In Visual C++ on Windows it goes into the loop just described.

    In the code I have tried to clear every stream and initialize every variable anew as the program restarts, but to no avail. I just can't see my mistake. I believe the error lies in either the main function:

        int main( void )
        {
            vector<int> store;
            int terminate = 1;
            do {
                int num = 0;
                string input = "";

                if( cin.fail() )
                {
                    cin.clear();
                    cin.ignore( numeric_limits<streamsize>::max(), '\n' );
                }

                cout << "Please enter a natural number." << endl;
                readLine( input, num );

                cout << "\nThank you. Number is being processed..." << endl;
                workNum( num, store );
                line;

                cout << "Go again? 0 to terminate." << endl;
                cin >> terminate; // No checking yet, just want it to work!
                cin.clear();
            } while( terminate );

            cin.get();
            return 0;
        }

    or in the function that reads the number:

        void readLine( string &input, int &num )
        {
            int buf = 1;
            stringstream ss;
            vec_sz size;

            if( ss.fail() )
            {
                ss.clear();
                ss.ignore( numeric_limits<streamsize>::max(), '\n' );
            }

            if( getline( cin, input ) )
            {
                size = input.size();
                for( int loop = 0; loop < size; ++loop )
                    if( isalpha( input[loop] ) )
                        throw domain_error( "Invalid Input." );

                ss << input;
                ss >> buf;

                if( buf <= 0 )
                    throw domain_error( "Invalid Input." );

                num = buf;
                ss.clear();
            }
        }

    Read the article

  • ie9/CSS: Flyout menu not working in ie9, but looks great in Firefox/Chrome

    - by Brandon
    Please see this flyout menu: http://www.caseen.com/store.html. It looks amazing in both Firefox and Chrome, but not in IE9! Trying to see what is going on =(. It looks like IE9 is completely ignoring the stylesheet, but on error checking and clicking IE9's direct mode, it shows up, however VERY ugly, with huge nasty white borders around the links! Please see my code:

        <div class="flyout">
            <ul>
                <!--START: CATEGORIES-->
                <!--START: CATEGORY_FORMAT-->
                <li><a href="view_category.asp?cat=CATID">&nbsp;CATEGORY</a>
                <!--END: CATEGORY_FORMAT-->
                    <ul><!--START: SUB_CATEGORY_FORMAT-->
                        <li><a href="view_category.asp?cat=CATID">&nbsp;CATEGORY</a></li>
                    <!--END: SUB_CATEGORY_FORMAT--></ul>
                <!--END: CATEGORIES-->
                </li>
            </ul>
        </div>

    AND CSS

        .flyout {
            width: 130px;
            height: auto;
            position: relative;
            margin: -10 0;
            padding: 0;
            z-index: 10000;
        }
        .flyout ul li a {
            display: block;
            text-decoration: none;
            color: #fff;
            width: 130px;
            border: solid;
            border-color: #000;
            border-width: 0 0 0 5px;
            text-align: left;
            font-size: 12px;
            line-height: 25px;
        }
        .flyout ul {
            padding: 0px;
            list-style-type: none;
        }
        .flyout ul li {
            float: left;
            margin-right: 1px;
            position: relative;
        }
        .flyout ul li ul { display: none; }
        .flyout ul li:hover a {
            border: solid;
            border-color: #fff;
            border-width: 0 2 0 5px;
            color: #60dfe5;
        }
        .flyout ul li:hover ul {
            display: block;
            position: absolute;
            top: 0;
            left: 130px;
            width: 10px;
        }
        .flyout ul li:hover ul li a.hide {
            background: #000;
            color: #fff;
        }
        .flyout ul li:hover ul li:hover a.hide { width: 180px; }
        .flyout ul li:hover ul li ul { display: none; }
        .flyout ul li:hover ul li a {
            display: block;
            background: #000;
            color: #60dfe5;
            width: 200px;
        }
        .flyout ul li:hover ul li a:hover {
            background: #000;
            color: #fff;
        }

    Thanks, Brandon

    Read the article

  • Detecting Acceleration in a car (iPhone Accelerometer)

    - by TheGazzardian
    Hello, I am working on an iPhone app where we are trying to calculate the acceleration of a moving car. Similar apps have accomplished this (Dynolicious), but the difference is that this app is designed to be used during general city driving, not on a drag strip. This leads us to one big concern that Dynolicious was luckily able to avoid: hills. Yes, hills.

    There are two important stages to this: calibration, and actual driving. Our initial run was simple and suffered the consequences. During the calibration stage, I took the average force on the phone, and during running, I just subtracted the average force from the current force to get the current acceleration this frame. The problem with this is that the typical car receives much more force than just the forward force - everything from turning to potholes was causing the values to go out of sync with what was really happening.

    The next run was to add the condition that the iPhone must be oriented in such a way that the screen was facing toward the back of the car. Using this method, I attempted to follow only force on the z-axis, but this obviously led to problems unless the iPhone was oriented directly upright, because of gravity. Some trigonometry later, and I had managed to work gravity out of the equation, so that the car was actually being read very, very well by the iPhone. Until I hit a slope. As soon as the angle of the car changed, suddenly I was receiving accelerations and decelerations that didn't make sense, and we were once again going out of sync.

    Talking with someone a lot smarter than me at math led to a solution that I have been trying to implement for longer than I would like to admit. Its steps are as follows:

    1) During calibration, measure gravity as a vector instead of a size. Store that vector.
    2) When the car initially moves forward, take the vector of motion and subtract gravity. Use this as the forward momentum. (Ignore, for now, the user cases where this will be difficult and let's concentrate on the math :)
    3) From the forward vector and the gravity vector, construct a plane.
    4) Whenever a force is received, project it onto said plane to get rid of sideways force/etc.
    5) Then, use that force, the known magnitude of gravity, and the known direction of forward motion to essentially solve a triangle to get the forward vector.

    The problem that is causing the most difficulty in this new system is not step 5, which I have gotten to the point where all the numbers look as they should. The difficult part is actually the detection of the forward vector. I am selecting vectors whose magnitude exceeds gravity, and from there, averaging them and subtracting gravity. (I am doing some error checking to make sure that I am not using a force just because the iPhone accelerometer was off by a bit, which happens more frequently than I would like.) But if I plot the vectors that I am using, they actually vary by an angle of about 20-30 degrees, which can lead to some strong inaccuracies.

    The end result is that the app is even more inaccurate now than before. So basically - all you math and iPhone brains out there - any glaring errors? Any potentially better solutions? Any experience that could be useful at all?

    Award: offering a bounty of $250 to the first answer that leads to a solution.
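    To make steps 3 and 4 concrete, here is a minimal sketch of the projection math in Python with NumPy; all the values and names are illustrative, not taken from the app:

        import numpy as np

        def forward_acceleration(reading, gravity, forward):
            """Project a raw accelerometer reading onto the plane spanned by
            the gravity and forward vectors, then return its component along
            the direction of travel (steps 3-5 of the scheme above)."""
            # Unit normal of the plane spanned by gravity and forward (step 3).
            normal = np.cross(gravity, forward)
            normal = normal / np.linalg.norm(normal)

            # Drop the sideways component of the reading (step 4).
            in_plane = reading - np.dot(reading, normal) * normal

            # Remove gravity, then take the forward component (step 5).
            motion = in_plane - gravity
            forward_unit = forward / np.linalg.norm(forward)
            return float(np.dot(motion, forward_unit))

        # Made-up calibration values in m/s^2: phone roughly upright,
        # nose of the car slightly below the horizontal.
        gravity = np.array([0.0, -9.81, 0.3])
        forward = np.array([0.1, 0.4, -3.0])
        reading = np.array([0.3, -9.5, -1.2])
        print(forward_acceleration(reading, gravity, forward))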

    Read the article

  • ActiveRecord Logic Challenge - Smart Ways to Use AR Timestamp

    - by keruilin
    My question is somewhat specific to my app's issue, but the answer should be instructive in terms of use cases for association logic and the record timestamp. I have an NBA pick 'em game where I want to award badges for picking x number of games in a row correctly - 10, 20, 30. Here are the models, attributes, and associations in play:

    User
    - id

    Pick
    - id
    - result # (values can be 'W', 'L', 'T', or nil. nil means hasn't resolved yet.)
    - resolved # (values can be true, false, or nil.)
    - game_time
    - created_at

    *Note: there are cases where a pick's result field and resolved field will always be nil. Perhaps the game was cancelled.

    Badge
    - id

    Award
    - id
    - user_id
    - badge_id
    - created_at

    User has many awards. User has many picks. Pick belongs to user. Badge has many awards. Award belongs to user. Award belongs to badge.

    One of the important rules to capture in the code is that while a user can be awarded multiple streak badges (e.g., a user can win multiple 10-streak badges), the user CAN'T be awarded another badge for consecutive winning picks that were previously granted an award badge. One way to think of this is that all the dates of the winning picks must come after the date that the streak badge was awarded.

    For example, let's pretend that a user made 13 winning picks from May 5 to May 8, with the 10th winning pick occurring on May 7, and the last 3 on May 8. The user would be awarded a 10-streak badge on May 7. Now if the user makes another winning pick on May 9, the code must recognize that the user only has a streak of 4 winning picks, not 14, because the user already received an award for the first 10. Now let's assume that the user makes 6 more winning picks. In this case, the code must recognize that all winning picks since May 5 are eligible for a 20-streak badge award, and make the award.

    Another important rule is that when looking at a winning streak, we don't care about the game time, but rather when the pick was made (created_at). For example, let's say that Team A plays Team B on Sat, and Team C plays Team D on Sun. If the user picks Team C to beat Team D on Thurs, and Team A to beat Team B on Fri, and Team A wins on Sat, but Team C loses on Sun, then the user has a losing streak of 1.

    So when must the streak-check kick in? As soon as a pick is a win. If it's a loss or tie, no point in checking. One more note: if the pick is not resolved (false) and the result is nil, that means the game was postponed and must be factored out.

    With all that said, what is the most efficient, effective and lean way to determine whether a user has a 10-, 20- or 30-win streak?
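    Since the counting logic is the heart of it, here is a language-neutral sketch in Python of one way to run the check each time a pick resolves as a win; the data shapes are stand-ins for the real ActiveRecord queries, not the app's actual API:

        def streak_badges_to_award(picks, last_award_dates, levels=(10, 20, 30)):
            """Return the streak levels that should be awarded now.

            picks: (created_at, result, resolved) tuples sorted newest first,
            e.g. the user's picks ordered by created_at DESC.
            last_award_dates: {level: created_at of the user's most recent
            award for that badge, or None if never awarded}.
            """
            awards = []
            for level in levels:
                awarded_at = last_award_dates.get(level)
                streak = 0
                for created_at, result, resolved in picks:
                    # Wins dated before this badge's last award were already
                    # consumed by it and can't count toward it again.
                    if awarded_at is not None and created_at <= awarded_at:
                        break
                    # Postponed game: result and resolved are both nil,
                    # so factor the pick out entirely.
                    if result is None and resolved is None:
                        continue
                    if result == "W":
                        streak += 1
                    else:
                        break  # a loss or tie ends the run
                if streak >= level:
                    awards.append(level)
            return awards

    This honors both rules above: a 10-streak can't be re-earned from already-awarded wins, while a 20-streak with no prior 20 award may still reach back across them.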

    Read the article

  • Flash Media Server Streaming: Content Protection

    - by dbemerlin
    Hi, I have to implement Flash streaming for the relaunch of our video-on-demand system, but either because I haven't worked with Flash-related systems before or because I'm too stupid, I cannot get the system to work as it has to. We need:

    - per-file & per-user access control, with checks against a WebService every minute; if the lease time ran out mid-stream, the stream is cancelled;
    - RTMP streaming;
    - dynamic bandwidth checking;
    - video playback with Flowplayer (existing license).

    I've got the streaming and the bandwidth check working; I just can't seem to get the access control working. I have no idea how I know which file is being played back, or how I can play back a file depending on a key the user has entered.

    Server-side code (main.asc):

        application.onAppStart = function() {
            trace("Starting application");
            this.payload = new Array();
            for (var i = 0; i < 1200; i++) {
                this.payload[i] = Math.random(); // 16K approx
            }
        }

        application.onConnect = function( p_client, p_autoSenseBW ) {
            p_client.writeAccess = "";
            trace("client at : " + p_client.uri);
            trace("client from : " + p_client.referrer);
            trace("client page: " + p_client.pageUrl);

            // try to get something from the query string: works
            var i = 0;
            for (i = 0; i < p_client.uri.length; ++i) {
                if (p_client.uri[i] == '?') {
                    ++i;
                    break;
                }
            }
            var loadVars = new LoadVars();
            loadVars.decode(p_client.uri.substr(i));
            trace(loadVars.toString());
            trace(loadVars['foo']);

            // And accept the connection
            this.acceptConnection(p_client);
            trace("accepted!");
            //this.rejectConnection(p_client);

            // A connection from a Flash 8 & 9 FLV Playback component based
            // client requires the following code.
            if (p_autoSenseBW) {
                p_client.checkBandwidth();
            } else {
                p_client.call("onBWDone");
            }
            trace("Done connecting");
        }

        application.onDisconnect = function(client) {
            trace("client disconnecting!");
        }

        Client.prototype.getStreamLength = function(p_streamName) {
            trace("getStreamLength:" + p_streamName);
            return Stream.length(p_streamName);
        }

        Client.prototype.checkBandwidth = function() {
            application.calculateClientBw(this);
        }

        application.calculateClientBw = function(p_client) {
            /* lots of lines copied from an adobe sample, appear to work */
        }

    Client-side code:

        <head>
            <script type="text/javascript" src="flowplayer-3.1.4.min.js"></script>
        </head>
        <body>
            <a class="rtmp" href="rtmp://xx.xx.xx.xx/vod_project/test_flv.flv"
               style="display: block; width: 520px; height: 330px" id="player"></a>
            <script>
                $f( "player", "flowplayer-3.1.5.swf", {
                    clip: {
                        provider: 'rtmp',
                        autoPlay: false,
                        url: 'test_flv'
                    },
                    plugins: {
                        rtmp: {
                            url: 'flowplayer.rtmp-3.1.3.swf',
                            netConnectionUrl: 'rtmp://xx.xx.xx.xx/vod_project?foo=bar'
                        }
                    }
                } );
            </script>
        </body>

    My first idea was to get a key from the query string, ask the web service which file and user that key is for, and play the file - but I can't seem to find out how to play a file from the server side. My second idea was to let Flowplayer play a file, pass the key as a query string, and reject the connection if the filename and key don't match - but I can't seem to find out which file it's currently playing. The only remaining idea I have is: create a list of all files the user is allowed to open and set allowReadAccess (or however it was called) to allow those files, but that would be clumsy due to the current infrastructure. Any hints? Thanks.
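    For the second idea, the piece that ties the key to a file has to live in the web service anyway, so one way to picture the design is a lease lookup that main.asc queries in onConnect (and again every minute, per the requirements). A rough sketch of that lookup side in Python - every name, key, and path here is made up for illustration:

        import time

        # Hypothetical lease store: key -> (stream name, lease expiry).
        LEASES = {
            "abc123": ("test_flv", time.time() + 3600),
        }

        def authorize(key):
            """Return the stream this key unlocks, or None if the key is
            unknown or the lease has run out (in which case the FMS side
            would cancel the stream)."""
            entry = LEASES.get(key)
            if entry is None:
                return None
            stream, expires_at = entry
            if time.time() >= expires_at:
                del LEASES[key]
                return None
            return stream

        print(authorize("abc123"))  # -> "test_flv"
        print(authorize("nope"))    # -> None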

    Read the article

  • Deployment Workbench no longer available after PXE boot

    - by Patrick
    Our build process revolves around Windows Deployment Workbench. Unfortunately it was set up by someone who is no longer with the company, and no one has ever dared/needed to make any changes. The other day it stopped working. It turns out that one of our build guys started thinking about changing some stuff in it, clicked something, and now it no longer works. (He is saying now that he right-clicked on the 'LAB' entry in 'Deployment points' and hit 'Update', which took some time to run through, apparently.)

    The job has fallen on me to resolve, and frankly I'm not sure what I'm doing. I was wondering if someone with more experience than me can provide some pointers as to troubleshooting, cos I'm feeling quite a lot in the dark here.

    On the server I have Deployment Workbench up and running (MMC snap-in), version 3.0. There is a WDS service that appears to be running ok, as does the tFTPd service. Nothing specific to this in the event logs.

    From the client side: PXE boot works and gets you to the Win PE launch, and it has the correct company logo as the background (proving to me that it's loading Win PE from the network). WPEINIT runs and asks for domain credentials; here the team simply put User/Pass/Domain in the boxes and click OK. Normally the build would kick off. Instead they get an error message saying that the \\NATBLU01\Distribution$ share isn't available. Checking \\NATBLU01\Distribution$ shows that it's there and accessible over the network. Security/permissions seem ok; even 'ANONYMOUS LOGON' has read access to that share, so I don't see that being a problem.

    Digging through the trace files from C:\MININT\SMSOSD\OSDLOGS\ after an attempt to run the build, I can see an error saying much the same:

        <![LOG[Validating connection to \\NATBLU01\Distribution$]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[FindFile: The file OSDConnectToUNC.exe could not be found in any standard locations.]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[The network location cannot be reached. For information about network troubleshooting, see Windows Help.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch">
        <![LOG[ERROR - Unable to map a network drive to \\NATBLU01\Distribution$.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch">

    BDD.LOG shows much the same. Full copies of both .LOG files can be found here: BDD.LOG LITETOUCH.LOG

    I can get to a command prompt from the Win PE that boots from PXE; however, there isn't any network stuff there. IPCONFIG returns nothing, so none of the tests I would usually run resolve anything. I'm at a loss, frankly. I did wonder if I could perhaps start a new build process, but if the change to the Deployment Workbench has knocked it offline I don't think I'm going to be able to create a new deployment. Failing that, we do have a deployment point labeled type 'Media', which appears to be a DVD ISO image of one of the builds, but it's dated 2008 - is it possible to export the network build to .ISO and build from DVD? We are looking at new hardware to run this from anyway (for the impending Windows 7 roll-out), so a temporary workaround isn't going to be too much of a problem. All assistance is appreciated!

    EDIT: OK, got it working again. The solution was close to Newmanth's idea. The problem was that our PE image didn't appear to be connecting to the network. I had an older copy of the PE boot.WIM on a stick that I had been using for other purposes. I booted that and correctly got a network connection: it showed a correct internal IP and could ping out, etc. However I was still getting the same errors in all the logs and while WPEINIT was running.

    What I did separately was to update the PE image that Deployment Workbench was pushing out to display a different background - I wanted to prove that I was working in the correct place. Turns out that I wasn't. I went and looked at the other deployment stuff we had on this machine: Windows Deployment Services was installed, and although all the install images were offline, the boot image was online, so I uploaded the copy from my stick to that. Booted straight off. And fixed. Working. Yay!

    For anyone stumbling across this in the future: you may find that although your deployment images are located in the Deployment Workbench, the Win PE boot image you are launching from is located in the associated Windows Deployment Services images.

    Read the article

< Previous Page | 190 191 192 193 194 195 196 197 198 199 200 201  | Next Page >