Search Results

Search found 4250 results on 170 pages for 'mark mclaren'.


  • Intermittent fillMode=kCAFillModeForwards bug using CAKeyframeAnimation with path

    - by Mark24x7
    I'm having an intermittent problem when I move a UIImageView around the screen using CAKeyframeAnimation. I want the position of the UIImageView to remain where the animation ends when it is done. This bug only happens for certain start and end points. When I use random points it works correctly most of the time, but about 5-15% of the time it fails and snaps back to the pre-animation position. The problem only appears when using CAKeyframeAnimation with the path property. If I use the values property the bug does not appear. I am setting removedOnCompletion = NO, and fillMode = kCAFillModeForwards. I have posted a link to a test Xcode project below. Here is my code for setting up the animation. I have a property usePath. When this is YES, the bug appears. When I set usePath to NO, the snap-back bug does not happen. In this case I am using a path that is a simple line, but once I resolve this bug with a simple path, I will use a more complex path with curves in it. // create the point CAKeyframeAnimation *moveAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"]; if (self.usePath) { CGMutablePathRef path = CGPathCreateMutable(); CGPathMoveToPoint(path, NULL, startPt.x, startPt.y); CGPathAddLineToPoint(path, NULL, endPt.x, endPt.y); moveAnimation.path = path; CGPathRelease(path); } else { moveAnimation.values = [NSArray arrayWithObjects: [NSValue valueWithCGPoint:startPt], [NSValue valueWithCGPoint:endPt], nil]; } moveAnimation.calculationMode = kCAAnimationPaced; moveAnimation.duration = 0.5f; moveAnimation.removedOnCompletion = NO; // leaves presentation layer in final state; preventing snap-back to original state moveAnimation.fillMode = kCAFillModeForwards; moveAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseOut]; // moveAnimation.delegate = self; // start the animation [ball.layer addAnimation:moveAnimation forKey:@"moveAnimation"]; To download and view my test project, go to test project (http://www.24x7digital.com/downloads/PathFillModeBug.zip). Tap the 'Move Ball' button to start the animation of the ball. I have hard-coded a start and end point which causes the bug to happen every time. Use the switch to change usePath to YES or NO. When usePath is YES, you will see the snap-back bug. When usePath is NO, you will not see the snap-back bug. I'm using SDK 3.1.3, but I have seen this bug using SDK 3.0 as well, and I have seen the bug on the Sim and on my iPhone. Any ideas on how to fix this, or whether I am doing something wrong, are appreciated. Thanks, Mark.


  • Outlook Marking Email as Junk Email

    - by robertabead
    I know. I sound like a spammer, but these emails are completely legitimate email confirmations for people who have signed up for an account on this website we developed. These emails all make it through to various mail providers (Gmail, Yahoo, AOL, Hotmail/Live) but they always get directed into the Outlook Junk Email folder. I have tried using Zend Framework mail, PEAR Mail and phpMailer. All of those methods result in the same thing happening. This seemed to start happening after Microsoft released their update to the Outlook Junk Email filter in January of this year. Following is the code in question: include_once('Mail.php'); include_once('Mail/mime.php'); $hdrs = array( 'From' => "Membership <[email protected]>", 'Subject' => 'Test Email', 'Reply-To'=> "[email protected]", 'Message-ID'=> "<" . str_pad(rand(0,12345678),8,'0',STR_PAD_LEFT) . "@mail.example.com>", 'Date'=> date("D, j M Y H:i:s O",time()), 'To'=> '[email protected]' ); $params = array('host'=>'mail.example.com','auth'=>false,'localhost' => 'www.example.com','debug'=>false); $crlf = "\n"; $mime = new Mail_mime($crlf); $mime->setTXTBody("TEST"); $mime->setHTMLBody("<html>\n<body>\nTest\n</body>\n</html>"); $body = $mime->get(); $hdrs = $mime->headers($hdrs); $mail =& Mail::factory('smtp',$params); $t=$mail->send('[email protected]', $hdrs, $body); As you can see, we are using the PEAR Mail functionality in this test. This is the most basic test we could run, and the above generated email gets dumped into the Outlook Junk Email folder. We have reverse DNS on the mail server and it matches the forward DNS, SPF and DKIM are set up, and there is nothing "spammy" about the above content. Can anybody see something with the above code that could cause Outlook to mark it as Junk? Thanks!
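
    For comparison, here is a minimal sketch of the same kind of multipart text-plus-HTML message built with Python's standard library (all hostnames and addresses below are placeholders). The point is that the MIME structure the PEAR code produces is perfectly ordinary, which suggests the junk classification is driven more by sender reputation and message headers than by the sending library.

```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.utils import formatdate, make_msgid

# Placeholder addresses and host; swap in real values.
msg = MIMEMultipart("alternative")           # text + HTML alternatives, like Mail_mime
msg["From"] = "Membership <membership@example.com>"
msg["To"] = "someone@example.com"
msg["Subject"] = "Test Email"
msg["Reply-To"] = "membership@example.com"
msg["Date"] = formatdate(localtime=True)
msg["Message-ID"] = make_msgid(domain="mail.example.com")

msg.attach(MIMEText("TEST", "plain"))        # plain-text part first
msg.attach(MIMEText("<html><body>Test</body></html>", "html"))

with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg)
```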


  • CSS selectors: should I make my CSS easier to read or optimise for speed?

    - by Laurent Bourgault-Roy
    As I was working on a small website, I decided to use the PageSpeed extension to check whether there were some improvements I could make so the site loads faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep the usage of the class attribute in the HTML to a minimum, but if I understand correctly what PageSpeed tells me, it's much more efficient for the browser to match directly against a class name. It makes sense to me, but it also means that I need to put more CSS classes in my HTML, and it makes my .css file harder to read. I usually tend to write my CSS like this: #mainContent p.productDescription em.priceTag { ... } Which makes it easy to read: I know this will affect the main content, that it affects something in a paragraph tag (so I won't start to put all sorts of layout code in it) that describes a product, and that it's something that needs emphasis. However, it seems I should rewrite it as .priceTag { ... } Which removes all context information about the style. And if I want to use differently formatted price tags (for example, one in a list in the sidebar and one in a paragraph), I need to use something like this: .paragraphPriceTag { ... } .listPriceTag { ... } Which really annoys me, since I seem to duplicate the semantics of the HTML in my classes. It also means I can't put common styles in an unqualified .priceTag { ... }, so I need to replicate the style in both CSS rules, making it harder to make changes. (Although for that I could use multiple class selectors, but IE6 doesn't support them.) I believe making code harder to read for the sake of speed has never really been considered a very good practice, except where it is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I was wondering: should I stick with few CSS classes and complex selectors, or should I go the optimisation route and remove my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?


  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    Ok, that title is going to be a little bit confusing. Let me try to explain it a little bit better. I am building a logging program. The program will have 3 main states: Write to a round-robin buffer file, keeping only the last 10 minutes of data. Write to a buffer file, ignoring the time (record all data). Rename the entire buffer file, and start a new one with the past 10 minutes of data (and change state to 1). Now, the use case is this. I have been experiencing some bottlenecks from time to time in our network. So I want to build a system to record TCP traffic when it detects the bottleneck (detection via Nagios). However, by the time it detects the bottleneck, most of the useful data has already been transmitted. So, what I'd like is to have a daemon that runs something like dumpcap all the time. In normal mode, it'll only keep the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). But when Nagios alerts, I will send a signal to the daemon to store everything. Then, when Nagios recovers, it will send another signal to stop storing and flush the buffer to a save file. Now, the problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones if in mode 1. But that seems a bit dirty to me (especially when it comes to figuring out when the alert happened in the file). Ideally, the file that was saved should be such that the alert is always at the 10:00 mark in the file. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point. Any ideas? Should I just do a rotating file system and combine them into 1 at the end (doing quite a bit of post-processing)? Is there a way to implement the semi-round-robin file cleanly so that there is no need for any post-processing? Thanks. Oh, and the language doesn't matter as much at this stage (I'm leaning towards Python, but have no objection to any other language; it's less of an issue than the overall design)...
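
    A minimal sketch of the rolling-window idea in Python (the language the asker is leaning towards), assuming the capture arrives as timestamped records rather than raw dumpcap output: keep a deque of (timestamp, record) pairs, evict anything older than ten minutes while in normal mode, and stop evicting when the alert signal arrives so that the ten minutes preceding the alert end up at the front of the saved file.

```python
import time
from collections import deque

WINDOW = 600  # seconds of history to keep in normal mode

class RollingCapture:
    def __init__(self):
        self.records = deque()      # (timestamp, record) pairs, oldest first
        self.keep_everything = False

    def add(self, record, now=None):
        now = now if now is not None else time.time()
        self.records.append((now, record))
        if not self.keep_everything:
            # Evict records that fell out of the 10-minute window.
            while self.records and self.records[0][0] < now - WINDOW:
                self.records.popleft()

    def start_alert(self):
        # Nagios alert: stop evicting, so the alert sits ~10:00 into the data.
        self.keep_everything = True

    def flush(self, path):
        # Nagios recovery: write everything out and return to rolling mode.
        with open(path, "w") as f:
            for ts, record in self.records:
                f.write(f"{ts}\t{record}\n")
        self.records.clear()
        self.keep_everything = False
```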


  • H.264 over RTP - Identify SPS and PPS Frames

    - by Toby
    I have a raw H.264 stream from an IP camera packed in RTP frames. I want to get the raw H.264 data into a file so I can convert it with ffmpeg. So when I want to write the data into my raw H.264 file, I found out it has to look like this: 00 00 01 [SPS] 00 00 01 [PPS] 00 00 01 [NALByte] [PAYLOAD RTP Frame 1] // Payload always without the first 2 Bytes -> NAL [PAYLOAD RTP Frame 2] [... until PAYLOAD Frame with Mark Bit received] // From here its a new Video Frame 00 00 01 [NAL BYTE] [PAYLOAD RTP Frame 1] .... So I get the SPS and the PPS from the Session Description Protocol out of my preceding RTSP communication. Additionally, the camera sends the SPS and the PPS in two single messages before starting with the video stream itself. So I capture the messages in this order: 1. Preceding RTSP communication here (including SDP with SPS and PPS) 2. RTP frame with payload: 67 42 80 28 DA 01 40 16 C4 // This is the SPS 3. RTP frame with payload: 68 CE 3C 80 // This is the PPS 4. RTP frame with payload: ... // Video data Then there come some frames with payload, and at some point an RTP frame with the Marker Bit = 1. This means (if I got it right) that I have a complete video frame. After this I write the prefix sequence (00 00 01) and the NAL from the payload again and go on with the same procedure. Now my camera sends me the SPS and the PPS again after every 8 complete video frames (again in two RTP frames, as seen in the example above). I know that especially the PPS can change in between streaming, but that's not the problem. My questions now are: 1. Do I need to write the SPS/PPS every 8th video frame? If my SPS and my PPS don't change, should it be enough to have them written at the very beginning of my file and nothing more? 2. How do I distinguish between SPS/PPS and normal RTP frames? In my C++ code which parses the transmitted data I need to make a distinction between the RTP frames with normal payload and the ones carrying the SPS/PPS. How can I distinguish them? Okay, the SPS/PPS frames are usually way smaller, but that's not a safe rule to rely on, because if I ignore them I need to know which data I can throw away, and if I need to write them I need to put the 00 00 01 prefix in front of them. Or is it a fixed rule that they occur every 8th video frame?
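
    One way to answer the second question is to look at the NAL unit type rather than the payload size: for single-NAL-unit packets the low five bits of the first payload byte are nal_unit_type, which is 7 for an SPS and 8 for a PPS (0x67 & 0x1F == 7 and 0x68 & 0x1F == 8 for the payloads above). A minimal illustration in Python, assuming the RTP header has already been stripped and that the camera sends single NAL unit packets rather than fragmented (FU-A) ones:

```python
# Classify an RTP payload by its H.264 NAL unit type.
SPS, PPS = 7, 8

def nal_unit_type(payload: bytes) -> int:
    # For single-NAL-unit packets, the first payload byte is the NAL header;
    # its low five bits are nal_unit_type (e.g. 0x67 -> 7, 0x68 -> 8).
    return payload[0] & 0x1F

def is_parameter_set(payload: bytes) -> bool:
    return nal_unit_type(payload) in (SPS, PPS)

# Example with the payloads from the question:
sps = bytes.fromhex("67 42 80 28 DA 01 40 16 C4")
pps = bytes.fromhex("68 CE 3C 80")
assert nal_unit_type(sps) == SPS and nal_unit_type(pps) == PPS
```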


  • Network communication for a turn based board game

    - by randooom
    Hi all, my first question here, so please don't be too harsh if something went wrong :) I'm currently a CS student (from Germany, if this info is of any use ;) ) and we got a freely selectable programming assignment, which we have to write as a C++/CLI Windows Forms application. My team, two others and me, decided to go for a network-compatible port of the board game Risk. We divided the work into 3 parts, namely UI, game logic and network. Now we're at the part where we have to get everything working together, and the big question mark is: how do we get the clients synchronized with each other? Our approach so far is that each client has all the information necessary to calculate and/or execute all possible actions. Actually the clients have all information available, aside from the game-initializing phase (add players, select map, etc.), which needs one "super-client" with some extra stuff to control things. This is the standard scenario of our approach: player performs action, the action is valid and gets executed on the player's client action is sent over the network action is executed on the other clients The design (i.e. no code so far) we came up with is something like the following pseudo sequence diagram. Gui, Controller and Network implement all possible actions (i.e. all actions which change data) as methods from an interface. So each part can implement the method in a way that gets its job done. Example with Action(): On the player's client: Player-->Gui.Action() Gui-->Controller.Action() Controller-->Logic.Action (Logic.Action() == NoError)? Controller-->Network.Action() Network-->Parser.ParseAction() Network.Send(msg) On all other clients: Network.Recv(msg) Network-->Parser.Deparse(msg) Parser-->Logic.Action() Logic-->Gui.Action() The questions: Is this a viable approach to our task? Any better/easier way to do this? Recommendations, critique? Our knowledge (so you can better target your answer): We are on the beginner side in regards to programming on somewhat larger projects with a small team. All of us have some general programming experience and a basic understanding of the .Net libraries and Windows Forms. If you need any further information, please feel free to ask.
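
    As a language-agnostic illustration of the proposed flow (the project itself is C++/CLI), here is a minimal Python sketch of the "validate locally, serialize, broadcast, re-apply" round trip. The Logic class, the place_army action and the JSON wire format are invented for the example and are not part of the original design:

```python
import json

class Logic:
    """Holds game state; every client runs the same logic."""
    def __init__(self):
        self.armies = {}  # territory -> army count (toy state)

    def place_army(self, player, territory):
        self.armies[territory] = self.armies.get(territory, 0) + 1
        return "NoError"

def perform_local(logic, network_send, player, territory):
    # Player's client: validate/execute locally, then broadcast the action.
    if logic.place_army(player, territory) == "NoError":
        network_send(json.dumps({"action": "place_army",
                                 "player": player,
                                 "territory": territory}))

def on_receive(logic, msg):
    # Other clients: parse the message and replay the same action.
    data = json.loads(msg)
    if data["action"] == "place_army":
        logic.place_army(data["player"], data["territory"])

# Toy round trip: "sending" is just calling the receiver directly here.
local, remote = Logic(), Logic()
perform_local(local, lambda m: on_receive(remote, m), "alice", "Ukraine")
assert local.armies == remote.armies
```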


  • How to make Flash 'play well with others'?

    - by Sensei James
    What up fam. So this isn't a question asking about memory management schemes; for those of you who may not know, the Flash Virtual Machine relies on garbage collection by using reference counting and mark and sweep (for good coverage of these topics, check out Grant Skinner's article and presentation). And yes, Flash also provides the "delete" operator, which can (unfortunately only) be used to remove the properties of dynamic objects. What I want to know is how to make it so that Flash programs don't continue to consume CPU and memory while running in the background (save loading content or communicating remotely, for example). The motivation for this question comes in part from Apple's ban on cross compiled applications (in its SDK 4) on the grounds that they do not behave as predicted with the multitasking feature central to iPhone OS 4. My intention is not only to make Flash programs that will 'pass muster' as far as multitasking in iPhone OS 4, but also to simply make better (behaving) Flash programs. Put another way, how might a Flash application mimic the multitasking feature of iPhone OS 4? Does the Flash API provide the means for a developer to put their applications to 'sleep' while other programs run, and then to 'awaken' them just as quickly? In our own program, we might do something as crude as detecting when the user has been idle (no mouse motion or key press) for (say) four seconds: var idle_id:uint = setInterval(4000, pause_program); var current_movie_clip:MovieClip; var current_frame:uint; ... // on Mouse move or key press... clearInterval(idle_id); idle_id = setInterval(4000, pause_program); ... function pause_program():void { current_movie_clip = event.target as MovieClip; current_frame = current_movie_clip.currentFrame; MovieClip(root).gotoAndStop("program_pause_screen"); } (on the program pause screen) resume_button.addEventListener(MouseEvent.CLICK, resume_program); function resume_program(event:MouseEvent) { current_movie_clip.gotoAndPlay(current_frame); } If that's the right idea, what's the best way to detect that an application should be shelved? And, more importantly, is it possible for Flash Player to detect that some of its running programs are idle, and to similarly shelve them until the user performs an action to resume them? (Please feel free to answer as much or as little of the many questions I've posed.)


  • Marshalling polymorphic objects in JAX-WS

    - by pkchukiss
    I'm creating a JAX-WS type webservice, with operations that return an object WebServiceReply. The class WebServiceReply itself contains a field of type Object. The individual operations would populate that field with a few different data-types, depending on the operation. Publishing the WSDL (I'm using Netbeans 6.7), and getting a ASP.NET application to retrieve and parse the WSDL was fine, but when I tried to call an operation, I would receive the following exception: javax.xml.ws.WebServiceException: javax.xml.bind.MarshalException - with linked exception: [javax.xml.bind.JAXBException: class [LDataObject.Patient; nor any of its super class is known to this context.] How do I mark the annotations in the DataObject.Patient class, as well as the WebServiceReply class to get it to work? I haven't been able to fine a definitive resource on marshalling based upon annotations within the target classes either, so it would be great if anybody could point me to that too. WebServiceReply.java @XmlRootElement(name="WebServiceReply") public class WebServiceReply { private Object returnedObject; private String returnedType; private String message; private String errorMessage; .......... // Getters and setters follow } DataObject.Patient.java @XmlRootElement(name="Patient") public class Patient { private int uid; private Date versionDateTime; private String name; private String identityNumber; private List<Address> addressList; private List<ContactNumber> contactNumberList; private List<Appointment> appointmentList; private List<Case> caseList; } Solution (Thanks to Gregory Mostizky for his answer) I edited the WebServiceReply class so that all the possible return objects extend from a new class ReturnValueBase, and added the annotations using @XmlSeeAlso to ReturnValueBase. JAXB worked properly after that! Nonetheless, I'm still learning about JAXB marshalling in JAX-WS, so it would be great if anyone can still post any tutorial on this. Gregory: you might want to add-on to your answer that the return objects need to sub-class from ReturnValueBase. Thanks a lot for your help! I had been going bonkers over this problem for so long!


  • How can I include additional markup within a 'Content' inner property of an ASP.Net WebControl?

    - by GenericTypeTea
    I've searched the site and I cannot find a solution for my problem, so apologies if it's already been answered (I'm sure someone must have asked this before). I have written a jQuery Popup window that I've packaged up as a WebControl and IScriptControl. The last step is to be able to write the markup within the tags of my control. I've used the InnerProperty attribute a few times, but only for including lists of strongly typed classes. Here's my property on the WebControl: [PersistenceMode(PersistenceMode.InnerProperty)] [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)] public something??? Content { get { if (_content == null) { _content = new something???(); } return _content; } } private something??? _content; Here's the HTML Markup of what I'm after: <ctr:WebPopup runat="server" ID="win_Test" Hidden="false" Width="100px" Height="100px" Modal="true" WindowCaption="Test Window" CssClass="window"> <Content> <div style="display:none;"> <asp:Button runat="server" ID="Button1" OnClick="Button1_Click" /> </div> <%--Etc--%> <%--Etc--%> </Content> </ctr:WebPopup> Unfortunately I don't know what type my Content property should be. I basically need to replicate the UpdatePanel's ContentTemplate. EDIT: So the following allows a Template container to be automatically created, but no controls show up, what's wrong with what I'm doing? [PersistenceMode(PersistenceMode.InnerProperty)] [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)] public ITemplate Content { get { return _content; } set { _content = value; } } private ITemplate _content; EDIT2: Overriding the CreateChildControls allows the controls within the ITemplate to be rendered: protected override void CreateChildControls() { if (this.Content != null) { this.Controls.Clear(); this.Content.InstantiateIn(this); } base.CreateChildControls(); } Unfortunately I cannot now access the controls within the ITemplate from the codebehind file on the file. I.e. if I put a button within my mark as so: <ctr:WebPopup runat="server" ID="win_StatusFilter"> <Content> <asp:Button runat="server" ID="btn_Test" Text="Cannot access this from code behind?" /> </Content> </ctr:WebPopup> I then cannot access btn_Test from the code behind: protected void Page_Load(object sender, EventArgs e) { btn_Test.Text = "btn_Test is not present in Intellisense and is not accessible to the page. It does, however, render correctly."; }


  • jQuery: .toggle() doesn't work properly on two different elements.

    - by Marius
    Hello there, This is my markup: <table class="col1table" cellspacing="0" cellpadding="0"> <tr> <td><a class="tips_trigger" href="#"><img src="/img/design/icon_tips_venn.png" /></a></td> <td><a class="facebook_trigger" href="#"><img src="/img/design/icon_facebook.png" /></a></td> <td><a class="twitter_trigger" href="#"><img src="/img/design/icon_twitter.png" /></a></td> <td><a class="myspace_trigger" href="#"><img src="/img/design/icon_myspace.png" /></a></td> </tr> <tr> <td><a class="tips_trigger" href="#">TIPS EN VENN</a></td> <td><a class="facebook_trigger" href="#">FACEBOOK</a></td> <td><a class="twitter_trigger" href="#">TWITTER</a></td> <td><a class="myspace_trigger" href="#">MYSPACE</a></td> </tr> </table> This is the mark-up for a tool-tip: <div id="message_tips" class="toolTip">Lorem ipsum dolor sit amet.<br /><br /><br /><br /><br /><br />Lorem ipsum dolor sit amet.</div> This is my code to hide/unhide the tooltip for .tips_trigger (the tooltip has id "#message_tips"). Notice that there is one .tips_trigger on each row in the table, and there will be one tooltip per "..._trigger" class. $('.tips_trigger').toggle(function(event){ event.preventDefault(); $('#message_tips').css('display', 'block'); }, function(event){ $('#message_tips').css('display', 'none'); }); I have two problems: 1. Each of the .tips_trigger instances seems to run the script independently. What I mean by that is if I click tips_trigger in the first row, it displays the tool-tip. If I click tips_trigger in the second row straight after, it displays the tool-tip again. I have to click the exact same tips_trigger instance twice for it to hide it. How can I overcome this problem? 2. Each of the "..._trigger" classes will have a tool-tip, not just ".tips_trigger". Is there a way to alter my current script so that it works for multiple unhides/hides instead of writing one script per class? Kind regards, Marius


  • Google Bookmarks Accelerator for IE8

    - by MACHiNESMiTH
    To Editors: Sorry, I didn't realise the question thing till you pointed it out. Also I didn't know about the appengine tag; I'm assuming it was selected by mistake from that JS autosuggestion box (it usually happens with me, I run a P3), my apologies for that too. I've edited it now; if it doesn't work let me know. Hi all, I'm making (or at least trying to make) a Google Bookmarks accelerator for IE8, but I keep running into "Internet Explorer could not install this accelerator. There was a problem with the Accelerator's information." Does anyone know why this is happening? Here is the code I've made: <?xml version="1.0" encoding="UTF-8" ?> <os:openServiceDescription xmlns="http://www.microsoft.com/schemas/openservicedescription/1.0"> <os:homepageUrl>http://www.google.com/bookmarks</os:homepageUrl> <os:display> <os:description>Add this page to GoogleBookmarks</os:description> <os:name>Add to GoogleBookmarks</os:name> <os:icon>http://www.google.com/favicon.ico</os:icon> </os:display> <os:activity category="Bookmarks"> <os:activityAction context="document"> <os:execute method="get" action="http://www.google.com/bookmarks/mark?op=add&output=popup"> <os:parameter name="bkmk" value="{documentUrl}" /> <os:parameter name="title" value="{documentTitle}" /> <os:parameter name="annotation" value="{selection}" /> </os:execute> </os:activityAction> </os:activity> </os:openServiceDescription> These are the references I used: MSDN Developers Guide, MSDN Format Specification, Unofficial Google API (used as a lookup so far). And yes, I know that I cannot send document variables between schemes (HTTP - HTTPS, for example) or between servers in different security zones (Intranet - Internet, etc.). Also, if this is the wrong place, what are the recommended sites and/or forums for XML-related posts? (Not really questions, but more like full-blown, detailed, rant-like posts.) Thanks


  • Sending multiline message via sockets without closing the connection

    - by Yasir Arsanukaev
    Hello folks. Currently I have this code of my client-side Haskell application: import Network.Socket import Network.BSD import System.IO hiding (hPutStr, hPutStrLn, hGetLine, hGetContents) import System.IO.UTF8 connectserver :: HostName -- ^ Remote hostname, or localhost -> String -- ^ Port number or name -> IO Handle connectserver hostname port = withSocketsDo $ do -- withSocketsDo is required on Windows -- Look up the hostname and port. Either raises an exception -- or returns a nonempty list. First element in that list -- is supposed to be the best option. addrinfos <- getAddrInfo Nothing (Just hostname) (Just port) let serveraddr = head addrinfos -- Establish a socket for communication sock <- socket (addrFamily serveraddr) Stream defaultProtocol -- Mark the socket for keep-alive handling since it may be idle -- for long periods of time setSocketOption sock KeepAlive 1 -- Connect to server connect sock (addrAddress serveraddr) -- Make a Handle out of it for convenience h <- socketToHandle sock ReadWriteMode -- Were going to set buffering to LineBuffering and then -- explicitly call hFlush after each message, below, so that -- messages get logged immediately hSetBuffering h LineBuffering return h sendid :: Handle -> String -> IO String sendid h id = do hPutStr h id -- Make sure that we send data immediately hFlush h -- Retrieve results hGetLine h The code portions in connectserver are from this chapter of Real World Haskell book where they say: When dealing with TCP data, it's often convenient to convert a socket into a Haskell Handle. We do so here, and explicitly set the buffering – an important point for TCP communication. Next, we set up lazy reading from the socket's Handle. For each incoming line, we pass it to handle. After there is no more data – because the remote end has closed the socket – we output a message about that. Since hGetContents blocks until the server closes the socket on the other side, I used hGetLine instead. It satisfied me before I decided to implement multiline output to client. I wouldn't like the server to close a socket every time it finishes sending multiline text. The only simple idea I have at the moment is to count the number of linefeeds and stop reading lines after two subsequent linefeeds. Do you have any better suggestions? Thanks.
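
    Two common ways to frame a multiline reply without closing the socket are a sentinel (a blank line or a lone ".", as SMTP does) and a count or length prefix sent before the body. A minimal Python sketch of both ideas, purely to illustrate the framing; the real code would stay in Haskell and keep using hGetLine:

```python
import io

# Sentinel framing: read lines until a blank line marks the end of the reply.
def read_until_blank_line(readline):
    lines = []
    while True:
        line = readline().rstrip("\r\n")
        if line == "":              # empty line = end of this message
            return lines
        lines.append(line)

# Count-prefix framing: the first line says how many lines follow.
def read_counted(readline):
    count = int(readline())
    return [readline().rstrip("\r\n") for _ in range(count)]

sample = io.StringIO("first line\nsecond line\n\n")
assert read_until_blank_line(sample.readline) == ["first line", "second line"]
```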


  • Creating a simple templated control. Having issues...

    - by Jimock
    Hi, I'm trying to create a really simple templated control. I've never done it before, but I know a lot of my controls I have created in the past would have greatly benefited if I included templating ability - so I'm learning now. The problem I have is that my template is outputted on the page but my property value is not. So all I get is the static text which I include in my template. I must be doing something correctly because the control doesn't cause any errors, so it knows my public property exists. (e.g. if I try to use Container.ThisDoesntExist it throws an exception). I'd appreciate some help on this. I may be just being a complete muppet and missing something. Online tutorials on simple templated server controls seem few and far between, so if you know of one I'd like to know about it. A cut down version of my code is below. Many Thanks, James Here is my code for the control: [ParseChildren(true)] public class TemplatedControl : Control, INamingContainer { private TemplatedControlContainer theContainer; [TemplateContainer(typeof(TemplatedControlContainer)), PersistenceMode(PersistenceMode.InnerProperty)] public ITemplate ItemTemplate { get; set; } protected override void CreateChildControls() { Controls.Clear(); theContainer = new TemplatedControlContainer("Hello World"); this.ItemTemplate.InstantiateIn(theContainer); Controls.Add(theContainer); } } Here is my code for the container: [ToolboxItem(false)] public class TemplatedControlContainer : Control, INamingContainer { private string myString; public string MyString { get { return myString; } } internal TemplatedControlContainer(string mystr) { this.myString = mystr; } } Here is my mark up: <my:TemplatedControl runat="server"> <ItemTemplate> <div style="background-color: Black; color: White;"> Text Here: <%# Container.MyString %> </div> </ItemTemplate> </my:TemplatedControl>


  • Mercurial for Beginners: The Definitive Practical Guide

    - by Laz
    Inspired by Git for beginners: The definitive practical guide. This is a compilation of information on using Mercurial for beginners for practical use. Beginner - a programmer who has touched source control without understanding it very well. Practical - covering situations that the majority of users often encounter - creating a repository, branching, merging, pulling/pushing from/to a remote repository, etc. Notes: Explain how to get something done rather than how something is implemented. Deal with one question per answer. Answer clearly and as concisely as possible. Edit/extend an existing answer rather than create a new answer on the same topic. Please provide a link to the the Mercurial wiki or the HG Book for people who want to learn more. Questions: Installation/Setup How to install Mercurial? How to set up Mercurial? How do you create a new project/repository? How do you configure it to ignore files? Working with the code How do you get the latest code? How do you check out code? How do you commit changes? How do you see what's uncommitted, or the status of your current codebase? How do you destroy unwanted commits? How do you compare two revisions of a file, or your current file and a previous revision? How do you see the history of revisions to a file? How do you handle binary files (visio docs, for instance, or compiler environments)? How do you merge files changed at the "same time"? Tagging, branching, releases, baselines How do you 'mark' 'tag' or 'release' a particular set of revisions for a particular set of files so you can always pull that one later? How do you pull a particular 'release'? How do you branch? How do you merge branches? How do you merge parts of one branch into another branch? Other Good GUI/IDE plugin for Mercurial? Advantages/disadvantages? Any other common tasks a beginner should know? How do I interface with Subversion? Other Mercurial references Mercurial: The Definitive Guide Mercurial Wiki Meet Mercurial | Peepcode Screencast


  • Would someone mind giving suggestions for this new assembly language?

    - by Noctis Skytower
    Greetings! Last semester in college, my teacher in the Computer Languages class taught us the esoteric language named Whitespace. In the interest of learning the language better with a very busy schedule (midterms), I wrote an interpreter and assembler in Python. An assembly language was designed to facilitate writing programs easily, and a sample program was written with the given assembly mnemonics. Now that it is summer, a new project has begun with the objective being to rewrite the interpreter and assembler for Whitespace 0.3, with further developments coming afterwards. Since there is so much extra time than before to work on its design, you are presented here with an outline that provides a revised set of mnemonics for the assembly language. This post is marked as a wiki for their discussion. Have you ever had any experience with assembly languages in the past? Were there some instructions that you thought should have been renamed to something different? Did you find yourself thinking outside the box and with a different paradigm than in which the mnemonics were named? If you can answer yes to any of those questions, you are most welcome here. Subjective answers are appreciated! hold N Push the number onto the stack copy Duplicate the top item on the stack copy N Copy the nth item on the stack (given by the argument) onto the top of the stack swap Swap the top two items on the stack drop Discard the top item on the stack drop N Slide n items off the stack, keeping the top item add Addition sub Subtraction mul Multiplication div Integer Division mod Modulo save Store load Retrieve L: Mark a location in the program call L Call a subroutine goto L Jump unconditionally to a label if=0 L Jump to a label if the top of the stack is zero if<0 L Jump to a label if the top of the stack is negative return End a subroutine and transfer control back to the caller exit End the program print chr Output the character at the top of the stack print int Output the number at the top of the stack input chr Read a character and place it in the location given by the top of the stack input int Read a number and place it in the location given by the top of the stack Question: How would you redesign, rewrite, or rename the previous mnemonics and for what reasons?
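
    To make the stack semantics of the proposed mnemonics concrete, here is a toy Python dispatch loop for a handful of them (hold, copy, swap, add, print int). It is only an illustration of the suggested names, not the interpreter from the project:

```python
def run(program):
    # program: list of (mnemonic, arg) tuples using the proposed names.
    stack = []
    for op, arg in program:
        if op == "hold":            # push a number onto the stack
            stack.append(arg)
        elif op == "copy":          # duplicate the top item
            stack.append(stack[-1])
        elif op == "swap":          # swap the top two items
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op == "add":           # addition
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "print int":     # output the number at the top of the stack
            print(stack.pop())
    return stack

run([("hold", 2), ("hold", 40), ("add", None), ("print int", None)])  # prints 42
```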


  • iPhone noob - setting NSMutableDictionary entry inside Singleton?

    - by codemonkey
    Yet another iPhone/Objective-C noob question. I'm using a singleton to store app state information. I'm including the singleton in a Utilities class that holds it (and eventually other stuff). This utilities class is in turn included and used from various view controllers, etc. The utilities class is set up like this: // Utilities.h #import <Foundation/Foundation.h> @interface Utilities : NSObject { } + (id)GetAppState; - (id)GetAppDelegate; @end // Utilities.m #import "Utilities.h" #import "CHAPPAppDelegate.h" #import "AppState.h" @implementation Utilities CHAPPAppDelegate* GetAppDelegate() { return (CHAPPAppDelegate *)[UIApplication sharedApplication].delegate; } AppState* GetAppState() { return [GetAppDelegate() appState]; } @end ... and the AppState singleton looks like this: // AppState.h #import <Foundation/Foundation.h> @interface AppState : NSObject { NSMutableDictionary *challenge; NSString *challengeID; } @property (nonatomic, retain) NSMutableDictionary *challenge; @property (nonatomic, retain) NSString *challengeID; + (id)appState; @end // AppState.m #import "AppState.h" static AppState *neoAppState = nil; @implementation AppState @synthesize challengeID; @synthesize challenge; # pragma mark Singleton methods + (id)appState { @synchronized(self) { if (neoAppState == nil) [[self alloc] init]; } return neoAppState; } + (id)allocWithZone:(NSZone *)zone { @synchronized(self) { if (neoAppState == nil) { neoAppState = [super allocWithZone:zone]; return neoAppState; } } return nil; } - (id)copyWithZone:(NSZone *)zone { return self; } - (id)retain { return self; } - (unsigned)retainCount { return UINT_MAX; //denotes an object that cannot be released } - (void)release { // never release } - (id)init { if (self = [super init]) { challengeID = [[NSString alloc] initWithString:@"0"]; challenge = [NSMutableDictionary dictionary]; } return self; } - (void)dealloc { // should never be called, but just here for clarity [super dealloc]; } @end ... then, from a view controller I'm able to set the singleton's "challengeID" property like this: [GetAppState() setValue:@"wassup" forKey:@"challengeID"]; ... but when I try to set one of the "challenge" dictionary entry values like this: [[GetAppState() challenge] setObject:@"wassup" forKey:@"wassup"]; ... it fails giving me an "unrecognized selector sent..." error. I'm probably doing something really obviously dumb? Any insights/suggestions will be appreciated.


  • Rails Google Maps integration Javascript problem

    - by JZ
    I'm working on Rails 3.0.0.beta2, following Advanced Rails Recipes "Recipe #32, Mark locations on a Google Map" and I hit a road block: I do not see a google map. My @adds view uses @adds.to_json to connect the google maps api with my model. My database contains "latitude" "longitude", as floating points. And the entire project can be accessed at github. Can you see where I'm not connecting the to_json output with the javascript correctly? Can you see other glairing errors in my javascript? Thanks in advance! My application.js file: function initialize() { if (GBrowserIsCompatible() && typeof adds != 'undefined') { var map = new GMap2(document.getElementById("map")); map.setCenter(new GLatLng(37.4419, -122.1419), 13); map.addControl(new GLargeMapControl()); function createMarker(latlng, add) { var marker = new GMarker(latlng); var html="<strong>"+add.first_name+"</strong><br />"+add.address; GEvent.addListener(marker,"click", function() { map.openInfoWindowHtml(latlng, html); }); return marker; } var bounds = new GLatLngBounds; for (var i = 0; i < adds.length; i++) { var latlng=new GLatLng(adds[i].latitude,adds[i].longitude) bounds.extend(latlng); map.addOverlay(createMarker(latlng, adds[i])); } map.setCenter(bounds.getCenter(),map.getBoundsZoomLevel(bounds)); } } window.onload=initialize; window.onunload=GUnload; Layouts/adds.html.erb: <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;sensor=true_or_false&amp;key=ABQIAAAAeH4ThRuftWNHlwYdvcK1QBTJQa0g3IQ9GZqIMmInSLzwtGDKaBQvZChl_y5OHf0juslJRNx7TbxK3Q" type="text/javascript"></script> <% if @adds -%> <script type="text/javascript"> var maps = <%= @adds.to_json %>; </script> <% end -%>


  • Replace click() with document.ready() in jQuery

    - by bala3569
    I downloaded a jQuery effects example and all the effects appear only on click, but I want them to be executed on document.ready() and continue... <script type="text/javascript"> var ImgIdx = 2;//To mark which image will be selected next function PreloadImg(){ $.ImagePreload("images/im2.jpg"); $.ImagePreload("images/im3.jpg"); $.ImagePreload("images/im4.jpg"); $.ImagePreload("images/im5.jpg"); } $(document).ready(function(){ PreloadImg(); $(".SlashEff ul li").click(function(){ $(".Slash").ImageSwitch({Type:$(this).attr("rel"), NewImage:"images/im"+ImgIdx+".jpg", speed: 4000 }); ImgIdx++; if(ImgIdx>5) ImgIdx = 1; }); }); </script> and my markup: <div class="SlashEff"> <ul> <li class="TryFadeIn" rel="FadeIn">Fade in</li> <li class="TryFlyIn" rel="FlyIn">Fly in</li> <li class="TryFlyOut" rel="FlyOut">Fly out</li> <li class="TryFlipIn" rel="FlipIn">Flip in</li> <li class="TryFlipOut" rel="FlipOut">Flip out</li> <li class="TryScroll" rel="ScrollIn">Scroll in</li> <li class="TryScroll" rel="ScrollOut">Scroll out</li> <li class="TrySingleDoor" rel="SingleDoor">Single Door</li> <li class="TryDoubleDoor" rel="DoubleDoor">Double Door</li> </ul> </div> Here is the link: http://www.hieu.co.uk/blog/index.php/imageswitch/ I tried this: $(document).ready(function(){ PreloadImg(); $(".Slash").ImageSwitch({Type:$(this).attr("rel"), NewImage:"images/im"+ImgIdx+".jpg", speed: 4000 }); ImgIdx++; if(ImgIdx>5) ImgIdx = 1; }); but it gets executed only once.... I want to execute it every 5000ms... Is this possible?


  • How to split XML into header and items using Smooks?

    - by palto
    I have a xml file roughly like this: <batch> <header> <headerStuff /> </header> <contents> <timestamp /> <invoices> <invoice> <invoiceStuff /> </invoice> <!-- Insert 1000 invoice elements here --> </invoices> </contents> </batch> I would like to split that file to 1000 files with the same headerStuff and only one invoice. Smooks documentation is very proud of the possibilities of transformations, but unfortunately I don't want to do those. The only way I've figured how to do this is to repeat the whole structure in freemarker. But that feels like repeating the structure unnecessarily. The header has like 30 different tags so there would be lots of work involved also. What I currently have is this: <?xml version="1.0" encoding="UTF-8"?> <smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd" xmlns:calc="http://www.milyn.org/xsd/smooks/calc-1.1.xsd" xmlns:frag="http://www.milyn.org/xsd/smooks/fragment-routing-1.2.xsd" xmlns:file="http://www.milyn.org/xsd/smooks/file-routing-1.1.xsd"> <params> <param name="stream.filter.type">SAX</param> </params> <frag:serialize fragment="INVOICE" bindTo="invoiceBean" /> <calc:counter countOnElement="INVOICE" beanId="split_calc" start="1" /> <file:outputStream openOnElement="INVOICE" resourceName="invoiceSplitStream"> <file:fileNamePattern>invoice-${split_calc}.xml</file:fileNamePattern> <file:destinationDirectoryPattern>target/invoices</file:destinationDirectoryPattern> <file:highWaterMark mark="10"/> </file:outputStream> <resource-config selector="INVOICE"> <resource>org.milyn.routing.io.OutputStreamRouter</resource> <param name="beanId">invoiceBean</param> <param name="resourceName">invoiceSplitStream</param> <param name="visitAfter">true</param> </resource-config> </smooks-resource-list> That creates files for each invoice tag, but I don't know how to continue from there to get the header also in the file. EDIT: The solution has to use Smooks. We use it in an application as a generic splitter and just create different smooks configuration files for different types of input files.


  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation: To be a component, a class must implement the IComponent interface and provide a basic constructor that requires no parameters or a single parameter of type IContainer. This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.) Some alternatives we're considering: Service Locator Don't use dependency injection, instead use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context? Dependency Injection via properties Inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected. Inject dependencies with an Initialize method This is much like injection via properties but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler will assists you with errors when the list changes. Any idea what the best practice is here? How do you do it? edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification) so I don't think that "you shouldn't inject any services" is a good answer. edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection. It appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.
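
    As a language-neutral sketch of the "injection via properties with a default instance" alternative (the .NET designer constraint itself cannot be reproduced here), here is a Python illustration with invented SystemClock and ReportControl names: the component keeps a parameterless constructor, exposes the dependency as a settable property, and falls back to a sensible default so it still works when nothing is injected:

```python
import datetime

class SystemClock:
    def now(self):
        return datetime.datetime.now()

class ReportControl:
    """Component with a parameterless constructor, as a designer would require."""
    def __init__(self):
        self._clock = None

    @property
    def clock(self):
        # Fall back to a default instance so the component renders without injection.
        return self._clock if self._clock is not None else SystemClock()

    @clock.setter
    def clock(self, value):
        self._clock = value

    def render(self):
        return f"Report generated at {self.clock.now():%Y-%m-%d %H:%M}"

control = ReportControl()
control.clock = SystemClock()   # dependency injected via the property
print(control.render())
```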


  • Add a new row to an ASP.NET GridView using a button

    - by SARAVAN
    Hi, I am working in ASP.NET 2.0. I am a learner. I have a GridView which has a button in it. Please find the ASP markup below: <form id="form1" runat="server"> <div> <asp:GridView ID="myGridView" runat="server"> <Columns> <asp:TemplateField> <ItemTemplate> <asp:Button CommandName="AddARowBelow" Text="Add A Row Below" runat="server" /> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> </div> </form> Please find the code-behind below: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Data; using System.Web.UI.WebControls; namespace GridViewDemo { public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { DataTable dt = new DataTable("myTable"); dt.Columns.Add("col1"); dt.Columns.Add("col2"); dt.Columns.Add("col3"); dt.Rows.Add(1, 2, 3); dt.Rows.Add(1, 2, 3); dt.Rows.Add(1, 2, 3); dt.Rows.Add(1, 2, 3); dt.Rows.Add(1, 2, 3); myGridView.DataSource = dt; myGridView.DataBind(); } protected void myGridView_RowCommand(object sender, GridViewCommandEventArgs e) { } } } I was thinking that when I click the command button it would fire myGridView_RowCommand(), but instead it threw the following error: Invalid postback or callback argument. Event validation is enabled using <pages enableEventValidation="true"/> in configuration or <%@ Page EnableEventValidation="true" %> in a page. For security purposes, this feature verifies that arguments to postback or callback events originate from the server control that originally rendered them. If the data is valid and expected, use the ClientScriptManager.RegisterForEventValidation method in order to register the postback or callback data for validation. Can anyone let me know where I am going wrong?


  • Parsing language for both binary and character files

    - by Thorsten S.
    The problem: You have some data and your program needs specified input, for example strings which are numbers. You are searching for a way to transform the original data into the format you need. And the problem is: the source can be anything. It can be XML, property lists, or binary which contains the needed data deeply embedded in binary junk. Your output format may also vary: it can be number strings, floats, doubles.... You don't want to program. You want routines which give you commands capable of transforming the data into the form you wish. Surely it contains regular expressions, but it is very well designed and it offers capabilities which are sometimes much easier and more powerful. Something like a super-grep which you can access (!) as program routines, not only as a tool. It allows: joining/grouping/merging of results; inserting/deleting/finding/replacing; writing macros which allow you to execute a command chain repeatedly; meta-grouping (lists-tables-hypertables). Example (no, I am not looking for a solution to this, it is just an example): You want to read XML strings embedded in a binary file with variable-length records. Your tool reads the record length and deletes the junk surrounding your text. Now it splits open the XML and extracts the strings. Since they use Indian number glyphs and contain decimal commas instead of decimal points, your tool transforms them into ASCII and replaces commas with points. Now the results must be stored into matrices of variable length... etc. etc. I am searching for a good language / language design and, if possible, an implementation. Which design do you like, or, even if it does not fulfill the conditions, which one wouldn't you want to miss? EDIT: The question is whether a solution for the problem exists and, if yes, which implementations are available. You DO NOT implement your own sorting algorithm if Quicksort, Mergesort and Heapsort are available. You DO NOT invent your own text parsing method if you have regular expressions. You DO NOT invent your own 3D language for graphics if OpenGL/Direct3D is available. There are existing solutions, or at least papers describing the problem and giving suggestions. And there are people who may have worked on and experienced such problems and who can give ideas and suggestions. The idea that this problem is totally new and I should work out and implement it myself without background knowledge seems to me, I must admit, totally off the mark.
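
    To make the example concrete, here is a rough Python sketch of the kind of pipeline being described; the record layout (a 4-byte little-endian length prefix) and the <value> tag name are made up for illustration. It reads length-prefixed records, pulls the embedded XML strings out of the surrounding junk, and normalises decimal commas to points:

```python
import re
import struct

def records(blob: bytes):
    # Assumed layout: 4-byte little-endian length, then that many payload bytes.
    offset = 0
    while offset < len(blob):
        (length,) = struct.unpack_from("<I", blob, offset)
        offset += 4
        yield blob[offset:offset + length]
        offset += length

def extract_values(record: bytes):
    # Pull <value>...</value> strings out of binary junk,
    # then turn decimal commas into decimal points.
    text = record.decode("latin-1", errors="replace")
    return [m.group(1).replace(",", ".")
            for m in re.finditer(r"<value>([^<]*)</value>", text)]

payload = b"\x07junk\x00<value>3,14</value>more junk"
blob = struct.pack("<I", len(payload)) + payload
print([extract_values(r) for r in records(blob)])   # [['3.14']]
```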


  • How to manage multiple versions of the same record

    - by Darvis Lombardo
    I am doing short-term contract work for a company that is trying to implement a check-in/check-out type of workflow for their database records. Here's how it should work... 1) A user creates a new entity within the application. There are about 20 related tables that will be populated in addition to the main entity table. 2) Once the entity is created the user will mark it as the master. 3) Another user can make changes to the master only by "checking out" the entity. Multiple users can checkout the entity at the same time. 4) Once the user has made all the necessary changes to the entity, they put it in a "needs approval" status. 5) After an authorized user reviews the entity, they can promote it to master which will put the original record in a tombstoned status. The way they are currently accomplishing the "check out" is by duplicating the entity records in all the tables. The primary keys include EntityID + EntityDate, so they duplicate the entity records in all related tables with the same EntityID and an updated EntityDate and give it a status of "checked out". When the record is put into the next state (needs approval), the duplication occurs again. Eventually it will be promoted to master at which time the final record is marked as master and the original master is marked as dead. This design seems hideous to me, but I understand why they've done it. When someone looks up an entity from within the application, they need to see all current versions of that entity. This was a very straightforward way for making that happen. But the fact that they are representing the same entity multiple times within the same table(s) doesn't sit well with me, nor does the fact that they are duplicating EVERY piece of data rather than only storing deltas. I would be interested in hearing your reaction to the design, whether positive or negative. I would also be grateful for any resoures you can point me to that might be useful for seeing how someone else has implemented such a mechanism. Thanks! Darvis
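
    To illustrate the "store deltas instead of full copies" alternative the question hints at, here is a small Python sketch (the entity fields and workflow states are invented for the example): each checked-out version records only the fields that changed plus its workflow status, and the current view of a version is the master row with its delta folded in:

```python
import copy

master = {"EntityID": 42, "Name": "Widget", "Owner": "Ops", "Status": "master"}

# Each checked-out version stores only what changed, not 20+ duplicated rows.
versions = [
    {"EntityID": 42, "Version": 1, "Status": "checked out", "delta": {"Owner": "QA"}},
    {"EntityID": 42, "Version": 2, "Status": "needs approval", "delta": {"Name": "Widget v2"}},
]

def materialize(master_row, version):
    # Current view of a version = master values overlaid with its delta.
    row = copy.deepcopy(master_row)
    row.update(version["delta"])
    row["Status"] = version["Status"]
    return row

for v in versions:
    print(materialize(master, v))
```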


  • Generating custom-form documents from base-form plus XML?

    - by KlaymenDK
    Hi all, this is my first stack overflow, and it's a complex one. Sorry. My task is to generate custom documents from a basic template plus some XML without having a custom form design element for each case. Here's the whole picture: We are building a Lotus Notes (client, not web) application for world-wide application access control; the scope is something like 400.000 users being able to request access to any of 1000+ applications. Each application needs its own request form -- different number of approvers, various info required, that sort of thing. We simply can't have a thousand forms in a database (one per application), and anyway their maintenance really needs to be pushed from the developers to the application owners. So instead of custom forms, we'd like to create a generic "template" form that stores a block of basic fields, but then allows application owners to define another block of fields dynamically -- "I want a mandatory plain-text field named 'Name' here, and then a date field named 'Due' here that must be later than today's date, and then ...". I hope this makes sense (if not, think of it as a generic questionnaire application). I pretty much have the structure in place for designing the dynamic fields (form builder GUI - XML-encoded data - pre-rendered DXL for injecting into a form), including mark-up for field types, value options, and rudimentary field validation instructions. My problem is generating a document with this dynamic content injected at the proper location (without needing a custom form design element for each case). Doing the dynamic content via HTML is out. The Notes client web rendering is simply way too poor, and it would be quite a challenge to implement things like field validation instructions, date selectors, and name look-ups. DXL, on the other hand, would allow us to use native Notes fields and code. As a tech demo, I've managed to implement a custom form generator that injects the pre-rendered DXL for the dynamic content into a base form; but as I said, we don't want a ton of custom form design elements. I've tried to implement a way to create a document with the "store form in document" flag set, but once I've created the document from the base form, I can't get DXL access to the stored form design, and so I can't inject my dynamic content. I know this is not something Notes was ever intended to do. Has anyone ever tried something like it (and gotten away with it)? Thanks for reading this far. With a boatload of thanks in advance, Jan Gundtofte-Bruun


  • ColdFusion's cfquery failing silently

    - by johnthexiii
    I have a query that retrieves a large amount of data. <cfsetting requesttimeout="9999999" > <cfquery name="randomething" datasource="ds" timeout="9999999" > SELECT col1, col2 FROM table </cfquery> <cfdump var="#randomething.recordCount#" /> <!---should be about 5 million rows ---> I can successfully retrieve the data with Python's cx_Oracle, and using sys.getsizeof on the Python list returns 22621060, so about 21 megabytes. ColdFusion does not return an error on the page, and I can't find anything in any of the logs. Why is cfdump not showing the number of rows? Additional information: The reason for doing it this way is that I have about 8000 smaller queries to run against the randomething query. In other words, when I run those 8000 queries against the database it takes hours for that process to complete. I suspect this is because I am competing with several other database users, and the database is getting bogged down. The 8000 smaller queries are getting counts of col1 over ranges of col2: SELECT count(col1) as count WHERE col2 < 20121109 AND col2 > 20121108 Following Adam Cameron's suggestions, cflog suggests that the query isn't finishing. I tried changing the query's timeout both in the code and in the CFIDE administrator; apparently CF9 no longer respects the timeout attribute, and regardless of what I tried I couldn't get the query to time out. I also started playing around with the maxrows attribute to see if I could discern any information that way: when maxrows is set to 1300000 everything works fine; when maxrows is 1400000 or greater I get this error; when maxrows is 2000000 I observe my original problem. Update: So this isn't a limit of cfquery. By using QueryNew then looping over it to add data, I can get well past the 2 million mark without any problems. I also created a ThinClient datasource using the information in this question; I didn't observe any change in behavior. The messages on the database end are "SQL*Net message from client" and "SQL*Net more data to client". I just discovered that by using the thin client along with blockfactor1="100" I can retrieve more rows (appx. 3000000).
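
    As an aside on the 8000 range-count queries: if the ranges partition col2 by day, the counts can be computed with a single GROUP BY col2 on the database side, or in one pass over the data on the client. A rough Python illustration of the client-side version using cx_Oracle, which the asker already has working (the connection string and table name are placeholders):

```python
from collections import Counter
import cx_Oracle  # the same driver the asker already uses

conn = cx_Oracle.connect("user/password@host/service")  # placeholder DSN
cursor = conn.cursor()
cursor.arraysize = 10000  # fetch in large batches rather than row by row

# One query, then bucket client-side by the col2 value (a day-like integer here).
counts = Counter()
cursor.execute("SELECT col1, col2 FROM some_table")  # placeholder table name
for col1, col2 in cursor:
    if col1 is not None:          # COUNT(col1) ignores NULLs
        counts[col2] += 1

print(counts.most_common(5))
```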

