Search Results

Search found 1423 results on 57 pages for 'catgirl the crazy'.


  • SimpleDOM sortedXPath date sorting works on localhost but not on remote server.

    - by Imminent
    Here's what I'm trying to do: 1) Take a basic XML page (data.xml) 2) Load it with SimpleDOM 3) Use SimpleDOM's sortedXPath to sort all XML items by their pubDate 4) Display the sorted output. Here is the code I currently have. My code below outputs exactly what I need when I run it on my localhost (XAMPP w/ PHP 5.3), but on my remote server (which has at least PHP 5.0+) all is lost and a completely blank page is output. It will output the $xml array with print_r, though. Here is my code: <?php include('SimpleDOM.php'); $xml = simpledom_load_file('data.xml'); $dateformat = "D j M, g:ia"; /* print_r($xml); <- array will output on remote server if put here, but alas nothing else beyond this point */ /* Output first 5 items sorted by pubDate */ foreach($xml->channel->sortedXPath('item', 'pubDate', SORT_DESC) as $i => $item){ if ($i > 4){ break; } print "<p>This Weeks Deal:<strong> ".$item->title."</strong></p>"; print $item->description; print "<p>Date Posted:<strong> ".date($dateformat, strtotime($item->pubDate))."</strong></p>"; } ?> Like I said, this code seems to work great on my localhost... but not being able to run it on my remote server is making me crazy. Any ideas? Any help will be far beyond appreciated.
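
    A completely blank page on the remote box usually means a fatal error with display_errors switched off. A minimal diagnostic sketch (assuming you can edit the script on that server): turn error output on and confirm the SimpleDOM load actually succeeded before the sort runs:

        <?php
        // Surface whatever fatal error the remote server is currently hiding.
        error_reporting(E_ALL);
        ini_set('display_errors', '1');

        include('SimpleDOM.php');

        $xml = simpledom_load_file('data.xml');
        if (!$xml) {
            // Load/parse failure would otherwise fail silently further down.
            die('data.xml could not be loaded or parsed');
        }
        // ... continue with the sortedXPath() loop as before ...
        ?>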

    Read the article

  • Encoding / Error Correction Challenge

    - by emi1faber
    Is it mathematically feasible to encode an initial 4-byte message into 8 bytes such that, if one of the 8 bytes is completely dropped and another is wrong, the initial 4-byte message can still be reconstructed? There would be no way to retransmit, nor would the location of the dropped byte be known. If one uses Reed-Solomon error correction with 4 "parity" bytes tacked on to the end of the 4 "data" bytes, such as DDDDPPPP, and you end up with DDDEPPP (where E is an error) and a parity byte has been dropped, I don't believe there's a way to reconstruct the initial message (although correct me if I am wrong)... What about multiplying (or performing another mathematical operation on) the initial 4-byte message by a constant, then utilizing properties of an inverse mathematical operation to determine what byte was dropped? Or, impose some constraints on the structure of the message so every other byte needs to be odd and the others need to be even. Alternatively, instead of bytes, it could also be 4 decimal digits encoded in some fashion into 8 decimal digits, where errors could be detected and corrected under the same circumstances mentioned above - no retransmission, and the location of the dropped byte is not known. I'm looking for any crazy ideas anyone might have... Any ideas out there?
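
    As a sanity check on the Reed-Solomon idea (a sketch: it assumes the position of the dropped byte can be flagged as an erasure, i.e. the decoder knows which slot is missing): a Reed-Solomon code with n - k parity symbols decodes any mix of e unknown-position errors and s known-position erasures whenever

        2e + s <= n - k

    With n = 8 and k = 4, one wrong byte (e = 1) plus one erased byte (s = 1) gives 2*1 + 1 = 3 <= 4, so that case is within the code's power. If the dropped byte's position genuinely cannot be recovered (a deletion rather than an erasure), this bound does not apply directly and the problem becomes much harder.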

    Read the article

  • Why won't .attr('checked','checked') set?

    - by Jason
    I have the following snippet of code (I'm using jQuery 1.4.2): $.post('/Ads/GetAdStatsRow/', { 'ad_id': id }, function(result) { $('#edit_ads_form tbody').prepend(result); $(result).find('td.select-ad input').attr('checked','checked').click(); }); Assume that the post works correctly and returns a correct pre-built <tr> with some <td>s. Here's the weirdness: the $(result).find() line finds the correct input (which is a checkbox, as it's the only input in the cell) and runs the chained click() function correctly, but it REFUSES to set the box as checked, which I need to happen. Here's a crazy twist, too... when I get super specific and change the $(result).find() line to this (the id of the checkbox): $('#ad_' + id).click(); It checks the box, but doesn't run the click() function! If I set it to $('#ad_' + id).attr('checked','checked').click(); it runs the click function as though the box were checked, but the box remains unchecked, and if I do $('#ad_' + id).click().attr('checked','checked'); it does nothing at all. What in the world could be the matter with this? I'm running out of hair.... Thanks!
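
    One likely explanation (a sketch, not a confirmed diagnosis): $(result) parses the returned markup a second time, so the checkbox being set is a brand-new detached element rather than the one that was prepended into the table. Building the jQuery object once and reusing it keeps them in sync; this assumes result is the raw <tr> markup as described:

        $.post('/Ads/GetAdStatsRow/', { 'ad_id': id }, function(result) {
            // Parse the returned row exactly once...
            var $row = $(result);
            // ...insert those same elements into the table...
            $('#edit_ads_form tbody').prepend($row);
            // ...then check and click the checkbox that is actually in the document.
            $row.find('td.select-ad input').attr('checked', 'checked').click();
        });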

    Read the article

  • Using Word COM objects in .NET, InlineShapes not copied from template to document

    - by Keith
    Using .NET and the Word Interop I am programmatically creating a new Word doc from a template (.dot) file. There are a few ways to do this but I've chosen to use the AttachedTemplate property, as such: Dim oWord As New Word.Application() oWord.Visible = False Dim oDocuments As Word.Documents = oWord.Documents Dim oDoc As Word.Document = oDocuments.Add() oDoc.AttachedTemplate = sTemplatePath oDoc.UpdateStyles() (I'm choosing the AttachedTemplate means of doing this over the Documents.Add() method because of a memory leak issue I discovered when using Documents.Add() to open from templates.) This works fine EXCEPT when there is an image (represented as an InlineShape) in the template footer. In that case the image does not appear in the resulting document. Specifically the image should appear in the oDoc.Sections.Item(1).Footers.Item(WdHeaderFooterIndex.wdHeaderFooterPrimary).Range.InlineShapes collection but it does not. This is not a problem when using Documents.Add(), however as I said that method is not an option for me. Is there an extra step I have to take to get the images from the template? I already discovered that when using AttachedTemplate I have to explicitly call UpdateStyles() (as you can see in my code snippet) to apply the template styles to the document, whereas that is done automatically when using Documents.Add(). Or maybe there's some crazy workaround? Your help is much appreciated! :)

    Read the article

  • Zend php memory memory_limit

    - by RepDetec
    All, I am working on a Zend Framework based web application. We keep encountering out of memory errors on our dev server: Allowed memory size of XXXX bytes exhausted (tried YYYY... We keep increasing memory_limit in php.ini, but it is now up over 1000 megs. What is a normal memory_limit value? What are the usual suspects in PHP/Zend for running out of memory? We are using the Propel ORM. Thanks for all of the help! Update: I cannot reproduce this error in my Windows environment. If I set memory_limit low (say 16M), I get the same error, but the "tried to allocate" amount is always something reasonable. For example: (tried to allocate 13344 bytes). If I set the memory very low on the (Fedora 9) server (such as 16M), I get the same thing: consistent, reasonable out of memory errors. However, even when the memory limit is set very high on our server (128M, for example), maybe once a week I will get a crazy huge memory error: (tried to allocate 1846026201 bytes). I don't know if that might shed any more light onto what is going on. We are using Propel 1.5. It sounds like the actual release is going to come out later this month, but it doesn't look like anyone else is having this problem with it anyway, so I don't know that Propel is the problem. We are using Zend Server with PHP 5.2 on the Linux box, and 5.3 locally. Any more ideas? I have a ticket out to get Xdebug installed on the Linux box. Thanks, -rep

    Read the article

  • rails + sheevaplug = rails home development server and more

    - by microspino
    Hello, I'd like to build a "Rails Brick" using a Sheevaplug from Marvell (the OS is Ubuntu out of the box, but you can install other distributions on it). It will be a home server and a silent, low-cost ($99), low-energy development machine. I'd like to add Rails, RVM, lots of gems, git-based Heroku-like deployment, and Passenger + nginx. This way I could have a portable server with a complete development environment, and maybe I could find a hosting company where I can co-locate a grid of these devices, or I could sell it as a simple little server for offices of 10 or fewer users, with some centralized Rails services (I'm thinking of a CMS, a blog, a wiki, a calendar, or whatever this little jewel could handle). The USB port could make it a print server too, or a UMTS link to the web via Huawei-style USB UMTS keys. Can you give me some hints about the following: Is this project a crazy-close-to-failure idea? Why? Which gems would you include? Which Rails open source apps would you suggest? I already have an Excito Bubba server at home, and I saw the TonidoPlug, so it came to mind to build something similar but Rails-based (Bubba is PHP-based; TonidoPlug I don't know, but it does not seem to be a Rails thing).

    Read the article

  • decimal.TryParse() drops leading "1"

    - by Martin Harris
    Short and sweet version: on one machine out of around a hundred test machines, decimal.TryParse() is converting "1.01" to 0.01. Okay, this is going to sound crazy but bear with me... We have a client application that communicates with a webservice through JSON, and that service returns a decimal value as a string, so we store it as a string in our model object: [DataMember(Name = "value")] public string Value { get; set; } When we display that value on screen it is formatted to a specific number of decimal places. So the process we use is string -> decimal, then decimal -> string. The application is currently undergoing final testing and is running on more than 100 machines, where this all works fine. However, on one machine, if the decimal value has a leading '1' then it is replaced by a zero. I added simple logging to the code so it looks like this: Log("Original string value: {0}", value); decimal val; if (decimal.TryParse(value, out val)) { Log("Parsed decimal value: {0}", val); string output = val.ToString(format, CultureInfo.InvariantCulture.NumberFormat); Log("Formatted string value: {0}", output); return output; } On my machine - and every other client machine - the logfile output is: Original string value: 1.010000 Parsed decimal value: 1.010000 Formatted string value: 1.01 On the defective machine the output is: Original string value: 1.010000 Parsed decimal value: 0.010000 Formatted string value: 0.01 So it would appear that the decimal.TryParse method is at fault. Things we've tried: Uninstalling and reinstalling the client application; uninstalling and reinstalling .NET 3.5 SP1; comparing the defective machine's regional settings for numbers (using English (United Kingdom)) to those of a working machine - no differences. Has anyone seen anything like this or has any suggestions? I'm quickly running out of ideas... While I was typing this some more info came in: passing a string value of "10000" to Convert.ToInt32() returns 0, so that also seems to drop the leading 1.
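
    A narrow diagnostic sketch (the extra Log call and its wording are illustrative): parse the same string with the culture pinned to invariant, so the defective machine's number settings are taken out of the equation, and compare the two log lines:

        // decimal.TryParse has an overload that takes an explicit culture.
        decimal val2;
        bool ok = decimal.TryParse(
            value,
            System.Globalization.NumberStyles.Number,
            System.Globalization.CultureInfo.InvariantCulture,
            out val2);
        Log("Invariant-culture parse: ok={0}, value={1}", ok, val2);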

    Read the article

  • How to mmap the stack for the clone() system call on linux?

    - by Joseph Garvin
    The clone() system call on Linux takes a parameter pointing to the stack for the new created thread to use. The obvious way to do this is to simply malloc some space and pass that, but then you have to be sure you've malloc'd as much stack space as that thread will ever use (hard to predict). I remembered that when using pthreads I didn't have to do this, so I was curious what it did instead. I came across this site which explains, "The best solution, used by the Linux pthreads implementation, is to use mmap to allocate memory, with flags specifying a region of memory which is allocated as it is used. This way, memory is allocated for the stack as it is needed, and a segmentation violation will occur if the system is unable to allocate additional memory." The only context I've ever heard mmap used in is for mapping files into memory, and indeed reading the mmap man page it takes a file descriptor. How can this be used for allocating a stack of dynamic length to give to clone()? Is that site just crazy? ;) In either case, doesn't the kernel need to know how to find a free bunch of memory for a new stack anyway, since that's something it has to do all the time as the user launches new processes? Why does a stack pointer even need to be specified in the first place if the kernel can already figure this out?
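
    On the file-descriptor point: with MAP_ANONYMOUS there is no file at all (the fd argument is -1), and pages are only committed when first touched, which is what makes it attractive for stacks. A minimal sketch of the pattern (the stack size and flag choices are illustrative, not what glibc's pthreads actually uses):

        #define _GNU_SOURCE
        #include <sched.h>
        #include <signal.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/wait.h>

        #define STACK_SIZE (1024 * 1024)  /* reserves address space; pages appear on first touch */

        static int child_fn(void *arg)
        {
            printf("child says: %s\n", (const char *)arg);
            return 0;
        }

        int main(void)
        {
            /* Anonymous mapping: no file descriptor involved, zero-filled, demand-paged. */
            void *stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
            if (stack == MAP_FAILED) {
                perror("mmap");
                return 1;
            }

            /* The stack grows downward on x86, so pass clone() the top of the region. */
            int pid = clone(child_fn, (char *)stack + STACK_SIZE, CLONE_VM | SIGCHLD, "hi");
            if (pid == -1) {
                perror("clone");
                return 1;
            }

            waitpid(pid, NULL, 0);
            munmap(stack, STACK_SIZE);
            return 0;
        }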

    Read the article

  • LinqToSql - ChangeConflictException when submitting child and parent

    - by ari
    This problem drives me crazy. Here's the code using (BizNetDB db = new BizNetDB()) { var dbServiceCall = db.ServiceCalls.SingleOrDefault(x => x.ServiceCallID == serviceCallDetail.ServiceCallID); var dbServiceCallDetail = dbServiceCall.ServiceCallDetaills.SingleOrDefault(x=> x.ServiceCallDetailID == serviceCallDetail.ServiceCallDetailID); if (dbServiceCallDetail == null) { dbServiceCallDetail = new Data.ServiceCallDetaill(); dbServiceCall.ServiceCallDetaills.Add(dbServiceCallDetail); } dbServiceCallDetail.EndSession = serviceCallDetail.EndSession; dbServiceCallDetail.ExitTime = serviceCallDetail.ExitTime; dbServiceCallDetail.Solution = serviceCallDetail.Solution; dbServiceCallDetail.StartSession = serviceCallDetail.StartSession; serviceCallDetail.SessionMinutes = (serviceCallDetail.EndSession - serviceCallDetail.StartSession).Minutes; serviceCallDetail.DriveMinutes = serviceCallDetail.ExitTime.HasValue ? (serviceCallDetail.StartSession - serviceCallDetail.ExitTime.Value).Minutes : 0; var totalMinutes = (from d in db.ServiceCallDetaills .Where(x => x.ServiceCallID == serviceCallDetail.ServiceCallID && x.ServiceCallDetailID != dbServiceCallDetail.ServiceCallDetailID) group d by d.ServiceCallID into g select new { SessionMinutes = g.Sum(x => x.SessionMinutes), DriveMinutes = g.Sum(x => x.DriveMinutes) }).First(); dbServiceCall.SessionMinutes = totalMinutes.SessionMinutes + serviceCallDetail.SessionMinutes; dbServiceCall.DriveMinutes = totalMinutes.DriveMinutes + serviceCallDetail.DriveMinutes; try { db.SubmitChanges(); } catch (ChangeConflictException ex) { db.ChangeConflicts.ResolveAll(RefreshMode.OverwriteCurrentValues); db.SubmitChanges(); } The second Submit did solve the problem.. but I want to solve it from the root! when I disabled this lines (The parent changes): dbServiceCall.SessionMinutes = totalMinutes.SessionMinutes + serviceCallDetail.SessionMinutes; dbServiceCall.DriveMinutes = totalMinutes.DriveMinutes + serviceCallDetail.DriveMinutes; everithing is Ok. please help...

    Read the article

  • How do I set the default selection for NSTreeController at startup?

    - by John Gallagher
    The Background I've built a source list (similar to iTunes et al.) in my Cocoa app. I've got an NSOutlineView, with Value column bound to arrangedObjects.name key path of an NSTreeController. The NSTreeController accesses JGSourceListNode entities in a Core Data store. I have three subclasses of JGSourceListNode - JGProjectNode, JGGroupNode and JGFolderNode. I have selectedIndexPaths on NSTreeController bound to an NSArray called selectedIndexPaths in my App Delegate. On startup, I search for group nodes and if they're not found in the core data store I create them: if ([allGroupNodes count] == 0) { JGGroupNode *rootTrainingNode = [JGGroupNode insertInManagedObjectContext:context]; [rootTrainingNode setNodeName:@"TRAIN"]; JGProjectNode *childUntrainedNode = [JGProjectNode insertInManagedObjectContext:context]; [childUntrainedNode setParent:rootTrainingNode]; [childUntrainedNode setNodeName:@"Untrained"]; JGGroupNode *rootBrowsingNode = [JGGroupNode insertInManagedObjectContext:context]; [rootBrowsingNode setNodeName:@"BROWSE"]; JGFolderNode *childFolder = [JGFolderNode insertInManagedObjectContext:context]; [childFolder setNodeName:@"Folder"]; [childFolder setParent:rootBrowsingNode]; [context save:nil]; } What I Want When I start the app, I want both top level groups to be expanded and "Untrained" to be highlighted as shown: The Problem I put the following code in the applicationDidFinishLaunching: method of the app delegate: [sourceListOutlineView expandItem:[sourceListOutlineView itemAtRow:0]]; [sourceListOutlineView expandItem:[sourceListOutlineView itemAtRow:2]]; NSIndexPath *rootIndexPath = [NSIndexPath indexPathWithIndex:0]; NSIndexPath *childIndexPath = [rootIndexPath indexPathByAddingIndex:0]; [self setSelectedIndexPaths:[NSArray arrayWithObject:childIndexPath]]; but the outline view seems to not have been prepared yet, so this code does nothing. Ideally, eventually I want to save the last selection the user had made and restore this on a restart. The Question I'm sure it's possible using some crazy KVO to observe when the NSTreeController or NSOutlineView gets populated then expand the items and change the selection, but that feels clumsy and too much like a work around. How would I do this elegantly?

    Read the article

  • Dynamically inserting an image with JavaScript does not work on images that 302 redirect

    - by Samuel Clay
    I am dynamically inserting an image into an HTML document using jQuery. Here is the code: var image_url = "http://www.kottke.org/plus/misc/images/castro-pitching.jpg"; var $image = $('<img src="'+image_url+'" width="50" height="50" />'); $('body').prepend($image); Notice that the image http://www.kottke.org/plus/misc/images/castro-pitching.jpg is actually a 302 redirect to http://kottkegae.appspot.com/images/castro-pitching.jpg. If you were to go to the original image in your browser, it works fine. If you were to load an HTML page with that image in an img tag, it would load fine. However, if you were to insert it dynamically using jQuery (or JavaScript, for that matter), the browser will not show the 302'ed image. If you show the redirected image, it would work fine. var image_url = "http://kottkegae.appspot.com/images/castro-pitching.jpg"; var $image2 = $('<img src="'+image_url+'" width="50" height="50" />'); $('body').prepend($image2); That's crazy, right? What gives and how can I force the image to load when inserted dynamically?

    Read the article

  • emacs frustration with web development

    - by Tony Cruise
    I really liked the flexibility of Emacs, but it is really annoying to make it work. I want to use it for web development: HTML, CSS, JavaScript, PHP. I first tried emacs-starter-kit. It didn't include nXhtml. Also, the C-g key binding does not work (they call it a starter kit, but a basic key command does not work). I think it is mapped for git control. That's a frustration for a beginner. Then I replaced emacs-starter-kit with nXhtml. At least C-g is working. But code completion sucks, and M-tab does not work. I tried code completion from the nXhtml menu with no success. Also, nXhtml mode didn't colorize my file when CSS is mixed with HTML. Isn't it recommended for mixed HTML, CSS, and PHP files? So why doesn't it work? Why are the Emacs folks not aware of convention over configuration? Damn! Ship something that works! Please help me before I go crazy. I use Ubuntu 10.04 and emacs-snapshot-gtk 23.1.50-1. Please guide me step by step with your working dotfile URL. Even if I accept that I am a dummy, it is really annoying and frustrating to use Emacs.

    Read the article

  • Overriding MSBuildExtensionsPath in the MSBuild task is flaky

    - by Stuart Lange
    This is already cross-posted at MS Connect: https://connect.microsoft.com/VisualStudio/feedback/details/560451 I am attempting to override the property $(MSBuildExtensionsPath) when building a solution containing a C# web application project via msbuild. I am doing this because a web application csproj file imports the file "$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets". This file is installed by Visual Studio to the standard $(MSBuildExtensionsPath) location (C:\Program Files\MSBuild). I would like to eliminate the dependency on this file being installed on the machine (I would like to keep my build servers as "clean" as possible). In order to do this, I would like to include the Microsoft.WebApplication.targets in source control with my project, and then override $(MSBuildExtensionsPath) so that the csproj will import this included version of Microsoft.WebApplication.targets. This approach allows me to remove the dependency without requiring me to manually modify the web application csproj file. This scheme works fine when I build my solution file from the command line, supplying the custom value of $(MSBuildExtensionsPath) at the command line to msbuild via the /p flag. However, if I attempt to build the solution using the MSBuild task in a custom msbuild project file (overriding MSBuildExtensionsPath using the "Properties" attribute), it fails because the web app csproj file is attempting to import the Microsoft.WebApplication.targets from the "standard" Microsoft.WebApplication.targets location (C:\Program Files\MSBuild). Notably, if I run msbuild using the "Exec" task in my custom project file, it works. Even more notably, the FIRST time I run the build using the "MSBuild" task AFTER I have run the build using the "EXEC" task (or directly from the command line), the build works. Has anyone seen behavior like this before? Am I crazy? Is anyone aware of the root cause of this problem, a possible workaround, or whether this is a legitimate bug in MSBuild?

    Read the article

  • Using Regex, how can I remove certain characters from inside angle-brackets, leaving the characters

    - by Iain Fraser
    Edit: To be clear, please understand that I am not using Regex to parse the html, that's crazy talk! I'm simply wanting to clean up a messy string of html so it will parse Edit #2: I should also point out that the control character I'm using is a special unicode character - it's not something that would ever be used in a proper tag under any normal circumstances Suppose I have a string of html that contains a bunch of control characters and I want to remove the control characters from inside tags only, leaving the characters outside the tags alone. For example Here the control character is the numeral "1". Input The quick 1<strong>orange</strong> lemming <sp11a1n 1class1='jumpe111r'11>jumps over</span> 1the idle 1frog Desired Output The quick 1<strong>orange</strong> lemming <span class='jumper'>jumps over</span> 1the idle 1frog So far I can match tags which contain the control character but I can't remove them in one regex. I guess I could perform another regex on my matches, but I'd really like to know if there's a better way. My regex Bear in mind this one only matches tags which contain the control character. <(([^>])*?`([^>])*?)*?> Thanks very much for your time and consideration. Iain Fraser
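
    A small sketch of doing it in a single pass (Python purely for illustration; the stand-in control character "1" from the example would be swapped for the real unicode character): match each tag and rewrite only the matched text, leaving everything outside the angle brackets alone:

        import re

        CONTROL = "1"  # stand-in for the real (unicode) control character

        def strip_control_in_tags(html):
            # Only text between '<' and '>' is rewritten; the rest passes through untouched.
            return re.sub(r"<[^>]*>",
                          lambda m: m.group(0).replace(CONTROL, ""),
                          html)

        text = "The quick 1<sp11a1n 1class1='jumpe111r'11>jumps over</span> 1the idle 1frog"
        print(strip_control_in_tags(text))
        # The quick 1<span class='jumper'>jumps over</span> 1the idle 1frog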

    Read the article

  • C# LINQ to XML missing space character.

    - by Fossaw
    I write an XML file "by hand" (i.e. not with LINQ to XML), which sometimes includes an open/close tag containing a single space character. Upon viewing the resulting file, all appears correct, example below... <Item> <ItemNumber>3</ItemNumber> <English> </English> <Translation>Ignore this one. Do not remove.</Translation> </Item> ... the reasons for doing this are various and irrelevant; it is done. Later, I use a C# program with LINQ to XML to read the file back and extract the record... XElement X_EnglishE = null; // This is CRAZY foreach (XElement i in Records) { X_EnglishE = i.Element("English"); // There is only one damned record! } string X_English = X_EnglishE.ToString(); ... and test to make sure it is unchanged from the database record. I detect a change when processing Items where the field had the single space character... +E+ Text[3] English source has been altered: Was: >>> <<< Now: >>><<< ... the >>> and <<< parts I added to see what was happening (hard to see space characters). I have fiddled around with this but can't see why this is so. It is not absolutely critical, as the field is not used (yet), but I cannot trust C# or LINQ or whatever is doing this if I do not understand why it is so. So what is doing that, and why?
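
    One thing worth ruling out (a sketch; the file name "items.xml" is assumed): LINQ to XML discards insignificant whitespace when it loads a document, so a whitespace-only element comes back empty unless the load explicitly preserves it:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class WhitespaceCheck
        {
            static void Main()
            {
                // LoadOptions.PreserveWhitespace keeps the lone space inside <English>.
                XDocument doc = XDocument.Load("items.xml", LoadOptions.PreserveWhitespace);
                XElement english = doc.Descendants("English").First();
                Console.WriteLine(">>>" + english.Value + "<<<");  // prints >>> <<<
            }
        }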

    Read the article

  • Receiving DB update events in .NET from SQLite

    - by Dan Tao
    I've recently discovered the awesomeness of SQLite, specifically the .NET wrapper for SQLite at http://sqlite.phxsoftware.com/. Now, suppose I'm developing software that will be running on multiple machines on the same network. Nothing crazy, probably only 5 or 6 machines. And each of these instances of the software will be accessing an SQLite database stored in a file in a shared directory (is this a bad idea? If so, tell me!). Is there a way for each instance of the app to be notified if one instance updates the database file? One obvious way would be to use the FileSystemWatcher class, read the entire database into a DataSet, and then ... you know ... enumerate through the entire thing to see what's new ... but yeah, that seems pretty idiotic, actually. Is there such a thing as a provider of SQLite updates? Does this even make sense as a question? I'm also pretty much a newbie when it comes to ADO.NET, so I might be approaching the problem from the entirely wrong angle.

    Read the article

  • d:DesignData issue, Visual Studio 2010 can't build after adding sample design data with Expression Blend

    - by Valko
    Hi, VS 2010 solution and Silverlight project builds fine, then: I open MyView.xaml view in Expression Blend 4 Add sample data from class (I use my class defined in the same project) after I add new sample design data with Expression blend 4, everything looks fine, you see the added sample data in the EB 4 fine, you also see the data in VS 2010 designer too. Close the EB 4, and next VS 2010 build is giving me this errors: Error 7 XAML Namespace http://schemas.microsoft.com/expression/blend/2008 is not resolved. C:\Code\source\...myview.xaml and: Error 12 Object reference not set to an instance of an object. ... TestSampleData.xaml when I open the TestSampleData.xaml I see that namespace for my class used to define sample data is not recognized. However this namespace and the class itself exist in the same project! If I remove the design data from the MyView.xaml: d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}" it builds fine and the namespace in TestSampleData.xaml is recognized this time?? and then if add: d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}" I again see in the VS 2010 designer sample data, but the next build fails and again I see studio cant find the namespace in my TestSampleData.xaml containing sample data. That cycle is driving me crazy. Am I missing something here, is it not possible to have your class defining sample design data in the same project you have the MyView.xaml view?? cheers Valko

    Read the article

  • Problem using custom HttpHandler to process requests for both .aspx and non-extension pages in IIS7

    - by Noel
    I am trying to process both ".aspx" and non-extension page requests (i.e. both contact.aspx and /contact/) using a custom HttpHandler in IIS7. My handler works just fine in either one case or the other, but as soon as I try to process both cases, it only works for one. Please see the handlers snippet from my web.config below. If I keep only the mapping to "*.aspx" then all .aspx requests are processed correctly, but obviously extensionless requests won't work: <add name="AllPages.ASPX" path="*.aspx" verb="*" type="Test.PageHandlerFactory, Test" preCondition="" /> If I change the mapping to "*" then all extensionless requests are processed correctly, but ".aspx" requests that should still be handled by this handler stop working. Note that I added the StaticFiles entry in order to process files that are on disk like images, CSS, JS, etc. <add name="WebResource" path="WebResource.axd" verb="GET" type="System.Web.Handlers.AssemblyResourceLoader" /> <add name="StaticFiles" verb="GET,HEAD" path="*.*" type="System.Web.StaticFileHandler" resourceType="File" /> <add name="AllPages" path="*" verb="*" type="Test.PageHandlerFactory, Test" preCondition="" /> The crazy thing is that when I load an ".aspx" request (with the 2nd configuration shown) IIS7 gives a 404 not found error. The error also says that the request is processed by the StaticFiles handler. But I made sure to add resourceType="File" to the StaticFileHandler in order to avoid this. According to MS this means the request is only for "physical files on disk". Am I misreading/interpreting the "on disk" part? My .aspx file isn't on disk; that's why I want to use the handler in the first place.
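
    IIS evaluates the <handlers> collection in order and hands the request to the first matching entry, so one thing worth trying (a sketch of the ordering only, not a verified fix): keep both mappings to the custom handler, put the *.aspx one ahead of the static-file entry, and leave the catch-all last:

        <handlers>
          <add name="WebResource" path="WebResource.axd" verb="GET"
               type="System.Web.Handlers.AssemblyResourceLoader" />
          <!-- .aspx reaches the custom handler before the static-file entry can claim it -->
          <add name="AllPages.ASPX" path="*.aspx" verb="*"
               type="Test.PageHandlerFactory, Test" preCondition="" />
          <add name="StaticFiles" verb="GET,HEAD" path="*.*"
               type="System.Web.StaticFileHandler" resourceType="File" />
          <!-- extensionless URLs fall through to the catch-all -->
          <add name="AllPages" path="*" verb="*"
               type="Test.PageHandlerFactory, Test" preCondition="" />
        </handlers>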

    Read the article

  • Make seems to think a prerequisite is an intermediate file, removes it

    - by James
    For starters, this exercise in GNU make was admittedly just that: an exercise rather than a practicality, since a simple bash script would have sufficed. However, it brought up interesting behavior I don't quite understand. I've written a seemingly simple Makefile to handle generation of SSL key/cert pairs as necessary for MySQL. My goal was for make <name> to result in <name>-key.pem, <name>-cert.pem, and any other necessary files (specifically, the CA pair if any of it is missing or needs updating, which leads into another interesting follow-up exercise of handling reverse deps to reissue any certs that had been signed by a missing/updated CA cert). After executing all rules as expected, make seems to be too aggressive at identifying intermediate files for removal; it removes a file I thought would be "safe" since it should have been generated as a prereq to the main rule I'm invoking. (Humbly translated, I likely have misinterpreted make's documented behavior to suit my expectation, but don't understand how. ;-) Edited (thanks, Chris!): Adding %-cert.pem to .PRECIOUS does, of course, prevent the deletion. (I had been using the wrong syntax.) Makefile: OPENSSL = /usr/bin/openssl # Corrected, thanks Chris! .PHONY: clean default: ca clean: rm -I *.pem %: %-key.pem %-cert.pem @# Placeholder (to make this implicit rule create a rule and not cancel one) Makefile: @# Prevent the catch-all from matching Makefile ca-cert.pem: ca-key.pem $(OPENSSL) req -new -x509 -nodes -days 1000 -key ca-key.pem > $@ %-key.pem: $(OPENSSL) genrsa 2048 > $@ %-csr.pem: %-key.pem $(OPENSSL) req -new -days 1000 -nodes -key $< > $@ %-cert.pem: %-csr.pem ca-cert.pem ca-key.pem $(OPENSSL) x509 -req -in $< -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > $@ Output: $ make host1 /usr/bin/openssl genrsa 2048 > ca-key.pem /usr/bin/openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem /usr/bin/openssl genrsa 2048 > host1-key.pem /usr/bin/openssl req -new -days 1000 -nodes -key host1-key.pem > host1-csr.pem /usr/bin/openssl x509 -req -in host1-csr.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > host1-cert.pem rm host1-csr.pem host1-cert.pem This is driving me crazy, and I'll happily try any suggestions and post results. If I'm just totally noobing out on this one, feel free to jibe away. You can't possibly hurt my feelings. :)
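
    For the chained pattern rules specifically, a minimal sketch of the fix hinted at in the edit above: list the pattern targets under .PRECIOUS (or declare a bare .SECONDARY) so make's automatic intermediate-file cleanup leaves the generated .pem files alone:

        # Keep chained pattern-rule outputs instead of treating them as intermediates.
        .PRECIOUS: %-key.pem %-csr.pem %-cert.pem

        # Or, more bluntly, treat every target as secondary (never auto-deleted):
        # .SECONDARY: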

    Read the article

  • Ldap query returns null result when deployed.

    - by Trey Carroll
    I'm using a very simple Ldap query in my asp.net mvc 2.0 site: String ldapPath = ConfigReader.LdapPath; String emailAddress = null; try { DirectorySearcher search = new DirectorySearcher(ConfigReader.LdapPath); search.Filter = String.Format("(&(objectClass=user)(objectCategory=person)(objectSid={0})) ", securityIdentifierValue); // add the mail property to the list of props to retrieve search.PropertiesToLoad.Add("mail"); var result = search.FindOne(); if (result == null) { throw new Exception("Ldap Query with filter:" + search.Filter.ToString() + " returned a null value (no match found)"); } else { emailAddress = result.Properties["mail"][0].ToString(); } } catch (ArgumentOutOfRangeException aoorEx) { throw new Exception( "The query could not find an email for this user."); } catch (Exception ex) { //_log.Error(string.Format("======!!!!!! ERROR ERROR ERROR !!!!! in LdapLookupUtil.cs getEmailFromLdap Exception: {0}", ex)); throw ex; } return emailAddress; It works fine on my localhost machine. It works fine when I run it in VS2010 on the server. It always returns a null result when deployed. Here is my web.config: Asp.Net Configuration option in Visual Studio. A full list of settings and comments can be found in machine.config.comments usually located in \Windows\Microsoft.Net\Framework\v2.x\Config -- section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. -- <!-- -- section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. -- I'm running it under the default app pool. Does anybody see the problem? This is driving me crazy!
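
    A common cause of exactly this symptom (works under Visual Studio, null once deployed) is that the IIS application pool identity has no rights to query the directory. A hedged sketch, where ConfigReader.LdapUser and ConfigReader.LdapPassword are assumed settings that do not exist in the original code: bind with explicit credentials instead of the pool identity:

        // using System.DirectoryServices;
        // ConfigReader.LdapUser / LdapPassword are hypothetical config values.
        DirectoryEntry root = new DirectoryEntry(
            ConfigReader.LdapPath, ConfigReader.LdapUser, ConfigReader.LdapPassword);

        DirectorySearcher search = new DirectorySearcher(root);
        search.Filter = String.Format(
            "(&(objectClass=user)(objectCategory=person)(objectSid={0}))",
            securityIdentifierValue);
        search.PropertiesToLoad.Add("mail");
        SearchResult result = search.FindOne();  // null here still means "no match or no rights"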

    Read the article

  • Installing and using acts-as-taggable-on

    - by seaneshbaugh
    This is going to be a really dumb question, I just know it, but I'm going to ask anyway because it's driving me crazy. How do I get acts-as-taggable-on to work? I installed it as a gem with gem install acts-as-taggable-on because I can't ever seem to get installing plugins to work, but that's a whole other batch of questions that are all probably really dumb. Anyway, no problems there, it installed correctly. I did ruby script/generate acts_as_taggable_on_migration and rake db:migrate; again, no problems. I added acts_as_taggable to the model I want to use tags with, started up the server, and then loaded the index for the model just to see if what I've got so far is working, and got the following error: undefined local variable or method `acts_as_taggable' for #. I figured that just means I need to add something like require 'acts-as-taggable-on' to my model's file, because that's typically what's necessary for gems. So I did that, hit refresh, and got uninitialized constant ActiveRecord::VERSION. I'm not even going to pretend to begin to know what that means. Did I go wrong somewhere, or is there something else I need to do? The installation instructions seem to assume you generally know what you're doing and don't even begin to explain what to do when things go wrong.
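
    For a Rails 2.x app of that era, the step that usually goes missing after gem install is telling Rails to load the gem. A sketch (file paths and the Post model are assumed for illustration):

        # config/environment.rb, inside the Rails::Initializer.run do |config| block
        config.gem "acts-as-taggable-on"

        # app/models/post.rb (whichever model you want tagged)
        class Post < ActiveRecord::Base
          acts_as_taggable_on :tags
        end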

    Read the article

  • What Happens to Commit Logs on a Branch After Merging?

    - by Levi Hackwith
    Scenario: A programmer creates a branch for project 'foo' called 'my_foo' at revision 5. The programmer makes multiple changes to multiple files as he works on the 'my_foo' feature. At the end of each major step, say adding several new functions to a class, the programmer does an svn commit on the appropriate files, therefore committing them to the branch. Several weeks and many commits later (each commit having a log message describing what he did), the programmer merges the branch back into the trunk: #Assume the following is being done from inside a working copy of the trunk: svn merge -r 5:15 file:///path/to/repo/branches/my_foo Huzzah! He's merged all his changes back into trunk! There's much rejoicing and drinking of Mountain Dew. Now let's say another programmer comes along a week later and updates their working copy from revision 5 to revision 15. "Wow", they say. "I wonder what's changed since revision 5". The programmer then does an svn log on their working copy and they get something like this: ------------------------------------------------------------------------ r15 | programmer1 | 2010-03-20 21:27:04 -0400 (Sat, 20 Mar 2010) | 1 line Merging Version 2.0 Changes into trunk ------------------------------------------------------------------------ r5 | programmer2 | 2010-02-15 10:59:55 -0500 (Mon, 15 Feb 2010) | 1 line Added assets/images/tumblr_icon.png to trunk What the heck happened to all the notes that the other programmer put in with all of his commits on his branch? Do those not get pulled over during a merge? Am I crazy or just forgetting something?
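
    The branch's individual log messages are not lost: they stay on the branch, and with Subversion 1.5+ mergeinfo they can be folded into trunk's history on demand, for example:

        # Show trunk history including the revisions merged in from branches
        svn log -g        # long form: --use-merge-history

        # Or read the branch's own history directly
        svn log file:///path/to/repo/branches/my_foo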

    Read the article

  • IE 6 bug? width: 987

    - by William
    I'm having a very weird issue in IE6. If I set a div container do the width of 987px it adds a spacing between the container and an absolute positioned element inside. Any other width works fine, it's just 987. Is there something I'm not seeing? Code to reproduce: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>IE6 Issue</title> <style type="text/css"> body { background-color: #000; } #c1 { width: 987px; background-color: #fff; } #c2 { border: #f00 1px solid; zoom: 1; position: relative; } #tl, #tr { background-color: #000; font-size: 0; line-height: 0; position: absolute; top: 0px; left: 0px; width: 4px; height: 6px; } #tr { left: auto; right: 0; } </style> </head> <body> <div id="c1"><div id="c2"><div id="tl"></div><div id="tr"></div>a</div> </div> </body> This is crazy.

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in php projects. There are a number of Rails-style migration projects available for php. http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches. General problem with this kind of system: When development branch A is checked out, and you want to check out branch B instead, B may have new migration files. This is fine, migrating to newer content is straight forward. If branch A has newer migration files, you would need to migrate downwards to the nearest shared patch. If branch A and B have significantly different code bases, you may have to migrate down even further. This may mean: Check out B, determine shared patch number, check out A, migrate downwards to this patch. This must be done from A since the actual applied patches are not available in B. Then, checkout branch B, and migrate to newest B patch. Reverse process again when going from B to A. Proposed system: When migrating upwards, instead of just storing the patch version, serialize the whole patch in database for later use, though I'd probably only need the down() method. When changing branches, compare patches that have been run to patches that are available in the destination branch. Determine nearest shared patch (or oldest difference, maybe) between db table of run patches and patches in destination branch by ID or hash. Could also look for new or missing patches that are buried under a number of shared patches between the two branches. Automatically merge down to the nearest shared patch, using the db table stored down() methods, and then merge up to the branche's latest patch. My question is: Is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
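
    A sketch of the bookkeeping table the proposal implies (names and types are illustrative; MySQL assumed since the linked migration project targets it): store each applied patch's identifier plus its serialized down() body, so any branch can reverse patches it has never seen:

        CREATE TABLE applied_patches (
            patch_id   VARCHAR(64) NOT NULL PRIMARY KEY,  -- timestamp or content hash
            applied_at DATETIME    NOT NULL,
            down_code  TEXT        NOT NULL               -- serialized down() method
        );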

    Read the article

  • Distributed development systems

    - by Nathan Adams
    I am interested in a system that allows for distributed development with an authentication piece. What do I mean by that? OK, so let's take SVN. SVN keeps track of revisions and doesn't care who submits; as long as you have the right to submit, you can submit, really, to any part of the repository. Where does my system come into play? Being able to make access control granular and give a Stack Overflow-like feel to the environment. In the system I am describing we have 4 users: Bob, Alice, Dan, and Joe. Bob is a project manager, Alice and Dan are programmers under Bob, and Joe is a random programmer on the internet who wants to help. Ideally in this system, Bob can commit any changes and won't require approval. Alice and Dan can commit to their branches, or a branch, but a commit to the trunk would need approval by Bob. This is where Joe comes in: he wants to help; however, you just don't want to give him the keys to the kingdom just yet, so to speak, so in my system you would set up a "low user" account. Any commits that Joe makes would need to be approved by Dan, Alice, or both. However, in the system, Joe can build up "karma", where after so many approved commits it would only need approval by one of the programmers, and then eventually no approval would be necessary. Does that make sense, and do you know if a system like that exists? Or am I just crazy to even think such a system/environment would be possible?

    Read the article
