Search Results

Search found 341 results on 14 pages for 'overwriting'.

Page 11 of 14

  • Migrating ASP.NET MVC 1.0 applications to ASP.NET MVC 2 RTM

    - by Eilon
    Note: ASP.NET MVC 2 RTM isn't yet released! But this tool will help you get your ASP.NET MVC 1.0 applications ready for when it is! I have updated the MVC App Converter to convert projects from ASP.NET MVC 1.0 to ASP.NET MVC 2 RTM. This should be the last major change to the MVC App Converter, previews of which I released over the past several months.

    Download
    The app is a single executable: Download MvcAppConverter-MVC2RTM.zip (255 KB).

    Usage
    The only requirement for this tool is that you have .NET Framework 3.5 SP1 on the machine. You do not need to have Visual Studio or ASP.NET MVC installed (unless you want to open your project!). Even though the tool performs an automatic backup of your solution, it is recommended that you perform a manual backup of your solution as well.

    To convert an ASP.NET MVC 1.0 project built with Visual Studio 2008 to an ASP.NET MVC 2 project in Visual Studio 2008, perform these steps:
      1. Launch the converter
      2. Select the solution
      3. Click the "Convert" button

    To convert an ASP.NET MVC 1.0 project built with Visual Studio 2008 to an ASP.NET MVC 2 project in Visual Studio 2010, either:
      - Wait until Visual Studio 2010 is released (next month!); it will have a built-in version of this tool that runs automatically when you open an ASP.NET MVC 1.0 project
      - Or perform the above steps, then open the project in Visual Studio 2010 and it will perform the remaining conversion steps

    What it can do
      - Open ASP.NET MVC 1.0 projects from Visual Studio 2008 (no other versions of ASP.NET MVC or Visual Studio are supported)
      - Create a full backup of your solution's folder
      - For every VB or C# project that has a reference to System.Web.Mvc.dll (this includes ASP.NET MVC web application projects as well as ASP.NET MVC test projects):
          - Update references to ASP.NET MVC 2
          - Add a reference to System.ComponentModel.DataAnnotations 3.5 (if not already present)
      - For every VB or C# ASP.NET MVC Web Application:
          - Change the project type to an ASP.NET MVC 2 project
          - Update the root ~/web.config references to ASP.NET MVC 2
          - Update the root ~/web.config to have a binding redirect from ASP.NET MVC 1.0 to ASP.NET MVC 2
          - Update the ~/Views/web.config references to ASP.NET MVC 2
          - Add or update the JavaScript files (add jQuery, add jQuery.Validate, add Microsoft AJAX, add/update Microsoft MVC AJAX, add Microsoft MVC Validation adapter)
      - Unknown project types, or project types that have nothing to do with ASP.NET MVC, will not be updated

    What it can't do
      - It cannot convert projects directly to Visual Studio 2010 or to .NET Framework 4.
      - It can have issues if your solution contains projects that are not located under the solution directory.
      - If you are using a source control system it might have problems overwriting files. It is recommended that before converting you check out all files from the source control system.
      - It cannot change code in the application that might need to be changed due to breaking changes between ASP.NET MVC 1.0 and ASP.NET MVC 2.

    Feedback, Please!
    If you need to convert a project to ASP.NET MVC 2, please try out this application and hopefully you're good to go. If you spot any bugs or features that don't work, leave a comment here and I will try to address these issues in an updated release.

  • Editing files without race conditions?

    - by user2569445
    I have a CSV file that needs to be edited by multiple processes at the same time. My question is: how can I do this without introducing race conditions?

    It's easy to write to the end of the file without race conditions by open(2)ing it in "a" (O_APPEND) mode and simply writing to it. Things get more difficult when removing lines from the file. The easiest solution is to read the file into memory, make changes to it, and overwrite it back to the file. If another process writes to it after it is in memory, however, that new data will be lost upon overwriting. To further complicate matters, my platform does not support POSIX record locks, checking for file existence is a race condition waiting to happen, rename(2) replaces the destination file if it exists instead of failing, and editing files in-place leaves leftover bytes behind unless the remaining bytes are shifted towards the beginning of the file.

    My idea for removing a line is this (in pseudocode):

        filename = "/home/user/somefile";
        file = open(filename, "r");
        tmp = open(filename+".tmp", "ax") || die("could not create tmp file");  // "a" is O_APPEND, "x" is O_EXCL|O_CREAT
        while(write(tmp, read(file)));  // copy $file to $file+".tmp"
        close(file);
        // edit tmp file
        unlink(filename) || die("could not unlink file");
        file = open(filename, "wx") || die("another process must have written to the file after we copied it.");  // "w" is overwrite, "x" is force file creation
        while(write(file, read(tmp)));  // copy ".tmp" back to the original file
        unlink(filename+".tmp") || die("could not unlink tmp file");

    Or would I be better off with a simple lock file? Appender process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "a");
        write(file, "stuff");
        close(file);
        close(lock);
        unlink(filename+".lock");

    Editor process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "rw");
        while(contents += read(file));
        // edit "contents"
        write(file, contents);
        close(file);
        close(lock);
        unlink(filename+".lock");

    Both of these rely on an additional file that will be left over if a process terminates before unlinking it, causing other processes to refuse to write to the original file. In my opinion, these problems are brought on by the fact that the OS allows multiple writable file descriptors to be opened on the same file at the same time, instead of failing if a writable file descriptor is already open. It seems that O_CREAT|O_EXCL is the closest thing to a real solution for preventing filesystem race conditions, aside from POSIX record locks.

    Another possible solution is to separate the file into multiple files and directories, so that more granular control can be gained over components (lines, fields) of the file using O_CREAT|O_EXCL. For example, "file/$id/$field" would contain the value of column $field of the line $id. It wouldn't be a CSV file anymore, but it might just work.

    Yes, I know I should be using a database for this as databases are built to handle these types of problems, but the program is relatively simple and I was hoping to avoid the overhead. So, would any of these patterns work? Is there a better way? Any insight into these kinds of problems would be appreciated.
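    A minimal sketch of the lock-file variant in Python, assuming POSIX semantics where O_CREAT|O_EXCL creation is atomic (the retry policy and function names are illustrative, not part of the question); it shares the stated weakness that a crashed process leaves the lock file behind:

        import os, time

        def acquire_lock(path, retries=50, delay=0.1):
            # O_CREAT | O_EXCL makes creation atomic: it fails if the lock exists.
            for _ in range(retries):
                try:
                    return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                except FileExistsError:
                    time.sleep(delay)  # another process holds the lock; retry
            raise RuntimeError("could not lock file")

        def release_lock(fd, path):
            os.close(fd)
            os.unlink(path)

        def edit_file(filename, transform):
            fd = acquire_lock(filename + ".lock")
            try:
                with open(filename, "r") as f:
                    contents = f.read()
                with open(filename, "w") as f:
                    f.write(transform(contents))  # e.g. drop the unwanted lines
            finally:
                release_lock(fd, filename + ".lock")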

  • Overwrite clean method in Django Custom Forms

    - by John
    Hi, I have written a custom widget:

        class AutoCompleteWidget(widgets.TextInput):
            """
            Widget to show an autocomplete box which returns a list of nodes
            available to be tagged.
            """
            def render(self, name, value, attrs=None):
                final_attrs = self.build_attrs(attrs, name=name)
                if not self.attrs.has_key('id'):
                    final_attrs['id'] = 'id_%s' % name
                if not value:
                    value = '[]'
                jquery = u"""
                <script type="text/javascript">
                $("#%s").tokenInput('%s', {
                    hintText: "Enter the word",
                    noResultsText: "No results",
                    prePopulate: %s,
                    searchingText: "Searching..."
                });
                $("body").focus();
                </script>
                """ % (final_attrs['id'], reverse('ajax_autocomplete'), value)
                output = super(AutoCompleteWidget, self).render(name, "", attrs)
                return output + mark_safe(jquery)

        class MyForm(forms.Form):
            AutoComplete = forms.CharField(widget=AutoCompleteWidget)

    This widget uses a jQuery function which autocompletes a word based on entries from the database. You can preset its initial values by setting prePopulate to a JSON string in the form [{'name': 'some name', 'id': 'some id'}]. I do this by setting the initial value of the form field to this JSON string:

        jquery_string = "[{'name': 'some name', 'id': 'some id'}]"
        form = MyForm(initial={'AutoComplete': jquery_string})

    When submitting the form, the value of AutoComplete is returned as a comma-separated list of the selected ids, e.g. 12,45,43,66, which is what I want. However, if there is an error in the form, for example a required field has not been entered, the value of the AutoComplete field is now 12,45,43,66 and not the JSON string which it requires. What is the best way to solve this? I was thinking about overwriting the clean method in the form class, but I'm not sure how to find out if any other element has returned an error, e.g.:

        if form.errors:
            form.cleaned_data['autocomplete'] = json_string
        return form.cleaned_data

    Thanks
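    A sketch of that idea against Django's form API (hedged: inside clean(), field errors collected so far are visible on self._errors, and ids_to_json is a hypothetical helper that would rebuild the [{'id': ..., 'name': ...}] string, e.g. by querying the database for the names; whether the re-rendered widget picks the value up depends on how the widget reads its value):

        class MyForm(forms.Form):
            AutoComplete = forms.CharField(widget=AutoCompleteWidget)

            def clean(self):
                cleaned_data = super(MyForm, self).clean()
                if self._errors:
                    # Form will be re-rendered: put back the JSON string the
                    # widget expects instead of the raw "12,45,43,66" id list.
                    raw_ids = self.data.get('AutoComplete', '')
                    cleaned_data['AutoComplete'] = ids_to_json(raw_ids)  # hypothetical helper
                return cleaned_data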

  • Reset or clear the UIView from UILabels

    - by Nicsoft
    Hello, I have created a UIView on which I have a number of labels, and I'm also drawing some lines in the same UIView. Sometimes I update the lines that I'm drawing. Now, the problem I'm having is that when I update the lines, they get drawn as I want, but the labels are overwriting themselves. This wouldn't be a problem except that their position shifts by about 1 pixel, which makes the text go blurry. I don't know how to remove the labels before they are redrawn.

    I do remove the labels from the superview and add them back when drawRect is called, but setNeedsDisplay doesn't clear the screen before the graphics are updated, I guess (I think I read that setNeedsDisplay/drawRect doesn't clear the screen, just updates the content; I couldn't find the text now while searching)?

    What is the pattern to use here? Should I create a rectangle the size of the screen (or the area where the labels are) and fill it with the background colour, or is there any other way to clear or reset the UIView (I don't want to release and create the UIView again)? The view is created in IB and associated with a custom UIView. In IB I add some buttons and other static labels. The labels and graphics above are created programmatically. Any comments would be helpful! Thanks in advance!

  • Solve a classic map-reduce problem with OpenCL?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (which parallelizes well with MPI) with OpenCL, namely, the AMD implementation. But the result bothers me. Let me describe the problem briefly first. There are two types of data that flow into the system: the feature set (30 parameters for each) and the sample set (9000+ dimensions for each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (map), and then sum up the overall score for every feature (reduce). There are around 10k features and 30k samples.

    I tried different ways to solve the problem. First, I tried to decompose the problem by features. The problem is that the score calculation consists of random memory access (picking some of the 9000+ dimensions and doing plus/subtraction calculations). Since I cannot coalesce memory access, it is costly. Then, I tried to decompose the problem by samples. The problem is that when summing up the overall score, all threads are competing for a few score variables. They keep overwriting the score, which turns out to be incorrect. (I cannot write out each individual score first and sum them up later because that would require 10k * 30k * 4 bytes.)

    The first method I tried gives me only the same performance as an i7 860 CPU with 8 threads. However, I don't think the problem is unsolvable: it is remarkably similar to ray tracing (where you carry out calculations of millions of rays against millions of triangles). Any ideas?

  • How does Visual Studio decide the order in which stack variables should be allocated?

    - by Jason
    I'm trying to turn some of the programs in gera's Insecure Programming by Example into client/server applications that could be used in capture-the-flag scenarios to teach exploit development. The problem I'm having is that I'm not sure how Visual Studio (I'm using 2005 Professional Edition) decides where to allocate variables on the stack. When I compile and run example 1:

        int main() {
            int cookie;
            char buf[80];

            printf("buf: %08x cookie: %08x\n", &buf, &cookie);
            gets(buf);

            if (cookie == 0x41424344)
                printf("you win!\n");
        }

    I get the following result:

        buf: 0012ff14 cookie: 0012ff64

    buf starts at an address eighty bytes lower than cookie, and any four bytes that are copied into buf after the first eighty will appear in cookie. The problem I'm having is when I place this code in some other function. When I compile and run the following code, I get a different result: buf appears at an address greater than cookie's.

        void ClientSocketHandler(SOCKET cs) {
            int cookie;
            char buf[80];
            char stringToSend[160];
            int numBytesRecved;
            int totalNumBytes;

            sprintf(stringToSend, "buf: %08x cookie: %08x\n", &buf, &cookie);
            send(cs, stringToSend, strlen(stringToSend), NULL);

    The result is:

        buf: 0012fd00 cookie: 0012fcfc

    Now there is no way to set cookie to arbitrary data via overwriting buf. Is there any way to tell Visual Studio to allocate cookie before buf? Is there any way to tell beforehand how the variables will be allocated? Thanks, Jason

  • Bulk Copy from one server to another

    - by Joseph
    Hi All, I have a situation where I need to copy part of the data from one server to another. The table schemas are exactly the same. I need to move partial data from the source, which may or may not be available in the destination table. The solution I'm thinking of is: use bcp to export the data to a text (or .dat) file, then take that file to the destination, as both servers are not accessible at the same time (different networks), then import the data at the destination. There are some conditions I need to satisfy (see the sketch after this list):

      1. I need to export only a list of data from the table, not the whole table. My client is going to give me the IDs which need to be moved from source to destination. I have around 3000 records in the master table, and the same in the child tables too. What I expect is only 300 records to be moved.
      2. If a record exists in the destination, the client is going to instruct me whether to ignore or overwrite it, case by case. 90% of the time, we need to ignore the records without overwriting, but log the records in a log file.

    Please help me with the best approach. I thought of using BCP with the query option to filter the data, but while importing, how do I bypass inserting the existing records? If I need to overwrite, how do I do it? Thanks a lot in advance. ~Joseph
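    A hedged sketch of the "ignore but log" half of requirement 2, in Python (the file layout and the ID-in-column-0 assumption are invented; the same effect is commonly achieved in T-SQL with a staging table and INSERT ... WHERE NOT EXISTS):

        import csv

        def split_export(export_path, existing_ids, insert_path, log_path):
            """Partition exported rows into new rows and already-present rows."""
            with open(export_path, newline='') as src, \
                 open(insert_path, 'w', newline='') as ins, \
                 open(log_path, 'a') as log:
                writer = csv.writer(ins)
                for row in csv.reader(src):
                    if row[0] in existing_ids:  # assumes the ID is column 0
                        log.write("skipped existing id %s\n" % row[0])
                    else:
                        writer.writerow(row)

    The filtered file can then be bulk-loaded as usual; the rows to overwrite (the other 10%) would be handled separately, e.g. by deleting those IDs at the destination before the load.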

  • How to draw an overlay on a SurfaceView used by Camera on Android?

    - by Cristian Castiblanco
    I have a simple program that draws the preview of the Camera into a SurfaceView. What I'm trying to do is use the onPreviewFrame method, which is invoked each time a new frame is drawn into the SurfaceView, in order to execute the invalidate method, which is supposed to invoke the onDraw method. In fact, the onDraw method is being invoked, but nothing there is being printed (I guess the camera preview is overwriting the text I'm trying to draw). This is a simplified version of the SurfaceView subclass I have:

        public class Superficie extends SurfaceView implements SurfaceHolder.Callback {
            SurfaceHolder mHolder;
            public Camera camera;

            Superficie(Context context) {
                super(context);
                mHolder = getHolder();
                mHolder.addCallback(this);
                mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
            }

            public void surfaceCreated(final SurfaceHolder holder) {
                camera = Camera.open();
                try {
                    camera.setPreviewDisplay(holder);
                    camera.setPreviewCallback(new PreviewCallback() {
                        public void onPreviewFrame(byte[] data, Camera arg1) {
                            invalidar();
                        }
                    });
                } catch (IOException e) {}
            }

            public void invalidar() {
                invalidate();
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
                Camera.Parameters parameters = camera.getParameters();
                parameters.setPreviewSize(w, h);
                camera.setParameters(parameters);
                camera.startPreview();
            }

            @Override
            public void draw(Canvas canvas) {
                super.draw(canvas);
                // nothing gets drawn :(
                Paint p = new Paint(Color.RED);
                canvas.drawText("PREVIEW", canvas.getWidth() / 2,
                                canvas.getHeight() / 2, p);
            }
        }

  • DirectX Desktop

    - by Jonathan
    Hi. I'd like to make an animated desktop background for Windows 7 using DirectX. I'm using C#, SlimDX and a couple of P/Invoke imports of Windows API functions. I'm not brilliant with native Windows programming, but I've had a poke around online and I believe what I need to do is either:

      1. Find the handle of the window containing the desktop wallpaper, hook it up to a DirectX device and draw into it.
      2. Make a new output window, and insert it above the desktop wallpaper but below the desktop icons.

    I've tried both of these, but neither seems to work. If I navigate the window hierarchy starting from the handle returned by GetDesktopWindow(), I can go Desktop -> WorkerW -> SHELLDLL_DefView -> SysListView32. If I hook up a DirectX device to this handle, I can draw over the entire desktop, but it also covers the icons. If I create a Windows form, set its parent to SHELLDLL_DefView using SetParent() and then use SetWindowPos to play with its Z-order, I can only seem to get it to go either behind the desktop wallpaper or in front of the desktop + icons.

    It looks as though the desktop wallpaper is the background of the folder view containing the icons, and therefore what I am trying to do cannot work. The only solution then would be to not use the desktop for icons, or to find some alternative, e.g. overwriting the desktop then overlaying a transparent window containing a view of the contents of some folder. Does anyone have any idea of what I should be doing, or even whether what I want to do is possible? It seems you can draw to the desktop background using the GDI (as I believe the wxSnow program does), and I've seen something similar to what I want done by VLC Media Player under Windows XP with its DirectX wallpaper mode (interestingly, I can't seem to get this option enabled on my system). Thanks!

  • Edited an .SWF: locally the changes are okay, but on the server the changes are not fully visible

    - by Andy
    Hi, I just edited an .SWF file that has been on my website home page for a while now. I update the SWF frequently with new pictures. It's actually a picture slideshow consisting of 6 images. I used Adobe Flash CS4 Pro to edit the file, just swapping all the pictures (JPGs) in it for other ones. I also have some small ActionScript where I just have the URL:

        on(release) {
            getURL("link");
        }

    so that's nothing fancy at all. I saved and published everything (CTRL+ENTER) and the .SWF played well; I tested it in IE8 and FF. Then I uploaded the SWF to my test server, overwriting the existing SWF file.

    Now the problem: all pictures but one show up well. Of the 6 images, the second image is actually the old image that was in its place. I downloaded the .SWF from the test server and inspected it, and guess what: the old picture wasn't in it; instead the correct image was in the SWF. Even after reloading the page hitting CTRL+F5, the wrong image still shows. FF, though, shows the SWF correctly. So I then opened the page on another computer using IE8 and there the SWF works well, showing the correct second image.

    What's wrong with my first computer's browser? It's also the computer I edited the SWF with. I DO remember that I first saved and uploaded the wrong SWF (with the old 2nd image still in it) to the test server, and later on uploaded the correct one (proper 2nd image). I think IE8 has cached the wrong SWF, and now memorized it someway, not willing to see that the file has actually been changed. But what can I do so IE8 starts showing the correct SWF?

  • Source Control - XCode - Visual Studio 2005/2008 / 2010

    - by Mick Walker
    My apologies if this has been asked before. I wasn't quite sure if this question should be asked on a programming forum, as it relates more to the programming environment than to a particular technology, so please accept my (double) apologies if I am posting this in the wrong place. My logic in this case was: if it affects the code I write, then this is the place for it.

    At home, I do a lot of my development on a Mac Pro. I do development for the Mac, iPhone and Windows on this machine (Xcode & Visual Studio - multiple versions installed in Boot Camp, but generally I run it via Parallels). When visiting a client, I have a similar setup, but on my MacBook Pro. What I want is a source control solution to install on the Mac Pro that will support both Xcode and multiple versions of Visual Studio, so that when I visit a client, I can simply grab the latest copy from source control via the MacBook Pro.

    Whilst visiting the client, he or she may suggest changes, and minor ones I would tend to make on site, so I need the ability to merge any modified code back into the trunk of the project/solution when I return home. At the moment, I am using no source control at all, and rely on simply copying folders and overwriting them when I return from a client - that's my 'merge'!!! I was wondering if anyone had any ideas for a source control provider I could use which would support both Windows and Mac development environments, and is cheap (free would be better).

  • RESTful API design question - how should one allow users to create new resource instances?

    - by Tamás
    I'm working in a research group where we intend to publish implementations of some of the algorithms we develop on the web via a RESTful API. Most of these algorithms work on small to medium size datasets, and in many cases, a user of our services might want to run multiple queries (with different parameters) on the same dataset, so to me it seems reasonable to allow users to upload their datasets in advance and refer to them in their queries later. In this sense, a dataset could be a resource in my API, and an algorithm could be another.

    My question is: how should I let the users upload their own datasets? I cannot simply let users upload their data to /dataset/dataset_id, as letting the users invent their own dataset_ids might result in ID collisions and users overwriting each other's datasets by accident. (I believe one of the most frequently used dataset IDs would be test.) I think an ideal way would be to have a dedicated URL (like /dataset/upload) where users can POST their datasets and the response would contain a unique ID under which the dataset was stored, but I'm not sure that it does not violate the basic principles of REST. What is the preferred way of dealing with such scenarios?
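    A minimal sketch of that dedicated-upload-URL idea using only Python's standard library (the exact paths, the in-memory storage, and the uuid-based IDs are illustrative assumptions): the server mints the ID and reports it in the 201 response's Location header, which is the conventional HTTP way to create resources whose names the server chooses.

        import uuid
        from http.server import BaseHTTPRequestHandler, HTTPServer

        datasets = {}  # in-memory stand-in for whatever real storage is used

        class DatasetHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                if self.path != '/dataset/upload':
                    self.send_error(404)
                    return
                length = int(self.headers.get('Content-Length', 0))
                body = self.rfile.read(length)
                dataset_id = uuid.uuid4().hex  # server-minted, collision-free
                datasets[dataset_id] = body
                self.send_response(201)        # Created
                self.send_header('Location', '/dataset/' + dataset_id)
                self.end_headers()

        if __name__ == '__main__':
            HTTPServer(('', 8080), DatasetHandler).serve_forever()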

  • Append a dynamically changing watermark to a PDF in SharePoint

    - by ccomet
    This is primarily a question of possibilities more than instructions. I'm a programming consultant working on a WSS project site system for my client. We have a document library in which files are uploaded to go through a complex approval process. With multiple stages in this process, we have an extra field which dictates the current status of the document. Now, my client has become enamored with the idea of PDF watermarking. He wants the document (which is already a PDF) to be affixed with a watermark corresponding to the current status, such that with each stage of the approval process the watermark will change.

    One method, the traditional method for PDF watermarking, of accomplishing this is to keep one "clean" copy of the document somewhere hidden on the site, and at each stage of the approval process create a new PDF from it that has the watermark. Since the filename would never change, this new PDF could be uploaded continually to a public library, always overwriting the old version and simulating a "dynamically changing watermark". However, at the various stages there will also be people uploading clean copies with corrections and suggestions, never mind the complex nature of juggling two libraries and the fact that we double the number of files stored. My client and I agree that this is not a practical path to choose.

    What we would like to do is be able to "modify" the watermark in a PDF, so that we only have to keep one copy of the file. Unfortunately, from what I've seen, in most cases when you make something like a watermark, which by its nature is supposed to be unmodifiable, you won't be able to edit it later. So, is it possible to have a part of a PDF which cannot be changed by anyone who downloads the file, but can be changed as part of a workflow or other object model process? Thanks in advance!

  • How to get around the jslint error 'Don't make functions within a loop.'

    - by Ernelli
    I am working on making all of our JS code pass through jslint, sometimes with a lot of tweaking of the options to get legacy code to pass for now, with the intention of fixing it properly later. There is one thing that jslint complains about for which I do not have a workaround. When using constructs like the following, we get the error 'Don't make functions within a loop.'

        for (prop in newObject) {
            // Check if we're overwriting an existing function
            if (typeof newObject[prop] === "function" &&
                    typeof _super[prop] === "function" &&
                    fnTest.test(newObject[prop])) {
                prototype[prop] = (function (name, func) {
                    return function () {
                        var result, old_super;
                        old_super = this._super;
                        this._super = _super[name];
                        result = func.apply(this, arguments);
                        this._super = old_super;
                        return result;
                    };
                })(prop, newObject[prop]);
            }
        }

    This loop is part of a JS implementation of classical inheritance, where classes that extend existing classes retain the _super property of the extended class when invoking a member of the extended class. Just to clarify, the implementation above is inspired by this blog post by John Resig. But we also have other instances of functions created within a loop. The only workaround so far is to exclude these JS files from jslint, but we would like to use jslint for code validation and syntax checking as part of our continuous integration and build workflow. Is there a better way to implement functionality like this, or is there a way to tweak code like this to get through jslint?
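    The usual way to satisfy jslint while keeping this behaviour is to hoist the wrapper factory out of the loop and call it once per iteration. The capture hazard jslint is guarding against is the same one shown here in Python for brevity (a sketch, not the code above):

        def make_callbacks_buggy(items):
            callbacks = []
            for item in items:
                callbacks.append(lambda: item)  # all closures share one 'item'
            return callbacks

        def make_callbacks_fixed(items):
            def factory(bound_item):            # hoisted out of the loop:
                return lambda: bound_item       # each call binds its own value
            return [factory(item) for item in items]

        print([f() for f in make_callbacks_buggy("abc")])  # ['c', 'c', 'c']
        print([f() for f in make_callbacks_fixed("abc")])  # ['a', 'b', 'c']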

  • How to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing storage and performance optimization), the consequence to me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku. What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is:

        git push -f heroku local-topic-branch:refs/heads/master

    What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know!

    The caveat for this, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like Github) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku, so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary. Can anybody tell me if this would work?

        [remote "heroku"]
            url = [email protected]:my-app.git
            push = +refs/heads/*:refs/heads/master

    I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like Github, for backup and cloning of all my work.

    Footnote: This question is similar to, but not quite the same as, http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku

  • How to remove the file suffix/extension (.jsp and .action) using the Stripes Framework?

    - by Dolph Mathews
    I'm looking to use pretty / clean URLs in my web app. I would like the following URL:

        http://mydomain.com/myapp/calculator

    to resolve to:

        com.mydomain.myapp.action.CalculatorActionBean

    I tried overwriting the NameBasedActionResolver with:

        public class CustomActionResolver extends NameBasedActionResolver {
            public static final String DEFAULT_BINDING_SUFFIX = ".";

            @Override
            protected String getBindingSuffix() {
                return DEFAULT_BINDING_SUFFIX;
            }

            @Override
            protected List<String> getActionBeanSuffixes() {
                List<String> suffixes = new ArrayList<String>(super.getActionBeanSuffixes());
                suffixes.add(DEFAULT_BINDING_SUFFIX);
                return suffixes;
            }
        }

    And adding this to web.xml:

        <servlet-mapping>
            <servlet-name>StripesDispatcher</servlet-name>
            <url-pattern>*.</url-pattern>
        </servlet-mapping>

    Which gets me to:

        http://mydomain.com/myapp/Calculator.

    But:

      1. A stray "." is still neither pretty nor clean.
      2. The class name is still capitalized in the URL..?
      3. That still leaves me with *.jsp..?

    Is it even possible to get rid of both .action and .jsp?

  • How can I sync files in two different git repositories (not clones) and maintain history?

    - by brian d foy
    I maintain two different git repos that need to share some files, and I'd like the commits in one repo to show up in the other. What's a good way to do that for ongoing maintenance?

    I've been one of the maintainers of the perlfaq (Github), and recently I fell into the role of maintaining the Perl core documentation, which is also in git. Long before I started maintaining the perlfaq, it lived in a separate source control repository. I recently converted that to git. Periodically, one of the perl5-porters would sync the shared files in the perlfaq repo and the perl repo. Since we've switched to git, we've been a bit lazy converting the tools, and I'm now the one who does that. For the time being, the two repos are going to stay separate.

    Currently, to sync the FAQ for a new (monthly) release of perl, I'm almost ashamed to say that I merely copy the perlfaq*.pod files in the perlfaq repo and overlay them in the perl repo. That loses history, etc. Additionally, sometimes someone makes a change to those files in the perl repo and I end up overwriting it (yes, check git diff, you idiot!). The files do not have the same paths in the two repos, but that's something that I could change, I think.

    What I'd like to do, in the magical universe of rainbows and ponies, is pull the objects from the perlfaq repo and apply them in the perl repo, and vice versa, so the history and commit ids correspond in each.

      - Creating patches works, but it's also a lot of work to manage
      - Git submodules seem to only work to pull in the entire external repo
      - I haven't found something like svn's file externals, but that would work in both directions anyway
      - I'd love to just fetch objects from one and cherry-pick them in the other

    What's a good way to manage this?

  • How do you clear a CustomValidator Error on a Button click event?

    - by George
    I have a composite User control for entering dates. The CustomValidator will include server-sided validation code. I would like the error message to be cleared via client-sided script if the user alters the date value in any way. To do this, I included the following code to hook up the two drop-downs and the year text box to the validation control:

        <script type="text/javascript">
            ValidatorHookupControlID("<%= ddlMonth.ClientID%>",
                document.getElementById("<%= CustomValidator1.ClientID%>"));
            ValidatorHookupControlID("<%= ddlDate.ClientID%>",
                document.getElementById("<%= CustomValidator1.ClientID%>"));
            ValidatorHookupControlID("<%= txtYear.ClientID%>",
                document.getElementById("<%= CustomValidator1.ClientID%>"));
        </script>

    However, I would also like the validation error to be cleared when the user clicks the Clear button. When the user clicks the Clear button, the other 3 controls are reset. To avoid a postback, the Clear button is a regular HTML button with an OnClick event that resets the 3 controls. Unfortunately, the ValidatorHookupControlID method does not seem to work on HTML controls, so I thought to change the HTML button to an ASP button and to hook up to that control instead. However, I cannot seem to eliminate the postback functionality associated by default with the ASP button control. I tried to set the UseSubmitBehavior to False, but it still submits. I tried to return false in my btnClear_OnClick client code, but the code sent to the browser included a DoPostBack call after my call:

        btnClear.Attributes.Add("onClick", "btnClear_OnClick();")

    Instead of adding OnClick code, I tried overwriting it, but the DoPostBack code was still included in the final code that was sent to the browser:

        btnClear.Attributes.Item("onClick") = "btnClear_OnClick();"

    What do I have to do to get the Clear button to clear the CustomValidator error when clicked and avoid a postback?

  • Simulating O_NOFOLLOW (2): Is this other approach safe?

    - by Daniel Trebbien
    As a follow-up question to this one, I thought of another approach which builds off of @caf's answer for the case where I want to append to file name and create it if it does not exist. Here is what I came up with:

      1. Create a temporary directory with mode 0700 in a system temporary directory on the same filesystem as file name.
      2. Create an empty, temporary, regular file (temp_name) in the temporary directory (only serves as a placeholder).
      3. Open file name for reading only, just to create it if it does not exist. The OS may follow name if it is a symbolic link; I don't care at this point.
      4. Make a hard link to name at temp_name (overwriting the placeholder file). If the link call fails, then exit. (Maybe someone has come along and removed the file at name, who knows?)
      5. Use lstat on temp_name (now a hard link). If S_ISLNK(lst.st_mode), then exit.
      6. open temp_name for writing, append (O_WRONLY | O_APPEND).
      7. Write everything out. Close the file descriptor.
      8. unlink the hard link.
      9. Remove the temporary directory.

    (All of this, by the way, is for an open source project that I am working on. You can view the source of my implementation of this approach here.)

    Is this procedure safe against symbolic link attacks? For example, is it possible for a malicious process to ensure that the inode for name represents a regular file for the duration of the lstat check, then make the inode a symbolic link with the temp_name hard link now pointing to the new symbolic link? I am assuming that a malicious process cannot affect temp_name.
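    For illustration, a rough Python transcription of those steps (a sketch under assumptions: os.link cannot overwrite an existing name, so the placeholder is unlinked just before linking, which the real implementation may handle differently; follow_symlinks=False is passed to mirror Linux link(2) semantics; data is assumed to be bytes):

        import os, stat, tempfile

        def append_nofollow(name, data):
            """Append bytes to 'name', refusing symlinks - a sketch of the steps above."""
            tmpdir = tempfile.mkdtemp(dir=os.path.dirname(name) or '.')  # step 1: mode 0700
            temp_name = os.path.join(tmpdir, 'placeholder')
            try:
                open(temp_name, 'w').close()                     # step 2: placeholder file
                fd = os.open(name, os.O_RDONLY | os.O_CREAT)     # step 3: create if missing
                os.close(fd)
                os.remove(temp_name)                             # os.link cannot overwrite
                os.link(name, temp_name, follow_symlinks=False)  # step 4: raises OSError on failure
                if stat.S_ISLNK(os.lstat(temp_name).st_mode):    # step 5
                    raise RuntimeError("refusing to append: name is a symbolic link")
                out = os.open(temp_name, os.O_WRONLY | os.O_APPEND)  # step 6
                try:
                    os.write(out, data)                          # step 7: write, then close
                finally:
                    os.close(out)
            finally:
                if os.path.lexists(temp_name):
                    os.unlink(temp_name)                         # step 8: drop the hard link
                os.rmdir(tmpdir)                                 # step 9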

  • How can I filter a Perl DBIx recordset with 2 conditions on the same column?

    - by BrianH
    I'm getting my feet wet in DBIx::Class - loving it so far. One problem I am running into is that I want to query records, filtering out records that aren't in a certain date range. It took me a while to find out how to do a "<=" type of match instead of an equality match:

        my $start_criteria = ">= $start_date";
        my $end_criteria   = "<= $end_date";
        my $result = $schema->resultset('MyTable')->search(
            {
                'status_date' => \$start_criteria,
                'status_date' => \$end_criteria,
            });

    The obvious problem with this is that since the filters are in a hash, I am overwriting the value for "status_date" and am only searching where status_date <= $end_date. The SQL that gets executed is:

        SELECT me.* FROM MyTable me WHERE status_date <= '9999-12-31'

    I've searched CPAN, Google and SO and haven't been able to figure out how to apply 2 conditions to the same column. All the documentation I've been able to find shows how to filter on more than 1 column, but not 2 conditions on the same column. I'm sure I'm missing something obvious - hoping someone here can point it out to me? Thanks in advance! Brian
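    Whatever the resultset syntax ends up being, the SQL it needs to generate is a single WHERE clause that ANDs two comparisons on the one column. A runnable sqlite3 sketch in Python of that target (table and dates invented for illustration):

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute("CREATE TABLE MyTable (status_date TEXT)")
        conn.executemany("INSERT INTO MyTable VALUES (?)",
                         [('2009-01-15',), ('2010-06-01',), ('2011-03-20',)])

        # Two conditions on the same column combine with AND; they cannot be
        # expressed as two entries under one duplicate hash key.
        rows = conn.execute(
            "SELECT * FROM MyTable WHERE status_date >= ? AND status_date <= ?",
            ('2010-01-01', '2010-12-31')).fetchall()
        print(rows)  # [('2010-06-01',)]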

  • Append to a webpage in JavaScript

    - by Lily
    What I want to do is this: a webpage with continuously updating content (in my case, updating every 2s), where new content is appended to the old content instead of overwriting it. Here is the code I have:

        var msg_list = new Array(
            "<message>Hello, Clare</message>",
            "<message>Hello, Lily</message>",
            "<message>Hello, Kevin</message>",
            "<message>Hello, Bill</message>"
        );
        var number = 0;

        function send_msg() {
            document.write(number + " " + msg_list[number % 4] + '<br/>');
            number = number + 1;
        }

        var my_interval = setInterval('send_msg()', 2000);

    However, in both IE and Firefox, only one line is printed out, and the page is not updated anymore. Interestingly, in Chrome the lines are printed out continuously, which is what I am looking for. I know that document.write() is called when the page is loaded, according to this, so it's definitely not the way to update the webpage continuously. What would be the best way to achieve what I want to do? Totally a newbie in JavaScript. Thank you. Lily

  • How can I create objects based on dump file memory in a WinDbg extension?

    - by pj4533
    I work on a large application, and frequently use WinDbg to diagnose issues based on a DMP file from a customer. I have written a few small extensions for WinDbg that have proved very useful for pulling bits of information out of DMP files. In my extension code I find myself dereferencing C++ class objects in the same way, over and over, by hand. For example:

        Address = GetExpression("somemodule!somesymbol");
        ReadMemory(Address, &addressOfPtr, sizeof(addressOfPtr), &cb);
        // get the actual address
        ReadMemory(addressOfObj, &addressOfObj, sizeof(addressOfObj), &cb);

        ULONG offset;
        ULONG addressOfField;
        GetFieldOffset("somemodule!somesymbolclass", "somefield", &offset);
        ReadMemory(addressOfObj + offset, &addressOfField, sizeof(addressOfField), &cb);

    That works well, but as I have written more extensions, with greater functionality (and accessing more complicated objects in our application's DMP files), I have longed for a better solution. I have access to the source of our own application, of course, so I figure there should be a way to copy an object out of a DMP file and use that memory to create an actual object in the debugger extension that I can call functions on (by linking in DLLs from our application). This would save me the trouble of pulling things out of the DMP by hand.

    Is this even possible? I tried obvious things like creating a new object in the extension, then overwriting it with a big ReadMemory directly from the DMP file. This seemed to put the data in the right fields, but freaked out when I tried to call a function. I figure I am missing something... maybe C++ pulls some vtable funky-ness that I don't know about? My code looks similar to this:

        SomeClass* thisClass = SomeClass::New();
        ReadMemory(addressOfObj, &(*thisClass), sizeof(*thisClass), &cb);

  • Design problem with callback functions in Android

    - by Franz Xaver
    Hi folks! I'm currently developing an app in Android that accesses wifi values; that is, the application needs to scan for all access points and their specific signal strengths. I know that I have to extend the class BroadcastReceiver, overwriting the method BroadcastReceiver.onReceive(Context context, Intent intent), which is called when the values are ready. Perhaps there exist solutions provided by the Android system itself, but I'm relatively new to Android, so I could use some help.

    The problem I encountered is that I have one class (an activity, thus controlled by the user) that needs these scan results for two different things: either to save the values in a database, or to use them for further calculations (but not both at one moment!). So how do I design the callback system in order to "transport" the scan results from onReceive(Context context, Intent intent) to the operation intended by the user?

    My first solution was to define enums for each use case (save, or use for calculations) which wlan-interested classes have to submit when querying for the values. But that would force the BroadcastReceiver-extending class to save the current enum and use it as a parameter in the callback function of the querying class (this querying class needs to know what it asked for when getting called back). But that seems kind of dirty to me ;) So, anyone got a good idea for this?
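    One language-agnostic shape for this, sketched here in Python (the names and handler signatures are invented): instead of the receiver remembering an enum, each scan request queues the callback that should consume its results, so the dispatcher never needs to know what the results are for.

        from collections import deque

        class ScanDispatcher:
            """Queues one callback per scan request instead of a mode enum."""
            def __init__(self):
                self.pending = deque()

            def request_scan(self, on_results):
                # The caller decides what the results are for by passing a closure.
                self.pending.append(on_results)
                # ... the real code would start the wifi scan here ...

            def on_receive(self, results):
                # Called when scan results are ready (cf. onReceive).
                if self.pending:
                    handler = self.pending.popleft()
                    handler(results)

        dispatcher = ScanDispatcher()
        dispatcher.request_scan(lambda r: print("saving to DB:", r))
        dispatcher.request_scan(lambda r: print("calculating with:", r))
        dispatcher.on_receive(["ap1 -40dBm"])  # -> saving to DB
        dispatcher.on_receive(["ap2 -67dBm"])  # -> calculating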

  • Returning search results in an array in Java without ArrayList

    - by Crystal
    I started down this path of implementing a simple search in an array for a hw assignment without knowing we could use ArrayList. I realized it had some bugs in it and figured I'd still try to know what my bug is before using ArrayList. I basically have a class where I can add, remove, or search from an array.

        public class AcmeLoanManager {
            public void addLoan(Loan h) {
                int loanId = h.getLoanId();
                loanArray[loanId - 1] = h;
            }

            public Loan[] getAllLoans() {
                return loanArray;
            }

            public Loan[] findLoans(Person p) {
                //Loan[] searchedLoanArray = new Loan[10]; // create new array to hold searched values
                searchedLoanArray = this.getAllLoans();    // fill new array with all values

                // Looks through only valid array values, and if Person p does not
                // match using Person.equals(), sets that value to null.
                for (int i = 0; i < searchedLoanArray.length; i++) {
                    if (searchedLoanArray[i] != null) {
                        if (!(searchedLoanArray[i].getClient().equals(p))) {
                            searchedLoanArray[i] = null;
                        }
                    }
                }
                return searchedLoanArray;
            }

            public void removeLoan(int loanId) {
                loanArray[loanId - 1] = null;
            }

            private Loan[] loanArray = new Loan[10];
            private Loan[] searchedLoanArray = new Loan[10]; // separate array to hold values returned from search
        }

    When testing this, I thought it worked, but I think I am overwriting my member variable after I do a search. I initially thought that I could create a new Loan[] in the method and return that, but that didn't seem to work. Then I thought I could have two arrays: one that would not change, and the other just for the searched values. But I think I am not understanding something, like shallow vs deep copying???...
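    The described symptom is consistent with reference aliasing: searchedLoanArray = this.getAllLoans() makes both fields refer to the same array object, so nulling entries in the "copy" also nulls them in loanArray. The same effect, illustrated in Python (a sketch, not the assignment code):

        loans = ["a", "b", "c"]

        searched = loans        # alias: both names share one list
        searched[1] = None
        print(loans)            # ['a', None, 'c'] - the original was mutated!

        loans = ["a", "b", "c"]
        searched = list(loans)  # shallow copy: a new list, same elements
        searched[1] = None
        print(loans)            # ['a', 'b', 'c'] - the original is intact

    In Java terms, giving findLoans its own copy (e.g. loanArray.clone() or java.util.Arrays.copyOf(loanArray, loanArray.length)) before filtering avoids mutating the member array.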

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config.

    I have some experience with MySQL tuning, but this usually focuses on reads or writes from multiple clients. There is lots of info to be found on the Internet on this subject; however, there seems to be very little information available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit).

    The schema change we are planning is adding an integer column to all tables and making it the primary key, instead of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
