Search Results



  • Hudson: how do i use a parameterized build to do svn checkout and svn tag?

    - by Derick Bailey
    I'm setting up a parameterized build in Hudson v1.362. The parameter I'm creating is used to determine which branch to check out from Subversion. I can set my SVN repository URL like this: https://my.svn.server/branches/${branch} and it does the checkout and the build just fine. Now I want to tag the build after it finishes. I'm using the SVN tagging plugin for Hudson to do this, so I go to the bottom of the project config screen for the Hudson project and turn on "Perform Subversion tagging on successful build". Here I set my Tag Base URL to https://my.svn.server/tags/${branch}-${BUILD_NUMBER} and it gives me errors about those properties not being found. So I change them to environment variable references like this: https://my.svn.server/tags/${env['branch']}-${env['BUILD_NUMBER']} and the SVN tagging plugin is happy. The problem now is that my SVN repository URL at the top is using the ${branch} syntax and the SVN tagging plugin barfs on this:
      moduleLocation: Remote -https://my.svn.server/branches/$branch/
      Tag Base URL: 'https://my.svn.server/tags/thebranchiused-1234'.
      There was no old tag at https://my.svn.server/tags/thebranchiused-1234.
      ERROR: Publisher hudson.plugins.svn_tag.SvnTagPublisher aborted due to exception
      java.lang.NullPointerException
        at hudson.plugins.svn_tag.SvnTagPlugin.perform(SvnTagPlugin.java:180)
        at hudson.plugins.svn_tag.SvnTagPublisher.perform(SvnTagPublisher.java:79)
        at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:36)
        at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
        at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:580)
        at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:558)
        at hudson.model.Build$RunnerImpl.cleanUp(Build.java:167)
        at hudson.model.Run.run(Run.java:1295)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
        at hudson.model.ResourceController.execute(ResourceController.java:88)
        at hudson.model.Executor.run(Executor.java:124)
      Finished: FAILURE
    Notice the first line there: the SVN tagger is treating ${branch} as a literal part of the repository URL; it's not substituting the property value. I tried to change my original Repository URL for SVN to use the ${env['branch']} syntax, but that blows up on the original checkout because that syntax isn't parsed at all by the checkout. Help! How do I use a parameterized build to set the SVN URL both for checkout and for tagging my build?

    Read the article

  • configure.in: AM_DISABLE_SHARED doesn't change my Makefile

    - by t2k32316
    I'm extremely new to using Makefiles and autoconf. I'm using the Camellia image library and trying to statically link my code against their libraries. When I run "make" on the Camellia image library, I get libCamellia.a, .so, .la, and .so.0.0.0 files inside my /usr/local/lib directory. This is the command I use to compile my code with their libraries:
      gcc -L/usr/local/lib -lCamellia -o myprogram myprogram.c
    This works fine, but when I try to statically link, this is what I get:
      gcc -static -L/usr/local/lib -lCamellia -o myprogram myprogram.c
      /tmp/cck0pw70.o: In function `main':
      myprogram.c:(.text+0x23): undefined reference to `camLoadPGM'
      myprogram.c:(.text+0x55): undefined reference to `camAllocateImage'
      myprogram.c:(.text+0x97): undefined reference to `camZoom2x'
      myprogram.c:(.text+0x104): undefined reference to `camSavePGM'
      collect2: ld returned 1 exit status
    I want to statically link because I'm trying to modify the Camellia source code and I want to compare my version against theirs. After some googling, I tried adding AM_DISABLE_SHARED to the configure.in file, but after running ./configure I still get the exact same Makefile, and after "make install" I still get the same results as above. What is an easy way to get two versions of my code, one compiled against the original Camellia source and one against my modified version? I think static libraries should work. Is there an easy way to get static libraries working, or are there other simple solutions to my problem? I just don't want to re-"make" and re-"make install" every time I want to compare my version against the original.

    Read the article

  • How can I terminate a system command with alarm in Perl?

    - by rockyurock
    I am running the code snippet below on Windows. The server starts listening continuously after reading from the client. I want to terminate this command after a time period. If I use the alarm() function call within main.pl, it terminates the whole Perl program (here main.pl), so I called this system command by placing it in a separate Perl file and calling that file (alarm.pl) from the original Perl file using the system command. But this way I was unable to capture the output of the system() call, either in the original Perl file or in the called one. Could anybody please let me know how to terminate a system() call, or how to capture its output the way I tried above?
    main.pl:
      my @output = system("alarm.pl");
      print"one iperf completed\n";
      open FILE, ">display.txt" or die $!;
      print FILE @output_1;
      close FILE;
    alarm.pl:
      alarm 30;
      my @output_1 = readpipe("adb shell cd /data/app; ./iperf -u -s -p 5001");
      open FILE, ">display.txt" or die $!;
      print FILE @output_1;
      close FILE;
    In both cases display.txt is always empty.

    Read the article

  • How to build n-layered web architecture with PHP?

    - by Alex
    I have a description and design of a website and I need to redesign it to allow for new requirements. The website's purpose is offering government contracts and bidding opportunities to different businesses. I'm dealing with a 3-tier PHP website comprising a user-interface tier (the client's web browser), a business logic layer (an Apache web server with the PHP engine in it, and a couple of applications running within the web server as well) and a database layer (a local MySQL database).
    Now I need to redesign it to support a distributed n-tier architecture and specify how I would go about it. After long hours of research I came to this solution: the business logic should be separated into a presentation tier and a purely business logic tier to allow for an n-layer architecture (user interface, presentation tier, business logic and data tier). I have decided to use PHP just for the presentation (since the original existing website is in PHP) and use it within the Apache web server. In the business logic I want to use J2EE technology instead of implementing it in PHP (i.e. using a Zend app server and Smarty templates), because J2EE can provide many more essential container services which are essential for the business logic, its robustness, maintainability and the different critical business operations which will be carried out by the government's website. So, in particular, I want to use a JBoss app server with EJB business objects in it, which would provide all the business logic in Java and would interact with the database and so forth. In order to connect PHP on the web server with Java on the app server I'm going to use the PHP/Java Bridge API (or maybe Quercus or SOAP is better?). Finally, I have my data tier with MySQL, which will communicate with the business logic via JDBC. The payment system application in the app server is going to use S??? to talk to the credit card company.
    From your professional point of view, does this sound like a good way of redesigning the original website to allow for an n-tier architecture, considering the specifics of the website and the criticality of its operations (the payment system included)? Or would you personally prefer to use PHP business objects for the business logic as well, instead of J2EE? If you have any wiser recommendation or some alternative, please let me know what is right or wrong in my current solution. Hope to hear your professional advice very soon (I'm new to the area of web development). Thanks in advance

    Read the article

  • Running multiple instances of tomcat in eclipse WTP

    - by lisak
    Hey, the scenario: 10 CATALINA_BASEs, each with its own configuration (always the same port number 8080, but 10 different IP/hostnames on one host via virtual IPs). I created a server in WTP and picked the "Use the custom location" option in the server configuration in Eclipse, so new configuration files were created in workspace/Server/server-name-config/. I set up the server path and deploy path for my catalina base (not the internal .metadata one). After I started it, the new configuration files overwrote the original CATALINA_BASE/conf files I had there - I was glad, it should be like this. But after I made changes to the Eclipse config files in workspace/Server/server-name-config/ and restarted the server, the changes didn't appear in the original files in CATALINA_BASE/conf. What the hell is that? So I set CATALINA_BASE/conf/server.xml to a broken configuration and restarted Tomcat from Eclipse, and it worked! It took the configuration from /Server/server-name-config/server.xml. Then I deleted CATALINA_BASE/conf/server.xml and it said that there is no server.xml in the catalina base! How is that possible?
    I don't understand why the Eclipse WTP developers made the integration so tight. There should just be symbolic links in /Server/server-name-config/ pointing to CATALINA_BASE/conf/ ... as it is, there is a weird system which is totally unpredictable. The changes in /Server/server-name-config/ are not reflected in CATALINA_BASE/conf, which is where the standard bootstrap.jar and the other Catalina classloaders and classes build the server, engine and other objects with their particular settings. Moreover, the CATALINA_BASEs could then be used outside Eclipse. The second problem: I'm setting up various things in CATALINA_BASE/bin/startup.sh and setenv.sh, which is easy because I can use bash for it. Is modifying the VM parameters in the "Open launch configuration" settings then the only way to do that in Eclipse? Sorry for such a huge pile of questions, but I'm annoyed by the fact that it seems much better not to use Eclipse WTP for this because it is very poorly designed, and that's a shame because it would spare me a lot of time. And using the internal .metadata/ instances is an even more terrifying way than the one I described.

    Read the article

  • Regex For Finding Ctypes with Int32

    - by Stefan H
    Hey all, I am looking for a little regex help... I am trying to find all CType(expression, Int32) calls and replace them with CInt(expression). This, however, is proving quite difficult, considering there could be a nested CType(expression, Int32) within the regex match. Does anyone have any ideas for how best to go about doing this? Here is what I have now:
      Dim str As String = "CType((original.Width * CType((targetSize / CType(original.Height, Single)), Single)), Int32)"
      Dim exp As New Regex("CType\((.+), Int32\)")
      str = exp.Replace(str, "CInt($1)")
    But this will match the entire string and replace it. I was thinking of writing a recursive function to find the outermost match and then work inwards, but that still presents a problem with things like
      CType(replaceChars(I), Int32)), Chr(CType(replacementChars(I), Int32)
    Any tips would be appreciated.
    Input:
      returnString.Replace(Chr(CType(replaceChars(I), Int32)), Chr(CType(replacementChars(I), Int32)))
    Output:
      returnString.Replace(Chr(CInt(replaceChars(I))),Chr(CInt(replacementChars(I))))
    Edit: I have been working on it a little more and have a recursive function that I'm still working the kinks out of. Recursion + regex: it kinda hurts.
      Private Function FindReplaceCInts(ByVal strAs As String) As String
          System.Console.WriteLine(String.Format("Testing : {0}", strAs))
          Dim exp As New Regex("CType\((.+), Int32\)")
          If exp.Match(strAs).Success Then
              For Each match As Match In exp.Matches(strAs)
                  If exp.Match(match.Value.Substring(2)).Success Then
                      Dim replaceT As String = match.Value.Substring(2)
                      Dim Witht As String = FindReplaceCInts(match.Value.Substring(2))
                      System.Console.WriteLine(strAs.IndexOf(replaceT))
                      strAs.Replace(replaceT, Witht)
                  End If
              Next
              strAs = exp.Replace(strAs, "CInt($1)")
          End If
          Return strAs
      End Function
    Cheers

    Read the article

  • Help with editing data in entity framework.

    - by Levelbit
    The title of this question is simple because there is no easy explanation for what I'm trying to ask you. I hope you'll understand in the end :). I have two tables in my database: Company and Location (relationship: one to many) and corresponding entity sets. In my WPF application I have a datagrid which I want to fill with locations, and I want to be able to edit every row in a separate window as some form of details view (so I don't want to edit my data in the datagrid). I did this by taking the Location entity from the selected row, creating a new Location entity and then copying the properties from the original entity to the newly created one - something like cloning the object. After editing, if I press OK the changed data is copied back to the original object, and if I press Cancel nothing happens. Of course, you're probably thinking I could use the NoTracking option and the AttachAsModified method, as was mentioned as a solution in some earlier questions (see: http://stackoverflow.com/questions/803022/changing-entities-in-the-entityframework) by Alex James, but let's say I had some problems with that and I have my own reasons for doing it this way. Finally, because the navigation property (Company) of my location entity gets assigned to the newly created location object (during cloning), for some reason an additional object - a copy of the object I want to edit from the datagrid - is created in the object context against my will (a similar problem: http://blogs.msdn.com/alexj/archive/2009/12/02/tip-47-how-fix-up-can-make-it-hard-to-change-relationships.aspx). So, when I call ObjectContext.SaveChanges it inserts an additional row into my database table Location, the same as the one I wanted to edit. I have read a bit about this, but I don't quite understand why that is or how to block or override it. I hope I was as clear as I could be. Please, solutions or some other ideas.

    Read the article

  • Perl: Exporting variables in a subclass

    - by Jonathan
    I have a base class like this:
      package MyClass;
      use vars qw/$ME list of vars/;
      use Exporter;
      @ISA = qw/Exporter/;
      @EXPORT_OK = qw/ many variables & functions/;
      %EXPORT_TAGS = (all => \@EXPORT_OK );
      sub my_method { }
      sub other_methods etc { }
      --- more code---
    I want to subclass MyClass, but only for one method:
      package MySubclass;
      use MyClass;
      use vars qw/@ISA/;
      @ISA = 'MyClass';
      sub my_method { --- new method }
    I want to call MySubclass like I would the original MyClass, and still have access to all of the variables and functions from Exporter. However, I am having problems getting the Exporter variables from the original class, MyClass, to export correctly. Do I need to run Exporter again inside the subclass? That seems redundant and unclear. Example file:
      #!/usr/bin/perl
      use MySubclass /$ME/;
      -- rest of code
    But I get compile errors when I try to import the $ME variable. Any suggestions?

    Read the article

  • VLC desktop streaming

    - by StackedCrooked
    Edit: I stopped using VLC and switched to the GMax FLV Encoder. It does a much better job IMO.
    Original post: I am sending my desktop (screen) as an H264 video stream to another machine that saves it to a file, using the following command lines.
    Sender of the stream:
      vlc -I dummy --sout='#transcode{vcodec=h264,vb=512,scale=0.5} :rtp{mux=ts,dst=192.168.0.1,port=4444}'
    Receiver of the stream:
      vlc -I rc rtp://@:4444 --sout='#std{access=file,mux=ps,dst=/home/user/output.mp4}' --ipv4
    This works, but there are a few issues: the file is not playable with most players. VLC is able to play the file back, but with some weirdness: it takes about 10 seconds before playback actually begins, and seeking doesn't work. Can someone point me in the right direction on how to fix these issues?
    EDIT: I made a little progress. The initial delay in playback is because the player is waiting for a keyframe. By forcing the sender of the stream to create a new keyframe every 4 seconds I could decrease the delay:
      :screen-fps=10 --sout='#transcode{vcodec=h264,venc=x264{keyint=40},vb=512,scale=0.5} :rtp{mux=ts,dst=192.168.0.1,port=4444}'
    The seeking problem is not solved, however, but I understand it a little better. The RTP stream is saved to a file in its original streaming format, which is normally not playable as a regular video file. VLC manages to play this file, but most other players don't. So I need to convert it to a regular video file. I am currently investigating whether I can do this with ffmpeg if I provide it with an SDP file for the recorded stream. All help is welcome!

    Read the article

  • PHP: Problem with filesize

    - by BlaM
    I need a little help here: I get a file from an HTML upload form, and I have a "target" filename in $File. When I do this:
      copy($_FILES['binfile']['tmp_name'], $File);
      echo '<hr>' . filesize($_FILES['binfile']['tmp_name']);
      echo '<hr>' . filesize($File);
    everything works fine - I get the same number twice. However, when I delete the first call to filesize(), I get "0" (zero):
      copy($_FILES['binfile']['tmp_name'], $File);
      echo '<hr>' . filesize($File);
    Any suggestions? What am I doing wrong? Why do I need to get the filesize of the "original" file before I can get the size of the copy? (That's actually what it is: I need to call filesize() for the original file. Neither sleep() nor calling filesize() on another file helps.)
    System: Apache 2.0, PHP 5.2.6, Debian Linux (Lenny)

    Read the article

  • Get the parent id..

    - by tixrus
    I have a bunch of elements like the following:
      <div class="droppableP" id="s-NSW" style="width:78px; height:63px; position: absolute; top: 223px; left: 532px;"> </div>
    They all have class droppableP but different ids, obviously, and I would like to factor the code in this script I am hacking on. The original script just has a specific selector for each one of these divs, but the code is all alike except for the id it does things to, which is either the id of the parent or another div with a related name. Here is the original code, specifically for this div:
      $("#s-NSW > .sensible").droppable( {
          accept : "#i-NSW",
          tolerance : 'intersect',
          activeClass : 'droppable-active',
          hoverClass : 'droppable-hover',
          drop : function() {
              $('#s-NSW').addClass('s-NSW');
              $('#s-NSW').addClass('encastrada'); //can't move any more..
              $('#i-NSW').remove();
              $('#s-NSW').animate( { opacity: 0.25 },200, 'linear');
              checkWin();
          }
      });
    Here is how I would like to factor it so the same code can handle all of them (eventually I will do chaining as well and maybe get rid of the inline styles), but here is my first go:
      $(".droppableP > .sensible").droppable( {
          accept : "#i" + $(this).parent().attr('id').substring(2),
          tolerance : 'intersect',
          activeClass : 'droppable-active',
          hoverClass : 'droppable-hover',
          drop : function() {
              $(this).parent().addClass($(this).parent().attr('id'));
              $(this).parent().addClass('encastrada');
              $("#i" + ($this).parent().attr('id').substring(2)).remove();
              $(this).parent().animate( { opacity: 0.25 },200, 'linear');
              checkWin();
          }
      });
    The error I get is: $(this).parent().attr("id") is undefined. Many thanks. I have browsed related questions; the closest one I understand turned out not to need the parent function at all. I'm kind of a noob, so please don't yell at me too hard if this is a stupid question.

    Read the article

  • Can pdflatex (or any tex package) automatically rescale included images which have been reduced in size?

    - by drfrogsplat
    I'm writing my thesis in LaTeX, generating it with pdflatex. I have a large number of figures, many of which are bitmaps (as opposed to SVG) in PNG/JPEG format. I've generally created them to be fairly high resolution (say 1600x1200-ish) to ensure that whatever size they end up in the document, they'll be at least 300dpi when printed. As I'm writing/laying out the document, I'm including graphics (using \includegraphics from the graphicx package) and setting widths/heights as appropriate (e.g. subfigures are quite small). I don't need the images to be any more than about 300 dpi at best, so where I have shrunk a 1600x1200 image down to say 5cm, the image is now at 800 dpi. So despite including some very small (on the page) images, the PDF is becoming quite large. Is there a way to tell pdflatex or graphicx (or something else involved?) to convert all images to a maximum of 300 dpi, based on the dimensions I'm setting with say \includegraphics[width=2in]{filename}? i.e. so it scales the image to a max of 600x600 pixels as it includes it in the PDF (leaving the original file untouched). I know I can resize the original images with various command line applications, and include the pre-resized versions, but given the images vary in size considerably, it wouldn't be as simple as making sure they're all 300dpi for a constant printed size. It'd also be nice to be able to easily create different versions of PDFs (web vs final print) without resizing images manually, so that the 'web' PDF capped images at say 72-100 dpi while the final print one could cap at 600 (if at all).

    Read the article

  • How can I share variables between a base class and subclass in Perl?

    - by Jonathan
    I have a base class like this:
      package MyClass;
      use vars qw/$ME list of vars/;
      use Exporter;
      @ISA = qw/Exporter/;
      @EXPORT_OK = qw/ many variables & functions/;
      %EXPORT_TAGS = (all => \@EXPORT_OK );
      sub my_method { }
      sub other_methods etc { }
      --- more code---
    I want to subclass MyClass, but only for one method:
      package MySubclass;
      use MyClass;
      use vars qw/@ISA/;
      @ISA = 'MyClass';
      sub my_method { --- new method }
    I want to call MySubclass like I would the original MyClass, and still have access to all of the variables and functions from Exporter. However, I am having problems getting the Exporter variables from the original class, MyClass, to export correctly. Do I need to run Exporter again inside the subclass? That seems redundant and unclear. Example file:
      #!/usr/bin/perl
      use MySubclass /$ME/;
      -- rest of code
    But I get compile errors when I try to import the $ME variable. Any suggestions?

    Read the article

  • Resizing video best practices (frame size)

    - by undefined
    I have read the following, which is from "Best Practices for Encoding Video with the VP6 Codec" on the Adobe website (http://www.adobe.com/devnet/flash/articles/encoding_video_print.html). It is talking about common video sizes (320x240, 640x480):
      "Although these ratios are standard, and should be used to avoid distorting the video, the size of the encoded video is not set in stone. The original web video sizes used heights and widths that were evenly divisible by 16. This was mandatory for many early codecs. Although this is not necessary for modern codecs, you should stick to even heights and widths."
    What do they mean by 'even heights and widths'? I am thinking about encoding my video at 400x300 to make it slightly bigger - this is still 4:3 format - but should I just stick at 320x240 and resize it on the screen? Clearly there are benefits to that in terms of storage size and delivery costs. In some places on my site I want to show the video at 400x300, but in others I want it to play full screen, so this is why I am wondering whether a larger original size (400x300) will give better results when blown up. Any thoughts?

    Read the article

  • Filtering Wikipedia's XML dump: error on some accents

    - by streetpc
    I'm trying to index Wikipedia dumps. My SAX parser makes Article objects from the XML with only the fields I care about, then sends them to my ArticleSink, which produces Lucene Documents. I want to filter special/meta pages like those prefixed with Category: or Wikipedia:, so I made an array of those prefixes and test the title of each page against this array in my ArticleSink, using article.getTitle.startsWith(prefix). In English, everything works fine: I get a Lucene index with all the pages except those with matching prefixes. In French, the prefixes with no accent also work (i.e. they filter the corresponding pages), some of the accented prefixes don't work at all (like Catégorie:), and some work most of the time but fail on some pages (like Wikipédia:), yet I cannot see any difference between the corresponding lines (in less). I can't really inspect all the differences in the file because of its size (5 GB), but it looks like correct UTF-8 XML. If I take a portion of the file using grep or head, the accents are correct (even on the offending pages, the <title>Catégorie:something</title> is correctly displayed by grep). On the other hand, when I recreate a wiki XML by tail/head-cutting the original file, the same page (here Catégorie:Rock par ville) gets filtered in the small file but not in the original… Any idea?
    Alternatives I tried - getting the file (commented lines were tried without success):
      FileInputStream fis = new FileInputStream(new File(xmlFileName));
      //ReaderInputStream ris = ReaderInputStream.forceEncodingInputStream(fis, "UTF-8" );
      //(custom function opening the stream, reading it as UTF-8 into a Reader and returning another byte stream)
      //InputSource is = new InputSource( fis ); is.setEncoding("UTF-8");
      parser.parse(fis, handler);
    Filtered prefixes:
      ignoredPrefix = new String[] {"Catégorie:", "Modèle:", "Wikipédia:",
          "Cat\uFFFDgorie:", "Mod\uFFFDle:", "Wikip\uFFFDdia:", //invalid char
          "Catégorie:", "Modèle:", "Wikipédia:", // UTF-8 as ISO-8859-1
          "Image:", "Portail:", "Fichier:", "Aide:", "Projet:"}; // those last always work
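    Two things might be worth ruling out here, sketched below in a small self-contained test. This is not the asker's code: the class name, the file argument and the single hard-coded prefix are made up for illustration. First, handing the parser a Reader that decodes the bytes as UTF-8 explicitly takes any encoding guessing out of the equation. Second, SAX is allowed to deliver the text of one <title> element across several characters() callbacks, so a startsWith() test made on a single chunk can fail on just some pages; buffering until endElement avoids that, which would fit the "fails only on some pages" symptom.
      import java.io.BufferedReader;
      import java.io.FileInputStream;
      import java.io.InputStreamReader;
      import java.io.Reader;
      import javax.xml.parsers.SAXParser;
      import javax.xml.parsers.SAXParserFactory;
      import org.xml.sax.Attributes;
      import org.xml.sax.InputSource;
      import org.xml.sax.helpers.DefaultHandler;

      public class DumpTitleCheck {
          // Collects the complete <title> text before testing the prefix,
          // since characters() may be called several times per element.
          static class TitleHandler extends DefaultHandler {
              private final StringBuilder title = new StringBuilder();
              private boolean inTitle = false;
              public void startElement(String uri, String local, String qName, Attributes atts) {
                  if ("title".equals(qName)) { inTitle = true; title.setLength(0); }
              }
              public void characters(char[] ch, int start, int length) {
                  if (inTitle) title.append(ch, start, length);
              }
              public void endElement(String uri, String local, String qName) {
                  if ("title".equals(qName)) {
                      inTitle = false;
                      if (title.toString().startsWith("Cat\u00e9gorie:")) {
                          System.out.println("filtered: " + title);
                      }
                  }
              }
          }
          public static void main(String[] args) throws Exception {
              // Decode the dump explicitly as UTF-8; a Reader-backed InputSource
              // keeps the parser from choosing the decoder itself.
              Reader reader = new BufferedReader(new InputStreamReader(
                      new FileInputStream(args[0]), "UTF-8"));
              SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
              parser.parse(new InputSource(reader), new TitleHandler());
          }
      }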

    Read the article

  • CAShapeLayer slowing down interface rotation

    - by MrMage
    Hi, I am trying to move some custom drawing code from a view into a CAShapeLayer, which then gets added as a sublayer to the original view's CALayer. This works well, but when rotating the device the animation starts to stutter: you just see the frame in the original orientation and then the final orientation, with at most one frame in between - not smooth at all. Slide-in and slide-out animations of the corresponding UIViewController are a bit jerky too (but not as much). All the CAShapeLayer has in its path is one CGPathAddRect; it is set to be opaque, its opacity is 1.0f and the fillColor is set to opaque blue. When drawing the path directly in the view's drawRect method, however, the animation is smooth. So I suppose it has something to do with the CAShapeLayer being animated during the rotation. Could you tell me how to either get rid of the jerkiness or just hide the CAShapeLayer while animating? Going back to drawing CGPaths directly is not an option for me because I rely on the ability of CAShapeLayer to animate its path (it is not animated in my tests with rotating the view).
    /update: this also happens when the rotating UIViewController's view contains a view with a subclass of CAGradientLayer as its layerClass (e.g. a view with a gradient layer as background).
    Cheers, MrMage

    Read the article

  • Git checkout doesn't change anything, and it's getting very frustrating

    - by Josh
    I really like git. At least, I like the idea of git. Being able to check out my master project as a separate branch where I can change whatever I want without risk of screwing everything else up is awesome. But it's not working. Every time I check out a branch from another branch, make changes to the one branch, and then check out the original branch, I still have all the files and changes that happened in the other branch. This is getting extremely frustrating. I've read that this can happen when you have files open in the IDE while doing this, but I've been pretty careful about that: I've closed the files in the IDE, closed the IDE, and shut down my Rails server before switching branches, and this still happens. Also, running 'git clean -f' either deletes everything that happened after some arbitrary commit (and randomly, at that), or, as in the latest case, didn't change anything back to its original state. I thought I was using git correctly, but at this point I'm at my wit's end. I'm trying to work with a bunch of experimental code using a stable version of my project, but I keep having to manually track down and revert all the changes I made. Any ideas or suggestions?
      git checkout -b photo_tagging
      git branch   # to make sure it's right
      # make a bunch of changes, creations, etc
      git status   # see what's changed since before
      git add .    # approve of the changes, I guess, since if I do git commit after this, it says no changes
      git commit -m 'these are changes I made'
      git checkout master
      git branch   #=> *master
      # look at files, tags_controller is still there, added in photo_tagging
      # and code added in photo_tagging branch is still there in *master
    This seems to happen whether I do a commit on the branch or not.

    Read the article

  • Best Pattern for AllowUnsafeUpdates

    - by webwires
    So far, in my research I have seen that it is unwise to set AllowUnsafeUpdates on a GET request, to avoid cross-site scripting. But if it is required to allow this, what is the proper way to handle the situation to mitigate any exposure? Here is my best first guess at a reliable pattern if you absolutely need to allow web or site updates on a GET request.
    Best practice?
      protected override void OnLoad(System.EventArgs e)
      {
          if(Request.HttpMethod == "POST")
          {
              SPUtility.ValidateFormDigest(); // will automatically set AllowUnsafeUpdates to true
          }
          // If not a POST then AllowUnsafeUpdates should be used only
          // at the point of update and reset immediately after finished
          // NOTE: Is this true? How is cross-site scripting used on GET
          // and what mitigates the vulnerability?
      }
      // Point of item update
      SPSecurity.RunWithElevatedPrivileges(delegate()
      {
          using(SPSite site = new SPSite(SPContext.Current.Site.Url))
          {
              using (SPWeb web = site.RootWeb)
              {
                  bool allowUpdates = web.AllowUnsafeUpdates; //store original value
                  web.AllowUnsafeUpdates = true;
                  //... Do something and call Update() ...
                  web.AllowUnsafeUpdates = allowUpdates; //restore original value
              }
          }
      });
    Feedback on the best pattern is appreciated.

    Read the article

  • what is wrong in java AES decrypt function?

    - by rohit
    Hi, I modified the code available at http://java.sun.com/developer/technicalArticles/Security/AES/AES_v1.html and made encrypt and decrypt methods in my program, but I am getting a BadPaddingException, and the function is returning null. Why is this happening? What is going wrong? Please help me. These are the variables I am using:
      kgen = KeyGenerator.getInstance("AES");
      kgen.init(128);
      raw = new byte[]{(byte)0x00,(byte)0x11,(byte)0x22,(byte)0x33,(byte)0x44,(byte)0x55,(byte)0x66,(byte)0x77,(byte)0x88,(byte)0x99,(byte)0xaa,(byte)0xbb,(byte)0xcc,(byte)0xdd,(byte)0xee,(byte)0xff};
      skeySpec = new SecretKeySpec(raw, "AES");
      cipher = Cipher.getInstance("AES");
      plainText=null;
      cipherText=null;
    The following is the decrypt function:
      public String decrypt(String cipherText)
      {
          try
          {
              cipher.init(Cipher.DECRYPT_MODE, skeySpec);
              byte[] original = cipher.doFinal(cipherText.getBytes());
              plainText = new String(original);
          }
          catch(BadPaddingException e)
          {
          }
          return plainText;
      }
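    For what it's worth, a frequent cause of BadPaddingException in this kind of wrapper is round-tripping the raw ciphertext bytes through new String(...) and String.getBytes(), which silently corrupts them. Below is a minimal, self-contained sketch - not the asker's class; the class name is made up, the key is the one from the question, and java.util.Base64 assumes Java 8 or later - that keeps the ciphertext as Base64 text, so the decrypt side gets back exactly the bytes that came out of the encrypting Cipher:
      import java.util.Base64;
      import javax.crypto.Cipher;
      import javax.crypto.spec.SecretKeySpec;

      public class AesRoundTripSketch {

          private static final byte[] RAW_KEY = {
              (byte) 0x00, (byte) 0x11, (byte) 0x22, (byte) 0x33,
              (byte) 0x44, (byte) 0x55, (byte) 0x66, (byte) 0x77,
              (byte) 0x88, (byte) 0x99, (byte) 0xaa, (byte) 0xbb,
              (byte) 0xcc, (byte) 0xdd, (byte) 0xee, (byte) 0xff };

          // Encrypt UTF-8 text and return the ciphertext as Base64, so it can
          // be stored or passed around as a String without losing any bytes.
          static String encrypt(String plainText) throws Exception {
              Cipher cipher = Cipher.getInstance("AES");
              cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(RAW_KEY, "AES"));
              byte[] cipherBytes = cipher.doFinal(plainText.getBytes("UTF-8"));
              return Base64.getEncoder().encodeToString(cipherBytes);
          }

          // Decode the Base64 text back to the exact ciphertext bytes before
          // calling doFinal; calling getBytes() on a String holding raw
          // ciphertext is what typically triggers BadPaddingException.
          static String decrypt(String cipherTextBase64) throws Exception {
              Cipher cipher = Cipher.getInstance("AES");
              cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(RAW_KEY, "AES"));
              byte[] original = cipher.doFinal(Base64.getDecoder().decode(cipherTextBase64));
              return new String(original, "UTF-8");
          }

          public static void main(String[] args) throws Exception {
              String secret = encrypt("some plain text");
              System.out.println(secret + " -> " + decrypt(secret));
          }
      }
    The same idea works with any other byte-to-text encoding (hex, or a third-party Base64 class on older JDKs); the point is only that ciphertext has to travel as bytes or as an explicit encoding of bytes, never as an ordinary String.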

    Read the article

  • Ruby page loading very very slowly - how should I speed it up?

    - by Elliot
    Hey guys, I'm going to try to describe the code in my view without actually posting all the garbage. It has a standard shell (header, footer etc. in the layout); this is also where the sub-navigation lives, which is based on a loop (to find the number of options) - on this page we have 6 sub-nav links. Then in the index view we have a third-level nav, with 3 links that use JavaScript to show and hide divs on the page. This means each of those original 6 options has its own third-level nav, each with its own 3 div pages. These three pages/divs contain the input form for creating a record in Rails, and the other 2 pages show the records in different assortments. ALL of this code lives on one page (aside from the shell). The original sub-nav uses a JavaScript tab solution to browse through all of it... (this means it's about 6 divs, each containing 4 divs of function - so about 24 heavy divs). Loading it seems to take forever, although once loaded it is extremely fast (obviously). My big question is how I should attack this. I don't know Ajax, although I imagine it'd be a good solution for loading the tabs when clicked. Thanks! Elliot

    Read the article

  • Creating a DataTable by filtering another DataTable

    - by Jeff Dege
    I'm working on a system that currently has a fairly complicated function that returns a DataTable, which it then binds to a GUI control on a ASP.NET WebForm. My problem is that I need to filter the data returned - some of the data that is being returned should not be displayed to the user. I'm aware of DataTable.select(), but that's not really what I need. First, it returns an array of DataRows, and I need a DataTable, so I can databind it to the GUI control. But more importantly, the filtering I need to do isn't something that can be easily put into a simple expression. I have an array of the elements which I do not want displayed, and I need to compare each element from the DataTable against that array. What I could do, of course, is to create a new DataTable, reading everything out of the original, adding to the new what is appropriate, then databinding the new to the GUI control. But that just seems wrong, somehow. In this case, the number of elements in the original DataTable aren't likely to be enough that copying them all in memory is going to cause too much trouble, but I'm wondering if there is another way. Does the .NET DataTable have functionality that would allow me to filter via a callback function?

    Read the article

  • Convert ADO.Net EF Connection String To Be SQL Azure Cloud Connection String Compatible!?

    - by Goober
    The Scenario: I have written a Silverlight 3 application that uses a SQL Server database. I'm moving the application onto the cloud (the Azure platform). In order to do this I have had to set up my database on SQL Azure. I am using the ADO.NET Entity Framework to model my database. I have got the application running on the cloud, but I cannot get it to connect to the database. Below is the original localhost connection string, followed by the SQL Azure connection string that isn't working. The application itself runs fine, but it fails when trying to retrieve data.
    The original localhost connection string:
      <add name="InmZenEntities" connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
          provider=System.Data.SqlClient;
          provider connection string=&quot;
          Data Source=localhost;
          Initial Catalog=InmarsatZenith;
          Integrated Security=True;
          MultipleActiveResultSets=True&quot;"
          providerName="System.Data.EntityClient" />
    The converted SQL Azure connection string:
      <add name="InmZenEntities" connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
          provider=System.Data.SqlClient;
          provider connection string=&quot;
          Server=tcp:MYSERVER.ctp.database.windows.net;
          Database=InmarsatZenith;
          UserID=MYUSERID;Password=MYPASSWORD;
          Trusted_Connection=False;
          MultipleActiveResultSets=True&quot;"
          providerName="System.Data.EntityClient" />
    The Question: Does anyone know if this connection string for SQL Azure is correct? Help greatly appreciated.

    Read the article

  • Problem with persisting a collection, that references an internal property, at design time in winfor

    - by Jules
    ETA: Jesus, I'm sick of this. This problem was specifically about persisting an interface collection, but on further testing it doesn't work for a normal collection either. Here's some even simpler code:
      Public Class Anger
      End Class

      Public Class MyButton
          Inherits Button

          Private _Annoyance As List(Of Anger)
          <DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
          Public ReadOnly Property Annoyance() As List(Of Anger)
              Get
                  Return _Annoyance
              End Get
          End Property

          Private _InternalAnger As Anger
          <DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
          Public ReadOnly Property InternalAnger() As Anger
              Get
                  Return Me._InternalAnger
              End Get
          End Property

          Public Sub New()
              Me._Annoyance = New List(Of Anger)
              Me._InternalAnger = New Anger
              Me._Annoyance.Add(Me._InternalAnger)
          End Sub
      End Class
    The designer screws up the persistence code in the same way as in the original problem.
    ---- Original problem ----
    The easiest way to explain this problem is to show you some code:
      Public Interface IAmAnnoyed
      End Interface

      Public Class IAmAnnoyedCollection
          Inherits ObjectModel.Collection(Of IAmAnnoyed)
      End Class

      Public Class Anger
          Implements IAmAnnoyed
      End Class

      Public Class MyButton
          Inherits Button

          Private _Annoyance As IAmAnnoyedCollection
          <DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
          Public ReadOnly Property Annoyance() As IAmAnnoyedCollection
              Get
                  Return _Annoyance
              End Get
          End Property

          Private _InternalAnger As Anger
          <DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
          Public ReadOnly Property InternalAnger() As Anger
              Get
                  Return Me._InternalAnger
              End Get
          End Property

          Public Sub New()
              Me._Annoyance = New IAmAnnoyedCollection
              Me._InternalAnger = New Anger
              Me._Annoyance.Add(Me._InternalAnger)
          End Sub
      End Class
    And this is the code that the designer generates:
      Private Sub InitializeComponent()
          Dim Anger1 As Anger = New Anger
          Me.MyButton1 = New MyButton
          '
          'MyButton1
          '
          Me.MyButton1.Annoyance.Add(Anger1) ' Should be: Me.MyButton1.Annoyance.Add(Me.MyButton1.InternalAnger)
          '
          'Form1
          '
          Me.Controls.Add(Me.MyButton1)
      End Sub
    I've added a comment to the above to show how the code should have been generated. Now, if I dispense with the interface and just have a collection of Anger, then it persists correctly. Any ideas?

    Read the article

  • Distributed Lock Service over MySql/GigaSpaces/Netapp

    - by ripper234
    Disclaimer: I already asked this question, but without the deployment requirement. I got an answer that got 3 upvotes, and when I edited the question to include the deployment requirement that answer became irrelevant. The reason I'm resubmitting is that SO considers the original question 'answered', even though I got no meaningful upvoted answer; because it is marked answered, it doesn't show up on the 'unanswered questions' tab. I opened a uservoice submission about this problem.
    Which distributed lock service would you use? Requirements are:
    - A mutual exclusion (lock) that can be seen from different processes/machines
    - lock...release semantics
    - Automatic lock release after a certain timeout: if the lock holder dies, the lock is automatically freed after X seconds
    - Java implementation
    - Easy deployment: must not require complicated deployment beyond either Netapp, MySql or GigaSpaces, and must play well with those products (especially GigaSpaces - this is why Terracotta was ruled out)
    Nice to have:
    - .NET implementation
    - If it's free: deadlock detection / mitigation
    I'm not interested in answers like "it can be done over a database" or "it can be done over JavaSpaces" - I know. Relevant answers should only contain a ready, out-of-the-box, proven implementation.

    Read the article

  • Saving NSBitmapImageRep as NSBMPFileType file. Wrong BMP headers and bitmap content

    - by niko34
    I save an NSBitmapImageRep to a BMP file (Snow Leopard). It seems OK when I open it on Mac OS, but it produces an error on my multimedia device (which can show any BMP file from the internet). I cannot figure out what is wrong, but when I look inside the file (with the cool Hex Fiend app on Mac OS), two things are wrong:
    - the header has a wrong value for the biHeight parameter: 4294966216 (hex C8FBFFFF), while the biWidth parameter is correct (1920)
    - the first pixel in the bitmap content (after the 54-byte header in the BMP format) corresponds to the upper-left corner of the original image; in the original BMP file, and as specified by the BMP format, it should be the bottom-left corner pixel first
    To explain the full workflow in my app: I have an NSImageView where I can drag a BMP image. This view is bound to an NSImage. After a drag & drop I have an action to save this image (with some text drawn over it) to a BMP file. Here's the code for saving the new BMP file:
      CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
      CGContextRef context = CGBitmapContextCreate(NULL, (int)1920, (int)1080, 8, 4*(int)1920, colorSpace, kCGImageAlphaNoneSkipLast);
      [duneScreenView drawBackgroundWithDuneFolder:self inContext:context inRect:NSMakeRect(0,0,1920,1080) needScale:NO];
      if(folderType==DXFolderTypeMovie) {
          [duneScreenView drawSynopsisContentWithDuneFolder:self inContext:context inRect:NSMakeRect(0,0,1920,1080) withScale:1.0];
      }
      CGImageRef backgroundImageRef = CGBitmapContextCreateImage(context);
      NSBitmapImageRep *bitmapBackgroundImageRef = [[NSBitmapImageRep alloc] initWithCGImage:backgroundImageRef];
      NSData *data = [destinationBitmap representationUsingType:NSBMPFileType properties:nil];
      [data writeToFile:[NSString stringWithFormat:@"%@/%@", folderPath, backgroundPath] atomically:YES];
    The duneScreenView drawSynopsisContentWithDuneFolder method uses CGContextDrawImage to draw the image, and the duneScreenView drawSynopsis method uses Core Text to draw some text in the same context. Do you know what's wrong?

    Read the article
