Search Results

Search found 14745 results on 590 pages for 'setting'.

Page 492 of 590

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport that matches the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences. Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting? For this (magnification) case, the DX9 code is using this (MinFilter,MipFilter set similarly): Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear); and the GDI+ path is using: g.InterpolationMode = InterpolationMode.Bilinear; I thought that "Bilinear Interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference - which makes sense given the description of "added prefiltering for shrinking") Followup Question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!

    Read the article

  • Delphi Pascal / Windows API - Small problem with SetFilePointerEx and parameter FILE_END

    - by SuicideClutchX2
    I know I am about to be slapped by at least one person who was helping me with this API. I have been able to use SetFilePointerEx just fine when setting the position only. SetFilePointerEx(PD,512,@PositionVar,FILE_BEGIN); SetFilePointerEx(PD,0,@PositionVar,FILE_CURRENT); Both work: I can set positions and even check my current one. But when I use FILE_END as per the documentation, it fails no matter what the second parameter is and whether or not I provide a pointer for the third parameter, even on a valid handle that many other operations are able to use without fail. For example: SetFailed := SetFilePointerEx(PD,0,@PositionVar,FILE_END); SetFailed := SetFilePointerEx(PD,0,nil,FILE_END); Whatever I put, it fails. I am working with a handle to a physical disk and it most definitely has an end. SetFilePointer works just fine; it's just a little more trouble than I would like. It's not the end of the world, but what's happening?
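
    A possible workaround, sketched on the assumption that the failure is specific to the raw physical-disk handle (whose driver may not support end-relative seeks): ask the driver for the disk length via IOCTL_DISK_GET_LENGTH_INFO and seek from FILE_BEGIN instead. The constant and record are declared locally since they are not in the stock Windows unit; error handling is minimal.

        procedure SeekNearEnd(PD: THandle);
        const
          IOCTL_DISK_GET_LENGTH_INFO = $0007405C;
        type
          TGetLengthInformation = record
            Length: Int64;
          end;
        var
          LenInfo: TGetLengthInformation;
          BytesReturned: DWORD;
          NewPos: Int64;
        begin
          // Ask the disk driver for the total length instead of relying on FILE_END
          if DeviceIoControl(PD, IOCTL_DISK_GET_LENGTH_INFO, nil, 0,
                             @LenInfo, SizeOf(LenInfo), BytesReturned, nil) then
            // e.g. position 512 bytes before the end, measured from the beginning
            SetFilePointerEx(PD, LenInfo.Length - 512, @NewPos, FILE_BEGIN);
        end;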

    Read the article

  • Apply CSS style to anchor problem

    - by Jake
    Using jQuery I have a clickable tab mechanism made of nothing but anchor tags that return false but call JavaScript functions to run some events on the page. The problem is that I am using jQuery to apply an opacity style to the active anchor, while the other sibling anchors get a lower opacity. My code looks like this: $("#menutab li a").click(function(){ $(this).animate({opacity:'1'},1000); $(this).siblings().animate({opacity:'.25'},1000); }); I would think this code would act only on the clicked element, apply that CSS style to it, and apply the other style to every anchor tag except the clicked one. It kind of does that, but it also leaves the earlier clicked element at opacity 1: if I click an element it sets its opacity to 1, and then if I click another one it sets that one's opacity to 1 while leaving the earlier clicked one at 1 as well, instead of setting it to .25 like the others. Edit: I changed the above code to: $("#menutab ul li").click(function(){ $(this).children().animate({opacity:'1'},1000); $(this).siblings().children().animate({opacity:'.25'},1000); }); and now I get the desired effect, except that the first anchor in the list doesn't follow the event rules when clicked. When the first one is clicked it's as if the click event is not triggered, because no opacity changes, which I don't understand.
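
    One possible explanation for the original version: if each anchor sits in its own li, the clicked anchor has no sibling anchors, so the second animate() call matches nothing. A minimal sketch under that assumption, dimming every other tab link explicitly instead of relying on siblings():

        $("#menutab li a").click(function () {
            // Dim every tab link except the one just clicked
            $("#menutab li a").not(this).animate({opacity: 0.25}, 1000);
            // Highlight the clicked link
            $(this).animate({opacity: 1}, 1000);
            return false;
        });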

    Read the article

  • DD_belatedPNG.js - how to access the vml object? this is for a PNG image-swap.

    - by akc
    I am trying to use Drew Dillard's awesome DD_belatedPNG fix + jQuery to achieve a run-of-the-mill image-swap on hover -- but with PNGs, and to work on IE6. Example: <a id="thelink" href="blah.html"><img src="f-u-ie6.png" /></a> Since DD's script sets the visibility of the original image to "hidden", you can't effectively hover over it. A lot of people, I have noticed, are thwarted by this limitation. Enough so that Drew mentioned he would try to get a work-around into the next version of his PNG fix. Well, in the meantime, I thought I could get around this by handling the hover event on the image's parent instead. So onmouseover, I would hide the VML object created by DD_belatedPNG while setting a background image on "thelink", and onmouseout, show the VML object again and set the background image to nothing. The following code was just to see if I could access the VML object, but it does not work on the VML. It hides all manner of other children, but not the VML. Any ideas? $(document).ready(function(){ $("thelink").hover(function() { $(this).children().attr({ style: "visibility:hidden" }); }, function() { $(this).children().attr({ style: "visibility:visible" }); }); }); Alternatively, can anyone suggest a great PNG image-swap method? I know that you can swap a background image of a link. But you still need to have something inside the A tag. That's not my case. Also, you could put a transparent GIF in the A tag and have the background image swapped to achieve the effect, but I really don't want to do that. Thanks for your insights!
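
    One thing worth checking before fighting the VML: $("thelink") selects elements with the tag name "thelink"; the id selector needs a hash. A rough hover sketch under that assumption (the hover image file name is purely illustrative):

        $(document).ready(function () {
            $("#thelink").hover(function () {
                // Hide whatever DD_belatedPNG generated inside the anchor
                $(this).children().css("visibility", "hidden");
                // Swap in a hypothetical hover image as the anchor's background
                $(this).css("background-image", "url(f-u-ie6-hover.png)");
            }, function () {
                $(this).children().css("visibility", "visible");
                $(this).css("background-image", "none");
            });
        });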

    Read the article

  • SharePoint randomly replacing file names in web parts?

    - by nvuono
    Ok SharePoint is driving me crazy and I need to see if anyone has encountered a similar problem or knows of a solution: I have a content editor webpart with some HTML including links to PDF files that I've modified slightly to append an employee number querystring ie: <a href="http://moss.company.com/group/home/EPermits /Blank%20Form%20Templates/_blank_breach_permit.pdf?empNum=">New Breach Permit</a> And SharePoint seems to randomly replace the filename with aab04168 or some other similar characters: <a href="http://moss.company.com/group/home/EPermits /Blank%20Form%20Templates/aab04168?empNum=">New Breach Permit</a> After this happened a few times with no explanation I tried changing the content editor webpart to look directly at a documentLinks.html file located in the Shared Documents folder of the SharePoint site and guess what... SharePoint edited that document and replaced my filenames with random characters in there too! Figuring that filenames beginning with an underscore could be triggering some internal SharePoint procedures I've renamed all the files to remove the starting underscore--unfortunately the problem isn't immediately reproducible and I'm waiting right now to see if I run into any more trouble. edit: the underscore in the filename didn't help... my documentLinks.html wound up getting modified and all the hrefs were replaced with random characters again. Now I'm setting the hrefs in javascript with the filename text concatenated together from multiple strings. linkEle.href = ".../EPermits/Blank%20Form%20Templates/blank" + "_Chemical_Usage.pdf?empNum=" + empNumber;

    Read the article

  • Switching languages on a website with PHP

    - by jnkrois
    Hello everybody, I'm just looking for some advice. I'm creating a website that offers (at least) 2 languages. The way I'm setting it up is by using XML files for the language and PHP to retrieve the values in the XML nodes. Say you have an XML file being loaded as follows: <?php $lang = "en"; $xmlFile = simplexml_load_file("$lang/main.xml"); ?> Once the file contents are available, I just output each node into an HTML tag like so: <li><?php echo $xmlFile->navigation->home; ?></li> which in turn is equal to: <li><a href="#">Home</a></li> as a nav bar link. Now, the way in which I'm switching languages is by changing the value of the "$lang" variable through a "$_POST", like so: if(isset($_POST['es'])){ $lang = "es"; }elseif(isset($_POST['en'])){ $lang = "en"; } The value of the "$lang" variable is reset and the new file is loaded, loading all the new nodes from the new XML file as well, hence changing the language. I'm just wondering if there is another way to reset the "$lang" variable using something else, other than "$_POST" or "$_GET". I don't want to use a query string either. I know I could use JavaScript or jQuery to achieve this, but I'd like to make the site not too dependent on JavaScript. I'd appreciate any ideas or advice. Thanks
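
    If the goal is to choose a language without $_POST, $_GET, or a query string, the browser's Accept-Language header is one alternative for the initial detection, with a cookie to remember an explicit choice. A rough sketch (the cookie name and the two-language whitelist are assumptions):

        <?php
        $supported = array('en', 'es');
        $lang = 'en'; // default

        // 1) A previously saved preference wins
        if (isset($_COOKIE['site_lang']) && in_array($_COOKIE['site_lang'], $supported)) {
            $lang = $_COOKIE['site_lang'];
        // 2) Otherwise fall back to the browser's Accept-Language header
        } elseif (isset($_SERVER['HTTP_ACCEPT_LANGUAGE'])) {
            $prefix = strtolower(substr($_SERVER['HTTP_ACCEPT_LANGUAGE'], 0, 2));
            if (in_array($prefix, $supported)) {
                $lang = $prefix;
            }
        }

        $xmlFile = simplexml_load_file("$lang/main.xml");
        ?>

    Note that letting the visitor switch languages from a link still needs some request data (a POST, GET, or a small handler that sets the cookie); the header only covers the first visit.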

    Read the article

  • CakePHP: Missing database table

    - by Justin
    I have a CakePHP application that is running fine locally. I uploaded it to a production server and the first page that uses a database connection gives the "Missing Database Table" error. When I look at the controller dump, it's complaining about the first table. I've tried a variety of things to fix this problem, with no luck:

      - I've confirmed that at the command line I can log in with the MySQL credentials given in database.php
      - I've confirmed this table exists
      - I've tried using the MySQL root credentials (temporarily) to see if the problem lies with the user's permissions; the same error appeared
      - My debug level is currently set to 3
      - I've deleted the entire contents of /app/tmp/cache
      - I've set 777 permissions on /app/tmp*
      - I've confirmed that I can run DESCRIBE commands at the MySQL command line when logged in with the MySQL credentials used by the application
      - I've verified that the CakePHP log file only contains the error I'm seeing in the browser window
      - I've tried all the suggestions I could find in similar postings on SO
      - I've Googled around and didn't find any other ideas

    I think I've eliminated the obvious problems and my research isn't turning anything up. I feel like I'm missing something obvious. Any ideas?

    Read the article

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is through memory, so we build up the request and then read the stream into ExpertPDF and then write the bits to file. All the files we have been requesting so far are just plain HTML documents. Our designers update CSS files or change the HTML and rerequest the documents as PDFs, but often times, things are getting cached. Take, for example, if I rename the only CSS file and view the HTML page through a web browser, the page looks broke because the CSS doesn't exist. But if I request that page through the PDF Generator, it still looks ok, which means somewhere the CSS is cached. Here's the relevant PDF creation code: // Create a request HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url); request.UserAgent = "IE 8.0"; request.ContentType = "application/x-www-form-urlencoded"; request.Method = "GET"; // Send the request HttpWebResponse resp = (HttpWebResponse)request.GetResponse(); if (resp.IsFromCache) { System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!"); } else { System.Web.HttpContext.Current.Trace.Write("not from cache"); } // Read the response pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf"); When I check the trace file, nothing is being loaded from cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere or something I can do to the request object to do a hard refresh of all resources? UPDATE I put ?foo at the end of my style href links and this updates the CSS everytime. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to do this inelegant solution?
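
    Since the manual ?foo trick already works, one option is to automate it: rewrite the fetched HTML before handing it to ExpertPDF, appending a changing token to every stylesheet href. A rough sketch built around the same SavePdfFromHtmlStream call already in use (it assumes the usual System.IO/System.Text usings, and the regex assumes double-quoted .css hrefs):

        // Read the response into a string so the hrefs can be rewritten
        string html;
        using (StreamReader reader = new StreamReader(resp.GetResponseStream(), System.Text.Encoding.UTF8))
        {
            html = reader.ReadToEnd();
        }

        // Append a cache-busting token to each stylesheet reference
        string busted = System.Text.RegularExpressions.Regex.Replace(
            html,
            "(href=\"[^\"]+\\.css)",
            "$1?v=" + DateTime.Now.Ticks,
            System.Text.RegularExpressions.RegexOptions.IgnoreCase);

        // Feed the modified markup back through the existing stream-based call
        using (MemoryStream ms = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(busted)))
        {
            pdf.SavePdfFromHtmlStream(ms, System.Text.Encoding.UTF8, "Output.pdf");
        }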

    Read the article

  • String Length Evaluating Incorrectly

    - by Justin R.
    My coworker and I are debugging an issue in a WCF service he's working on where a string's length isn't being evaluated correctly. He is running this method to unit test a method in his WCF service: // Unit test method public void RemoveAppGroupTest() { string addGroup = "TestGroup"; string status = string.Empty; string message = string.Empty; appActiveDirectoryServicesClient.RemoveAppGroup("AOD", addGroup, ref status, ref message); } // Inside the WCF service [OperationBehavior(Impersonation = ImpersonationOption.Required)] public void RemoveAppGroup(string AppName, string GroupName, ref string Status, ref string Message) { string accessOnDemandDomain = "MyDomain"; RemoveAppGroupFromDomain(AppName, accessOnDemandDomain, GroupName, ref Status, ref Message); } public AppActiveDirectoryDomain(string AppName, string DomainName) { if (string.IsNullOrEmpty(AppName)) { throw new ArgumentNullException("AppName", "You must specify an application name"); } } We tried to step into the .NET source code to see what value string.IsNullOrEmpty was receiving, but the IDE printed this message when we attempted to evaluate the variable: 'Cannot obtain value of local or argument 'value' as it is not available at this instruction pointer, possibly because it has been optimized away.' (None of the projects involved have optimizations enabled). So, we decided to try explicitly setting the value of the variable inside the method itself, immediately before the length check -- but that didn't help. // Lets try this again. public AppActiveDirectoryDomain(string AppName, string DomainName) { // Explicitly set the value for testing purposes. AppName = "AOD"; if (AppName == null) { throw new ArgumentNullException("AppName", "You must specify an application name"); } if (AppName.Length == 0) { // This exception gets thrown, even though it obviously isn't a zero length string. throw new ArgumentNullException("AppName", "You must specify an application name"); } } We're really pulling our hair out on this one. Has anyone else experienced behavior like this? Any tips on debugging it?

    Read the article

  • Detect remote charset in php

    - by yallaa
    Hello, I would like to determine a remote page's encoding through detection of the Content-Type meta tag <meta http-equiv="Content-Type" content="text/html; charset=XXXXX" /> if present. I retrieve the remote page and try a regex to find the required setting. I am still learning, hence the problem below. Here is what I have: $EncStart = 'charset='; $EncEnd = '" \/\>'; preg_match( "/$EncStart(.*)$EncEnd/s", $RemoteContent, $RemoteEncoding ); echo $RemoteEncoding[1]; The above does indeed echo the name of the encoding, but it does not know where to stop, so in my test it prints out the rest of the line and then most of the rest of the remote page. Example: when testing a remote Russian page it printed: windows-1251" / rest of page .... which means that $EncStart was okay, but the $EncEnd part of the regex failed to stop the matching. This meta tag usually ends in one of 3 ways after the name of the encoding: "> | "/> | " /> I do not know whether this is usable to satisfy the end of the matching and, if so, how to escape it. I played with different ways of doing it but none worked. Thank you in advance for lending a hand.
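
    Rather than trying to match everything up to the close of the meta tag, it is usually easier to capture only the characters that can legally appear in a charset name. A small sketch:

        <?php
        // Capture letters, digits, dots, dashes and underscores after "charset="
        if (preg_match('/charset=["\']?\s*([A-Za-z0-9._-]+)/i', $RemoteContent, $m)) {
            $RemoteEncoding = $m[1];   // e.g. "windows-1251" or "UTF-8"
            echo $RemoteEncoding;
        }
        ?>

    This stops at the first quote, space or bracket, so it no longer matters whether the tag ends in ">, "/> or " />. It is also worth checking the real Content-Type HTTP response header first, since it often carries the charset as well.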

    Read the article

  • How do I get the WVGA Android browser to stop scaling my images?

    - by Dan Fabulich
    I'm designing an HTML page for display in Android browsers. Consider this simple example page: <html> <head><title>Simple!</title> </head> <body> <p><img src="http://sstatic.net/so/img/logo.png"></p> </body> </html> It looks just fine on the standard HVGA phones (320x480), but on HDPI WVGA sizes (480x800 or 480x854) the built-in browser automatically scales the image up; it looks ugly. I've read that I should be able to use this tag to force the browser to stop scaling my page: <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0; minimum-scale=1.0; user-scalable=0;" /> ... but all that does is disable user scaling (the zoom buttons disappear); it doesn't actually prevent the browser from scaling my image. Adjusting the scale factors (setting them all to 2.0 or 0.5) has no effect at all. How can I force the WVGA browser to stop scaling my images?
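
    On Android's browser the up-scaling is driven by screen density, not just viewport width, so a density hint is usually needed as well. A sketch using Android's non-standard target-densitydpi viewport extension (honoured from roughly Android 2.0; other browsers ignore it):

        <meta name="viewport"
              content="target-densitydpi=device-dpi, width=device-width, initial-scale=1.0, user-scalable=no" />

    With device-dpi the page is laid out 1:1 in physical pixels on HDPI/WVGA screens, so the image is no longer scaled up, at the cost of everything rendering physically smaller.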

    Read the article

  • Is there any way to filter certain things in pages served by IIS?

    - by Ruslan
    Hello, This is my first time posting here so please keep that in mind... I'll try to be short and get right to defining the problem. We have an ASP.NET 2 application (eCommerce package) running on IIS (Windows Server 2003). The main site's page(s) are using plain HTTP (no SSL), but the whole checkout process and the shopping cart page is using SSL (HTTPS). Now, the problem is that the site's header is located in a template file, and inside it it has a plain HTML 'img' tag calling an image with the "http://" portion hard-coded into it... This header appears on absolutely every page (including the https pages), and due to its insecure image tag, a warning box pops up in IE on every stage of the checkout process... Now, the problem: The live application cannot be touched in any way (no changes can be made to the template (so simply changing "http://" to "//" is not an option), IIS cannot be restarted, and the website/app pool cannot be restarted). Is there any way in the world (maybe plugin for IIS or a setting somewhere) that I can filter the pages right before they are served to replace the '<img src="http://example.com/image.jpg">' with '<img src="//example.com/image.jpg">' in the final HTML? Possibly via a regular expression or something? Thanks to everybody in advance.

    Read the article

  • Passing variables to a Custom Zend Form Element

    - by user322003
    Hi, I'm trying to create a custom form element which extends Zend_Form_Element_Text with a validator (so I don't have to keep setting up the validator when I use certain elements). Anyway, I'm having trouble passing the $maxChars variable to it when I instantiate it in my main form. I've provided my shortened code below. This is my custom element: class My_Form_Custom_Element extends Zend_Form_Element_Text { public $maxChars; public function init() { $this->addValidator('StringLength', true, array(0, $this->maxChars)); } public function setProperties($maxChars) { $this->maxChars = $maxChars; } } This is where I instantiate my custom form element: class My_Form_Abc extends Zend_Form { public function __construct($options = null) { parent::__construct($options); $this->setName('abc'); $customElement = new My_Form_Custom_Element('myCustomElement'); $customElement->setProperties(100); /* <-- this is where I set $maxChars */ $submit = new Zend_Form_Element_Submit('submit'); $submit->setAttrib('id', 'submitbutton'); $this->addElements(array($customElement, $submit)); } } When I try to pass 100 using $customElement->setProperties(100) in my form, it doesn't get passed properly to my StringLength validator. I assume it's because the validator is added in init()? How can I fix this?
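
    One way to sidestep the ordering problem is to add the validator at the moment the limit is set, instead of in init(); then it no longer matters that init() runs during construction, before setProperties() is ever called. A sketch:

        class My_Form_Custom_Element extends Zend_Form_Element_Text
        {
            public $maxChars;

            public function setMaxChars($maxChars)
            {
                $this->maxChars = $maxChars;
                // Add the validator now that the limit is actually known
                $this->addValidator('StringLength', true, array(0, $this->maxChars));
                return $this;
            }
        }

        // In the form:
        $customElement = new My_Form_Custom_Element('myCustomElement');
        $customElement->setMaxChars(100);

    Naming the setter setMaxChars() also lets the limit be passed through the element's options array, e.g. new My_Form_Custom_Element('myCustomElement', array('maxChars' => 100)), since Zend_Form_Element maps option keys onto set* methods.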

    Read the article

  • how to play an audio soundclip when a nib is loaded - welcome screen - xcode

    - by Pavan
    I would like to do the following two things: 1) play an audio file when a nib is loaded, and 2) switch views when the audio file has finished playing. The second part will be easy, as I just need to call the event that initiates the change of view from the delegate method -(void) audioPlayerDidFinishPlaying { /* code to change view */ } What I don't know is how to play the audio file when a nib is loaded. Using the AVFoundation framework, after setting up the audio player and the variables associated with it in the appropriate places, I wrote the following: - (void)viewDidLoad { [super viewDidLoad]; NSString *soundFilePath = [[NSBundle mainBundle] pathForResource: @"sound" ofType: @"mp3"]; NSURL *fileURL = [[NSURL alloc] initFileURLWithPath: soundFilePath]; AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL: fileURL error:nil]; [fileURL release]; self.player = newPlayer; [newPlayer release]; [player prepareToPlay]; [player setDelegate: self]; [player play]; } However, this does not play the file: the viewDidLoad method gets called before the nib is shown, so the audio is never heard. What do I need to do so that I can play the audio file AFTER the nib has loaded and is shown on screen? Can someone please help me? I've been working on this for 3 hours now. Thanks in advance.
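
    viewDidLoad fires before the view is on screen; viewDidAppear: fires once it is actually visible, which is usually the right hook here. A minimal sketch that keeps the same player setup and simply moves it:

        - (void)viewDidAppear:(BOOL)animated {
            [super viewDidAppear:animated];

            NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"sound" ofType:@"mp3"];
            NSURL *fileURL = [[NSURL alloc] initFileURLWithPath:soundFilePath];
            AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:nil];
            [fileURL release];

            self.player = newPlayer;
            [newPlayer release];

            [player setDelegate:self];
            [player prepareToPlay];
            [player play];   // runs only after the nib's view is on screen
        }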

    Read the article

  • how can a Win32 App plugin load its DLL in its own directory

    - by Jean-Denis Muys
    My code is a plugin for a specific Application, written in C++ using Visual Studio 8. It uses two DLL from an external provider. Unfortunately, my plugin fails to start because the DLLs are not found (I put them in the same directory as the plugin itself). When I manually move or copy the DLLs to the host application directory, then the plugin loads fine. This moving was deemed unacceptably cumbersome for the end user, and I am looking for a way for my plugin to load its DLLs transparently. What can I do? Relevant details: the host Application plugins are located in a directory mandated by the host application. That directory is not in the DLL search path and I don't control it. The plugin is itself packaged as a subdirectory of the plugin directory, holding the plugin code itself, but also any resource associated with the plugin (eg images, configuration files…). I control what's inside that subdirectory, called a "bundle", but not where it's located. the common plugin installation idiom for that App is for the end user to copy the plugin bundle to the plugin directory. This plugin is a port from the Macintosh version of the plugin. On the Mac there is no issue because each binary contains its own dynamic library search path, which I set as I needed to for my plugin binary. To set that on the Mac simply involves a project setting in the Xcode IDE. This is why I would hope for something similar in Visual Studio, but I could not find anything relevant. Moreover, Visual Studio's help was anything but, and neither was Google. A possible workaround would be for my code to explicitly tell Windows where to find the DLL, but I don't know how, and in any case, since my code is not even started, it hasn't got the opportunity to do so. As a Mac developer, I realize that I may be asking for something very elementary. If such is the case, I apologize, but I have run out of hair to pull out.
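
    On Windows the usual pattern is to delay-load the two external DLLs (linker option /DELAYLOAD:their.dll), so the plugin itself can be loaded, and then add the bundle directory to the DLL search path before the first delayed call happens, e.g. from DllMain. A sketch (the helper name is illustrative):

        #include <windows.h>
        #include <shlwapi.h>   // PathRemoveFileSpecW; link with shlwapi.lib

        // Call from DllMain(DLL_PROCESS_ATTACH) with the plugin's own HMODULE
        static void AddBundleDirToDllSearchPath(HMODULE hPlugin)
        {
            wchar_t path[MAX_PATH];
            if (GetModuleFileNameW(hPlugin, path, MAX_PATH))
            {
                PathRemoveFileSpecW(path);   // strip "\\myplugin.dll", keep the bundle directory
                SetDllDirectoryW(path);      // extend the loader's search path for later loads
            }
        }

    An alternative that avoids /DELAYLOAD is to build the same full path and call LoadLibrary/GetProcAddress explicitly for the two DLLs.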

    Read the article

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or if it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository. I have set up two branches of the code-base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: We're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after-the-fact... and the devs want to know how they can do the following: In their local repos, they hack and commit to their hearts' delight, then at the end of the day, be able to run a command that takes a list of files whose commits over the day get merged with their local prod - and only those files - even if those commits combine changes to other files. I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check out the files from their master branches into the working tree, then commit to prod. My problem with that is losing the history of their commits over the day.
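
    The "simplest thing" mentioned above can at least be scripted; a rough sketch of the per-file promotion, at the cost of prod recording one combined commit instead of the day's individual ones (the file list is a placeholder):

        git checkout prod
        git checkout master -- path/to/file1 path/to/file2   # take only these files from master
        git commit -m "Promote file1 and file2 to prod"

    The per-file history is not lost outright: it stays on master and remains visible with git log master -- path/to/file1; what prod loses is the original commit boundaries. Automatically splitting mixed commits per file would mean scripting rebase --interactive, which is fragile enough that tightening commit habits is usually the better long-term fix.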

    Read the article

  • jQueryUI Modal confirmation dialog on form submission

    - by DavidYell
    I am trying to get a modal confirmation dialog working when a user submits a form. My approach, I thought logically, would be to catch the form submission. My code is as follows: $('#multi-dialog-confirm').dialog({ autoOpen: false, height: 200, modal: true, resizable: false, buttons: { 'Confirm': function(){ //$(this).dialog('close'); return true; }, 'Cancel': function(){ $(this).dialog('close'); return false; } } }); $('#completeform').submit(function(e){ e.preventDefault(); var n = $('#completeform input:checked').length; if(n == 0){ alert("Please check the item and mark as complete"); return false; }else{ var q = $('#completeform #qty').html(); if(q > 1){ $('#multi-dialog-confirm').dialog('open'); } } //return false; }); So I'm setting up my dialog first. This is because I'm pretty certain that the scope of the dialog needs to be at the same level as the function which calls it. However, the issue is that when you click 'Confirm' nothing happens. The submit action does not continue. I've tried $('#completeform').submit(); also, which doesn't seem to work. I have tried removing the .preventDefault() to ensure that the form submission isn't completely cancelled, but it doesn't seem to make a difference between that and returning false. Not checking the box shows the alert fine (I might change that to a dialog at some point ;)). Clicking 'Cancel' closes the dialog and stays on the page, but the elusive 'Confirm' button seems not to continue with the form submission event. If anyone can help, I'd happily share my lunch with you! ;)
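
    One common pattern is to submit the underlying DOM form from the Confirm button: calling the element's native submit() bypasses the jQuery submit handler, so the dialog does not simply re-open. A sketch against the same ids:

        $('#multi-dialog-confirm').dialog({
            autoOpen: false,
            height: 200,
            modal: true,
            resizable: false,
            buttons: {
                'Confirm': function () {
                    $(this).dialog('close');
                    // Native submit: skips the jQuery handler bound to #completeform
                    $('#completeform')[0].submit();
                },
                'Cancel': function () {
                    $(this).dialog('close');
                }
            }
        });

    Returning true from a dialog button has no effect on the form; the buttons are ordinary click handlers detached from the submit event, which is why the original Confirm appeared to do nothing.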

    Read the article

  • retrieving multiple versions through the HBase API

    - by sammy
    Hello, this is a continuation of my previous question, where I'd used the HBase shell: http://stackoverflow.com/questions/3024417/facing-problems-while-updating-rows-in-hbase I tried the same with the API. I'm not able to figure out how to retrieve all versions, iterate over them, and print their values for a specific row. I've spent hours reading; please help me out. I set the range for the versions with setTimeRange: Scan s = new Scan(Bytes.toBytes("row1")); s.addColumn(Bytes.toBytes("column"),Bytes.toBytes("address")); s.setTimeRange(0L,6L); ResultScanner scanner = table.getScanner(s); for (Result r : scanner) { for(KeyValue kv : r.sorted()) { System.out.println("To "+kv.getTimestamp()); System.out.println("from "+Bytes.toString(kv.getKey())); System.out.println("To "+Bytes.toString(kv.getValue())); } scanner.close(); } Here I'm intending to print all versions of the column, but it only gives the most recent one. I'm stuck here.
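
    By default a Scan returns only the newest cell per column; older versions have to be requested explicitly. A sketch of the same loop with setMaxVersions() added and the scanner closed after the loop:

        Scan s = new Scan(Bytes.toBytes("row1"));
        s.addColumn(Bytes.toBytes("column"), Bytes.toBytes("address"));
        s.setTimeRange(0L, 6L);
        s.setMaxVersions();   // ask for all stored versions, not just the latest

        ResultScanner scanner = table.getScanner(s);
        try {
            for (Result r : scanner) {
                for (KeyValue kv : r.sorted()) {
                    System.out.println("timestamp " + kv.getTimestamp()
                            + " value " + Bytes.toString(kv.getValue()));
                }
            }
        } finally {
            scanner.close();
        }

    setMaxVersions(n) caps it at n versions instead; note that the column family itself must also have been created with enough VERSIONS for the older cells to still exist.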

    Read the article

  • Set Custom ASP.NET UserControl variables when it's in a Repeater

    - by tnriverfish
    <%@ Register Src="~/Controls/PressFileDownload.ascx" TagName="pfd" TagPrefix="uc1" %> <asp:Repeater id="Repeater1" runat="Server" OnItemDataBound="RPTLayer_OnItemDataBound"> <ItemTemplate> <asp:Label ID="LBLHeader" Runat="server" Visible="false"></asp:Label> <asp:Image ID="IMGThumb" Runat="server" Visible="false"></asp:Image> <asp:Label ID="LBLBody" Runat="server" class="layerBody"></asp:Label> <uc1:pfd ID="pfd1" runat="server" ShowContainerName="false" ParentContentTypeId="55" /> <asp:Literal ID="litLayerLinks" runat="server"></asp:Literal> </ItemTemplate> </asp:Repeater> System.Web.UI.WebControls.Label lbl; System.Web.UI.WebControls.Literal lit; System.Web.UI.WebControls.Image img; System.Web.UI.WebControls.HyperLink hl; System.Web.UI.UserControl uc; I need to set the ParentItemID variable for the uc1:pdf listed inside the repeater. I thought I should be able to find uc by looking in the e.Item and then setting it somehow. I think this is the part where I'm missing something. uc = (UserControl)e.Item.FindControl("pfd1"); if (uc != null) { uc.Attributes["ParentItemID"] = i.ItemID.ToString(); } Any thoughts would be appreciated. Also tried this with similar results... when I debug inside my usercontrol (pfd1) the parameters I am trying to set have not been set. uc = (UserControl)e.Item.FindControl("pfd1"); if (uc != null) { uc.Attributes.Add("ContainerID", _cid.ToString()); uc.Attributes.Add("ParentItemId", i.ItemID.ToString()); }
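
    Attributes[...] only adds HTML attributes to the rendered tag; it never sets a property on the control class, which is why the values never show up inside the user control. Casting FindControl's result to the control's actual code-behind type and assigning the property directly is the usual fix. A sketch, assuming the ascx code-behind class is named PressFileDownload and exposes ParentItemID as a public property:

        // Inside RPTLayer_OnItemDataBound, after resolving the data item "i"
        PressFileDownload pfd = e.Item.FindControl("pfd1") as PressFileDownload;
        if (pfd != null)
        {
            pfd.ParentItemID = i.ItemID;   // set the real property, not a markup attribute
        }

    For the cast to compile, the page needs a reference to the control's type, e.g. a <%@ Reference Control="~/Controls/PressFileDownload.ascx" %> directive if the control is not in a precompiled namespace.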

    Read the article

  • JavaScript .splice() not working correctly

    - by adardesign
    I am setting a cookie for each navigation container that is clicked on. It builds an array that is joined and set as the cookie value; if a container is clicked again, it is removed from the array. It's somehow buggy: it only splices after clicking on other elements, and then it behaves weirdly. Thanks much. var navLinkToOpen; var setNavCookie = function(value){ var isSet = false; var checkCookies = checkNavCookie() setCookieHelper = checkCookies? checkCookies.split(","): []; console.log("value passed", value) for(i in setCookieHelper){ if(value == setCookieHelper[i]){ setCookieHelper.splice(value,1); isSet = true; } } if(!isSet){ setCookieHelper.push(value) } setCookieHelper.join(",") document.cookie = "navLinkToOpen"+"="+setCookieHelper; } var checkNavCookie = function(){ var allCookies = document.cookie.split( ';' ); for (i = 0; i < allCookies.length; i++ ){ temp = allCookies[i].split("=") if(temp[0].match("navLinkToOpen")){ var getValue = temp[1] } } return getValue || false } $(document).ready(function() { $("#LeftNav li").has("b").addClass("navHeader").not(":first").siblings("li").hide() $(".navHeader").click(function(){ $(this).toggleClass("collapsed").nextUntil("li:has('b')").slideToggle(300); setNavCookie($('.navHeader').index($(this))) return false }) console.log("init",document.cookie) var testCookies = checkNavCookie(); if(testCookies){ finalArrayValue = testCookies.split(",") for(i in finalArrayValue){ $(".navHeader").eq(finalArrayValue[i]).toggleClass("collapsed").nextUntil(".navHeader").slideToggle (0); } } });
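
    splice(value, 1) treats value as an index, so it removes whatever happens to sit at that position, which is why the removal only seems to work after other clicks have shifted the array. A sketch of the removal done by the position of the match, remembering that cookie values come back as strings:

        var setCookieHelper = checkCookies ? checkCookies.split(",") : [];
        var needle = String(value);
        var idx = setCookieHelper.indexOf(needle);   // or $.inArray(needle, setCookieHelper) for old IE

        if (idx !== -1) {
            setCookieHelper.splice(idx, 1);   // remove the matching entry by its index
        } else {
            setCookieHelper.push(needle);
        }

        document.cookie = "navLinkToOpen=" + setCookieHelper.join(",");

    Note also that join() returns a new string without touching the array, so the bare setCookieHelper.join(",") line in the original does nothing; the cookie only worked because concatenating an array into a string joins it with commas anyway.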

    Read the article

  • Multiple SessionFactories in Windows Service with NHibernate

    - by Rob Taylor
    Hi all, I have a Webapp which connects to 2 DBs (one core, the other is a logging DB). I must now create a Windows service which will use the same business logic/Data access DLLs. However when I try to reference 2 session factories in the Service App and call the factory.GetCurrentSession() method, I get the error message "No session bound to current context". Does anyone have a suggestion about how this can be done? public class StaticSessionManager { public static readonly ISessionFactory SessionFactory; public static readonly ISessionFactory LoggingSessionFactory; static StaticSessionManager() { string fileName = System.Configuration.ConfigurationSettings.AppSettings["DefaultNHihbernateConfigFile"]; string executingPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase); fileName = executingPath + "\\" + fileName; SessionFactory = cfg.Configure(fileName).BuildSessionFactory(); cfg = new Configuration(); fileName = System.Configuration.ConfigurationSettings.AppSettings["LoggingNHihbernateConfigFile"]; fileName = executingPath + "\\" + fileName; LoggingSessionFactory = cfg.Configure(fileName).BuildSessionFactory(); } } The configuration file has the setting: <property name="current_session_context_class">call</property> The service sets up the factories: private ISession _session = null; private ISession _loggingSession = null; private ISessionFactory _sessionFactory = StaticSessionManager.SessionFactory; private ISessionFactory _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory; ... _sessionFactory = StaticSessionManager.SessionFactory; _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory; _session = _sessionFactory.OpenSession(); NHibernate.Context.CurrentSessionContext.Bind(_session); _loggingSession = _loggingSessionFactory.OpenSession(); NHibernate.Context.CurrentSessionContext.Bind(_loggingSession); So finally, I try to call the correct factory by: ISession session = StaticSessionManager.SessionFactory.GetCurrentSession(); Can anyone suggest a better way to handle this? Thanks in advance! Rob

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and crc32 the outputs). When using the "-server" option with the Sun 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Then execution times become constant for each pass starting at the 3rd iteration of the test, still the optimized version stays 12% slower than in the client mode. I am also pretty sure its not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test and excluding disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that ist 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation on the measured execution times (using System.nanoTime btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine. Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?
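
    For the "peek under the hood" part, HotSpot can log what the JIT compiles and inlines; comparing the logs of the fast and slow runs often shows which hot method stopped being compiled or inlined under -server. A couple of standard flags (output format varies between JVM builds):

        java -server -XX:+PrintCompilation ... BenchmarkMain
        java -server -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining ... BenchmarkMain

    -XX:+PrintAssembly goes a step further but needs the hsdis disassembler plugin installed alongside the JVM.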

    Read the article

  • Lucene wildcard queries

    - by Javi
    Hello, I have this question relating to Lucene. I have a form, I get some text from it, and I want to perform a full-text search in several fields. Suppose I get from the input the text "textToLook". I have a Lucene Analyzer with several filters. One of them is lowerCaseFilter, so when I create the index, words will be lowercased. Imagine I want to search in two fields, field1 and field2, so the Lucene query would be something like this (note that 'textToLook' now is 'texttolook'): field1: texttolook* field2:texttolook* In my class I have something like this to create the query. It works when there is no wildcard. String text = "textToLook"; String[] fields = {"field1", "field2"}; //analyser is the same as the one used for indexing Analyzer analyzer = fullTextEntityManager.getSearchFactory().getAnalyzer("customAnalyzer"); MultiFieldQueryParser parser = new MultiFieldQueryParser(fields, analyzer); org.apache.lucene.search.Query queryTextoLibre = parser.parse(text); With this code the query would be: field1: texttolook field2:texttolook but if I set text to "textToLook*" I get field1: textToLook* field2:textToLook* which won't match correctly, as the index terms are in lowercase. I have read this on the Lucene website: "Wildcard, Prefix, and Fuzzy queries are not passed through the Analyzer, which is the component that performs operations such as stemming and lowercasing" My problem cannot be solved by making the behaviour case-insensitive, because my analyzer has other filters which, for example, remove some suffixes of words. I think I can solve the problem by getting how the text would look after going through the filters of my analyzer; then I could add the "*" and build the query with MultiFieldQueryParser. So in this example I would get "textToLook" and, after it passes through these filters, I could get "texttolook". After this I could make "texttolook*". But is there any way to get the value of my text variable after going through all my analyzer's filters? How can I get all the filters of my analyzer? Is this possible? Thanks
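
    The text can be pushed through the analyzer by hand before the asterisk is appended, which answers the last question directly. A sketch against the 2.9/3.0-era TokenStream API (exception handling omitted; it assumes the input analyzes to a single token, otherwise one wildcard per token would be needed):

        import java.io.StringReader;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.tokenattributes.TermAttribute;

        // Run the raw text through the same analyzer used for indexing
        TokenStream ts = analyzer.tokenStream("field1", new StringReader(text));
        TermAttribute termAtt = ts.addAttribute(TermAttribute.class);

        String analyzed = text;
        if (ts.incrementToken()) {
            analyzed = termAtt.term();   // e.g. "textToLook" becomes "texttolook"
        }
        ts.close();

        org.apache.lucene.search.Query query = parser.parse(analyzed + "*");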

    Read the article

  • AJAX contact form in CodeIgniter

    - by Ross
    A few questions: I'm using CI and jQuery AJAX. In my code below, I assemble dataString, which, by default, is appended to the URL as a query string. I've changed the AJAX "type" to POST, so my question is: how do I access dataString in my CI app? It would seem I still have to use $name = $this->input->post('name'), which, to me, makes setting dataString redundant? I've tried searching but can't really find anything concrete. Would it be possible to still make use of CI's validation library with AJAX? if($this->form_validation->run() == FALSE) { /* what can I return so that my CI app shows errors? */ } Normally you would reload the contact form or redirect the user. In an ideal world I would like the error messages to be shown to the user. jQuery: $(document).ready(function($){ $("#submit_btn").click(function(){ var name = $("input#name").val(); var company = $("input#company").val(); var email = $("input#email").val(); var phone = $("input#phone").val(); var message = $("textarea#message").val(); var dataString = 'name=' + name + '&message=' + message + '&return_email=' + email + '&return_phone=' + phone + '&company=' + company; var response = $.ajax({ type: "POST", url: "newsite/contact_ajax/", data: dataString }).responseText; //$('#contact').hide(); //$('#contact').html('<h5>Form submitted! Thank you!</h5><h4>We will be in touch with you soon.</h4>'); //$('#contact').fadeIn('slow'); return false; }); }); I hope I've been clear enough - if anyone has a decent example of a CI contact form, that would be great. There's mixed stuff on the internet but nothing that ticks all the boxes. Thanks
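
    Yes on both counts: fields sent through $.ajax with type POST arrive as ordinary POST data, so $this->input->post('name') is exactly how to read them, and building dataString by hand is redundant (passing an object to data: also avoids escaping problems). The validation library still works; the controller just returns its errors as JSON instead of reloading a view. A rough sketch (the rule set and field names are examples):

        // Controller method behind the "newsite/contact_ajax/" URL
        public function contact_ajax()
        {
            $this->load->library('form_validation');
            $this->form_validation->set_rules('name', 'Name', 'required');
            $this->form_validation->set_rules('return_email', 'Email', 'required|valid_email');
            $this->form_validation->set_rules('message', 'Message', 'required');

            if ($this->form_validation->run() == FALSE) {
                // Hand the generated error messages back to the browser
                echo json_encode(array('ok' => false, 'errors' => validation_errors()));
            } else {
                // ... send the email ...
                echo json_encode(array('ok' => true));
            }
        }

    On the jQuery side, pass data: {name: name, message: message, ...}, set dataType: "json", and show response.errors from the success callback whenever response.ok is false.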

    Read the article

  • Google App Engine, parsedatetime and TimeZones

    - by Ron
    Hey guys, I'm working on a Google App Engine / Django app and I encountered the following problem: In my HTML I have an input for time. The input is free text - the user types "in 1 hour" or "tomorrow at 11am". The text is then sent to the server via AJAX, which parses it using this Python library: http://code.google.com/p/parsedatetime/. Once parsed, the server returns an epoch timestamp of the time. Here is the problem - Google App Engine always runs on UTC. Therefore, let's say that the local time is now 11am and the UTC time is 2am. When I send "now" to the server it will return "2am", which is good because I want the date to be received in UTC time. When I send "in 1 hour" the server will return "3am", which is good again. However, when I send "at noon" the server will return "12pm" because it thinks that I'm talking about noon UTC - but really I need it to return 3am, which is noon for the request sender. I can pass on the TZ of the browser that sends the request, but that won't really help me - the parsedatetime library won't take a timezone argument (correct me if I'm wrong). Is there a way around this? Maybe setting the environment's TZ somehow? Thanks!
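
    parsedatetime's Calendar.parse() accepts a second sourceTime argument, so one workaround is to hand it "now" already shifted into the user's zone (using an offset posted from the browser) and shift the result back before storing it. A sketch; tz_offset_minutes is assumed to be JavaScript's getTimezoneOffset() value, which is positive west of UTC, and the import path varies between parsedatetime versions:

        import calendar
        import datetime

        import parsedatetime.parsedatetime as pdt


        def parse_in_user_zone(text, tz_offset_minutes):
            cal = pdt.Calendar()

            # "Now" as the user sees it (the App Engine clock is UTC)
            user_now = datetime.datetime.utcnow() - datetime.timedelta(minutes=tz_offset_minutes)

            # Parse the free text relative to the user's local clock
            parsed, _status = cal.parse(text, user_now.timetuple())
            local_dt = datetime.datetime(*parsed[:6])

            # Shift back to UTC and return an epoch timestamp
            utc_dt = local_dt + datetime.timedelta(minutes=tz_offset_minutes)
            return calendar.timegm(utc_dt.timetuple())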

    Read the article
