Search Results

Search found 15129 results on 606 pages for 'orientation changes'.

Page 498/606 | < Previous Page | 494 495 496 497 498 499 500 501 502 503 504 505  | Next Page >

  • eclipse, one classpath for compiling, another for launching

    - by DragonFax
    Example: For logging, my code uses log4j, but other jars my code depends on use slf4j instead, so both jars must be in the build path. Unfortunately, it's now possible for my code to use (depend on) slf4j directly, either through content assist or another developer's changes. I would like any use of slf4j to show up as an error, yet my application (and tests) still need it on the classpath when running.

    Explanation: I'd like to find out whether this is possible in Eclipse. This scenario happens often for me. I'll have a large project that uses a lot of third-party libraries, and of course those third-party jars have their own dependencies as well. So I have to include all dependencies in the classpath ("build path" in Eclipse) for the application and its tests to compile and run from within Eclipse. But I don't want my code to use all of those jars, just the few direct dependencies I've decided on myself. If my code accidentally uses a dependency of a dependency, I want it to show up as a compilation error: ideally as "class not found", but any error would do. I know I can manually configure the classpath when running outside of Eclipse, and even within Eclipse I can modify the classpath for a specific class I'm running (in the run configurations), but that's not manageable if you run a lot of individual test cases or have a lot of main() classes.

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc.), and I'm wondering what the best way to implement data syncing is. The criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access, and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you share some of your strategies and experience?

    For me, I'm thinking of something like this:

    1) Store the Last Modified Date on the iPhone.
    2) Upon launching, send a request like getNewData.php?lastModifiedDate=...
    3) The server processes it and sends back only the data modified since last time.
    4) This data is formatted like so:

        <+><data id="..."></data></+>                                  // add this to SQLite/Core Data
        <-><data id="..."></data></->                                  // remove this
        <%><data id="..."><attribute>newValue</attribute></data></%>  // new modified value

    I don't want to make <+>, <->, <%>... markers for each attribute as well, because it would be too complicated, so probably when I receive a <%> field I would just remove the data with the specified id and then add it again (assuming id here is not some auto-incremented field).

    5) Once everything is downloaded and updated, I update the Last Modified Date field.

    The main problem with this strategy: if the network goes down while I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app I will have to go through the same thing again, not to mention potentially inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a Last-Modified-Date for each data field and update the data gradually?
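
    A minimal sketch of the atomic-apply idea described above, in Python with a local SQLite store (the table names, the sync_state row, and the shape of the delta tuples are illustrative assumptions, not part of the original post). The whole batch runs in one transaction, and the stored last-modified value only advances when that transaction commits, so a dropped connection leaves the previous state intact:

        import sqlite3

        def apply_delta(db_path, delta, new_last_modified):
            """Apply a server delta atomically.

            `delta` is a list of (op, item_id, payload) tuples where op is
            '+', '-' or '%' and payload is the serialized record text.
            """
            conn = sqlite3.connect(db_path)
            try:
                with conn:  # one transaction: commits on success, rolls back on any exception
                    for op, item_id, payload in delta:
                        if op == '+':
                            conn.execute("INSERT INTO items (id, payload) VALUES (?, ?)",
                                         (item_id, payload))
                        elif op == '-':
                            conn.execute("DELETE FROM items WHERE id = ?", (item_id,))
                        elif op == '%':
                            # simplest variant from the post: delete, then re-add
                            conn.execute("DELETE FROM items WHERE id = ?", (item_id,))
                            conn.execute("INSERT INTO items (id, payload) VALUES (?, ?)",
                                         (item_id, payload))
                    # only reached if every statement above succeeded
                    conn.execute("UPDATE sync_state SET last_modified = ?",
                                 (new_last_modified,))
            finally:
                conn.close()

    Per-field or per-table last-modified markers, as the poster suggests at the end, are the usual way to keep each such transaction short.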

    Read the article

  • Reporting Services keeps erasing my dataset parameters

    - by Dustin Brooks
    I'm using a web service, and every time I change something on the dataset it erases all my parameters. The weird thing is, I can execute the web service call from the data tab and it prompts for all my parameters, but if I click to edit the data the list is empty, and if I try to preview the report it blows up because parameters are missing. I'm just wondering if anyone else has experienced this and whether there is a way to prevent this behavior.

    Here is a copy of the dataset, not that I think it matters. This has to be the most annoying bug (if it's a bug) ever. I can't even execute the dataset from the designer without it erasing my parameter list. When you have about 10 parameters and you are making all kinds of changes to a new report, it becomes very tedious to constantly re-type the same list over and over. If anything, the IDE should at least be able to pre-populate the parameters the service is asking for. Sigh... where's my stress ball?

        <Query>
          <Method Namespace="http://www.abc.com/" Name="TWRPerformanceSummary"/>
          <SoapAction>http://www.abc.com/TWRPerformanceSummary</SoapAction>
          <ElementPath IgnoreNamespaces="true">
            TWRPerformanceSummaryResponse/TWRPerformanceSummaryResult/diffgram/NewDataSet/table{StockPerc,RiskBudget,Custodian,ProductName,StartValue(decimal),EndValue(decimal),CostBasis(decimal)}
          </ElementPath>
        </Query>

    Read the article

  • Can I specify the files to commit in subversion in a file rather than on the command line?

    - by René Nyffenegger
    I have renamed (with svn move) a lot of files in a Subversion project. Now I am trying to commit these from Windows' cmd.exe. It seems that I hit a limit (probably imposed by cmd.exe) in that the list of files is too long for the command line to swallow. I thought and hoped that I could list the files to commit in a separate file that I could specify with the commit command, something like:

        svn ci --files-to-commit=renamed-files.txt -m "Renamed a lot of files"

    Yet either such an option does not exist or I am unable to find it. Unfortunately, I cannot do a plain "svn ci ." as I have made other changes in the project as well. Neither can I do "svn ci *pattern-of-renamed-files*", since this would only check in the added files, not the deleted ones. Before I start checking in the files in smaller chunks (and thus increase the revision number unnecessarily without giving a hint as to the atomicity of the operation), I thought I'd ask whether this is indeed impossible to do.

    Read the article

  • Traceroute comparison and statistics

    - by ben-casey
    I have a number of traceroutes that I need to compare against each other, but I don't know the best way to do it. I've been told that hash maps are a good technique, but I don't know how to implement them in my code. So far I have:

        FileInputStream fstream = new FileInputStream("traceroute.log");
        // Get the object of DataInputStream
        DataInputStream in = new DataInputStream(fstream);
        BufferedReader br = new BufferedReader(new InputStreamReader(in));
        String strLine;
        // reads lines in
        while ((strLine = br.readLine()) != null) {
            System.out.println(strLine);
        }

    and the output looks like this:

        Wed Mar 31 01:00:03 BST 2010
        traceroute to www.bbc.co.uk (212.58.251.195), 30 hops max, 40 byte packets
         1  139.222.0.1 (139.222.0.1)  0.873 ms  1.074 ms  1.162 ms
         2  core-from-cmp.uea.ac.uk (10.0.0.1)  0.312 ms  0.350 ms  0.463 ms
         3  ueaha1btm-from-uea1 (172.16.0.34)  0.791 ms  0.772 ms  1.238 ms
         4  bound-from-ueahatop.uea.ac.uk (193.62.92.71)  5.094 ms  4.451 ms  4.441 ms
         5  gi0-3.norw-rbr1.eastnet.ja.net (193.60.0.21)  4.426 ms  5.014 ms  4.389 ms
         6  gi3-0-2.chel-rbr1.eastnet.ja.net (193.63.107.114)  6.055 ms  6.039 ms  *
         7  lond-sbr1.ja.net (146.97.40.45)  6.994 ms  7.493 ms  7.457 ms
         8  so-6-0-0.lond-sbr4.ja.net (146.97.33.154)  8.206 ms  8.187 ms  8.234 ms
         9  po1.lond-ban4.ja.net (146.97.35.110)  8.673 ms  6.294 ms  7.668 ms
        10  bbc.lond-sbr4.ja.net (193.62.157.178)  6.303 ms  8.118 ms  8.107 ms
        11  212.58.238.153 (212.58.238.153)  6.245 ms  8.066 ms  6.541 ms
        12  212.58.239.62 (212.58.239.62)  7.023 ms  8.419 ms  7.068 ms

    What I need to do is compare this trace against another one just like it, look for the changes and time differences etc., and then print a stats page.
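
    The poster's program is Java, but the hash-map idea is the same in any language. Here is a rough sketch in Python (the regular expressions and field choices are illustrative assumptions): key each hop by its hop number, keep the address and an average round-trip time, then compare two parsed traces key by key.

        import re

        HOP = re.compile(r'^\s*(\d+)\s+(\S+)\s+\(([\d.]+)\)\s+(.*)$')

        def parse_trace(lines):
            """Map hop number -> (hostname, ip, mean RTT in ms) for one traceroute."""
            hops = {}
            for line in lines:
                m = HOP.match(line)
                if not m:
                    continue  # skip the timestamp and header lines
                hop, host, ip, rest = m.groups()
                times = [float(t) for t in re.findall(r'([\d.]+) ms', rest)]
                mean_rtt = sum(times) / len(times) if times else None
                hops[int(hop)] = (host, ip, mean_rtt)
            return hops

        def compare(trace_a, trace_b):
            """Print route and timing differences between two parsed traces."""
            for hop in sorted(set(trace_a) | set(trace_b)):
                a, b = trace_a.get(hop), trace_b.get(hop)
                if a is None or b is None:
                    print(f"hop {hop}: present in only one trace")
                elif a[1] != b[1]:
                    print(f"hop {hop}: route changed {a[1]} -> {b[1]}")
                elif a[2] is not None and b[2] is not None:
                    print(f"hop {hop}: {a[1]} RTT {a[2]:.3f} ms -> {b[2]:.3f} ms")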

    Read the article

  • no such file to load -- rubygems (LoadError)

    - by Vineeth
    Hello, I recently installed Rails on Fedora 12. I'm new to Linux as well. Everything works fine on Windows 7, but I'm facing a lot of problems on Linux. Help, please! I've installed all the essentials, to my knowledge, to get the basic script/server up and running, but this error from boot.rb pops up when I try script/server. Some of the details:

    The directories where ruby, rails and gem are installed:

        [vineeth@localhost my_app]$ which ruby
        /usr/local/bin/ruby
        [vineeth@localhost my_app]$ which rails
        /usr/bin/rails
        [vineeth@localhost my_app]$ which gem
        /usr/bin/gem

    And when I run script/server, this is the error:

        [vineeth@localhost my_app]$ script/server
        ./script/../config/boot.rb:9:in `require': no such file to load -- rubygems (LoadError)
                from ./script/../config/boot.rb:9
                from script/server:2:in `require'
                from script/server:2

    And the PATH setup looks like this:

        [vineeth@localhost my_app]$ cat ~/.bash_profile
        # .bash_profile

        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
                . ~/.bashrc
        fi

        # User specific environment and startup programs
        PATH=$PATH:$HOME/bin
        export PATH="/usr/local/bin:/usr/local/sbin:/usr/bin/ruby:$PATH"

    I suppose it is something to do with the PATH. Let me know what I need to change here. If there are other changes I should make, please let me know. Thanks.

    Read the article

  • How do I prevent a C# method from executing using an attribute validator?

    - by Boydski
    I'd like to create an attribute-based validator that goes a few steps beyond what I've seen in examples. It'll basically prevent methods or other functionality from executing. Please be aware that I'm having to use AzMan, since I have no access to Active Directory in this scenario. Here's some pseudocode of what I'm looking for:

        // Attribute validator class
        // AttributeUsage is arbitrary at this point and may include other items
        [AttributeUsage(AttributeTargets.Method | AttributeTargets.Property,
                        AllowMultiple = true, Inherited = true)]
        public class PermissionsValidatorAttribute : Attribute
        {
            public PermissionsValidatorAttribute(PermissionEnumeration permission) {...}
            public bool UserCanCreateAndEdit() {...}
            public bool UserCanDelete() {...}
            public bool UserCanUpload() {...}
        }

    Here's a sample of a class/member that'll be decorated. The method will not be executed at all if PermissionValidator.UserCanDelete() doesn't return true, wherever it's executed:

        public class DoStuffNeedingPermissions
        {
            [PermissionValidator(PermissionEnumeration.MustHaveDeletePermission)]
            public void DeleteSomething() {...}
        }

    I know this is a simple, incomplete example, but you should get the gist of what I need. Assume that DeleteSomething() already exists and that I'd prefer not to modify the code within the method at all. I'm currently looking at things like the Validation Application Block and am experimenting with custom-attribute proofs of concept, but I'd love to hear opinions with code samples from everyone out there. I'm also certainly not opposed to other ways of accomplishing the same thing, such as extension methods or whatever else may work; please remember I'm attempting to minimize changes to the existing DoStuffNeedingPermissions code. Thanks, everyone!
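
    The interception itself usually needs something beyond the attribute (a proxy, an AOP weaver, or a framework that inspects attributes before dispatch); the attribute alone does not stop a C# method from running. As a language-neutral sketch of the pattern being described, here is the same idea expressed with a Python decorator; has_permission is a hypothetical stand-in for the AzMan lookup and is not from the original post.

        from functools import wraps

        class PermissionDenied(Exception):
            pass

        def has_permission(user, permission):
            # hypothetical stand-in for the real authorization lookup (AzMan in the post)
            return permission in getattr(user, "permissions", set())

        def requires_permission(permission):
            """Block the wrapped method unless the calling user holds `permission`."""
            def decorator(func):
                @wraps(func)
                def wrapper(self, user, *args, **kwargs):
                    if not has_permission(user, permission):
                        raise PermissionDenied(f"{func.__name__} requires {permission!r}")
                    return func(self, user, *args, **kwargs)
                return wrapper
            return decorator

        class DoStuffNeedingPermissions:
            @requires_permission("delete")
            def delete_something(self, user, item_id):
                print(f"deleting {item_id}")

    The design choice here is that the caller's identity is passed explicitly to the wrapper; in the .NET case that role is played by whatever ambient security context the interception layer consults.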

    Read the article

  • Is this trivial function silly?

    - by Chas. Owens
    I came across a function today that made me stop and think. I can't think of a good reason to do it:

        sub replace_string {
            my $string  = shift;
            my $regex   = shift;
            my $replace = shift;
            $string =~ s/$regex/$replace/gi;
            return $string;
        }

    The only possible value I can see in this is that it gives you the ability to control the default options used with a substitution, but I don't consider that useful. My first reaction upon seeing this function get called is "what does this do?". Once I learn what it does, I am going to assume it does that from that point on, which means that if it changes, it will break any of my code that depends on that behaviour. So the function will likely never change, or changing it will break lots of code. Right now I want to track down the original programmer and beat some sense into him or her. Is this a valid desire, or am I missing some value this function brings to the table?

    Read the article

  • How to set up single array or dictionary for use in multiple datasources?

    - by Roman
    I have multiple table-view data sources that need to display lists of objects from the same pool, depending on certain properties. For example: if object.flag1 is set, it will show up in TableView1; if object.flag2 is set, it will show up in TableView2. The obvious way would be to have separate arrays for each table view, but the same object may appear in different arrays, and I also need to update objects very often and access all objects through the same array. How do I set up a single dictionary or array holding all objects in one structure?

    To put it another way: when the table view or the selection changes, the application needs to redraw the table views with the new data. The application has to access the pool of objects and search through them with an iterator, touching each object and its properties. I think that this is an expensive operation and want to avoid it. Perhaps I could make the global pool of objects a dictionary and expose object properties as dictionary fields, so that instead of iterating over the global pool I could query it like a database, selecting objects whose fields match particular criteria. Does anyone know an example of doing that?
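
    One way to avoid re-scanning the whole pool on every redraw is to keep a single canonical store plus small per-flag index sets that are maintained whenever a flag changes. A rough sketch of that idea in Python (the flag names and the API are illustrative assumptions, not from the original post):

        class ObjectPool:
            """One canonical store of objects plus per-flag indexes for fast lookup."""

            def __init__(self, flags=("flag1", "flag2")):
                self.objects = {}                       # object id -> object
                self.index = {f: set() for f in flags}  # flag name -> set of object ids

            def add(self, obj_id, obj):
                self.objects[obj_id] = obj
                for flag, ids in self.index.items():
                    if getattr(obj, flag, False):
                        ids.add(obj_id)

            def set_flag(self, obj_id, flag, value):
                setattr(self.objects[obj_id], flag, value)
                (self.index[flag].add if value else self.index[flag].discard)(obj_id)

            def for_table(self, flag):
                """Objects to display in the table view bound to `flag`."""
                return [self.objects[i] for i in self.index[flag]]

    Each table view then reads only its own index set, while updates still go through the one shared pool.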

    Read the article

  • A question about DOM parser used with Python

    - by fixxxer
    I'm using the following Python code to search for a node in an XML file and change the value of an attribute of one of its children. The changes happen correctly when the node is displayed using toxml(), but when it is written to a file the attributes rearrange themselves (as seen in the source and final XML below). Could anyone explain how and why this happens?

    Python code:

        #!/usr/bin/env python
        import xml
        from xml.dom.minidom import parse

        dom = parse("max.xml")
        #print "Please enter the store name:"
        for sku in dom.getElementsByTagName("node"):
            if sku.getAttribute("name") == "store":
                sku.childNodes[1].childNodes[5].setAttribute("value", "Delhi,India")
                print sku.toxml()
        xml.dom.ext.PrettyPrint(dom, open("new.xml", "w"))

    A part of the source XML:

        <node name='store' node_id='515' module='mpx.lib.node.simple_value.SimpleValue' config_builder='' inherant='false' description='Configurable Value'>
          <match>
            <property name='1' value='point'/>
            <property name='2' value='0'/>
            <property name='val' value='Store# 09204 Staten Island, NY'/>
            <property name='3' value='str'/>
          </match>
        </node>

    Final XML:

        <node config_builder="" description="Configurable Value" inherant="false" module="mpx.lib.node.simple_value.SimpleValue" name="store" node_id="515">
          <match>
            <property name="1" value="point"/>
            <property name="2" value="0"/>
            <property name="val" value="Delhi,India"/>
            <property name="3" value="str"/>
          </match>
        </node>
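
    For context on why the rearranging is harmless: DOM implementations keep attributes in an unordered map, and attribute order carries no meaning in XML, so a serializer may emit attributes in any order; older minidom releases, for instance, sorted attribute names when writing. A small self-contained illustration, separate from the poster's PyXML PrettyPrint call:

        from xml.dom.minidom import parseString

        doc = parseString("<node name='store' node_id='515' module='m' config_builder=''/>")
        node = doc.documentElement
        node.setAttribute("module", "changed")

        # Older minidom versions (before Python 3.8) sort attribute names when
        # serializing, so the output order can differ from the source order.
        # The document is still equivalent: attribute order is not significant in XML.
        print(node.toxml())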

    Read the article

  • ado.net Concurrency violation

    - by Bicubic
    This is my first time using ADO.NET. I'm trying to build a database of users. First I populate my DataSet:

        adapter.AcceptChangesDuringFill = true;
        adapter.AcceptChangesDuringUpdate = true;
        adapter.Fill(dataset);

    To create a user:

        User user = new User();
        user.datarow = dataset.Users.NewUsersRow();
        user.Name = username;
        user.PasswordHash = GetHash(password);
        user.Rights = UserRights.None;
        users.Add(user);
        dataset.Users.AddUsersRow(user.datarow);
        adapter.Update(dataset);

    When a user property is modified:

        adapter.Update(dataset);

    Creation by itself is fine. If I take an existing user and make multiple changes, fine. Multiple creations in a row, fine. But a creation followed by a property change gives me this:

        "Concurrency violation: the UpdateCommand affected 0 of the expected 1 records."

    Any ideas?

    Read the article

  • TinyMce imagemanager won't generate image path when used with an iframe

    - by Tom
    I have successfully set up TinyMCE to work on a page within an iframe. Everything works perfectly. However, when you use imagemanager to pick an image to be inserted or replaced in the editor, it will not copy the path (and filename) of the image into the "Image URL" input in the "Insert/edit image" box. The box either remains empty or keeps the address of the previous image. The behaviour is the same with the filemanager plugin.

        tinyMCE.init({
            mode : "none",
            editor_selector : "mceEditor",
            theme : "advanced",
            plugins : "filemanager,imagemanager,autoresize,safari,pagebreak,style,layer,table,save,advhr,advimage,advlink,emotions,iespell,insertdatetime,preview,media,searchreplace,print,contextmenu,paste,directionality,fullscreen,noneditable,visualchars,nonbreaking,xhtmlxtras,template,inlinepopups,spellchecker",
            theme_advanced_buttons1 : "insertfile,insertimage,advimage,imagemanager,bold,italic,underline,|,justifyleft,justifycenter,justifyright,justifyfull,nonbreaking,cut,copy,paste,pastetext,pasteword,|,search,replace,|,bullist,numlist",
            theme_advanced_buttons2 : "blockquote,|,link,unlink,anchor,image,cleanup,help,code,|,insertdate,inserttime,|,forecolor,backcolor,|,charmap,iespell,media,advhr",
            theme_advanced_layout_manager : "SimpleLayout",
            theme_advanced_buttons3 : "tablecontrols,|,hr,removeformat,visualaid,|,sub,sup,strikethrough",
            theme_advanced_buttons4 : "styleselect,formatselect,fontselect,fontsizeselect,|,undo,redo,|,spellchecker",
            theme_advanced_toolbar_location : "external",
            theme_advanced_toolbar_align : "left",
            theme_advanced_statusbar_location : "bottom",
            relative_urls : true,
            document_base_url : "http://devtom.ecitb.org.uk/",
            auto_resize : true,
            content_css : "/custom/css/style.css",
            extended_valid_elements : "iframe[height|width|src|frameborder|scrolling]",
        });

        /*
           The following code comes from
           http://tinymce.moxiecode.com/punbb/viewtopic.php?id=12966
           Without it the editor only loads 10% of the time. With it, it's pretty much
           100% consistent. The other changes mentioned in the post have also been
           implemented.
        */
        var setupTiny = function() {
            var ifrObj = document.getElementById('pageEditIFrame');
            var win = ifrObj;
            if (win.contentWindow) {
                win = win.contentWindow;
            }
            var d;
            if (ifrObj.contentDocument) {
                d = ifrObj.contentDocument;
            } else if (ifrObj.contentWindow) {
                d = ifrObj.contentWindow.document;
            } else if (ifrObj.document) {
                d = ifrObj.document;
            }
            textAreas.each(function(txtEl) {
                tinyMCE.execCommand('mceAddFrameControl', false, {
                    element_id : txtEl,
                    window : win,
                    doc : d
                });
            });
        };

        // Waiting 1 second seems to make the editor load more reliably.
        setTimeout("setupTiny();", 1000);

    Read the article

  • How to best configure a central repository/multiple central repositories for Mercurial?

    - by Mario
    I am new to Mercurial and trying to figure out whether it could replace SVN. Everyone I work with has used SVN, CVS and VSS (shiver), so this could be quite a large change. I have been very interested after reading about its merge and branch capability, but have a few reservations.

    We are currently on SVN and have one central repository. From my reading, it seems as though there is no one central repository for all projects when using Mercurial. (Note: we consider each project a separate logical set of code, or a Visual Studio solution; it runs on its own.) We have around 60 separate projects in our one central SVN repository, so after reading about Mercurial it seems I would have to create 60 separate central repositories on the server, one for each of these projects.

    Question 1: Should I create a single repository for each project? If yes, then I am worried about configuring and hosting 60 separate central Mercurial servers. I started thinking I could configure one file, but it seems as if each repository must be individually configured using the "C:...\MyRepository.hg\hgrc" file (Windows install). It also seems as if I have to run 60 servers (hg serve), I assume on different ports.

    Question 2: If the answer to question 1 is yes (there should be a single central repository for each project), then how have people managed many multiple repositories?

    Finally, I haven't looked into moving all history and changes from one SVN repository to a bunch of separate Mercurial repositories, but I would appreciate any comments from someone who has done this (or whether it is even possible).

    Read the article

  • What do you need to implement to provide a Content Set for an NSArrayController?

    - by whuuh
    Hey, I am writing something in Xcode. I use Core Data for persistence and link the view and the model together with Cocoa bindings; pretty much your ordinary Core Data application. I have an array controller (NSArrayController) in my xib. This has its managedObjectContext bound to the AppDelegate, as is convention, and tracks an entity. So far so good.

    Now, the Content Set binding of this NSArrayController limits its content set (as you'd expect) by a key path from the selection in another NSArrayController (otherAc.selection.detailsOfMaster). This is the usual way to implement a master-detail relationship. I want to change the key path at runtime, using other controls, so that I can return a content set that combines several other content sets, which is all advanced and beyond Interface Builder.

    To achieve this, I think I should bind the Content Set to my AppDelegate instead. I have tried to do this, but don't know which methods to implement. If I just create the KVC methods (objectSet, setObjectSet), then I can provide a content set for the array controller in the contentSet method. However, I don't think I'm binding this properly, because it doesn't "refresh". I'm new to bindings; what do I need to implement so that the Content Set is properly updated when other things, like the selection in the master NSArrayController, change?

    Read the article

  • restrict script inside iframe to run only within pages of same top-level domain?

    - by Justin Grant
    I'd like to enforce a requirement that client script inside a page (which in turn is loaded inside an iframe of another page) will only run when the parent page is on the same top-level domain as the framed page (although it may be on another hostname in that domain). Is this doable? I assume that the easy solution of looking at top.location.host won't be available due to cross-site scripting limitations, but I'm wondering if other JavaScript hackery could suffice. Constraints on any potential solution include:

    - I need to be able to run XmlHttpRequest calls inside the child page, and I need to validate that the hostname is in the same domain before I make those calls. (This makes a document.domain solution challenging because, AFAIK, setting document.domain disables the ability to make XmlHttpRequest calls.)
    - I can control client-side script and HTML on both parent and child (and I can create new pages if needed), but I can't make any server-side code changes.
    - I can't simulate the above via server-side calls or proxies, because the child page's hostname uses a forms-auth system with hostname-scoped cookies that I can't access from the parent page, since it's on a different hostname.
    - I don't have enough control over the child-frame site to put both sites behind the same reverse proxy or load balancer (which would let me put both sites on the same hostname).
    - I don't actually need to access any UI inside the iframe; the iframe is invisible and I'm only using it to run JavaScript within the security context of a site on a different hostname from the parent page.

    So at this point I'm stumped. Got any ideas? I want to make sure I'm not overlooking an easy solution before giving up.

    Read the article

  • does sfWidgetFormSelect provide a string or an int of the selected item?

    - by han
    Hey guys, I'm having an annoying problem. I'm trying to find out which fields of a form were changed and then insert that into a table. I managed to var_dump in doUpdateObject as shown in the following:

        public function doUpdateObject($values)
        {
          parent::doUpdateObject($values);
          var_dump($this->getObject()->getModified(false));
          var_dump($this->getObject()->getModified(true));
        }

    It seems that $this->getObject()->getModified() works in giving me both before and after values, depending on whether I pass true or false. The problem I'm facing now is that, somehow, sfWidgetFormSelect seems to be saving one of my fields as a string; before saving, that exact same field was an int. (I got this idea by var_dumping both before and after.) Here is what the two var_dumps showed:

        array(1) { ["annoying_field"] => int(3) }
        array(1) { ["annoying_field"] => string(1) "3" }

    This seems to cause Doctrine to think that this is a modification and thus gives a false positive. In my base form, under $this->getWidgets() I have:

        'annoying_field' => new sfWidgetFormInputText(),

    under $this->setValidators():

        'annoying_field' => new sfValidatorInteger(array('required' => false)),

    and lastly, in my configured Form.class.php, I have reconfigured the field as such:

        $this->widgetSchema['annoying_field'] = new sfWidgetFormSelect(array('choices' => $statuses));

    $statuses is an array containing values like {"a", "b", "c", "d"}, and I just want the index of the status to be stored in the database. Also, how can I insert the changes into another database table, let's say my Log table? Any ideas and advice as to why this is happening are appreciated; I've been trying to figure it out and browsing Google for various keywords to no avail. Thanks!

    Edit: I created another field, an integer, in my schema just for testing. I created an entry, saved it, and edited it. This time the same thing happened!

    Read the article

  • ASP.NET output caching - dynamically update dependencies

    - by ColinE
    Hi all, I have an ASP.NET application which requires output caching. I need to invalidate the cached items when the data returned from a web service changes, so a simple duration is not good enough. I have been doing a bit of reading about cache dependencies and think I have the right idea. It looks like I will need to create a cache dependency on my web service. To associate the page output with this dependency, I think I should use the following method:

        Response.AddCacheItemDependency(cacheKey);

    The thing I am struggling with is what I should add to the cache. The dependency my page has is on a single value returned by the web service. My current thinking is that I should create a custom cache dependency by subclassing CacheDependency and store the current value in the cache. I can then use Response.AddCacheItemDependency to form the dependency, periodically check the value, and call NotifyDependencyChanged in order to invalidate my cached HTTP response.

    The problem is that I need to ensure the cache is flushed immediately, so a periodic check is not good enough. How can I ensure that my dependent object in the cache, which represents the value returned by the web service, is re-evaluated before the HTTP response is fetched from the cache?

    Regards, Colin E.

    Read the article

  • Removing an array dimension where the elements sum to zero

    - by James
    Hi, I am assigning a 3D array which contains some information for a number of different load cases. Each row in the array defines a particular load case (of which there are 3), and I would like to remove a load case (i.e. the row) if ALL the elements of that row (in 3D) are equal to zero. The code I have at the moment is:

        Array = zeros(3,5)   % Initialise array
        Numloadcases = 3;

        Array(:,:,1) = [10 10 10 10 10; 0 0 0 0 0; 0 0 0 0 0;];   % Expand to a 3D array
        Array(:,:,2) = [10 10 10 10 10; 0 0 0 0 0; 0 0 0 0 0;];
        Array(:,:,3) = [10 10 10 10 10; 0 0 0 0 0; 0 0 20 0 0;];
        Array(:,:,4) = [10 10 10 10 10; 0 0 0 0 0; 0 0 20 0 0;];
        % I.e. the second row should be removed.

        for i = 1:Numloadcases
            if sum(Array(i,:,:)) == 0;
                Array(i,:,:) = [];
            end
        end

    At the moment, the for loop I use to remove the rows causes an indexing error, as the size of the array changes inside the loop. Can anyone see a workaround for this? Thanks
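
    The usual fix for this kind of indexing error is to decide which rows to drop first and remove them all in one step after the loop, instead of deleting while iterating. Here is that idea sketched in Python/NumPy (the same logic maps onto MATLAB's logical indexing); the array built here mirrors the example above:

        import numpy as np

        # 3 rows x 5 columns x 4 load cases, mirroring the example above
        arr = np.zeros((3, 5, 4))
        arr[0, :, :] = 10
        arr[2, 2, 2:] = 20   # row 3 has a nonzero entry in load cases 3 and 4

        # Keep only the rows that have at least one nonzero element anywhere
        keep = ~np.all(arr == 0, axis=(1, 2))   # one boolean per row
        arr = arr[keep]

        print(arr.shape)  # (2, 5, 4): the all-zero second row is gone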

    Read the article

  • In JSF - What is the correct way to do this? Two dropdown lists with dependency.

    - by Ben
    Hi, I'm making two dropdown lists in JSF which are dependent. Specifically, one list has all the languages and the second list contains values that are displayed in the currently selected language. I've implemented this by having the second list use information from a hash and rebuilding that hash in the setter for the currently selected language.

    JSF code bit:

        <rich:dropDownMenu value="#{bean.currentlySelectedLanguage}" id="languageSelector">
        ... (binding to languages hash) ...
        <rich:dropDownMenu value="#{bean.currentlySelectedScript}" id="ScriptPullDown">
        ... (binding to scripts hash) ...

    Backing bean code bit:

        setCurrentlySelectedLanguage(String lang) {
            this.currentlySelectedLanguage = lang;
            rebuildScriptNames(lang);
        }

    I'm wondering if this is a good way of doing it, or if there is a better method I'm not aware of. Thank you!

    EDIT - Adding info: I used a4j:support with event="onchange" and reRender="ScriptPullDown" to rerender the script pulldown. I could probably add an action expression to run a method when the value changes, but is there a benefit to doing that over putting the code in the setter function?

    Read the article

  • Debugging fortran code in Eclipse with Photran and GDB debugger: missing symbols

    - by tvandenbrande
    I have a program, written in Fortran 90, previously compiled successfully with a Compaq compiler and working, that I'm now trying to compile with gfortran. I can compile the code to an .exe and run it. It works fine until a certain point in the routine and then an error is thrown.

    My current configuration:

    - Windows 7
    - Eclipse Juno with CDT
    - Photran
    - Cygwin installation with the gfortran compiler and GDB debugger (gdb.exe)

    Configuration for the debugger:

    - GDB command set: Standard (Windows)
    - Protocol: mi
    - Shared libraries: don't load shared library symbols automatically (when activating this, no changes are noted)

    When running the debug command I get the following output:

        .gdbinit: No such file or directory.
        Reading symbols from /cygdrive/c/Users/thys/Documents/doctoraat/12_in progress/Hamfem/Debug/Hamfem.exe...done.
        auto-solib-add on
        Undefined command: "auto-solib-add".  Try "help".
        Warning: C:/Users/thys/Documents/doctoraat/12_in progress/Hamfem/Hamfem/in: No such file or directory.
        [New Thread 5816.0x1914]
        [New Thread 5816.0x654]

    Basically that leaves me with two questions:

    1. Where can I find the .gdbinit file in the Cygwin installation?
    2. Are there any other possible errors in my setup, or points to think about?

    Read the article

  • Can I stop the dbml designer from adding a connection string to the dbml file?

    - by drs9222
    We have a custom function AppSettings.GetConnectionString() which is always called to determine the connection string that should be used. How this function works is unimportant to the discussion; it suffices to say that it returns a connection string and I have to use it. I want my LINQ to SQL DataContext to use this, so I removed all connection string information from the dbml file and created a partial class with a default constructor like this:

        public partial class SampleDataContext
        {
            public SampleDataContext()
                : base(AppSettings.GetConnectionString())
            {
            }
        }

    This works fine until I use the designer to drag and drop a table into the diagram. The act of dragging a table into the diagram does several unwanted things:

    1. A settings file is created.
    2. An app.config file is created.
    3. My dbml file gets the connection string embedded in it.

    All of this is done before I even save the file! When I save the diagram, the designer file is recreated and it contains its own default constructor, which uses the wrong connection string. Of course, this means my DataContext now has two default constructors and I can't build anymore! I can undo all of these bad things, but it is annoying to have to manually remove the connection string and the new files after each change. Is there any way I can stop the designer from making these changes without asking?

    EDIT: The requirement to use the AppSettings.GetConnectionString() method was imposed on me rather late in the game; I used to use something very similar to what the designer generates. There are quite a few places that call the default constructor. I am aware that I could change them all to create the data context in another way (using a different constructor, a static method, a factory, etc.). That kind of change would only be slightly annoying, since it would only have to be done once. However, I feel that it sidesteps the real issue: the dbml file and configuration files would still contain an incorrect, if unused, connection string, which at best could confuse other developers.

    Read the article

  • Associate new Authlogic Model to existing Models

    - by BriteLite
    Hello, while playing around with Rails (since I am a newbie) and reading the Agile Rails book, I came across an issue using the Authlogic gem that I don't know how to address. I have a simple business model whose table stores the following information: name, address, latitude, and longitude. This approach has been working fine, because using the console I can enter the information and it shows up where I need it to.

    My issue now is that I want to add authentication, as in assigning the records in that table to individual accounts. Since Authlogic is an authentication gem, can this be done? What I am trying to get at is this: I enter a few records and leave it at that. A few days later, I want to assign those individual rows in the table to an Authlogic model, so that the person to whom each record belongs can authenticate and make changes to it. Any code samples or blog posts that would help me understand this better would be great! Thank you.

    Read the article

  • How to create databinding over two xaml files?

    - by BionicGecko
    Hello, I am trying to come to a working understanding of how databinding works, but even after several tutorials I only have a basic grasp of it. Thus this question might seem fundamental to those more familiar with Silverlight. Even if it is trivial, please point me to a tutorial that deals with this problem; all that I could find simply solved it by adding the data binding in a parent page.xaml (which I must not use in my case).

    For the sake of this example, let us assume that we have five files:

    - starter.cs
    - button1.xaml + code-behind
    - button2.xaml + code-behind

    The two buttons are generated in code in the starter(.cs) file and then added to some MapLayer:

        button1 my_button1 = new button1();
        button2 my_button2 = new button2();
        someLayer.Children.Add(my_button1);
        someLayer.Children.Add(my_button2);

    My aim is to connect the two buttons so that they always display the same text (i.e. my_button1.content == my_button2.content is always true). Thus when something changes my_button1.content, this change should be propagated to the other button (two-way binding). At the moment my button1.xaml looks like this:

        <Grid x:Name="LayoutRoot">
            <Button x:Name="x_button1" Margin="0,0,0,0"
                    Content="{Binding ElementName=x_button2, Path=Content}"
                    ClickMode="Press" Click="button1_Click"/>
        </Grid>

    But all I get out of that is a button with no content at all; it is just blank, as the binding silently fails. How could I create the databinding in the context I described? Preferably in code, not XAML ;) Thanks in advance.

    Read the article

  • Does Github.com have to create a merge commit when you merge from a fork ?

    - by Nishant
    I cloned the master and started doing my work. Due to permissions, I push the branch to my fork. I then sent a pull request to the main repository's master, and someone with permission does the merge. I notice that github.com creates a merge commit, which to me looks like just a diff of the entire changes; that is not strictly necessary, but it is helpful in the sense that I can look at the merge commit to see the entire diff. I can see the same SHAs as on my own branch, hence it looks like the merge is an extra commit which probably isn't necessary, since it's a fast-forward?

        master            - a
        myfork (computer) - a -> b -> c
        myfork (github)   - a -> b -> c

    The pull request from myfork to master (which it says can be merged automatically) shows the entire diff, and when I merge it, it shows up as:

        master - a -> b -> c -> d

    The d is a merge commit, which I think is not really required, because it is a fast-forward? Can someone explain why this happens? I think this would be the same scenario if I had to rebase because master had gone ahead, but that has not happened; master is still at a when I merge.

    Read the article

  • jQuery - how to repeat a function within itself to include nested files

    - by brandonjp
    I'm not sure what to call this question, since it involves a variety of things, but we'll start with the first issue. I've been trying to write a simple-to-use jQuery include (similar to PHP or SSI) for static HTML sites. Whenever it finds div.jqinclude, it gets attr('title') (which is my external HTML file), then uses load() to include that external HTML file. Afterwards it changes the class of the div to jqincluded. So my main index.html might contain several lines like so:

        <div class="jqinclude" title="menu.html"></div>

    However, within menu.html there might be other includes nested, so it needs to make the first level of includes and then perform the same action on the newly included content. So far it works fine, but it's very verbose and only goes a couple of levels deep. How would I make the following repeated function continually loop until no more class="jqinclude" elements are left on the page? I've tried arguments.callee and some other convoluted, wacky attempts to no avail. I'm also interested to know if there's a completely different way I should be doing this.

        $('div.jqinclude').each(function() { // begin repeat here
            var file = $(this).attr('title');
            $(this).load(file, function() {
                $(this).removeClass('jqinclude').addClass('jqincluded');
                $(this).find('div.jqinclude').each(function() { // end repeat here
                    var file = $(this).attr('title');
                    $(this).load(file, function() {
                        $(this).removeClass('jqinclude').addClass('jqincluded');
                        $(this).find('div.jqinclude').each(function() {
                            var file = $(this).attr('title');
                            $(this).load(file, function() {
                                $(this).removeClass('jqinclude').addClass('jqincluded');
                            });
                        });
                    });
                });
            });
        });
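
    The jQuery-specific answer hinges on load()'s asynchronous callback, but the shape the poster is after is a function that processes one level of includes and then calls itself on the newly inserted content until nothing matches. A language-neutral sketch of that recursion in Python, resolving include markers in static HTML files (the marker regex, file handling, and depth guard are illustrative assumptions, not from the original post):

        import re
        from pathlib import Path

        INCLUDE = re.compile(r'<div class="jqinclude" title="([^"]+)"></div>')

        def resolve_includes(html, base_dir, depth=0, max_depth=10):
            """Replace each include marker with the referenced file's contents,
            then recurse into the newly inserted content until none remain."""
            if depth >= max_depth:          # guard against circular includes
                return html

            def replace(match):
                included = Path(base_dir, match.group(1)).read_text()
                # recurse: the included file may itself contain include markers
                return resolve_includes(included, base_dir, depth + 1, max_depth)

            return INCLUDE.sub(replace, html)

        # usage: print(resolve_includes(Path("index.html").read_text(), "."))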

    Read the article

< Previous Page | 494 495 496 497 498 499 500 501 502 503 504 505  | Next Page >