Search Results

Search found 15041 results on 602 pages for 'breaking changes'.

Page 172/602

  • PrimeFaces, JavaScript, and JSF do not work well together, or am I doing something wrong?

    - by Harry Pham
    Here is something simple:

        <p:commandLink value="Tom" onclick="document.getElementById('tom').focus()"/><br/>
        <input id="tom"/>

    When you click on "Tom", the textbox gets focus. Great. Now try this:

        <p:commandLink value="Tom" onclick="document.getElementById('tom').focus()"/><br/>
        <h:inputText id="tom"/> <br/>

    When I click, nothing happens. Checking Firebug, I see that document.getElementById("tom") is null. When I try jQuery with $('#tom').focus(), nothing happens either: no error, but the field does not get focus. This is the response (not sure if it is the response from the server) that I see in Firebug:

        <?xml version="1.0" encoding="utf-8"?>
        <partial-response>
            <changes>
                <update id="javax.faces.ViewState"><![CDATA[455334589763307998:-2971181471269134244]]></update>
            </changes>
            <extension primefacesCallbackParam="validationFailed">{"validationFailed":false}</extension>
        </partial-response>
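
    A likely explanation (a guess based only on the snippet above): inside an <h:form>, JSF prepends the form's client ID to the component ID, so <h:inputText id="tom"/> is rendered with an id like "myForm:tom" rather than "tom", and document.getElementById('tom') finds nothing. Two common workarounds, sketched with a hypothetical form id of "myForm":

        <!-- Option 1: reference the full JSF client ID ("myForm" is an assumed form id) -->
        <h:form id="myForm">
            <p:commandLink value="Tom" onclick="document.getElementById('myForm:tom').focus()"/><br/>
            <h:inputText id="tom"/>
        </h:form>

        <!-- Option 2: stop JSF from prefixing IDs in this form (affects every component inside it) -->
        <h:form prependId="false">
            <p:commandLink value="Tom" onclick="document.getElementById('tom').focus()"/><br/>
            <h:inputText id="tom"/>
        </h:form>

    With jQuery, the colon in the generated ID has to be escaped, e.g. $('#myForm\\:tom').focus().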

    Read the article

  • Can't compile CentOS 5, Ruby 1.9.2 and OpenSSL 1.0.0c

    - by pstinnett
    I'm trying to install Ruby 1.9.2 on CentOS 5.5. I get through most of the make process, but when it tries to compile OpenSSL I get an error. Below is the error output:

        compiling openssl
        make[1]: Entering directory `/sources/ruby-1.9.2-p136/ext/openssl'
        gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/openssl -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O3 -ggdb -Wextra -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-missing-field-initializers -Wno-long-long -o ossl_x509.o -c ossl_x509.c
        In file included from ossl.h:201,
                         from ossl_x509.c:11:
        openssl_missing.h:71: error: conflicting types for ‘HMAC_CTX_copy’
        /usr/include/openssl/hmac.h:102: error: previous declaration of ‘HMAC_CTX_copy’ was here
        openssl_missing.h:95: error: conflicting types for ‘EVP_CIPHER_CTX_copy’
        /usr/include/openssl/evp.h:459: error: previous declaration of ‘EVP_CIPHER_CTX_copy’ was here
        make[1]: *** [ossl_x509.o] Error 1
        make[1]: Leaving directory `/sources/ruby-1.9.2-p136/ext/openssl'
        make: *** [mkmain.sh] Error 1

    Any help would be greatly appreciated! I'm not a master at Linux by any means, but I was able to successfully install this version of Ruby on our dev server. Our live server is running a newer version of OpenSSL, which I'm assuming is why it's breaking. Just not sure what the fix is!
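
    A workaround that is often suggested for this kind of conflict (Ruby 1.9.2's bundled openssl_missing.h clashing with a newer system OpenSSL) is to build Ruby against a private OpenSSL install instead of the headers in /usr/include. A rough sketch; the paths and version are illustrative, not taken from the question:

        # Build a self-contained OpenSSL under an example prefix.
        cd /usr/local/src/openssl-1.0.0c
        ./config --prefix=/opt/openssl-1.0.0c shared
        make && make install

        # Point Ruby's build at that copy rather than the system headers.
        cd /usr/local/src/ruby-1.9.2-p136
        ./configure --with-openssl-dir=/opt/openssl-1.0.0c
        make && make install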

    Read the article

  • MSBuild 2010 - how to publish web app to a specific location (nant)?

    - by Mr. Flibble
    I'm trying to get MSBuild 2010 to publish a web app to a specific location. I can get it to publish the deployment package to a particular path, but the deployment package then adds its own path that changes. For example, if I tell it to publish to:

        C:\dev\build\Output\Debug

    then the actual web files end up at:

        C:\dev\build\Output\Debug\Archive\Content\C_C\code\sawadee\frontend\IPP-FrontEnd\Source\ControllersViews\obj\Debug\Package\PackageTmp

    The C_C part of the path changes (I'm not sure how it chooses this part of the path), which means I can't just script a copy from the publish location. I'm using this NAnt/MSBuild command at the moment:

        <target name="compile" description="Compiles">
          <msbuild project="${name}.sln">
            <property name="Platform" value="Any CPU"/>
            <property name="Configuration" value="Debug"/>
            <property name="DeployOnBuild" value="true"/>
            <property name="DeployTarget" value="Package"/>
            <property name="PackageLocation" value="C:\dev\build\Output\Debug\"/>
            <property name="AutoParameterizationWebConfigConnectionStrings" value="false"/>
            <property name="PackageAsSingleFile" value="false"/>
          </msbuild>
        </target>

    Any ideas on how to get it to send the web files directly to a specific location?
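
    One approach that is often suggested for the VS2010 web publishing pipeline is to skip the Package target and call the copy-to-folder target directly, overriding its temporary output directory. A sketch only, under the assumption that the solution uses the default VS2010 web targets; the output path is illustrative:

        <target name="publish" description="Copies the deployable web files to a flat folder">
          <msbuild project="${name}.sln">
            <property name="Platform" value="Any CPU"/>
            <property name="Configuration" value="Debug"/>
            <property name="DeployOnBuild" value="true"/>
            <!-- copy files to _PackageTempDir instead of building a package archive -->
            <property name="DeployTarget" value="PipelinePreDeployCopyAllFilesToOneFolder"/>
            <property name="_PackageTempDir" value="C:\dev\build\Output\Debug\Web"/>
            <property name="AutoParameterizationWebConfigConnectionStrings" value="false"/>
          </msbuild>
        </target>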

    Read the article

  • How to extend an 'unloadable' Rails plugin?

    - by Vitaly Kushner
    I'm trying to write a plugin that will extend InheritedResources. Specifically, I want to rewrite some default helpers, and I'd like it to "just work" once installed, without any changes to application code. The functionality is provided in a module which needs to be included in the right place. The question is where? :)

    The first attempt was to do it in my plugin's init.rb:

        InheritedResources::Base.send :include, MyModule

    It works in production, but fails miserably in development, since InheritedResources::Base is declared unloadable and so its code is reloaded on each request. So my module is there for the first request, and then it's gone. InheritedResources::Base is 'pulled in' again by any controller that uses it:

        class SomeController < InheritedResources::Base

    But no code is 'pulling in' my extension module, since it is not referenced anywhere except init.rb, which is not re-loaded on each request. So right now I'm just including the module manually in every controller that needs it, which sucks. I can't even include it once in ApplicationController, because InheritedResources inherits from it and so it will override any changes back.

    Update: I'm not looking for advice on how to 'monkey patch'. The extension works in production just fine. My problem is how to catch the moment exactly after InheritedResources is loaded, so I can stick my extension into it :)

    Update 2: Another attempt at clarification. The sequence of events is:

    a) Rails loads plugins. My plugin loads after inherited_resources and patches it.
    b) A development-mode request is served and works.
    c) Rails unloads all the 'unloadable' code, which includes all application code and also inherited_resources.
    d) Another request comes in.
    e) Rails loads the controller, which inherits from InheritedResources.
    f) Rails loads inherited_resources, which inherits from ApplicationController.
    g) Rails loads ApplicationController (or maybe it's already loaded at this stage, not sure).
    h) The request fails, as nothing has re-loaded my plugin to patch inherited_resources; plugin init.rb files are not reloaded.

    I need to catch the point in time between g and h.
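
    In Rails 2.x the usual hook for "just after the unloadable code has been reloaded" is a to_prepare callback, which runs before every request in development and once at boot in production. A minimal sketch, assuming a Rails 2.3-style setup (this is an assumption about the environment, not something the post confirms):

        # init.rb (sketch): re-apply the extension each time Rails reloads application code.
        # to_prepare callbacks run before every request in development and once in production.
        ActionController::Dispatcher.to_prepare do
          InheritedResources::Base.send :include, MyModule
        end

    If the plugin's init.rb has the Rails config object in scope, config.to_prepare should behave the same way.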

    Read the article

  • Advice needed: ADSL and VPN for a small company

    - by Saajid Ismail
    Hi. I need advice on purchasing an ADSL modem/router for a small company. At the moment we are using the iBurst Wireless service for internet connectivity. I have the iBurst desktop modem, which connects to my Netgear WNR2000 router via ethernet. I am using the Netgear WNR2000 to deploy a wireless network as well. I have also set up a VPN using Windows Server 2003 and enabled the VPN passthrough settings on the Netgear router. I am able to connect to the office network remotely without difficulty. However, from what I've read, the Netgear WNR2000 only supports VPN passthrough for a single session. This is simply not good enough: I need to be able to support at least 3 concurrent VPN connections immediately, and up to 5 in the near future.

    Now I am cancelling my iBurst Wireless service and have just had my ADSL line installed. I have to purchase an ADSL modem, and now is a good time to think of future-proofing my investment. I need a good ADSL modem that will allow me to support at least 5 concurrent VPN connections, or more, without breaking the bank. My budget is about 150-200 USD. I believe that my current Netgear WNR2000 router will be useless, except maybe to extend my wireless network a bit in the future. Is there a solution where I can still use my Netgear WNR2000 for WiFi, for example by connecting a cheaper non-WiFi ADSL modem to the Netgear router? If not, then which WiFi-enabled ADSL modem/router that supports at least 5 VPN passthroughs can you recommend?

    To sum it up, I need an ADSL modem/router that:

    - is ADSL & ADSL2+ compatible
    - has built-in 802.11n 270/300mbps WiFi (if having this feature doesn't push the price up too much)
    - supports at least 5 VPN connections using VPN passthrough

    EDIT: Answer 2.10 in the following FAQ has me a bit worried - What is VPN/multiple VPN Pass-through?

    Read the article

  • Virtual Machines Renaming/Backing up/Sharing

    - by evan
    I've started using VMware virtual machines for all of my software development projects and have a few questions for others doing the same thing.

    First, how can you rename the virtual machine and the name of the virtual hard drive? I have a base development machine that I clone for different projects. I'd like to name the machine and its hard drive according to the project (right now when I copy them via cut and paste, the file names remain the same and I can only organize them by putting them in a specific directory).

    Second, what is the best way to back up a virtual machine? Is it possible (by breaking the virtual hard drive up into chunks instead of one big file) to get incremental backups working? It seems Time Machine always tries to make a copy of the whole thing, which is time consuming because each virtual machine is around 30GB.

    Finally, how slow would it be to have a virtual machine shared on an NFS mount on a wireless N network and used from multiple computers (but with only one person using it at a time)? Would it be more reasonable on a gigabit LAN connection?

    Thanks for your input! And please feel free to share any advice or wisdom about using virtual machines for software development and the best ways to speed them up!

    Read the article

  • JEE6 Global JNDI Name and Maven Deployment

    - by wobblycogs
    I'm having some problems with the global JNDI names of my EJB resources, which is (or at least will) cause my JNDI lookups to fail. The project is being developed in NetBeans and is a standard Maven web application. When my application is deployed to GlassFish 3.0, the application name is set to something like:

        com.example_myapp_war_1.0-SNAPSHOT

    which is all well and good from NetBeans' point of view, because it ensures the name is unique, but it also means all the EJBs get global names such as this:

        java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService

    This, of course, is going to cause problems, because every time the version changes all the global names change (I've tested this by changing the version, and the names indeed changed). The name is being generated from the POM file and is a concatenation of:

        <groupId>com.example</groupId>
        <artifactId>myapp</artifactId>
        <packaging>war</packaging>
        <version>1.0-SNAPSHOT</version>

    Up until now I've got away with just injecting all the resources using @EJB, but now I need to access the CustomerService EJB from a JSF converter, so I'm doing a JNDI lookup like this:

        try {
            Context ctx = new InitialContext();
            CustomerService customerService = (CustomerService)ctx.lookup(
                "java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService" );
            return customerService.get(submittedValue);
        } catch( Exception e ) {
            logger.error( "Failed to convert customer.", e );
            return null;
        }

    which will clearly break when the application is properly released and the module name changes.

    So, the million dollar question: how can I set the module name in Maven, or how do I recover the module name so that I can programmatically build the JNDI name at runtime? I've tried setting it in the web.xml file as suggested by that link, but it was ignored. I think I'd rather build the name at runtime, as that means there is less scope for screw-ups when the application is deployed.

    Many thanks for any help; I've been tearing my hair out all day on this.
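
    Two approaches that are commonly suggested here: look the bean up through the portable java:module namespace, which leaves out the application/version prefix for code running inside the same module, or pin the deployed name by giving the WAR a fixed finalName. A sketch of the lookup, assuming the converter and the EJB live in the same WAR:

        // Module-scoped portable JNDI name: no application name or version involved.
        CustomerService customerService =
            (CustomerService) new InitialContext().lookup("java:module/CustomerService");

    And the Maven side, if fixing the module name is preferred:

        <!-- pom.xml sketch: keep the build artifact (and so the GlassFish module name) version-free -->
        <build>
            <finalName>myapp</finalName>
        </build>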

    Read the article

  • Windows 7 migration led to crashdump and hibernate problems

    - by MartyMacGyver
    Note: I'm using a Samsung 830 SSD (OS migrated from a defunct PC) and, other than these two (interrelated?) problems, it's working fine. Surprisingly well, actually. The motherboard is an ASUS P8Z77-V Deluxe.

    Problem 1: Crashdumps are not working. volmgr throws an event 45, "The system could not sucessfully load the crash dump driver.", whenever you modify crashdump settings, or if a crashdump occurs. diskpart says that "Crashdump disk = no", which is peculiar.

    Problem 2: Hibernation isn't working. Again, volmgr throws the same event 45 if you try to hibernate. The screen blanks, then you're at the password prompt. No sleepage occurs. (Yes, I know I should avoid hibernation on SSDs, but it's enabled and the hibernation file is definitely there, so I'd like to know why it's failing.) diskpart claims "Hibernation file = no", which is again peculiar... it's plainly there and getting created by the system.

    The common factor appears to be volmgr and/or the crashdump "service" (if that's what it is). I'd much rather get this working than spend days reinstalling and reconfiguring the entire system, especially when it's working perfectly otherwise. Sleep works as well (as long as it's not hybrid sleep).

    So, what defines the flags "Crashdump disk" and "Hibernation file disk" in diskpart's output? And what might be going wrong that's breaking crashdumps in particular?

    Read the article

  • jQuery / jEditable and jQuery UI Dialog. Re-populating table cell.

    - by solefald
    Hello, this is my first time using JS, so I really have no idea what I am doing. I have a table with data and am using jEditable to allow a user to edit the data and POST it to my PHP script. When the user hits "Save", it opens a jQuery UI dialog with the data from the PHP script, and that seems to work fine. However, if the user hits Cancel or Esc to close the jQuery UI dialog, they end up back on the page, but the cell they tried to edit is now completely empty. How do I get the cell re-populated? Thank you!

        $(".editable_textarea").editable("/path/to/my/scrip.php", {
            indicator : "<img src='/images/indicator.gif'>",
            type : 'textarea',
            submitdata: { _method: "post", uri: "<?php echo $this->uri->segment(3); ?>" },
            select : false,
            submit : 'Save',
            cancel : 'Cancel',
            tooltip : "Click to edit...",
            cssclass : "editable",
            onblur: "cancel",
            cssclass: "edit_destination",
            callback: function(value, settings) {
                $(this).dialog({
                    autoOpen: true,
                    width: 400,
                    modal: true,
                    position: "center",
                    resizable: false,
                    draggable: false,
                    title: "Pending Changes",
                    buttons: {
                        "Cancel": function() { $(this).dialog("close"); },
                        "Confirm Changes": function() { document.findSameDestination.submit(); }
                    }
                });
                $('form#findSameDestination').submit(function(){
                    $(this).dialog('open');
                    return false;
                });
                $(this).editable("disable");
            },
            data: function(value, settings) {
                var retval = value.replace(/<br[\s\/]?>/gi, '\n');
                retval = retval.replace(/(<([^>]+)>)/gi, '');
                return retval;
            }
        });
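
    Without seeing the rest of the page it is hard to be sure why the cell ends up empty, but one low-tech safeguard is to remember each cell's original markup before jEditable replaces it and put it back when the dialog is cancelled. A sketch only; the selector, variable names and data key are illustrative and nothing here comes from the original page:

        // Before initializing jEditable, stash each cell's original markup.
        $(".editable_textarea").each(function () {
            $(this).data("original", $(this).html());
        });

        // ...then in the dialog's buttons option, restore it on Cancel:
        //   "Cancel": function() {
        //       $(this).dialog("close");
        //       $(".editable_textarea").html(function () { return $(this).data("original"); });
        //   }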

    Read the article

  • Microsoft Sync Framework with Multiple Clients and a Server Database

    - by David Osborn
    I'm working on changes to an application to have a local data store instead of connecting directly to the database server. The client app will then sync changes both ways between the server database and its local data store. I'm trying to implement the syncing solution using Microsoft Sync Framework. To be clear, there is no server application, only the database server, but there are multiple clients.

    I have started implementing two classes that implement FullEnumerationSimpleSyncProvider, one called LocalSyncProvider and one called ServerSyncProvider. The LocalSyncProvider knows how to enumerate, insert, update, and delete items from the local data store, and the ServerSyncProvider knows how to enumerate, insert, update, and delete items from the database server. They each use the provided SqlMetadataStore and store the two metadata stores (one for the local store and one for the server) on the local drive of the client. This means that each client will have its own metadata store for its local data store and its own metadata store for the server database.

    Is this actually going to work, does there need to exist only one server metadata store, or do I need to be using the Sync Framework differently?

    Read the article

  • Some questions about slugs in ASP.NET MVC content system

    - by mare
    In my application I'm currently using forms which allow entering Title and Slug fields. I've been struggling with slugs for a while because I can never decide once and for all how to handle them.

    1) Make Title and Slug independent. Should I allow users to enter both Title and Slug separately? This is what I had first. I also had an option that if the user did not enter the Slug, it was derived from the Title. If both were entered, the Slug field took precedence.

    2) Derive Slug from Title; once content is inserted, that's it for the Slug, no more changes. Then I switched to only a Title field and derived the Slug from the Title. While doing it I found out that I now have to change all the forms that allowed the user to enter a slug. This way of doing it also prevents users from changing Slugs: they can change the Title, but it has no effect on the Slug. You can think of it as the Slug being a unique ID.

    3) Now I'm thinking again of allowing users to change the slug, though I don't think it is all that useful. How many times does content that someone already added, and spent time writing, even require a change of either Title or Slug? I don't think it is that many times. The biggest problem with the 3rd option is that if I use Slugs as IDs, I need to update the references all over the place when a Slug changes, or maintain a table that would contain some kind of Slug history.

    What are your thoughts on these, I hope, valid questions?
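
    For option 2, the derivation itself is usually a small pure function. A minimal C# sketch of the kind of title-to-slug rule implied above (the method name and exact character rules are illustrative, not from the post):

        // Hypothetical helper: lower-case, strip invalid characters, collapse whitespace into dashes.
        public static string GenerateSlug(string title)
        {
            string slug = title.ToLowerInvariant().Trim();
            slug = System.Text.RegularExpressions.Regex.Replace(slug, @"[^a-z0-9\s-]", "");
            slug = System.Text.RegularExpressions.Regex.Replace(slug, @"[\s-]+", "-");
            return slug.Trim('-');
        }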

    Read the article

  • Flex 3 Value aware combo box error

    - by user1057094
    I am using a value-aware ComboBox. It was working fine, but recently I started getting the error below when I try to click on the ComboBox, and the error is random. I am not sure if it is because of any changes I have made in the code, or some changes in the data provider, etc. Any help is appreciated...

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at mx.controls::ComboBox/destroyDropdown()
            at mx.controls::ComboBox/styleChanged()
            at mx.core::UIComponent/setBorderColorForErrorString()
            at mx.core::UIComponent/commitProperties()
            at mx.controls::ComboBase/commitProperties()
            at mx.controls::ComboBox/commitProperties()
            at custom.controls::ComboBox/commitProperties()[D:\workspace\eclipse\indigo\ams\flex_src\custom\controls\ComboBox.mxml:13]
            at mx.core::UIComponent/validateProperties()
            at mx.managers::LayoutManager/validateProperties()
            at mx.managers::LayoutManager/doPhasedInstantiation()

    The debugger throws:

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at mx.controls::ComboBox/destroyDropdown()
            at mx.controls::ComboBox/styleChanged()
            at mx.core::UIComponent/setBorderColorForErrorString()
            at mx.core::UIComponent/commitProperties()
            at mx.controls::ComboBase/commitProperties()
            at mx.controls::ComboBox/commitProperties()
            at custom.controls::ComboBox/commitProperties()[D:\workspace\eclipse\indigo\ams\flex_src\custom\controls\ComboBox.mxml:13]
            at mx.core::UIComponent/validateProperties()
            at mx.managers::LayoutManager/validateProperties()
            at mx.managers::LayoutManager/doPhasedInstantiation()
            at mx.managers::LayoutManager/validateNow()
            at mx.controls::ComboBox/displayDropdown()
            at mx.controls::ComboBox/downArrowButton_buttonDownHandler()
            at flash.events::EventDispatcher/dispatchEventFunction()
            at flash.events::EventDispatcher/dispatchEvent()
            at mx.core::UIComponent/dispatchEvent()
            at mx.controls::Button/http://www.adobe.com/2006/flex/mx/internal::buttonPressed()
            at mx.controls::Button/mouseDownHandler()
            at flash.events::EventDispatcher/dispatchEventFunction()
            at flash.events::EventDispatcher/dispatchEvent()
            at mx.core::UIComponent/dispatchEvent()
            at mx.controls::ComboBase/textInput_mouseEventHandler()

    Read the article

  • SubmitChanges doesn't save but removes inserts from change set, no errors

    - by winston schröder
    Hi everybody, I have a deeper question regarding the debug functionality of the LINQ to SQL SubmitChanges() function. I want to save a record in a table of a locally cached DB (local db cache: server SQL Express 2008, client SQL CE). Before calling SubmitChanges() I can find the new item via DataContext.GetChangeSet(). After calling SubmitChanges(), the items to insert have been removed from the change set. (That's what this function is supposed to do.) There are no change conflicts and no errors in the db's log output. No exception at all. The table's count stays at the same value.

        if ((e.Parameter == null) || (!e.Parameter.GetType().Equals(typeof(LibDB.Client.Vehicles))))
            return;
        LibDB.Client.Vehicles tmp = e.Parameter as LibDB.Client.Vehicles;
        try
        {
            ChangeSet cs = this._dc.GetChangeSet();
            if ((tmp == null) || (this._dc == null))
                return;
            if (this._dc.Vehicles.Where(veh => veh.Vin == tmp.Vin).Count() == 0)
                this._dc.Vehicles.InsertOnSubmit(tmp);
            else if (this._dc.Vehicles.Where(veh => veh.Vin == tmp.Vin).Count() == 1)
                this._dc.Vehicles.Attach(tmp, true);
            else
                return;
            using (TransactionScope ts = new TransactionScope())
            {
                try
                {
                    this._dc.SubmitChanges();
                    //this._dc.Refresh(RefreshMode.OverwriteCurrentValues, this._dc.Vehicles);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
            }
            if (this._dc.Vehicles.Where(veh => veh.Vin == tmp.Vin).Count() == 1)
                MessageBox.Show("Vehicle not saved.");
            this.vehSelector.ResetLayout();
        }

    I would appreciate any help, since I'm losing hope of finding the error. Thanks in advance, Winston
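
    One detail that stands out in the snippet above: the TransactionScope is disposed without Complete() ever being called, and disposing an incomplete scope rolls the ambient transaction back, which would silently undo the insert exactly as described. A sketch of that single change, everything else left as posted:

        using (TransactionScope ts = new TransactionScope())
        {
            this._dc.SubmitChanges();
            ts.Complete();   // without this, disposing the scope rolls the transaction back
        }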

    Read the article

  • Increment sequence when updating freebusy in icalendar?

    - by user302038
    I'm new to iCalendar, but I couldn't find an answer to this specific situation. My web site has a shared calendar where users can add and update events. They can also indicate on the web site if they will be attending an event. I want to add a button to let them download the calendar as published iCalendar events. When they have indicated they will be attending an event, I want the event in the iCalendar file to be set as "busy", otherwise as "not busy".

    I can easily increment the sequence number on the event if the event is changed, but must I increment the sequence number if the user changed whether or not they will attend the event? I don't think that is a "mandatory" reason to increment the sequence, but I'm not sure. What will happen if the user changes their busy status for an event and re-downloads without the sequence number changing?

    I suppose I could keep a separate sequence number for each time the user changes their attendance status, then add the event sequence number and attendance sequence number together for the "final" event sequence number that's downloaded, but do I have to do it that way?

    Read the article

  • How to implement a .net 3-tier architecture using Winforms

    - by Anders Jakobsen
    I have for some time built n-tier applications using a database server as the data tier, WinForms as the presentation tier, and an ASP.NET ASMX web service in the middle to send untyped DataSets back and forth. While this approach has worked for me so far, it certainly does feel outdated today. What technologies should I use if I were to create a similarly architected application today? .NET 4.0 technology is welcome.

    I still want a database server as the data tier, and the ASMX web services should probably be replaced by WCF. I would still like to have the presentation tier running as a desktop application (WinForms or WPF), so ignore ASP.NET for this question.

    My main question really comes down to what to use as business objects. I want something that is easier to bind to the interface than untyped DataSets, and strongly-typed DataSets feel very heavy. I also need something that can track changes to make sure users do not override each other's changes in the database. Will Entity Framework 4 be usable for a scenario like this? Are there any thorough guides available?

    Read the article

  • git-p4 submit fails with "Not a valid object name HEAD~261"

    - by Harlan
    I've got a git repository that I'd like to mirror to a Perforce repository. I've downloaded the git-p4 script (the more recent version that doesn't give deprecation warnings) and have been working with that. I've figured out how to pull changes from Perforce, but I'm getting an error when I try to sync changes from the git repo back. Here's what I've done so far:

        git clone [email protected]:asdf/qwerty.git
        git-p4 sync //depot/path/to/querty
        git merge remotes/p4/master

    (there was a single README file...) So, I've copied the origin to a clean, new directory, got a lovely looking merged tree of files, and git status shows I'm up to date. But:

        > git-p4 submit
        fatal: Not a valid object name HEAD~261
        Command failed: git cat-file commit HEAD~261

    This thread on the git mailing list seems to be relevant, but I can't figure out what they're doing with all the A, B, and Cs. Could someone please clarify what "Not a valid object name" means, and what I can do to fix the problem? All I want to do is to periodically snapshot origin/master into Perforce; a full history is not required. Thanks.
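
    The error seems to mean that git-p4 is counting back from HEAD to the commit it imported from Perforce and that 261 steps back does not exist on that line of history, i.e. HEAD does not descend from refs/remotes/p4/master the way submit expects. A sequence that is often suggested (a sketch only, assuming the Perforce import really does live at p4/master):

        git p4 sync        # refresh the imported Perforce history
        git p4 rebase      # replay local commits on top of p4/master so HEAD descends from it
        git p4 submit      # then submit the rebased commits back to Perforce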

    Read the article

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice, Analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. Therefore, my JavaScript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects:

    1. What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js . Remember, once websites start embedding this file, its location CANNOT change under any circumstances.

    2. Right now we can afford just one dedicated server. So I have minified my file, enabled gzip, and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future?

    3. Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? Pros/cons of moving just this file to a CDN? Also, since it's just one JavaScript file (50kb), is there any affordable CDN so we could consider it from the beginning?

    4. Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later? (both in terms of server and JavaScript/ajax limitations)

    Thanks in advance.
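
    For point 2, the cache-control part is mostly a matter of a few Apache directives. An illustrative snippet, assuming mod_expires, mod_headers and mod_deflate are enabled; the path and lifetime are examples only:

        # Serve the widget with long-lived, public caching and gzip compression.
        <Location "/js/foo.js">
            ExpiresActive On
            ExpiresDefault "access plus 7 days"
            Header append Cache-Control "public"
            SetOutputFilter DEFLATE
        </Location>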

    Read the article

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?

    The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, when they didn't have any feature changes or hot fixes. They had to be tested simply because the business is rolling updates every month.

    And then there's the risk involved... if a mission-critical application is deployed, and testing it takes a few weeks and multiple departments, is it realistic to expect to have to constantly regression-test this application?

    One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time. These applications share the same database, hence the need for the constant testing.

    I'm aware of TDD and automated tests, but those don't exist at the moment. Any advice?

    Read the article

  • Printer deployment via Group Policy not working on a single system

    - by Aron Rotteveel
    One of my coworkers just got a new laptop running Windows 7 Pro x64. We use a GPO to deploy the printers to every system, but for some reason it is not working on this system. I have been breaking my head over this for the past 3 hours now without any result. The strange thing is that gpresult /H seems to indicate that the GPO did run.

    The hardware:

    - Laptop: Windows 7 Professional x64
    - Print server: Windows Server 2008 x64 R1
    - HP Color LaserJet 2605dn
    - HP LaserJet P2015
    - Driver packages on server: HP Universal Print Driver PCL5, both x86 and x64

    Oddities and other info:

    - GPO working flawlessly on every other system, including my own Windows 7 Ultimate x64 laptop
    - gpresult /H shows the GPO being run
    - Windows Firewall completely disabled on the new laptop

    Below is the output of gpresult /H (originally in Dutch; I think you'll recognize it):

        Policies
          Windows Settings
            Printer Connections
              Path                                    Winning GPO
              \\Server2008\HP Color LaserJet 2605dn   Printers
              \\Server2008\HP LaserJet P2015          Printers
          Administrative Templates
            Policy definitions (ADMX files) retrieved from the local computer.
            Control Panel/Printers
              Policy                         Setting     Winning GPO
              Point and Print Restrictions   Disabled    Printers

    Like I said, I have been trying to figure this out for the past few hours without any result, so you are my last hope. Any help is appreciated.

    Read the article

  • XML Schema Migration

    - by Corwin Joy
    I am working on a project where we need to save data in an XML format. The problem is, over time we expect the format/schema for our data to change. What we want to be able to do is to produce scripts to migrate our data across different schema versions. We distribute our product to thousands of customers, so we need to be able to run/apply these scripts at customer sites (we can't just do the conversions by hand).

    I think that what we are looking for is some kind of XML data migration tool. In my mind the ideal tool could:

    - Do an "XML diff" of two schemas to identify added/deleted/changed nodes.
    - Allow us to specify transformation functions. So, for example, we might add a new element to our schema that is a function of the old elements (e.g. a new element C where C = A+B, A and B being old elements).

    So I think I am looking for a kind of XML diff and patch tool which can also apply transformation functions. One tool I am looking at for this is Altova's MapForce. I'm sure others here have had to deal with XML data format migration. How did you handle it?

    Edit: One point of clarification. The "diff" I plan to do is on the schema or .xsd files. The actual changes will be made to particular data sets that follow a given schema. These data sets will be .xml files. So it's a "diff" of the schema to help figure out what changes need to be made to data sets to migrate them from one schema to another.
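
    Transformations like the C = A+B example are the sort of thing an XSLT identity transform handles independently of any diff tool. A minimal sketch, with hypothetical element names A, B and C as direct children of the document root:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- Identity transform: copy everything from the old document as-is -->
          <xsl:template match="@*|node()">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>

          <!-- Migration rule: add a new element C whose value is A + B (element names are illustrative) -->
          <xsl:template match="/*">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
              <C><xsl:value-of select="A + B"/></C>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>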

    Read the article

  • Can not call web service with basic authentication using WCF

    - by RexM
    I've been given a web service written in Java that I'm not able to make any changes to. It requires the user to authenticate with basic authentication to access any of the methods. The suggested way to interact with this service in .NET is by using Visual Studio 2005 with WSE 3.0 installed. This is an issue, since the project is already using Visual Studio 2008 (targeting .NET 2.0). I could do it in VS2005; however, I do not want to tie the project to VS2005, or do it by creating an assembly in VS2005 and including that in the VS2008 solution (which basically ties the project to 2005 anyway for any future changes to the assembly). I think that either of these options would make things complicated for new developers by forcing them to install WSE 3.0, and would keep the project from being able to use 2008 and features in .NET 3.5 in the future... i.e., I truly believe using WCF is the way to go.

    I've been looking into using WCF for this; however, I'm unsure how to get the WCF client to understand that it needs to send the authentication headers along with each request. I'm getting 401 errors when I attempt to do anything with the web service. This is what my code looks like:

        WebHttpBinding webBinding = new WebHttpBinding();
        ChannelFactory<MyService> factory = new ChannelFactory<MyService>(webBinding,
            new EndpointAddress("http://127.0.0.1:80/Service/Service/"));
        factory.Endpoint.Behaviors.Add(new WebHttpBehavior());
        factory.Credentials.UserName.UserName = "username";
        factory.Credentials.UserName.Password = "password";
        MyService proxy = factory.CreateChannel();
        proxy.postSubmission(_postSubmission);

    This will run and throw the following exception:

        "The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Basic realm=realm'."

    And this has an inner exception of:

        "The remote server returned an error: (401) Unauthorized."

    Any thoughts about what might be causing this issue would be greatly appreciated.
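
    One likely cause: WebHttpBinding defaults to no transport security, so the credentials set on the ChannelFactory are never sent and the request goes out as 'Anonymous'. A sketch of the usual fix, assuming the service stays on plain HTTP:

        // Configure the binding so WCF actually emits an HTTP Basic Authorization header.
        WebHttpBinding webBinding = new WebHttpBinding();
        webBinding.Security.Mode = WebHttpSecurityMode.TransportCredentialOnly; // credentials over plain HTTP
        webBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

        ChannelFactory<MyService> factory = new ChannelFactory<MyService>(webBinding,
            new EndpointAddress("http://127.0.0.1:80/Service/Service/"));
        factory.Endpoint.Behaviors.Add(new WebHttpBehavior());
        factory.Credentials.UserName.UserName = "username";
        factory.Credentials.UserName.Password = "password";

        MyService proxy = factory.CreateChannel();
        proxy.postSubmission(_postSubmission);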

    Read the article

  • mvvm - prismv2 - INotifyPropertyChanged

    - by depictureboy
    Since this is so long and rambling and really doesn't ask a coherent question:

    1. What is the proper way to implement sub-properties of a primary object in a viewmodel?
    2. Has anyone found a way to fix the DelegateCommand.RaiseCanExecuteChanged issue, or do I need to fix it myself until MS does?

    For the rest of the story... continue on.

    In my viewmodel I have a Doctor object property that is tied to my Model.Doctor, which is an EF POCO object. I have OnPropertyChanged("Doctor") in the setter, as such:

        Private Property Doctor() As Model.Doctor
            Get
                Return _objDoctor
            End Get
            Set(ByVal Value As Model.Doctor)
                _objDoctor = Value
                OnPropertyChanged("Doctor")
            End Set
        End Property

    The only time OnPropertyChanged fires is if the WHOLE object changes. This wouldn't be a problem except that I need to know when the properties of Doctor change, so that I can enable other controls on my form (a save button, for example). I have tried to implement it this way:

        Public Property FirstName() As String
            Get
                Return _objDoctor.FirstName
            End Get
            Set(ByVal Value As String)
                _objDoctor.FirstName = Value
                OnPropertyChanged("Doctor")
            End Set
        End Property

    This is taken from the XAMLPowerToys controls from Karl Shifflet, so I have to assume that it's correct. But for the life of me I can't get it to work.

    I have included Prism here because I am using a Unity container to instantiate my view, and it IS a singleton. I am getting change notification to the viewmodel via EventAggregator, which then populates Doctor with the new value.

    The reason I am doing all this is because of Prism's DelegateCommand. So maybe that is my real issue. It appears that there is a bug in DelegateCommand that does not fire the RaiseCanExecuteChanged method on the commands that implement it, and therefore it needs to be fired manually. I have the code for that in my OnPropertyChangedEventHandler. Of course this isn't implemented through the ICommand interface either, so I have to break down and make my properties DelegateCommand(Of X) so that I have access to RaiseCanExecuteChanged of each command.
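
    For question 1, a common MVVM pattern is to raise change notification for the wrapper property itself, not just for the parent object, so that bindings and can-execute logic keyed to FirstName actually see the change. A minimal VB sketch of that idea (the names follow the post, but the exact notification plumbing is an assumption, not taken from the original code):

        ' Sketch: notify on the wrapper property name so bindings to FirstName update,
        ' and optionally signal that the Doctor object as a whole changed state.
        Public Property FirstName() As String
            Get
                Return _objDoctor.FirstName
            End Get
            Set(ByVal Value As String)
                If _objDoctor.FirstName <> Value Then
                    _objDoctor.FirstName = Value
                    OnPropertyChanged("FirstName")   ' the property the view binds to
                    OnPropertyChanged("Doctor")      ' optional: anything bound to the whole object
                End If
            End Set
        End Property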

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL-termination role. This architecture offloads all of the SSL processing to the F5, and the F5 forwards interactive requests/responses to the SharePoint front-end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it:

    1. Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting.
    2. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint.

    We've got part 1 working fine. The main problem with part 2 is that in the response rewrite, because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:". Some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript, and their slashes (i.e. "/") are actually represented in the HTML by the 6-character Unicode escape string "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is:

        http:\u002f\u002fservername.company.com\u002f

    and should be changed to:

        https:\u002f\u002fservername.company.com\u002f

    Currently we can't even figure out how to get a match in a search/replace expression on these literal Unicode escape strings. It seems that no matter how we slice it, the Tcl interpreter translates the "\u002f" string into "/" before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double quotes and using an extra "\" to escape the "\" in the Unicode string), but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this?

    Thanks very much in advance.
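
    In Tcl, backslash-u substitution only happens inside double quotes or bare words; text wrapped in braces is taken literally, so the pattern can be written without the interpreter collapsing \u002f into "/". A minimal sketch (the payload variable and the exact rewrite policy are assumptions, not the site's actual iRule):

        # Braces keep the backslash-u sequences as literal 6-character strings.
        set find    {http:\u002f\u002fservername.company.com\u002f}
        set replace {https:\u002f\u002fservername.company.com\u002f}

        # $payload stands in for the response body (e.g. collected via HTTP::payload in the iRule).
        set payload [string map [list $find $replace] $payload]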

    Read the article

  • StarTeam trunk.

    - by Nix
    I have the unfortunate opportunity of doing source control via Borland's StarTeam. It unfortunately does very few things well, and one supreme weakness is its view management. I love SVN and come from an SVN mindset. Our issue is that after each production release we are spending countless hours merging changes into a "production support" environment. Please do not harass me; this was not my doing. I inherited it and am trying to present a better way of managing the repository. It is not an option to switch to a different SCM tool.

    Current setup:

    - Product.1.0 (TRUNK, current production code; at this level are pending bug fixes)
    - Product.2.0 (true trunk; anything checked in gets tested and then released next production cycle, so a lot of changes occur in this view)

    My proposal is going to be to swap them: have all development be done on the trunk (Production), tag on releases, and as needed create child views to represent production support bug fixes:

    - Production
    - Production.2.0.SP.1

    I cannot find any documentation to support the above proposal, so I am trying to get feedback on whether or not the change is a good idea and if there is anything you would recommend doing differently.

    Read the article
