Search Results

Search found 7051 results on 283 pages for 'updates'.

Page 224/283 | < Previous Page | 220 221 222 223 224 225 226 227 228 229 230 231  | Next Page >

  • VS2010 asking to upgrade Silverlight??

    - by Lu10ntDan
    I'm running the absolute latest versions of Silverlight and Visual Studio 2010 Professional and built a solution that contains a WPF project. From there, I added a SketchFlow project (based on Blend 4 RC), and I can run each project within the solution just fine by setting whichever one I switch to as the startup project. I then added a Silverlight 4 Business Application (taking all the defaults), and when I simply set that as the startup project and run it, VS2010 gives me the following error after trying to open a web page:

        Line: 56 (in file TestPage.aspx)
        Error: Unhandled Error in Silverlight Application
        Code: 8001
        Category: InitializeError
        Message: Upgrade required

    If I choose not to debug, I get the Silverlight page saying "This page requires a more recent version of Silverlight"! Clicking "Install Now" on the popup window brings me to Microsoft's Silverlight page where I see: "The version of Silverlight originally requested is not available. You can get a supported version from this page. This Web browser or operating system may not be compatible with Silverlight. Please review the system requirements and, if you wish to proceed, choose the link for your operating system." If I choose to upgrade anyway, I'm told that I'm running the latest version of Silverlight available. What the heck? I'm running the final versions of VS2010 Pro, Silverlight 4, and the latest version of Expression Blend 4 (RC). Why can't VS2010 run this default Silverlight Business App? Any ideas? Please?? Thanks, Lu10ntDn PS. This is on Windows 7 with UAC turned off, and ALL the latest Windows Updates installed.

    Read the article

  • XAML Binding to complex value objects

    - by Gus
    I have a complex value object class that has 1) a number of read-only properties; 2) a private constructor; and 3) a number of static singleton instance properties [so the properties of a ComplexValueObject never change and an individual value is instantiated once in the application's lifecycle].

        public class ComplexValueClass
        {
            /* A number of read-only properties */
            private readonly string _propertyOne;
            public string PropertyOne { get { return _propertyOne; } }

            private readonly string _propertyTwo;
            public string PropertyTwo { get { return _propertyTwo; } }

            /* a private constructor */
            private ComplexValueClass(string propertyOne, string propertyTwo)
            {
                _propertyOne = propertyOne;
                _propertyTwo = propertyTwo;
            }

            /* a number of singleton instances */
            private static ComplexValueClass _complexValueObjectOne;
            public static ComplexValueClass ComplexValueObjectOne
            {
                get
                {
                    if (_complexValueObjectOne == null)
                    {
                        _complexValueObjectOne = new ComplexValueClass("string one", "string two");
                    }
                    return _complexValueObjectOne;
                }
            }

            private static ComplexValueClass _complexValueObjectTwo;
            public static ComplexValueClass ComplexValueObjectTwo
            {
                get
                {
                    if (_complexValueObjectTwo == null)
                    {
                        _complexValueObjectTwo = new ComplexValueClass("string three", "string four");
                    }
                    return _complexValueObjectTwo;
                }
            }
        }

    I have a data context class that looks something like this:

        public class DataContextClass : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            private ComplexValueClass _complexValueClass;
            public ComplexValueClass ComplexValueObject
            {
                get { return _complexValueClass; }
                set
                {
                    _complexValueClass = value;
                    PropertyChanged(this, new PropertyChangedEventArgs("ComplexValueObject"));
                }
            }
        }

    I would like to write a XAML binding statement to a property on my complex value object that updates the UI whenever the entire complex value object changes. What is the best and/or most concise way of doing this? I have something like:

        <Object Value="{Binding ComplexValueObject.PropertyOne}" />

    but the UI does not update when ComplexValueObject as a whole changes.
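
    When PropertyChanged is raised for "ComplexValueObject", WPF re-evaluates the whole ComplexValueObject.PropertyOne path, so a binding written as above should refresh. One thing worth guarding against (a sketch, not necessarily the cause here) is raising the event while no binding has subscribed yet, which throws a NullReferenceException in the setter:

        public ComplexValueClass ComplexValueObject
        {
            get { return _complexValueClass; }
            set
            {
                _complexValueClass = value;
                var handler = PropertyChanged;   // copy to avoid a race with unsubscription
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs("ComplexValueObject"));
                }
            }
        }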

    Read the article

  • PostgreSQL: Rolling back a transaction within a plpgsql function?

    - by jamieb
    Coming from the MS SQL world, I tend to make heavy use of stored procedures. I'm currently writing an application that uses a lot of PostgreSQL plpgsql functions. What I'd like to do is roll back all INSERTs/UPDATEs contained within a particular function if I get an exception at any point within it. I was originally under the impression that each function is wrapped in its own transaction and that an exception would automatically roll back everything. However, that doesn't seem to be the case. I'm wondering if I ought to be using savepoints in combination with exception handling instead? But I don't really understand the difference between a transaction and a savepoint well enough to know if this is the best approach. Any advice, please?

        CREATE OR REPLACE FUNCTION do_something(_an_input_var int)
        RETURNS bool AS $$
        DECLARE
            _a_variable int;
        BEGIN
            INSERT INTO tableA (col1, col2, col3)
            VALUES (0, 1, 2);

            INSERT INTO tableB (col1, col2, col3)
            VALUES (0, 1, 'whoops! not an integer');

            -- The exception will cause the function to bomb, but the values
            -- inserted into "tableA" are not rolled back.
            RETURN True;
        END;
        $$ LANGUAGE plpgsql;
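
    For reference, a minimal sketch of the savepoint-style approach (using the same illustrative tables as above): a BEGIN ... EXCEPTION block inside plpgsql establishes an implicit savepoint, so everything executed inside the block is rolled back together if any statement in it raises an error.

        CREATE OR REPLACE FUNCTION do_something_safely(_an_input_var int)
        RETURNS bool AS $$
        BEGIN
            BEGIN
                INSERT INTO tableA (col1, col2, col3) VALUES (0, 1, 2);
                INSERT INTO tableB (col1, col2, col3) VALUES (0, 1, 'whoops! not an integer');
            EXCEPTION WHEN OTHERS THEN
                -- Both inserts are undone at this point; log and bail out.
                RAISE NOTICE 'insert failed: %', SQLERRM;
                RETURN False;
            END;
            RETURN True;
        END;
        $$ LANGUAGE plpgsql;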

    Read the article

  • MS-Access: What could cause one form with a join query to load right and another not?

    - by Daniel Straight
    Form1

    Form1 is bound to Table1. Table1 has an ID field.

    Form2

    Form2 is bound to Table2 joined to Table1 on Table2.Table1_ID = Table1.ID. Here is the SQL (generated by Access):

        SELECT Table2.*, Table1.[FirstFieldINeed], Table1.[SecondFieldINeed], Table1.[ThirdFieldINeed]
        FROM Table1 INNER JOIN Table2 ON Table1.ID = Table2.[Table1_ID];

    Form2 is opened with this code in Form1:

        DoCmd.RunCommand acCmdSaveRecord
        DoCmd.OpenForm "Form2", , , , acFormAdd, , Me.[ID]
        DoCmd.Close acForm, "Form1", acSaveYes

    And when loaded runs:

        Me.[Table1_ID] = Me.OpenArgs

    When Form2 is loaded, fields bound to columns from Table1 show up correctly.

    Form3

    Form3 is bound to Table3 joined to Table2 on Table3.Table2_ID = Table2.ID. Here is the SQL (generated by Access):

        SELECT Table3.*, Table2.[FirstFieldINeed], Table2.[SecondFieldINeed]
        FROM Table2 INNER JOIN Table3 ON Table2.ID = Table3.[Table2_ID];

    Form3 is opened with this code in Form2:

        DoCmd.RunCommand acCmdSaveRecord
        DoCmd.OpenForm "Form3", , , , acFormAdd, , Me.[ID]
        DoCmd.Close acForm, "Form2", acSaveYes

    And when loaded runs:

        Me.[Table2_ID] = Me.OpenArgs

    When Form3 is loaded, fields bound to columns from Table2 do not show up correctly. WHY?

    UPDATES

    I tried making the join query into a separate query and using that as my record source, but it made no difference at all. If I go to the query for Form3 and view it in datasheet view, I can see that the information that should be pulled into the form is there. It just isn't showing up on the form.

    Read the article

  • How to effectively use WorkbookBeforeClose event correctly?

    - by Ahmad
    On a daily basis, a person needs to check that specific workbooks have been correctly updated with Bloomberg and Reuters market data, i.e. all data has pulled through and the 'numbers look correct'. In the past, people were not checking the 'numbers', which led to inaccurate uploads to other systems etc. The idea is that 'something' needs to be developed to prevent the user from closing/saving the workbook unless he/she has checked that the updates are correct/accurate. The 'numbers look correct' check is purely an intuitive exercise, thus it will not be coded in any way. The simple solution was to prompt users prior to closing the specific workbook to verify that the data has been checked. Using VSTO SE for Excel 2007, an Add-in was created which hooks into the WorkbookBeforeClose event, which is initialised in the add-in's ThisAddIn_Startup:

        private void wb_BeforeClose(Xl.Workbook wb, ref bool cancel)
        {
            //.... snip ...
            if (list.Contains(wb.Name))
            {
                DialogResult result = MessageBox.Show("some message", "sometitle", MessageBoxButtons.YesNo);
                if (result != DialogResult.Yes)
                {
                    cancel = true; // i think this prevents the whole application from closing
                }
            }
        }

    I have found the following: ThisApplication.WorkbookBeforeSave vs ThisWorkbook.Application.WorkbookBeforeSave, which recommends that one should use the ThisApplication.WorkbookBeforeClose event, which I think is what I am doing, since it will span all files opened. The issue I have with this approach is that, assuming I have several files open, some of which are in my list, the event prevents Excel from closing all files sequentially. It now requires each file to be closed individually. Am I using the event correctly, and is this effective & efficient use of the event? Should I use the Application-level event or the document-level event? Is there a way to prevent the above behaviour? Any other suggestions are welcomed. VS 2005 with VSTO SE.

    Read the article

  • Identifying and Resolving Oracle ITL Deadlock

    - by Allan
    I have an Oracle DB package that is routinely causing what I believe is an ITL (Interested Transaction List) deadlock. The relevant portion of a trace file is below.

        Deadlock graph:
                              ---------Blocker(s)--------  ---------Waiter(s)---------
        Resource Name         process session holds waits  process session holds waits
        TM-0000cb52-00000000       22     131    S               23     143          SS
        TM-0000ceec-00000000       23     143    SX              32     138    SX    SSX
        TM-0000cb52-00000000       30     138    SX              22     131          S

        session 131: DID 0001-0016-00000D1C
        session 143: DID 0001-0017-000055D5
        session 143: DID 0001-0017-000055D5
        session 138: DID 0001-001E-000067A0
        session 138: DID 0001-001E-000067A0
        session 131: DID 0001-0016-00000D1C

        Rows waited on:
        Session 143: no row
        Session 138: no row
        Session 131: no row

    There are no bitmap indexes on this table, so that's not the cause. As far as I can tell, the lack of "Rows waited on" plus the "S" in the Waiter waits column likely indicates that this is an ITL deadlock. Also, the table is written to quite often (roughly 8 inserts or updates concurrently, as often as 240 times a minute), so an ITL deadlock seems like a strong possibility. I've increased the INITRANS parameter of the table and its indexes to 100 and increased the PCTFREE on the table from 10 to 20 (then rebuilt the indexes), but the deadlocks are still occurring. The deadlock seems to happen most often during an update, but that could just be a coincidence, as I've only traced it a couple of times. My questions are twofold: 1) Is this actually an ITL deadlock? 2) If it is an ITL deadlock, what else can be done to avoid it?
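
    As an aside on the DDL involved (a sketch with placeholder object names, not the poster's actual statements): ALTER TABLE ... INITRANS only affects blocks formatted after the change, so existing blocks normally need an ALTER TABLE ... MOVE (and index rebuilds) before the new ITL settings take effect.

        ALTER TABLE my_table PCTFREE 20 INITRANS 100;
        ALTER TABLE my_table MOVE;                      -- rewrites existing blocks with the new settings
        ALTER INDEX my_table_pk REBUILD INITRANS 100;   -- indexes are unusable after MOVE and need rebuilding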

    Read the article

  • Sending custom PyQt signals?

    - by Enfors
    I'm practicing PyQt and (Q)threads by making a simple Twitter client. I have two QThreads:

    1. Main/GUI thread.
    2. Twitter fetch thread - fetches data from Twitter every X minutes.

    So, every X minutes my Twitter thread downloads a new set of status updates (a Python list). I want to hand this list over to the Main/GUI thread, so that it can update the window with these statuses. I'm assuming that I should be using the signal/slot system to transfer the "statuses" Python list from the Twitter thread to the Main/GUI thread. So, my question is twofold: How do I send the statuses from the Twitter thread? How do I receive them in the Main/GUI thread? As far as I can tell, PyQt can by default only send PyQt objects via signals/slots. I think I'm supposed to somehow register a custom signal which I can then send, but the documentation on this that I've found is very unclear to a newbie like me. I have a PyQt book on order, but it won't arrive for another week, and I don't want to wait until then. :-) I'm using PyQt 4.6-1 on Ubuntu.

    Update: This is an excerpt from the code that doesn't work. First, I try to connect the signal ("newStatuses", a name I just made up) to the function self.update_tweet_list in the Main/GUI thread:

        QtCore.QObject.connect(self.twit_in, QtCore.SIGNAL("newStatuses (statuses)"), self.update_tweet_list)

    Then, in the Twitter thread, I do this:

        self.emit(SIGNAL("newStatuses (statuses)"), statuses)

    When this line is called, I get the following message:

        QObject::connect: Cannot queue arguments of type 'statuses'
        (Make sure 'statuses' is registered using qRegisterMetaType().)

    I did a search for qRegisterMetaType() but I didn't find anything relating to Python that I could understand.
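
    For what it's worth, a minimal sketch using the same old-style signal syntax (class and attribute names are illustrative): declaring the argument type as PyQt_PyObject lets an arbitrary Python object, such as a list of statuses, cross the thread boundary without calling qRegisterMetaType().

        from PyQt4 import QtCore

        class TwitterFetcher(QtCore.QThread):
            def run(self):
                statuses = ["status one", "status two"]   # placeholder data
                self.emit(QtCore.SIGNAL("newStatuses(PyQt_PyObject)"), statuses)

        # In the GUI thread, the connect call uses the same signature string:
        # QtCore.QObject.connect(self.twit_in,
        #                        QtCore.SIGNAL("newStatuses(PyQt_PyObject)"),
        #                        self.update_tweet_list)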

    Read the article

  • FireFox Toolbar Prefwindow unload/acceptdialog Event to Update the toolbar

    - by Mark
    Hi all, I'm trying to develop a Firefox toolbar ;) In options.xul there is a PrefWindow which I'm opening from a toolbar button:

        <toolbarbutton oncommand="esbTb_OpenPreferences()"/>

        function esbTb_OpenPreferences() {
          window.openDialog("chrome://Toolbar/content/options.xul", "einstellungen",
                            "chrome,titlebar,toolbar,centerscreen,modal", this);
        }

    In my preferences I can set some checkboxes which indicate which links are presented in my toolbar. So when the preferences window is closed or the "OK" button is hit, I want to raise an event or a function which updates my toolbar via the DOM. This is the function which is called when the toolbar is loaded; it sets the visibility of the links in the toolbar:

        function esbTB_LoadMenue() {
          var MenuItemNews = document.getElementById("esbTb_rss_reader");
          var MenuItemEservice = document.getElementById("esbTb_estv");
          if (!(prefManager.getBoolPref("extensions.esbtoolbar.ShowNews"))) {
            MenuItemNews.style.display = 'none';
          }
          if (!(prefManager.getBoolPref("extensions.esbtoolbar.ShowEservice"))) {
            MenuItemEservice.style.display = 'none';
          }
        }

    I tried adding an event listener to the dialog, which didn't work the way I tried it. I also tried to hand over the window object from the root window (the toolbar) as an argument of the openDialog function and changed the function to this:

        function esbTB_LoadMenue(RootWindow) {
          var MenuItemNews = RootWindow.getElementById("esbTb_rss_reader");
          var MenuItemEservice = RootWindow.getElementById("esbTb_estv");
        }

    I then tried to access the elements through the handed-over object, but this also did not change my toolbar at runtime. So what I'm trying to do is change the visible links in my toolbar during runtime, and I don't get how I should do that... thanks in advance
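
    One detail worth noting (a sketch of a possible approach using the names from the snippets above, not a confirmed fix): because the dialog is opened with the "modal" feature, window.openDialog does not return until the preference window is closed, so the toolbar could simply be refreshed right after that call.

        function esbTb_OpenPreferences() {
          window.openDialog("chrome://Toolbar/content/options.xul", "einstellungen",
                            "chrome,titlebar,toolbar,centerscreen,modal", window);
          // The modal dialog has been closed at this point, so re-read the
          // preferences and refresh the toolbar items.
          esbTB_LoadMenue();
        }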

    Read the article

  • PyGTK, Glade, Changing the window view and threads

    - by Gaunt Face
    Heya everyone, forgive me if this seems like a stupid question; so far nowhere on the internet have I found someone offering a solution to this, and I just wanted to get some feedback from someone with more experience than myself (I've only been using Python, PyGTK and Glade for 2 days now). I have a UI window displaying, and it updates with messages from a thread that is handling a Bluetooth connection. This is fine, and I have the application closing and running quite reliably. The problem is, after a Bluetooth connection is made I wish to maintain the Bluetooth thread (i.e. keep the connection going) but completely change the UI of the main window. Now the impression I am getting from PyGTK applications made from Glade is that the easiest thing to do is just open a new window. Is this really the best option? Can I cut the tree of widgets off at the root, maintaining the window widget but adding on a new set of widgets from a separate Glade file? If opening a new window is the best option, am I right in assuming that the Bluetooth thread can be kept alive during this transition, providing I update any callbacks? Any help or pointers would be great. Cheers, Matt
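
    On the "cut the tree off at the root" idea, here is a rough sketch of one way it can work (the Glade file path and widget names are assumptions): keep the existing gtk.Window, remove its current child, and load just the root container from a second Glade file into it.

        import gtk
        import gtk.glade

        def swap_view(window, glade_path, root_widget_name):
            # Load only the named container from the second glade file.
            tree = gtk.glade.XML(glade_path, root=root_widget_name)
            new_root = tree.get_widget(root_widget_name)

            # Detach it from whatever parent glade gave it, if any.
            parent = new_root.get_parent()
            if parent is not None:
                parent.remove(new_root)

            # Replace the window's current contents.
            old_child = window.get_child()
            if old_child is not None:
                window.remove(old_child)
            window.add(new_root)
            window.show_all()
            return tree   # keep a reference so signal handlers stay connected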

    Read the article

  • Asynchronous daemon processing / ORM interaction with Django

    - by perrierism
    I'm looking for a way to do asynchronous data processing with a daemon that uses the Django ORM. However, the ORM isn't thread-safe; it's not thread-safe to try to retrieve/modify Django objects from within threads. So I'm wondering what the correct way to achieve asynchrony is. Basically what I need to accomplish is taking a list of users in the DB, querying a third-party API and then making updates to user-profile rows for those users, as a daemon or background process. Doing this in series per user is easy, but it takes too long to be at all scalable. If the daemon is retrieving and updating the users through the ORM, how do I achieve processing 10-20 users at a time? I would use a standard threading/queue system for this, but you can't thread interactions like models.User.objects.get(id=foo)... Django itself is an asynchronous processing system which makes asynchronous ORM calls(?) for each request, so there should be a way to do it? I haven't found anything in the documentation so far. Cheers
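
    One pattern that fits the constraint described above (a rough sketch; fetch_profile_data, apply_api_data and the UserProfile model are placeholders for the third-party API call, the update logic and the actual profile model) is to parallelise only the slow external calls in worker threads and keep every ORM read/write in the main thread:

        # fetch_profile_data(user_id) is assumed to wrap the third-party API call.
        import Queue
        import threading

        def worker(in_q, out_q):
            while True:
                user_id = in_q.get()
                if user_id is None:          # poison pill: stop this worker
                    break
                out_q.put((user_id, fetch_profile_data(user_id)))

        def update_profiles(user_ids, num_workers=10):
            in_q, out_q = Queue.Queue(), Queue.Queue()
            threads = [threading.Thread(target=worker, args=(in_q, out_q))
                       for _ in range(num_workers)]
            for t in threads:
                t.start()
            for uid in user_ids:
                in_q.put(uid)
            for _ in threads:
                in_q.put(None)

            # Only the main thread touches the ORM.
            for _ in user_ids:
                uid, data = out_q.get()
                profile = UserProfile.objects.get(user__id=uid)
                profile.apply_api_data(data)   # assumed helper on the profile model
                profile.save()

            for t in threads:
                t.join()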

    Read the article

  • jQuery Ajax / .each callback, next 'each' firing before ajax completed

    - by StuR
    Hi, the JavaScript below is called when I submit a form. It first splits a bunch of URLs from a textarea, and then it:

    1) Adds a line to a table for each URL, and in the last column (the 'status' column) it says "Not Started".
    2) Again it loops through each URL; first off it makes an ajax call to check on the status (status.php), which will return a percentage from 0 - 100.
    3) In the same loop it kicks off the actual process via ajax (process.php); when the process has completed (bearing in mind the continuous status updates), it will then say "Completed" in the status column and exit the auto_refresh.
    4) It should then go to the next 'each' and do the same for the next URL.

        function formSubmit(){
            var lines = $('#urls').val().split('\n');

            $.each(lines, function(key, value) {
                $('#dlTable tr:last').after('<tr><td>'+value+'</td><td>Not Started</td></tr>');
            });

            $.each(lines, function(key, value) {
                var auto_refresh = setInterval( function () {
                    $.ajax({
                        url: 'status.php',
                        success: function(data) {
                            $('#dlTable').find("tr").eq(key+1).children().last().replaceWith("<td>"+data+"</td>");
                        }
                    });
                }, 1000);

                $.ajax({
                    url: 'process.php?id='+value,
                    success: function(msg) {
                        clearInterval(auto_refresh);
                        $('#dlTable').find("tr").eq(key+1).children().last().replaceWith("<td>completed rip</td>");
                    }
                });
            });
        }

    Read the article

  • SSIS Lookup with Lookup Component Vs Script Component.

    - by Nev_Rahd
    Hello, I need to load dimensions from EDW tables (which maintain historical records) that are of key-value-parameter type. For example, suppose I have records in the EDW like this:

        Key1  Key2  Code  Value  EffectiveDate        EndDate     CurrentFlag
        100   555   01    AAA    2010-01-01 11.00.00  9999-12-31  Y
        100   555   02    BBB    2010-01-01 11.00.00  9999-12-31  Y

    These need to be loaded into the DM by pivoting them, since the combination of Key1 and Key2 makes the natural key for the DM:

        SK  NK       01   02   EffectiveDate        EndDate     CurrentFlag
        1   100-555  AAA  BBB  2010-01-01 11.00.00  9999-12-31  Y

    My SSIS package does all this pivoting well: it looks up the incoming NK in the DIM; if it is new, it inserts it; otherwise a further lookup on effective date determines whether the incoming row for the same natural key has any change in an attribute, and if so it updates the current record by setting its end date and inserts a new one with the new attribute value, pulling the most recent record's values for the other attributes. My problem is that if the same natural key comes twice with the same attribute in a single extract, my first lookup (on the natural key) lets both records pass and tries to insert both, which fails. If I take distinct records on the NK, the second one is not picked up and I need to run the package again. So my question is: how can I configure the lookup, or what alternative is there, to handle the scenario where the same NK comes twice in a single extract, so that the first record is inserted if it does not exist in the Dim table and the second one is applied as an update with reference to the record inserted above? Not sure this makes sense; I will attach a screenshot once I'm back at my work desk (on Monday). Thanks

    Read the article

  • Problem with libcurl cookie engine

    - by Seb Rose
    [Cross-posted from the lib-curl mailing list] I have a single-threaded app (MSVC C++ 2005) built against a static LIBCURL 7.19.4. A test application connects to an in-house server and performs a bespoke authentication process that includes posting a couple of forms, and when this succeeds it creates a new resource (POST) and then updates the resource (PUT) using If-Match. I only use a single connection to libcurl (i.e. only one CURL*). The cookie engine is enabled from the start using curl_easy_setopt(CURLOPT_COOKIEFILE, ""). The cookie cache is cleared at the end of the authentication process using curl_easy_setopt(CURLOPT_COOKIELIST, "SESS"); this is required by the authentication process. The next call, which completes a successful authentication, results in a couple of security cookies being returned from the server (they have no expiry date set). The server (and I) expect the security cookies to then be sent with all subsequent requests to the server. The problem is that sometimes they are sent and sometimes they aren't. I'm not a cURL expert, so I'm probably doing something wrong, but I can't figure out what. Running the test app in a loop shows a random distribution of correct cookie handling. As a workaround I've disabled the cookie engine and am doing basic manual cookie handling. Like this it works as expected, but I'd prefer to use the library if possible. Does anyone have any ideas? Thanks, Seb
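
    For context, a minimal sketch of the two cookie-engine calls described above (error handling omitted): CURLOPT_COOKIEFILE with an empty string enables the engine without reading a file, and the "SESS" command passed to CURLOPT_COOKIELIST erases the session cookies held in memory.

        #include <curl/curl.h>

        void configure_cookies(CURL *curl)
        {
            /* Enable the in-memory cookie engine; no file is actually read. */
            curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");

            /* ... perform the authentication requests on this handle ... */

            /* Discard session cookies (those without an expiry date). */
            curl_easy_setopt(curl, CURLOPT_COOKIELIST, "SESS");
        }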

    Read the article

  • Rails: Ajax: Changes Onload

    - by Jay Godse
    Hi. I have a web layout for a page that looks something like this:

        <html>
        <head>
        </head>
        <body>
          <div id="posts">
            <div id="post1" class="post" >
              <!--stuff 1-->
            </div>
            <div id="post2" class="post" >
              <!--stuff 1-->
            </div>
            <!-- 96 more posts -->
            <div id="post99" class="post" >
              <!--stuff 1-->
            </div>
          </div>
        </body>
        </html>

    I would like to be able to load and render the page with a blank "posts" div, and then have a function called when the page is loaded which goes in, loads up the posts and updates the page dynamically. In Rails, I tried using "link_to_remote" to update the div with all of the post elements. I also tweaked the posts controller to render the collection of posts if it was an ajax request (request.xhr?). It worked fine and very fast. However, the update of the blank div is triggered by clicking the link. I would like the same action to happen when the page is loaded, so that I don't have to put in a link. Is there a Rails Ajax helper or RJS function or something in Rails that I could use to trigger the loading of the "posts" after the page has loaded and rendered (without the posts)? (If push comes to shove, I will just copy the generated JS code from the link_to_remote call and have it called from the onload handler on the body.)
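
    For illustration, a sketch of the Prototype-era approach (Rails 2.x, as implied by link_to_remote; the controller and action names are assumptions): remote_function emits the same Ajax.Updater call that link_to_remote would generate, and it can be fired from dom:loaded instead of a click.

        <div id="posts"></div>

        <script type="text/javascript">
          document.observe("dom:loaded", function() {
            <%= remote_function(:update => "posts",
                                :url    => { :controller => "posts", :action => "index" }) %>
          });
        </script>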

    Read the article

  • LinqToSQL Double Insert issue

    - by Vaccano
    I have a WCF service with an object structure similar to this:

        public class MyClass
        {
            public List<MySubItem> SubItems { get; set; }
        }

        public class MySubItem
        {
            public List<MySubSubItem> SubSubItems { get; set; }
        }

        public class MySubSubItem
        {
            public string DataValue { get; set; }
        }

        public class MyClassDAL
        {
            public void InsertMyClass(MyClass myClass)
            {
                ctx.MyClasses.InsertOnSubmit(myClass);
                ctx.SubmitChanges();
            }
        }

    Sometimes my client will call in with a MyClass that submits only half of the values it has in the SubSubItems list. Later it calls the insert with the rest of the list. The problem is that when it does this I get a primary key violation. The reason is that it is trying to insert the MySubItem again (because there are more items in the SubSubItems owned by the same MySubItem object). How do I deal with this? Do I just call an Update? Do I have to try to separate them out (updates from inserts)? SQL Server 2008 has a really cool Merge functionality. Is there some way to access that from LinqToSQL?
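
    As a very rough sketch of the "separate them out" idea (the Id property, the ctx.MySubItems table and the key comparison are all assumptions about the mapping, not part of the classes above): check whether each MySubItem already exists and either insert it or only attach its new children to the existing row.

        public void UpsertMyClass(MyClass myClass)
        {
            foreach (var subItem in myClass.SubItems)
            {
                // Assumed: MySubItem has an Id that matches the primary key in the database.
                var existing = ctx.MySubItems.SingleOrDefault(s => s.Id == subItem.Id);
                if (existing == null)
                {
                    // New parent: inserting it also inserts its SubSubItems.
                    ctx.MySubItems.InsertOnSubmit(subItem);
                }
                else
                {
                    // Existing parent: only add the children that are new.
                    foreach (var subSub in subItem.SubSubItems)
                    {
                        existing.SubSubItems.Add(subSub);
                    }
                }
            }
            ctx.SubmitChanges();
        }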

    Read the article

  • Should I be relying on WebTests for data validation?

    - by Alexander Kahoun
    I have a suite of web tests created for a web service. I use it for testing a particular input method that updates a SQL database. The web service doesn't have a way to retrieve the data; that's not its purpose, only to update it. I have a validator that validates the response XML that the web service generates for each request. All that works fine. It was suggested by a teammate that I add data validation, so that I check the database to see the data after the initial response validator runs and compare it with what was in the input request. We have a number of services and libraries, separate from the web service I'm testing, that I can use to get the data and compare it. The problem is that when I run the web test, the data validation always fails even when the request succeeds. I've tried putting the thread to sleep between the response validation and the data validation, but to no avail; it always gets the data from before the response validation. I can set a breakpoint and visually see that the data has been updated in the DB; the funny thing is, when I step through it in debug with the breakpoint, it does validate successfully. Before I get too much more into this issue I have to ask: is this the purpose of web tests? Should I be able to validate data through service calls in this manner, or am I asking too much of a web test and the response validation is as far as I should go?

    Read the article

  • SQL (mySQL) update some value in all records processed by a select

    - by jdmuys
    I am using mySQL from their C API, but that shouldn't be relevant. My code must process records from a table that match some criteria, and then update the said records to flag them as processed. The lines in the table are modified/inserted/deleted by another process I don't control. I am afraid that in the following, the UPDATE might flag some records erroneously, since the set of records matching might have changed between step 1 and step 3.

        SELECT * FROM myTable WHERE <CONDITION>;                               # step 1
        <iterate over the selected set of lines. This may take some time.>     # step 2
        UPDATE myTable SET processed=1 WHERE <CONDITION>;                      # step 3

    What's the smart way to ensure that the UPDATE updates all the lines processed, and only them? A transaction doesn't seem to fit the bill, as it doesn't provide isolation of that sort: a recently modified record not in the originally selected set might still be targeted by the UPDATE statement. For the same reason, SELECT ... FOR UPDATE doesn't seem to help, though it sounds promising :-) The only way I can see is to use a temporary table to memorize the set of rows to be processed, doing something like:

        CREATE TEMPORARY TABLE workOrder (jobId INT(11));
        INSERT INTO workOrder SELECT myID as jobId FROM myTable WHERE <CONDITION>;
        SELECT * FROM myTable WHERE myID IN (SELECT * FROM workOrder);
        <iterate over the selected set of lines. This may take some time.>
        UPDATE myTable SET processed=1 WHERE myID IN (SELECT * FROM workOrder);
        DROP TABLE workOrder;

    But this seems wasteful and not very efficient. Is there anything smarter? Many thanks from a SQL newbie.

    Read the article

  • SmtpClient.SendAsync - How to ensure my application doesn't finish before callback?

    - by James
    Hi, I need to send emails asynchronously through a console application. I need to do some DB updates in the callback, but my application is exiting before the callback code gets run! How can I stop this from happening in a nice manner, rather than simply guessing how long to wait before exiting? I would imagine the async calls get placed in some form of thread? Is it possible to check if any are waiting to be called? Sample code:

        private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // Get the unique identifier for this asynchronous operation.
            String token = (string) e.UserState;

            if (e.Cancelled)
            {
                Console.WriteLine("[{0}] Send canceled.", token);
            }
            if (e.Error != null)
            {
                Console.WriteLine("[{0}] {1}", token, e.Error.ToString());
            }
            else
            {
                // update DB
                Console.WriteLine("Message sent.");
            }
        }

        public static void Main(string[] args)
        {
            var users = Repository.GetUsers();

            SmtpClient client = new SmtpClient("Host");
            client.SendCompleted += new SendCompletedEventHandler(SendCompletedCallback);

            MailAddress from = new MailAddress("[email protected]", "System", Encoding.UTF8);

            foreach (var user in users)
            {
                MailAddress to = new MailAddress(user.Email);
                MailMessage message = new MailMessage(from, to);
                message.Body = "This is a test";
                message.BodyEncoding = System.Text.Encoding.UTF8;
                message.Subject = "test message 1" + someArrows;
                message.SubjectEncoding = System.Text.Encoding.UTF8;

                string userState = String.Format("Message for user id {0}", user.ID);
                client.SendAsync(message, userState);

                message.Dispose();
            }

            // need to wait here until I have received a callback for each message
            // otherwise the application will exit
        }
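
    A sketch of one way to handle the wait (not the poster's final code; BuildMessage is an assumed helper that creates the MailMessage for a user): because a System.Net.Mail.SmtpClient only allows one SendAsync call to be in flight at a time, the messages can be queued, the next send started from each callback, and Main blocked on an event until the queue is drained.

        // Assumes: System.Collections.Generic, System.ComponentModel, System.Net.Mail, System.Threading
        private static readonly Queue<MailMessage> outbox = new Queue<MailMessage>();
        private static readonly ManualResetEvent allDone = new ManualResetEvent(false);
        private static SmtpClient client;

        private static void SendNextOrFinish()
        {
            if (outbox.Count == 0)
            {
                allDone.Set();                      // nothing left; release Main
                return;
            }
            MailMessage next = outbox.Dequeue();
            client.SendAsync(next, next.To.ToString());
        }

        private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // ... existing logging / DB update for the completed message ...
            SendNextOrFinish();
        }

        public static void Main(string[] args)
        {
            client = new SmtpClient("Host");
            client.SendCompleted += SendCompletedCallback;

            foreach (var user in Repository.GetUsers())
            {
                outbox.Enqueue(BuildMessage(user)); // assumed helper
            }

            SendNextOrFinish();                     // start the chain
            allDone.WaitOne();                      // block until the last callback has run
        }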

    Read the article

  • PLKs and Web Service Software Factory

    - by Nix
    We found a bug in Web Service Software Factory; a description can be found here. There have been no updates on it, so we decided to download the code and fix it ourselves. Very simple bug, and we patched it with maybe 3 lines of code. However, we have now tried to repackage it and use it, and are finding that this is seemingly an impossible process. Can someone please explain to me the process of PLKs? I have read all about them but still don't understand what is really required to distribute a VS package. I was able to get it to load and run using a PLK obtained from here, but I am assuming that you have to be a partner to get a functional PLK that will be recognized on other people's systems? Every time I try to install this on a different computer I get a "Package Load Failure". Is the reason I am getting errors because I am not using a partner key? Is there any other way around this? For instance, is there any way we can have an "internal" VS package that we can distribute?

    Edit: the files I had to change to get it to work:

    1. First run devenv PostInstall.proj.
    2. Generate your PLKs and replace ##Package PLK## (.resx files). Just note that the package version is not the class name but is "Web Service Software Factory: Modeling Edition", and you need to remove the new lines from the key.
    3. ProductDefinitionRegistryFragment.wxi, line 1252 (update the version to whatever version you used in the PLK).
    4. Uncomment all the // [VSShell::ProvideLoadKey("Standard", ... constants in the .tt files.

    Read the article

  • JVM/CLR Source-compatible Language Options

    - by Nathan Voxland
    I have an open source Java database migration tool (http://www.liquibase.org) which I am considering porting to .Net. The majority of the tool (at least from a complexity side) is around logic like "if you are adding a primary key and the database is Oracle, use this SQL. If the database is MySQL, use this SQL. If the primary key is named and the database is Postgres, use this SQL". I could fork the Java codebase and convert it (manually and/or automatically), but as updates and bug fixes to the above logic come in I do not want to have to apply them to both versions. What I would like to do is move all that logic into a form that can be compiled and used natively by both the Java and .Net versions. The code I am looking to convert does not contain any advanced library usage (JDBC, System.out, etc.) that would vary significantly from Java to .Net, so I don't think that will be an issue (at worst it can be designed around). So what I am looking for is:

    1. A language in which I can code the common parts of my app and compile it into classes usable by the "standard" languages on the target platform.
    2. Something that does not add any runtime requirements to the system.
    3. Nothing so strange that it scares away potential contributors.

    I know Python and Ruby both have implementations on the JVM and CLR. How well do they fit my requirements? Has anyone been successful (or unsuccessful) using this technique for cross-platform applications? Are there any gotchas I need to worry about?

    Read the article

  • PowerShell: How to find and uninstall a MS Office Update

    - by Hank
    I've been hunting for a clean way to uninstall an MS Office security update on a large number of workstations. I've found some awkward solutions, but nothing as clean or general as using PowerShell and Get-WmiObject with Win32_QuickFixEngineering and the .Uninstall method on the resulting object. [Apparently, Win32_QuickFixEngineering only refers to Windows patches. See: http://social.technet.microsoft.com/Forums/en/winserverpowershell/thread/93cc0731-5a99-4698-b1d4-8476b3140aa3 ]

    Question 1: Is there no way to use Get-WmiObject to find MS Office updates? There are so many classes and namespaces, I have to wonder.

    This particular Office update (KB978382) can be found in the registry here (for Office Ultimate):

        HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall\{91120000-002E-0000-0000-0000000FF1CE}_ULTIMATER_{6DE3DABF-0203-426B-B330-7287D1003E86}

    which kindly shows the uninstall command of:

        msiexec /package {91120000-002E-0000-0000-0000000FF1CE} /uninstall {6DE3DABF-0203-426B-B330-7287D1003E86}

    and the last GUID seems constant between different versions of Office. I've also found the update like this:

        $wu = new-object -com "Microsoft.Update.Searcher"
        $wu.QueryHistory(0,$wu.GetTotalHistoryCount()) | where {$_.Title -match "KB978382"}

    I like this search because it doesn't require any poking around in the registry, but:

    Question 2: If I've found it like this, what can I do with the found information to facilitate the uninstall? Thanks
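
    As a sketch of one way to act on question 2 (the GUIDs are the ones from the registry entry quoted above; /qn and /norestart are optional standard msiexec switches for an unattended run):

        $product = "{91120000-002E-0000-0000-0000000FF1CE}"
        $patch   = "{6DE3DABF-0203-426B-B330-7287D1003E86}"
        Start-Process -FilePath "msiexec.exe" `
            -ArgumentList "/package $product /uninstall $patch /qn /norestart" -Wait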

    Read the article

  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta-testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS. According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in the install.rdf. There is the McCoy tool to do all of this, but it is an interactive GUI tool and I'd like to automate the extension packaging using an Ant script (as this is part of a much bigger process). I can't find a more precise description of what happens when signing the update.rdf manifest than the one below, and the McCoy source is an awful lot of JavaScript. The doc says:

        The add-on author creates a public/private RSA cryptographic key pair. The public part of the key is DER encoded and then base 64 encoded and added to the add-on's install.rdf as an updateKey entry. (...) Roughly speaking the update information is converted to a string, then hashed using a sha512 hashing algorithm and this hash is signed using the private key. The resultant data is DER encoded then base 64 encoded for inclusion in the update.rdf as an signature entry.

    I don't know much about DER encoding, but it seems like it needs some parameters. So would anyone know either:

    1. the full algorithm to sign the update.rdf and install.rdf using a predefined keypair, or a scriptable alternative to McCoy,
    2. whether a command-line tool like asn1coding will suffice, or
    3. a good/simple developer tutorial on DER encoding?

    Read the article

  • Reloading a JTree during runtime

    - by Patrick Kiernan
    I create a JTree and its model in a class separate from the GUI class. The data for the JTree is extracted from a file. Now in the GUI class, the user can add files from the file system to an AWT list. After the user clicks on a file in the list, I want the JTree to update. The variable name for the JTree is schemaTree. I have the following code for when an item in the list is selected:

        private void schemaListItemStateChanged(java.awt.event.ItemEvent evt) {
            int selection = schemaList.getSelectedIndex();
            File selectedFile = schemas.get(selection);
            long fileSize = selectedFile.length();

            fileInfoLabel.setText("Size: " + fileSize + " bytes");

            schemaParser = new XSDParser(selectedFile.getAbsolutePath());
            TreeModel model = schemaParser.generateTreeModel();
            schemaTree.setModel(model);
        }

    I've updated the code to correspond to the accepted answer. The JTree now updates correctly based on which file I select in the list.

    Read the article

  • Passing viewmodel to actionresult creates new viewmodel

    - by Jonas Bohez
    I am using a viewmodel, which I then want to send to an ActionResult to use (the modified viewmodel). But in the controller, I lose the list and objects in my viewmodel. This is my view:

        @using PigeonFancier.Models
        @model PigeonFancier.Models.InschrijvingModel

        @using (Html.BeginForm("UpdateInschrijvingen", "Melker", Model))
        {
            <div>
                <fieldset>
                    <table>
                        @foreach (var item in Model.inschrijvingLijst)
                        {
                            <tr>
                                <td>@Html.DisplayFor(model => item.Duif.Naam)</td>
                                <td>@Html.CheckBoxFor(model => item.isGeselecteerd)</td>
                            </tr>
                        }
                    </table>
                    <input type="submit" value="Wijzigen"/>
                </fieldset>
            </div>
        }

    This is my controller, which does nothing at the moment until I can get the full viewmodel back from the view:

        public ActionResult UpdateInschrijvingen(InschrijvingModel inschrijvingsModel)
        {
            // inschrijvingsModel is not null, but a new model is created before it gets here.
            // Use the model for some updates
            return RedirectToAction("Inschrijven", new { vluchtId = inschrijvingsModel.vlucht.VluchtId });
        }

    This is the model, with the list and some other objects that become null because a new model is created when it comes back from the view to the ActionResult:

        public class InschrijvingModel
        {
            public Vlucht vlucht;
            public Duivenmelker duivenmelker;
            public List<CheckBoxModel> inschrijvingLijst { get; set; }

            public InschrijvingModel()
            {
                // Without this I get a "No parameterless constructor defined" exception,
                // so this is used when the model comes back from the view.
            }

            public InschrijvingModel(Duivenmelker m, Vlucht vl)
            {
                inschrijvingLijst = new List<CheckBoxModel>();
                vlucht = vl;
                duivenmelker = m;
                foreach (var i in m.Duiven)
                {
                    inschrijvingLijst.Add(new CheckBoxModel() { Duif = i, isGeselecteerd = i.IsIngeschrevenOpVlucht(vl) });
                }
            }
        }

    What is going wrong and how should I fix this problem, please? Thanks
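
    One thing the default model binder is sensitive to (a sketch of how the model could be reshaped, not a confirmed fix for this exact case): it only populates public properties with matching posted form keys, so public fields such as vlucht and duivenmelker are never bound, and list elements only round-trip when the inputs carry indexed names like inschrijvingLijst[0].isGeselecteerd (e.g. a for loop over m.inschrijvingLijst[i] instead of foreach).

        public class InschrijvingModel
        {
            // Properties, not fields, so the DefaultModelBinder can set them.
            public Vlucht vlucht { get; set; }
            public Duivenmelker duivenmelker { get; set; }
            public List<CheckBoxModel> inschrijvingLijst { get; set; }

            // Parameterless constructor for the binder.
            public InschrijvingModel()
            {
                inschrijvingLijst = new List<CheckBoxModel>();
            }
        }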

    Read the article

  • Mode specific key bindings

    - by rejeep
    Hey, I have a minor mode that also comes with a global mode. The mode has some key bindings, and I want the user to have the possibility to specify which bindings should work in each major mode:

        (my-minor-mode-bindings-for-mode 'some-mode '(key1 key2 ...))
        (my-minor-mode-bindings-for-mode 'some-other-mode '(key3 key4 ...))

    So I need some kind of mode/buffer-local keymap. Buffer-local is a bit problematic since the user can change the major mode. I have tried some solutions, none of which works very well:

    1. Bind all possible keys always, and when the user types a key, check whether the key should be active in that mode. Execute the action if true, otherwise fall back.
    2. Like the previous case, only no keys are bound. Instead I use a pre-command hook and check if the key pressed should do anything.
    3. For each buffer update (whatever that means), run a function that first clears the keymap and then updates it with the bindings for that particular mode.

    I have tried these approaches and found problems with all of them. Do you know of any good way to solve this? Thanks!
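
    For what it's worth, a rough sketch of one more approach (assuming the minor mode is called my-minor-mode; the commands and keys are placeholders): build a per-buffer keymap from an alist keyed on the major mode and install it through minor-mode-overriding-map-alist, which is automatically buffer-local and shadows the mode's normal keymap only in that buffer.

        (defvar my-minor-mode-bindings-alist
          '((some-mode       . (("C-c a" . my-command-1)
                                ("C-c b" . my-command-2)))
            (some-other-mode . (("C-c c" . my-command-3)))))

        (defun my-minor-mode-setup-local-keys ()
          (let ((map (make-sparse-keymap))
                (bindings (cdr (assq major-mode my-minor-mode-bindings-alist))))
            (dolist (b bindings)
              (define-key map (kbd (car b)) (cdr b)))
            ;; Shadows my-minor-mode's keymap in this buffer only.
            (push (cons 'my-minor-mode map) minor-mode-overriding-map-alist)))

        ;; Re-run whenever the major mode changes, so the bindings follow it.
        (add-hook 'after-change-major-mode-hook 'my-minor-mode-setup-local-keys)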

    Read the article
