Search Results

Search found 13573 results on 543 pages for 'the great widi'.


  • MVVM load data during or after ViewModel construction?

    - by mkmurray
    My generic question is as the title states, is it best to load data during ViewModel construction or afterward through some Loaded event handling? I'm guessing the answer is after construction via some Loaded event handling, but I'm wondering how that is most cleanly coordinated between ViewModel and View? Here's more details about my situation and the particular problem I'm trying to solve: I am using the MVVM Light framework as well as Unity for DI. I have some nested Views, each bound to a corresponding ViewModel. The ViewModels are bound to each View's root control DataContext via the ViewModelLocator idea that Laurent Bugnion has put into MVVM Light. This allows for finding ViewModels via a static resource and for controlling the lifetime of ViewModels via a Dependency Injection framework, in this case Unity. It also allows for Expression Blend to see everything in regard to ViewModels and how to bind them. So anyway, I've got a parent View that has a ComboBox databound to an ObservableCollection in its ViewModel. The ComboBox's SelectedItem is also bound (two-way) to a property on the ViewModel. When the selection of the ComboBox changes, this is to trigger updates in other views and subviews. Currently I am accomplishing this via the Messaging system that is found in MVVM Light. This is all working great and as expected when you choose different items in the ComboBox. However, the ViewModel is getting its data during construction time via a series of initializing method calls. This seems to only be a problem if I want to control what the initial SelectedItem of the ComboBox is. Using MVVM Light's messaging system, I currently have it set up where the setter of the ViewModel's SelectedItem property is the one broadcasting the update and the other interested ViewModels register for the message in their constructors. It appears I am currently trying to set the SelectedItem via the ViewModel at construction time, which hasn't allowed sub-ViewModels to be constructed and register yet. What would be the cleanest way to coordinate the data load and initial setting of SelectedItem within the ViewModel? I really want to stick with putting as little in the View's code-behind as is reasonable. I think I just need a way for the ViewModel to know when stuff has Loaded and that it can then continue to load the data and finalize the setup phase. Thanks in advance for your responses.
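
    One pattern that addresses this (a minimal sketch, not MVVM Light-specific API; IItemService, ParentViewModel and ParentView are placeholder names) is to keep the constructor cheap and move both the data load and the initial SelectedItem assignment into an explicit LoadData method that the View calls from its Loaded event. By the time Loaded fires, the child Views and their ViewModels exist and have registered for messages, so the initial selection broadcast is not lost.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;
        using System.Windows.Controls;

        public interface IItemService          // assumed data service
        {
            IEnumerable<string> GetItems();
        }

        public class ParentViewModel
        {
            private readonly IItemService _itemService;
            private string _selectedItem;
            private bool _isLoaded;

            public ObservableCollection<string> Items { get; private set; }

            public ParentViewModel(IItemService itemService)
            {
                // Construction only wires dependencies; no data access here.
                _itemService = itemService;
                Items = new ObservableCollection<string>();
            }

            public string SelectedItem
            {
                get { return _selectedItem; }
                set
                {
                    _selectedItem = value;
                    // Raise PropertyChanged and broadcast the MVVM Light message here.
                }
            }

            // Called once the visual tree (and the sub-ViewModels hanging off it)
            // has been built and their message registrations are in place.
            public void LoadData()
            {
                if (_isLoaded) return;
                _isLoaded = true;

                foreach (var item in _itemService.GetItems())
                    Items.Add(item);

                SelectedItem = Items.FirstOrDefault();   // broadcast happens after registration
            }
        }

        // View code-behind: a single thin pass-through keeps the code-behind minimal.
        public partial class ParentView : UserControl
        {
            public ParentView()
            {
                InitializeComponent();
                Loaded += delegate
                {
                    var vm = DataContext as ParentViewModel;
                    if (vm != null) vm.LoadData();
                };
            }
        }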

    Read the article

  • Creating Entity Framework objects with Unity for Unit of Work/Repository pattern

    - by TobyEvans
    Hi there, I'm trying to implement the Unit of Work/Repository pattern, as described here: http://blogs.msdn.com/adonet/archive/2009/06/16/using-repository-and-unit-of-work-patterns-with-entity-framework-4-0.aspx This requires each Repository to accept an IUnitOfWork implementation, eg an EF datacontext extended with a partial class to add an IUnitOfWork interface. I'm actually using .net 3.5, not 4.0. My basic Data Access constructor looks like this: public DataAccessLayer(IUnitOfWork unitOfWork, IRealtimeRepository realTimeRepository) { this.unitOfWork = unitOfWork; this.realTimeRepository = realTimeRepository; } So far, so good. What I'm trying to do is add Dependency Injection using the Unity Framework. Getting the EF data context to be created with Unity was an adventure, as it had trouble resolving the constructor - what I did in the end was to create another constructor in my partial class with a new overloaded constructor, and marked that with [InjectionConstructor] [InjectionConstructor] public communergyEntities(string connectionString, string containerName) :this() { (I know I need to pass the connection string to the base object, that can wait until once I've got all the objects initialising correctly) So, using this technique, I can happily resolve my entity framework object as an IUnitOfWork instance thus: using (IUnityContainer container = new UnityContainer()) { container.RegisterType<IUnitOfWork, communergyEntities>(); container.Configure<InjectedMembers>() .ConfigureInjectionFor<communergyEntities>( new InjectionConstructor("a", "b")) DataAccessLayer target = container.Resolve<DataAccessLayer>(); Great. What I need to do now is create the reference to the repository object for the DataAccessLayer - the DAL only needs to know the interface, so I'm guessing that I need to instantiate it as part of the Unity Resolve statement, passing it the appropriate IUnitOfWork interface. In the past, I would have just passed the Repository constructor the db connection string, and it would have gone away, created a local Entity Framework object and used that just for the lifetime of the Repository method. This is different, in that I create an Entity Framework instance as an IUnitOfWork implementation during the Unity Resolve statement, and it's that instance I need to pass into the constructor of the Repository - is that possible, and if so, how? I'm wondering if I could make the Repository a property and mark it as a Dependency, but that still wouldn't solve the problem of how to create the Repository with the IUnitOfWork object that the DAL is being Resolved with I'm not sure if I've understood this pattern correctly, and will happily take advice on the best way to implement it - Entity Framework is staying, but Unity can be swapped out if not the best approach. If I've got the whole thing upside down, please tell me thanks
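
    One way to get Unity to hand both the DataAccessLayer and the repository the same IUnitOfWork instance is to register that mapping with a lifetime manager. The sketch below uses PerResolveLifetimeManager (available in Unity 2.0), which shares one instance per Resolve call; ContainerControlledLifetimeManager would instead make the unit of work a container-wide singleton. The type names are simplified placeholders, not the poster's actual classes.

        using Microsoft.Practices.Unity;

        public interface IUnitOfWork { void Commit(); }
        public interface IRealtimeRepository { }

        public class EfUnitOfWork : IUnitOfWork
        {
            public void Commit() { /* SaveChanges() on the underlying EF context */ }
        }

        public class RealtimeRepository : IRealtimeRepository
        {
            private readonly IUnitOfWork _unitOfWork;
            public RealtimeRepository(IUnitOfWork unitOfWork) { _unitOfWork = unitOfWork; }
        }

        public class DataAccessLayer
        {
            private readonly IUnitOfWork _unitOfWork;
            private readonly IRealtimeRepository _realTimeRepository;

            public DataAccessLayer(IUnitOfWork unitOfWork, IRealtimeRepository realTimeRepository)
            {
                _unitOfWork = unitOfWork;
                _realTimeRepository = realTimeRepository;
            }
        }

        class Program
        {
            static void Main()
            {
                using (IUnityContainer container = new UnityContainer())
                {
                    // One IUnitOfWork per object graph: the DAL and the repository
                    // built during the same Resolve call share the same instance.
                    container.RegisterType<IUnitOfWork, EfUnitOfWork>(new PerResolveLifetimeManager());
                    container.RegisterType<IRealtimeRepository, RealtimeRepository>();

                    DataAccessLayer target = container.Resolve<DataAccessLayer>();
                }
            }
        }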

    Read the article

  • Combining mousedown and mousemove events together with Javascript (JQUERY?)

    - by webzide
    Dear experts, I would like to combine the mousedown, mousemove, and mouseup events together. Basically the application of this would be making a UI for selecting elements, like selecting icons in Windows: when the mouse is clicked down and as you move it, a dotted border dynamically moves with it. I know this is possible with the predefined jQuery UI, but I am building a web application that requires integrating this myself and would like to know the technique. I spent hours on this and it just doesn't work. Here's the code I have so far and the logic behind it: $(document).bind('mousedown', function (evt) { evt = (evt) ? evt : event; startX = evt.clientX; startY = evt.clientY; div = document.createElement("div"); div.style.position = "absolute"; div.style.left = startX + "px"; div.style.top = startY + "px"; div.style.border = "1px dotted #DDDDDD"; $(document).bind('mousemove', function(evt){ evt=(evt) ? evt: event; alert("TESTING OF THIS WORKS"); }); }); $(document).bind('mouseup',function(evt) { var evt = (evt) ? evt : event; var endX = evt.clientX; var endY = evt.clientY; difX = (endX - startX); difY = (endY - startY) if ((difX || difY) > 0) { div.style.width = difX + "px"; div.style.height = difY + "px"; document.body.appendChild(div); } $(this).unbind('mousemove'); }); As you can see, I have placed an event binding of mousemove inside the event function of mousedown so that it can only be invoked when the mouse is down. But the problem is, once that event is bound, it does not come off. The border does not move dynamically as expected. Maybe my logic is entirely messed up. If anyone could point me in the right direction that would be great. Thanks in advance.

    Read the article

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance. Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355. Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side. Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems. Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
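
    For what it's worth, the usual direction for this kind of redesign is to drop the blocking thread-per-client Receive loop in favour of an asynchronous receive loop per connection. The sketch below uses SocketAsyncEventArgs (which requires .NET 3.5; on .NET 2.0 the equivalent shape is BeginReceive/EndReceive) and deliberately omits buffer pooling, message framing and shutdown handling.

        using System.Net.Sockets;

        public class ClientConnection
        {
            private readonly Socket _socket;
            private readonly SocketAsyncEventArgs _receiveArgs;

            public ClientConnection(Socket acceptedSocket)
            {
                _socket = acceptedSocket;
                _receiveArgs = new SocketAsyncEventArgs();
                _receiveArgs.SetBuffer(new byte[8192], 0, 8192);
                _receiveArgs.Completed += OnReceiveCompleted;
            }

            public void Start()
            {
                // ReceiveAsync returns false when the operation completed synchronously;
                // in that case the Completed event is not raised, so handle it inline.
                if (!_socket.ReceiveAsync(_receiveArgs))
                    OnReceiveCompleted(_socket, _receiveArgs);
            }

            private void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
            {
                if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
                {
                    _socket.Close();          // error or orderly disconnect
                    return;
                }

                // Parse e.Buffer[e.Offset .. e.Offset + e.BytesTransferred) here and
                // enqueue complete messages for the core server logic.

                Start();                      // post the next receive immediately
            }
        }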

    Read the article

  • MSDTC - Communication with the underlying transaction manager has failed (Firewall open, MSDTC network access enabled)

    - by SocialAddict
    I'm having problems with my ASP.NET web forms system. It worked on our test server but now we are putting it live one of the servers is within a DMZ and the SQL server is outside of that (on our network still though - although a different subnet) I have open up the firewall completely between these two boxes to see if that was the issue and it still gives the error message "Communication with the underlying transaction manager has failed" whenever we try and use the "TransactionScope". We can access the data for retrieval it's just transactions that break it. We have also used msdtc ping to test the connection and with the amendments on the firewall that pings successfully, but the same error occurs! How do i resolve this error? Any help would be great as we have a system to go live today. Panic :) Edit: I have created a more straightforward test page with a transaction as below and this works fine. Could a nested transaction cause this kind of error and if so why would this only cause an issue when using a live box in a dmz with a firewall? AuditRepository auditRepository = new AuditRepository(); try { using (TransactionScope scope = new TransactionScope()) { auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#1", 1); auditRepository.Save(); auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#2", 1); auditRepository.Save(); scope.Complete(); } } catch (Exception ex) { Response.Write("Test Error For Transaction: " + ex.Message + "<br />" + ex.StackTrace); } This is the ErrorStack we are getting when the problem occurs: at System.Transactions.TransactionInterop.GetOletxTransactionFromTransmitterPropigationToken(Byte[] propagationToken) at System.Transactions.TransactionStatePSPEOperation.PSPEPromote(InternalTransaction tx) at System.Transactions.TransactionStateDelegatedBase.EnterState(InternalTransaction tx) at System.Transactions.EnlistableStates.Promote(InternalTransaction tx) at System.Transactions.Transaction.Promote() at System.Transactions.TransactionInterop.ConvertToOletxTransaction(Transaction transaction) at System.Transactions.TransactionInterop.GetExportCookie(Transaction transaction, Byte[] whereabouts) at System.Data.SqlClient.SqlInternalConnection.GetTransactionCookie(Transaction transaction, Byte[] whereAbouts) at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx) at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx) at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction) at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open() at System.Data.Linq.SqlClient.SqlConnectionManager.UseConnection(IConnectionUser user) at System.Data.Linq.SqlClient.SqlProvider.get_IsSqlCe() at System.Data.Linq.SqlClient.SqlProvider.InitializeProviderMode() at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) at System.Data.Linq.ChangeDirector.StandardChangeDirector.DynamicInsert(TrackedObject item) at System.Data.Linq.ChangeDirector.StandardChangeDirector.Insert(TrackedObject item) at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at 
System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges() at RegBook.classes.DbBase.Save() at RegBook.usercontrols.BookingProcess.confirmBookingButton_Click(Object sender, EventArgs e)
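
    The stack trace above points at transaction promotion (PSPEPromote / GetExportCookie): once a scope spans more than one connection or enlistment - which a nested TransactionScope over a second LINQ to SQL DataContext will typically do - the transaction is escalated to MSDTC, and that is what needs DTC traffic through the DMZ firewall. Below is a sketch of the "keep it local" shape, with a placeholder connection string and table: one connection, opened once, reused for every command inside the scope.

        using System;
        using System.Data.SqlClient;
        using System.Transactions;

        class AuditExample
        {
            static void Main()
            {
                const string connectionString =
                    "Data Source=dbserver;Initial Catalog=MyDb;Integrated Security=SSPI;";

                using (var scope = new TransactionScope())
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();   // enlists in the ambient transaction exactly once

                    using (var cmd = new SqlCommand(
                        "INSERT INTO Audit (LoggedAt, Message) VALUES (@at, @msg)", connection))
                    {
                        cmd.Parameters.AddWithValue("@at", DateTime.Now);
                        cmd.Parameters.AddWithValue("@msg", "TEST-TRANSACTIONS#1");
                        cmd.ExecuteNonQuery();

                        cmd.Parameters["@msg"].Value = "TEST-TRANSACTIONS#2";
                        cmd.ExecuteNonQuery();
                    }

                    scope.Complete();
                }
                // A second connection (or a second DataContext) opened inside the same
                // scope is what typically forces promotion to a distributed transaction.
            }
        }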

    Read the article

  • [android] How to center buttons on screen horizontally and vertically plus equidistant apart?

    - by marc
    I've been racking my brain (android newbie here, so not hard to do) for awhile trying to figure out how to accomplish this: Desired Layout using a RelativeLayout or something other than AbsoluteLayout which is what this was created with. I'm coming from a Windows programming background where the device adjusts the 'absolute' positioning for you and GUI layout was a non-issue. The first layout works great in the emulator, but doesn't format for my Nexus One or any other screen that differs from the emulator size. I expected this because it's absolutely positioned, but haven't found a solution that will format correctly for different screen sizes. My goal is to have the layout work for different screen sizes and in portrait / landscape. Here's the Code that I'm currently using: [main.xml] <?xml version="1.0" encoding="utf-8"?> <AbsoluteLayout android:layout_width="fill_parent" android:layout_height="fill_parent" xmlns:android="http://schemas.android.com/apk/res/android" > <Button android:id="@+id/Button01" android:layout_width="188px" android:layout_height="100px" android:text="A" android:layout_y="50px" android:layout_x="65px" android:textSize="48sp"/> <Button android:id="@+id/Button02" android:layout_width="188px" android:layout_height="100px" android:text="B" android:layout_y="175px" android:layout_x="65px" android:textSize="48sp"/> <Button android:id="@+id/Button03" android:layout_width="188px" android:layout_height="100px" android:text="C" android:layout_y="300px" android:layout_x="65px" android:textSize="48sp"/> </AbsoluteLayout> Using tidbits from other questions here, I came up with this, it’s closer, but not there yet. <?xml version="1.0" encoding="utf-8"?> <TableLayout android:gravity="center" android:id="@+id/widget49" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical" xmlns:android="http://schemas.android.com/apk/res/android" > <Button android:id="@+id/Button01" android:layout_width="0dip" android:layout_weight="1" android:text="A" android:textSize="48sp"/> <Button android:id="@+id/Button02" android:layout_width="0dip" android:layout_weight="1" android:text="B" android:textSize="48sp"/> <Button android:id="@+id/Button03" android:layout_width="0dip" android:layout_weight="1" android:text="C" android:textSize="48sp"/> </TableLayout> Here’s a picture of the TableLayout: Another Attempt Any help / guidance would be greatly appreciated.

    Read the article

  • Silverlight and Unexpected Font Sizes

    - by Eric J.
    Someone please teach me to fish here... I'm just learning Silverlight and have ran into a few situations where the font size actually used is drastically different than I would expect. There's probably something conceptual that I'm missing. Case A In one instance, I have defined a user control that presents a Label to show text. If one clicks on the label, the label (that is in a stack panel, in the user control) is replaced with a TextBox. When used at the top of a page (as in the example below with lblName) the label text is very small (around 8 points). When clicked on, the text box that replaces the label uses the specified fonts size. That same user control, used in different parts of the app, uses the same font for Label and TextBox. <Grid x:Name="LayoutRoot" Background="White"> <Grid.RowDefinitions> <RowDefinition Height="33" /> <RowDefinition Height="267*" /> </Grid.RowDefinitions> <StackPanel Height="Auto" HorizontalAlignment="Left" Name="stackPanel" VerticalAlignment="Top" Width="Auto" Grid.Row="1" /> <my:EditLabel Height="33" HorizontalAlignment="Left" x:Name="lblName" VerticalAlignment="Top" Width="Auto" FlexText="{Binding Name, Mode=TwoWay}" FontSize="20" MinHeight="24" /> </Grid> Case B I'm using the LiquidMenu.Menu control to pop up a menu when a button is pressed. The font looks huge compared to the rest of my page (maybe 36 points?). I tried forcing it to a very small by explicitly setting it to 8pt, but that had no effect. <Grid x:Name="LayoutRoot" Background="{x:Null}"> <StackPanel x:Name="labelStackPanel" Orientation="Horizontal"> <TextBlock Height="24" HorizontalAlignment="Left" Name="labelText" VerticalAlignment="Top" Width="200" Text="(Value Goes Here)" /> </StackPanel> <liquidMenu:Menu x:Name="popupMenu" Canvas.Left="40" Canvas.Top="40" ItemSelected="MenuList_ItemSelected" Visibility="Collapsed" Height="Auto" FontSize="8"> <liquidMenu:MenuItem ID="delete" Icon="Images/Delete10.png" Text="Delete" Shortcut="Del" /> <liquidMenu:MenuItem ID="exclusive" Icon="" Text="Exclusive" Shortcut="Ctrl+E" /> <liquidMenu:MenuItem ID="properties" Icon="" Text="Properties" Shortcut="Ctrl+P" /> </liquidMenu:Menu> </Grid> Answers to these specific issues are great, a new way to think about this type of issue so that I understand how to control font size is better.

    Read the article

  • Runge-Kutta (RK4) integration for game physics

    - by Kai
    Gaffer on Games has a great article about using RK4 integration for better game physics. The implementation is straightforward but the math behind it confuses me. I understand derivatives and integrals on a conceptual level but I haven't manipulated equations in a long time. Here's the brunt of Gaffer's implementation: void integrate(State &state, float t, float dt) { Derivative a = evaluate(state, t, 0.0f, Derivative()); Derivative b = evaluate(state, t+dt*0.5f, dt*0.5f, a); Derivative c = evaluate(state, t+dt*0.5f, dt*0.5f, b); Derivative d = evaluate(state, t+dt, dt, c); const float dxdt = 1.0f/6.0f * (a.dx + 2.0f*(b.dx + c.dx) + d.dx); const float dvdt = 1.0f/6.0f * (a.dv + 2.0f*(b.dv + c.dv) + d.dv) state.x = state.x + dxdt * dt; state.v = state.v + dvdt * dt; } Can anybody explain in simple terms how RK4 works? Specifically, why are we averaging the derivatives at 0.0f, 0.5f, 0.5f, and 1.0f? How is averaging derivatives up to the 4th order different from doing a simple euler integration with a smaller timestep? After reading the accepted answer below, and several other articles, I have a grasp on how RK4 works. To answer my own questions: Can anybody explain in simple terms how RK4 works? RK4 takes advantage of the fact that we can get a much better approximation of a function if we use its higher-order derivatives rather than just the first or second derivative. That's why the Taylor series converges much faster than Euler approximations. (take a look at the animation on the right side of that page) Specifically, why are we averaging the derivatives at 0.0f, 0.5f, 0.5f, and 1.0f? The Runge-Kutta method is an approximation of a function that samples derivatives of several points within a timestep, unlike the Taylor series which only samples derivatives of a single point. After sampling these derivatives we need to know how to weigh each sample to get the closest approximation possible. An easy way to do this is to pick constants that coincide with the Taylor series, which is how the constants of a Runge-Kutta equation are determined. This article made it clearer for me: http://web.mit.edu/10.001/Web/Course%5FNotes/Differential%5FEquations%5FNotes/node5.html. Notice how (15) is the Taylor series expansion while (17) is the Runge-Kutta derivation. How is averaging derivatives up to the 4th order different from doing a simple euler integration with a smaller timestep? Mathematically it converges much faster than doing many Euler approximations. Of course, with enough Euler approximations we can gain equal accuracy to RK4, but the computational power needed doesn't justify using Euler.
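
    For reference, here is a small self-contained port of that integrator to C# (the language used in most of the entries on this page). The damped-spring Acceleration function is purely an illustrative stand-in for real game forces; everything else mirrors the structure of the snippet above.

        using System;

        struct State      { public float x; public float v; }
        struct Derivative { public float dx; public float dv; }

        static class Rk4Demo
        {
            static float Acceleration(State s, float t)
            {
                const float k = 10f, b = 1f;              // spring stiffness and damping
                return -k * s.x - b * s.v;
            }

            static Derivative Evaluate(State initial, float t, float dt, Derivative d)
            {
                State s;
                s.x = initial.x + d.dx * dt;              // trial step using the previous derivative
                s.v = initial.v + d.dv * dt;

                Derivative output;
                output.dx = s.v;                          // dx/dt is the velocity
                output.dv = Acceleration(s, t + dt);      // dv/dt is the acceleration
                return output;
            }

            static void Integrate(ref State state, float t, float dt)
            {
                Derivative a = Evaluate(state, t, 0f, new Derivative());
                Derivative b = Evaluate(state, t, dt * 0.5f, a);
                Derivative c = Evaluate(state, t, dt * 0.5f, b);
                Derivative d = Evaluate(state, t, dt, c);

                // Weighted average of the four sampled derivatives: (1, 2, 2, 1) / 6.
                float dxdt = (a.dx + 2f * (b.dx + c.dx) + d.dx) / 6f;
                float dvdt = (a.dv + 2f * (b.dv + c.dv) + d.dv) / 6f;

                state.x += dxdt * dt;
                state.v += dvdt * dt;
            }

            static void Main()
            {
                var s = new State { x = 1f, v = 0f };
                for (float t = 0f; t < 10f; t += 0.01f)
                    Integrate(ref s, t, 0.01f);
                Console.WriteLine("x = {0}, v = {1}", s.x, s.v);
            }
        }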

    Read the article

  • How to diagnose cause, fix, or work around Adobe ActiveX / COM related error 0x80004005 programmatically

    - by Streamline
    I've built a C# .NET app that uses the Adobe ActiveX control to display a PDF. It relies on a couple DLLs that get shipped with the application. These DLLs interact with the locally installed Adobe Acrobat or Adobe Acrobat Reader installed on the machine. This app is being used by some customer already and works great for nearly all users ( I check to see that the local machine is running at least version 9 of either Acrobat or Reader already ). I've found 3 cases where the app returns the error message "Error HRESULT E_FAIL has been returned from a call to a COM component" when trying to load (when the activex control is loading). I've checked one of these user's machines and he has Acrobat 9 installed and is using it frequently with no problems. It does appear that Acrobat 7 and 8 were installed at one time since there are entries for them in the registry along with Acrobat 9. I can't reproduce this problem locally, so I am not sure exactly which direction to go. The error at the top of the stacktrace is: System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component. Some research into this error indicates it is a registry problem. Does anyone have a clue as to how to fix or work around this problem, or determine how to get to the core root of the problem? The full content of the error message is this: System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.    at System.Windows.Forms.UnsafeNativeMethods.CoCreateInstance(Guid& clsid, Object punkOuter, Int32 context, Guid& iid)    at System.Windows.Forms.AxHost.CreateWithoutLicense(Guid clsid)    at System.Windows.Forms.AxHost.CreateWithLicense(String license, Guid clsid)    at System.Windows.Forms.AxHost.CreateInstanceCore(Guid clsid)    at System.Windows.Forms.AxHost.CreateInstance()    at System.Windows.Forms.AxHost.GetOcxCreate()    at System.Windows.Forms.AxHost.TransitionUpTo(Int32 state)    at System.Windows.Forms.AxHost.CreateHandle()    at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)    at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)    at System.Windows.Forms.AxHost.EndInit()    at AcrobatChecker.Viewer.InitializeComponent()    at AcrobatChecker.Viewer..ctor()    at AcrobatChecker.Form1.btnViewer_Click(Object sender, EventArgs e)    at System.Windows.Forms.Control.OnClick(EventArgs e)    at System.Windows.Forms.Button.OnClick(EventArgs e)    at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)    at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)    at System.Windows.Forms.Control.WndProc(Message& m)    at System.Windows.Forms.ButtonBase.WndProc(Message& m)    at System.Windows.Forms.Button.WndProc(Message& m)    at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)    at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)    at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

    Read the article

  • Fluent NHibernate Mapping and Formulas/DatePart

    - by Alessandro Di Lello
    Hi There, i have a very simple table with a Datetime column and i have this mapping in my domain object. MyDate is the name of the datetime column in the DB. public virtual int Day { get; set; } public virtual int Month { get; set; } public virtual int Year { get; set; } public virtual int Hour { get; set; } public virtual int Minutes { get; set; } public virtual int Seconds { get;set; } public virtual int WeekNo { get; set; } Map(x => x.Day).Formula("DATEPART(day, Datetime)"); Map(x => x.Month).Formula("DATEPART(month, Datetime)"); Map(x => x.Year).Formula("DATEPART(year, Datetime)"); Map(x => x.Hour).Formula("DATEPART(hour, Datetime)"); Map(x => x.Minutes).Formula("DATEPART(minute, Datetime)"); Map(x => x.Seconds).Formula("DATEPART(second, Datetime)"); Map(x => x.WeekNo).Formula("DATEPART(week, Datetime)"); This is working all great .... but Week Datepart. I saw with NHProf the sql generating for a select and here's the problem it's generating all the sql correctly but for week datepart.. this is part of the SQL generated: ....Datepart(day, MyDate) ... ....Datepart(month, MyDate) ... ....Datepart(year, MyDate) ... ....Datepart(hour, MyDate) ... ....Datepart(minute, MyDate) ... ....Datepart(second, MyDate) ... ....Datepart(this_.week, MyDate) ... where this_ is the alias for the table that nhibernate uses. so it's treating the week keyword for the datepart stuff as a column or something like that. To clarify there's no column or properties that is called week. some help ? cheers Alessandro
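
    A commonly suggested workaround (a sketch, on the assumption that NHibernate's formula parser simply does not recognise "week" and therefore alias-prefixes it as if it were a column) is to use SQL Server's "wk" abbreviation for the same datepart, which the parser leaves untouched. The entity and mapping below are simplified placeholders built around the MyDate column from the question.

        using FluentNHibernate.Mapping;

        public class CalendarEntry
        {
            public virtual int Id { get; set; }
            public virtual int WeekNo { get; set; }
        }

        public class CalendarEntryMap : ClassMap<CalendarEntry>
        {
            public CalendarEntryMap()
            {
                Id(x => x.Id);
                // "wk" is a valid SQL Server datepart abbreviation for week and is not
                // mistaken for a property/column name the way "week" is.
                Map(x => x.WeekNo).Formula("DATEPART(wk, MyDate)");
            }
        }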

    Read the article

  • Mercurial to Mercurial to Subversion Workflow Problem

    - by Dalroth
    We're migrating from Subversion to Mercurial. To facilitate the migration, we're creating an intermediate Mercurial repository that is a clone of our Subversion repository. All developers will be begin switching over to the Mercurial repository, and we'll periodically push changes from the intermediate Mercurial repository to the existing Subversion repository. After a period of time, we'll simply obsolete the Subversion repository and the intermediate Mercurial repository will become the new system of record. Dev 1 Local --+--> Mercurial --+--> Subversion Dev 2 Local --+ + Dev 3 Local --+ + Dev 4 -------------------------+ I've been testing this out, but I keep running into a problem when I push changes from my local repository, to the intermediate Mercurial repository, and then up into our Subversion repository. On my local machine, I have a changeset that is committed and ready to be pushed to our intermediate Mercurial repository. Here you can see it is revision #2263 with hash 625... I push only this changeset to the remote repository. So far, everything looks good. The changeset has been pushed. hg update 1 files updated, 0 files merged, 0 files removed, 0 files unresolved I now switch over to the remote repository, and update the working directory. hg push pushing to svn://... searching for changes [r3834] bmurphy: database namespace pulled 1 revisions saving bundle to /srv/hg/repository/.hg/strip-backup/62539f8df3b2-temp adding branch adding changesets adding manifests adding file changes added 1 changesets with 1 changes to 1 files rebase completed Next, I push the change up to Subversion, works great. At this point, the change is in the Subversion repository and I return attention back to my local client. I pull changes to my local machine. Huh? I've now got two changesets. My original changeset appears as a local branch now. The other changeset has a new revision number 2264, and a new hash 10c1... Anyway, I update my local repo to the new revision. I'm now switched over. So, I finally click the "determine and mark outgoing changesets" and as you can see Mercurial still wants to push out my previous changesets even though they've already been pushed. Clearly, I'm doing something wrong. I also can't merge the two revisions. If I merge the two revisions on my local machine, I end up with a "merge" commit. When I push that merge commit out to the intermediate Mercurial repository, I can no longer push changes out to our Subversion repository. I end up with the following problem: hg update 0 files updated, 0 files merged, 0 files removed, 0 files unresolved hg push pushing to svn://... searching for changes abort: Sorry, can't find svn parent of a merge revision. and I have to rollback the merge to get back to a working state. What am I missing?

    Read the article

  • Need additional help with binding multiple CommandParameters using MultiBinding

    - by Dave
    I need to have a command handler for a ToggleButton that can take multiple parameters, namely the IsChecked property of said ToggleButton, along with a constant value, which could be a string, byte, int... doesn't matter. I found this great question on SO and followed the answer's link and read up on MultiBinding and IMultiValueConverter. It went really smoothly until I had to write the MultiBinding, when I realized that I also need to pass a constant value and couldn't do something like <Binding Value="1" /> I then came across another similar question that Kent Boogaart answered, and then I started to think about ways that I could get around this. One possible way is to not use MVVM and simply add the Tag property to my ToggleButton, in which case my MultiBinding would look like this: <MultiBinding Converter="{StaticResource MyConverter}"> <MultiBinding.Bindings> <Binding Path="IsChecked" /> <Binding Path="Tag" /> </MultiBinding.Bindings> </MultiBinding> Kent had made a comment along the lines of, "if you're using MVVM you should be able to get around this issue". However, I'm not sure that's an option for me, even though I have adopted MVVM as my WPF pattern of necessity choice. The reason why I say this is that I have wayyyy more than one ToggleButton in the UserControl, and each of the ToggleButtons' Commands need to call the same function. But since they are ToggleButtons, I can't use the property bound to IsChecked in the ViewModel, because I don't know which one was last clicked. I suppose I could add another private property to keep track of this, but it seems a little silly. As far as the constant goes, I could probably get rid of this if I did the tracking idea, but not sure of any other way to get around it. Does anyone have good suggestions for me here? :) EDIT -- ok, so I need to update my bindings, which still don't work quite right: <ToggleButton Tag="1" Command="{Binding MyCommand}" Style="{StaticResource PassFailToggleButtonStyle}" HorizontalContentAlignment="Center" Background="Transparent" BorderBrush="Transparent" BorderThickness="0"> <ToggleButton.CommandParameter> <MultiBinding Converter="{StaticResource MyConverter}"> <MultiBinding.Bindings> <Binding Path="IsChecked" RelativeSource="{RelativeSource Mode=Self}" /> <Binding Path="Tag" RelativeSource="{RelativeSource Mode=Self}" /> </MultiBinding.Bindings> </MultiBinding> </ToggleButton.CommandParameter> </ToggleButton> IsChecked was working, but not Tag. I just realized that Tag is a string... duh. It's working now! The key was to use a RelativeSource of Self.
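
    For completeness, here is a minimal converter that fits the MultiBinding above; it simply packages the bound values (IsChecked and Tag) into a single array for the command handler to unpack. Cloning matters because WPF reuses the values array between conversions. The class name is arbitrary.

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class CommandParameterConverter : IMultiValueConverter
        {
            public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
            {
                // values[0] = IsChecked (bool?), values[1] = Tag (string) for the binding above.
                return values.Clone();
            }

            public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();   // one-way only
            }
        }

        // In the command handler (e.g. a RelayCommand<object>):
        //
        //     var args = (object[])parameter;
        //     bool isChecked = (args[0] as bool?) ?? false;
        //     string tag = (string)args[1];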

    Read the article

  • How do I work around an HTML Table rendering bug in IE 7?

    - by osmaniac
    I have a table. Some cells span multiple columns. Some cells span multiple rows and columns. But one row (which spans all columns but the rightmost one) creates an artifact. Part of the text in the cell is erroneously repeated left justified on a new row just below the table. I'm baffled. I tried rendering with and without "table-layout: fixed;". Same result. When I originally composed the design using just HTML and CSS, it looked great. But then I worked it into a page and had to add more columns to my master table the right to hold buttons. These buttons are in three groups, each having their own div to control floating and rewrapping when the window gets narrower. One div has another table inside it that groups a single row of buttons. Thus I have table inside div inside td inside outer table. I would prefer a simpler design, but how? This is what I want to have: ................................................................................... . . . . Four rows of data . Three groups of buttons that can reflow . . With several columns . if window gets narrower . . meticulously layed out, . . . That should not resize . . . when window gets narrower . . ................................................................................... . One more row of data spanning the whole screen which stays below the buttons . ................................................................................... What I was doing was putting the three divs with the buttons in the upper right in a single cell that spanned four rows. What other opportunities does CSS offer? The buttons are not allowed to overlap the data on the left or go past the data line below. The original design had the divs with the buttons NOT in a table with the data, but when the window gets narrow, some of the buttons flow such that they go underneath the data, which looks bad. I would post actual HTML, except it is generated by ASP, huge, with lots of CSS styling, and the feature that lets me view the final HTML is not working at the moment. (Built in security in the application.)

    Read the article

  • Rx Reactive extensions: Unit testing with FromAsyncPattern

    - by Andrew Anderson
    The Reactive Extensions have a sexy little hook to simplify calling async methods: var func = Observable.FromAsyncPattern<InType, OutType>( myWcfService.BeginDoStuff, myWcfService.EndDoStuff); func(inData).ObserveOnDispatcher().Subscribe(x => Foo(x)); I am using this in an WPF project, and it works great at runtime. Unfortunately, when trying to unit test methods that use this technique I am experiencing random failures. ~3 out of every five executions of a test that contain this code fails. Here is a sample test (implemented using a Rhino/unity auto-mocking container): [TestMethod()] public void SomeTest() { // arrange var container = GetAutoMockingContainer(); container.Resolve<IMyWcfServiceClient>() .Expect(x => x.BeginDoStuff(null, null, null)) .IgnoreArguments() .Do( new Func<Specification, AsyncCallback, object, IAsyncResult>((inData, asyncCallback, state) => { return new CompletedAsyncResult(asyncCallback, state); })); container.Resolve<IRepositoryServiceClient>() .Expect(x => x.EndRetrieveAttributeDefinitionsForSorting(null)) .IgnoreArguments() .Do( new Func<IAsyncResult, OutData>((ar) => { return someMockData; })); // act var target = CreateTestSubject(container); target.DoMethodThatInvokesService(); // Run the dispatcher for everything over background priority Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.Background, new Action(() => { })); // assert Assert.IsTrue(my operation ran as expected); } The problem that I see is that the code that I specified to run when the async action completed (in this case, Foo(x)), is never called. I can verify this by setting breakpoints in Foo and observing that they are never reached. Further, I can force a long delay after calling DoMethodThatInvokesService (which kicks off the async call), and the code is still never run. I do know that the lines of code invoking the Rx framework were called. Other things I've tried: I have attempted to modify the second last line according to the suggestions here: Reactive Extensions Rx - unit testing something with ObserveOnDispatcher No love. I have added .Take(1) to the Rx code as follows: func(inData).ObserveOnDispatcher().Take(1).Subscribe(x = Foo(x)); This improved my failure rate to something like 1 in 5, but they still occurred. I have rewritten the Rx code to use the plain jane Async pattern. This works, however my developer ego really would love to use Rx instead of boring old begin/end. In the end I do have a work around in hand (i.e. don't use Rx), however I feel that it is not ideal. If anyone has ran into this problem in the past and found a solution, I'd dearly love to hear it.
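
    One approach that works for this class of problem (a sketch, with placeholder type names, and with the caveat that Rx namespaces differ between releases - System.Concurrency in the early drops, System.Reactive.Concurrency later) is to stop hard-coding ObserveOnDispatcher and inject an IScheduler instead: the application passes the dispatcher scheduler, while the unit test passes Scheduler.Immediate so the Subscribe callback runs synchronously and no dispatcher pumping is required.

        using System;
        using System.Reactive.Concurrency;   // System.Concurrency in early Rx releases
        using System.Reactive.Linq;

        public class InType { }
        public class OutType { }

        public interface IMyWcfServiceClient
        {
            IAsyncResult BeginDoStuff(InType input, AsyncCallback callback, object state);
            OutType EndDoStuff(IAsyncResult result);
        }

        public class StuffLoader
        {
            private readonly IMyWcfServiceClient _client;
            private readonly IScheduler _scheduler;

            // Pass the dispatcher scheduler in the app, Scheduler.Immediate in tests.
            public StuffLoader(IMyWcfServiceClient client, IScheduler scheduler)
            {
                _client = client;
                _scheduler = scheduler;
            }

            public void DoMethodThatInvokesService(InType inData)
            {
                var func = Observable.FromAsyncPattern<InType, OutType>(
                    _client.BeginDoStuff, _client.EndDoStuff);

                func(inData)
                    .ObserveOn(_scheduler)
                    .Subscribe(x => Foo(x));
            }

            private void Foo(OutType x) { /* handle the result */ }
        }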

    Read the article

  • Long-running ASP.NET tasks

    - by John Leidegren
    I know there's a bunch of APIs out there that do this, but I also know that the hosting environment (being ASP.NET) puts restrictions on what you can reliably do in a separate thread. I could be completely wrong, so please correct me if I am, this is however what I think I know. A request typically timeouts after 120 seconds (this is configurable) but eventually the ASP.NET runtime will kill a request that's taking too long to complete. The hosting environment, typically IIS, employs process recycling and can at any point decide to recycle your app. When this happens all threads are aborted and the app restarts. I'm however not sure how aggressive it is, it would be kind of stupid to assume that it would abort a normal ongoing HTTP request but I would expect it to abort a thread because it doesn't know anything about the unit of work of a thread. If you had to create a programming model that easily and reliably and theoretically put a long running task, that would have to run for days, how would you accomplish this from within an ASP.NET application? The following are my thoughts on the issue: I've been thinking a long the line of hosting a WCF service in a win32 service. And talk to the service through WCF. This is however not very practical, because the only reason I would choose to do so, is to send tasks (units of work) from several different web apps. I'd then eventually ask the service for status updates and act accordingly. My biggest concern with this is that it would NOT be a particular great experience if I had to deploy every task to the service for it to be able to execute some instructions. There's also this issue of input, how would I feed this service with data if I had a large data set and needed to chew through it? What I typically do right now is this SELECT TOP 10 * FROM WorkItem WITH (ROWLOCK, UPDLOCK, READPAST) WHERE WorkCompleted IS NULL It allows me to use a SQL Server database as a work queue and periodically poll the database with this query for work. If the work item completed with success, I mark it as done and proceed until there's nothing more to do. What I don't like is that I could theoretically be interrupted at any point and if I'm in-between success and marking it as done, I could end up processing the same work item twice. I might be a bit paranoid and this might be all fine but as I understand it there's no guarantee that that won't happen... I know there's been similar questions on SO before but non really answers with a definitive answer. This is a really common thing, yet the ASP.NET hosting environment is ill equipped to handle long-running work. Please share your thoughts.
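
    On the double-processing worry specifically: the window between selecting a work item and marking it done can be closed by claiming the row in one atomic statement. The sketch below (the ClaimedAt/ClaimedBy/Payload columns are assumptions layered onto the WorkItem table from the query above, and UPDATE ... OUTPUT needs SQL Server 2005 or later) marks a batch as in-progress and returns it in the same statement, so no two workers can pick up the same row, and rows claimed by a worker that died stand out later as ClaimedAt set but WorkCompleted still NULL.

        using System.Data;
        using System.Data.SqlClient;

        static class WorkQueue
        {
            private const string ClaimSql = @"
                UPDATE TOP (10) WorkItem WITH (ROWLOCK, READPAST)
                SET ClaimedAt = GETUTCDATE(), ClaimedBy = @worker
                OUTPUT inserted.Id, inserted.Payload
                WHERE WorkCompleted IS NULL AND ClaimedAt IS NULL;";

            public static DataTable ClaimBatch(string connectionString, string workerName)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(ClaimSql, connection))
                {
                    command.Parameters.AddWithValue("@worker", workerName);
                    connection.Open();

                    var batch = new DataTable();
                    using (var reader = command.ExecuteReader())
                        batch.Load(reader);
                    return batch;   // every returned row is now exclusively owned by this worker
                }
            }
        }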

    Read the article

  • WPF XAML: how to get StackPanel's children to fill maximum space downward?

    - by Edward Tanguay
    I simply want flowing text on the left, and a help box on the right. The help box should extend all the way to the bottom. If you take out the outer StackPanel below it works great. But for reasons of layout (I'm inserting UserControls dynamically) I need to have the wrapping StackPanel. How do I get the GroupBox to extend down to the bottom of the StackPanel, as you can see I've tried: VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" Height="Auto" XAML: <Window x:Class="TestDynamic033.Test3" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Test3" Height="300" Width="600"> <StackPanel VerticalAlignment="Stretch" Height="Auto"> <DockPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Height="Auto" Margin="10"> <GroupBox DockPanel.Dock="Right" Header="Help" Width="100" Background="Beige" VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" Height="Auto"> <TextBlock Text="This is the help that is available on the news screen." TextWrapping="Wrap" /> </GroupBox> <StackPanel DockPanel.Dock="Left" Margin="10" Width="Auto" HorizontalAlignment="Stretch"> <TextBlock Text="Here is the news that should wrap around." TextWrapping="Wrap"/> </StackPanel> </DockPanel> </StackPanel> </Window> Answer: Thanks Mark, using DockPanel instead of StackPanel cleared it up. In general, I find myself using DockPanel more and more now for WPF layouting, here's the fixed XAML: <Window x:Class="TestDynamic033.Test3" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Test3" Height="300" Width="600" MinWidth="500" MinHeight="200"> <DockPanel VerticalAlignment="Stretch" Height="Auto"> <DockPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Height="Auto" MinWidth="400" Margin="10"> <GroupBox DockPanel.Dock="Right" Header="Help" Width="100" VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" Height="Auto"> <Border CornerRadius="3" Background="Beige"> <TextBlock Text="This is the help that is available on the news screen." TextWrapping="Wrap" Padding="5"/> </Border> </GroupBox> <StackPanel DockPanel.Dock="Left" Margin="10" Width="Auto" HorizontalAlignment="Stretch"> <TextBlock Text="Here is the news that should wrap around." TextWrapping="Wrap"/> </StackPanel> </DockPanel> </DockPanel> </Window>

    Read the article

  • Best ASP.NET E-Commerce Framework (aspDotNetStoreFront, AbleCommerce, BVCommerce, MediaChase,...)

    - by EfficionDave
    I need a full-featured asp.net E-Commerce solution for a project. It needs to be easily extensible as I need to extend it to integrate with a custom Flash based personalization engine. It should also have a great administrative back-end that handles inventory tracking, report, labels, shipping, ... so that the client can really use it as the basis for their business. I've used quite a few different E-Commerce systems including OSCommerce, Zen Cart, Catalook, and eTailer. While I was impressed with Zen Cart and eTailer, this project is big enough that I want to try a more serious, commercial offering. I'm particularly interested in thoughts on aspdotnetstorefront, BVCommerce, AbleCommerce, and MediaChase. [UPDATE] I've done quite a bit of evaluation since I posted this question and will give some of my findings below: BVCommerce - The price point is nice, the product seems fairly well regarded, and I've heard good things about customizing/extending it, it just doesn't seem like it's going to meet my needs as an Top Tier ECommerce solution. After viewing the tutorial videos, the Admin interface just seems too basic and outdated. aspDotNetStorefront - The integration with DotNetNuke was it's biggest selling point for me but after reading peoples opinions, it's integration seems quite weak. It's sounds like there's a significant learning curve and the architecture just doesn't sound like it's as clean as it should be. AbleCommerce - After a rather thorough evaluation, I have selected AbleCommerce for at least my next project. The Admin interface is beautiful and has a nice list of features. The E-Commerce portion of the user side is very well done, highly skin-able (using Master Pages and a couple other techniques), and the checkout workflow has all the features I need while still being very clean. Source to the API can be bought for $500 but you generally won't need it. There are some 3rd party modules available but there's not currently a large market for these. The biggest glaring weakness of AbleCommerce I've found is their CMS capabilities are very limited and not at all well suited for a non-technical user to make most site content updates. But I have a good solution for that that I plan adapting into AbleCommerce. They will automatically generate a site for you to play with to your heart's content for 30 days. It is defniitely worth checking out.

    Read the article

  • How to solve out of memory exception error in Entity FramWork?

    - by programmerist
    Hello; these below codes give whole data of my Rehber datas. But if i want to show web page via Gridview send me out of memory exception error. GenoTip.BAL: public static List<Rehber> GetAllDataOfRehber() { using (GenoTipSatisEntities genSatisCtx = new GenoTipSatisEntities()) { ObjectQuery<Rehber> rehber = genSatisCtx.Rehber; return rehber.ToList(); } } if i bind data directly dummy gridview like that no problem occures every thing is great!!! <asp:GridView ID="gwRehber" runat="server"> </asp:GridView> if above codes send data to Satis.aspx page: using GenoTip.BAL; namespace GenoTip.Web.ContentPages.Satis { public partial class Satis : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { gwRehber.DataSource = SatisServices.GetAllDataOfRehber(); gwRehber.DataBind(); //gwRehber.Columns[0].Visible = false; } } } } but i rearranged my gridview send me out of memory exception!!!! i need this arrangenment to show deta!!! <asp:GridView ID="gwRehber" runat="server"> <Columns> <%-- <asp:TemplateField> <ItemTemplate> <asp:Button runat="server" ID="btnID" CommandName="select" CommandArgument='<%# Eval("ID") %>' Text="Seç" /> </ItemTemplate> </asp:TemplateField>--%> <asp:BoundField DataField="Ad" HeaderText="Ad" /> <asp:BoundField DataField="BireyID" HeaderText="BireyID" Visible="false" /> <asp:BoundField DataField="Degistiren" HeaderText="Degistiren" /> <asp:BoundField DataField="EklemeTarihi" HeaderText="EklemeTarihi" /> <asp:BoundField DataField="DegistirmeTarihi" HeaderText="Degistirme Tarihi" Visible="false" /> <asp:BoundField DataField="Ekleyen" HeaderText="Ekleyen" /> <asp:BoundField DataField="ID" HeaderText="ID" Visible="false" /> <asp:BoundField DataField="Imza" HeaderText="Imza" /> <asp:BoundField DataField="KurumID" HeaderText="KurumID" Visible="false" /> </Columns> </asp:GridView> Error Detail : [OutOfMemoryException: 'System.OutOfMemoryException' türünde özel durum olusturuldu.] System.String.GetStringForStringBuilder(String value, Int32 startIndex, Int32 length, Int32 capacity) +29 System.Convert.ToBase64String(Byte[] inArray, Int32 offset, Int32 length, Base64FormattingOptions options) +146 System.Web.UI.ObjectStateFormatter.Serialize(Object stateGraph) +183 System.Web.UI.ObjectStateFormatter.System.Web.UI.IStateFormatter.Serialize(Object state) +4 System.Web.UI.Util.SerializeWithAssert(IStateFormatter formatter, Object stateGraph) +37 System.Web.UI.HiddenFieldPageStatePersister.Save() +79 System.Web.UI.Page.SavePageStateToPersistenceMedium(Object state) +105 System.Web.UI.Page.SaveAllState() +236 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1099
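
    The stack trace is failing while serialising ViewState, which suggests the real problem is binding the entire Rehber table (plus per-row state) in one go rather than the Entity Framework query itself. One sketch of a fix - the names mirror the question, but the paging method itself is an assumption - is to return one page at a time and bind only that, and/or to set EnableViewState="false" on the GridView:

        using System.Collections.Generic;
        using System.Linq;

        public static class SatisServices
        {
            // Returns a single page instead of the whole table, so neither the page
            // nor its ViewState has to hold every Rehber row at once.
            public static List<Rehber> GetPageOfRehber(int pageIndex, int pageSize)
            {
                using (var genSatisCtx = new GenoTipSatisEntities())
                {
                    return genSatisCtx.Rehber
                                      .OrderBy(r => r.ID)          // Skip/Take need an explicit ordering
                                      .Skip(pageIndex * pageSize)
                                      .Take(pageSize)
                                      .ToList();
                }
            }
        }

        // In Page_Load:
        //     gwRehber.DataSource = SatisServices.GetPageOfRehber(pageIndex, 50);
        //     gwRehber.DataBind();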

    Read the article

  • Force SSRS 2008 to use SSRS 2005 CSV rendering

    - by Kash
    We are upgrading our report server from SSRS 2005 to SSRS 2008 R2. I have an issue with CSV export rendering for SSRS 2008 where the SUM of columns are appearing on the right side of the detail values in 2008 instead of the left side like in 2005 as shown in the below blocks. 117 and 131 are the sums of Column2 and Column3 respectively. SSRS 2005 CSV Output Column2_1,Column3_1,Column2,Column3 117,131,1,2 117,131,1,2 117,131,60,23 117,131,30,15 117,131,25,89 SSRS 2008 CSV Output Column2,Column3,Column2_1,Column3_1 1,2,117,131 1,2,117,131 60,23,117,131 30,15,117,131 25,89,117,131 I understand that the CSV renderer has gone through major changes in SSRS 2008 R2 with the support for charts and gauges and more importantly it provides 2 modes: the default Excel mode and Compliant mode. But neither mode helps fix this issue. The Compliant mode was supposed to be closest to that of 2005 but apparently it is not close enough for my case. My Question: Is there a way to force SSRS 2008 fall back a report to a backward compatibility mode so that it exports into a 2005 CSV format? Solution tried: a) Using 2005-based CRIs Based on this article on ExecutionLog2, if SSRS 2008 R2 encounters a report whose auto-upgrade is not possible (e.g. reports that were built with 2005-based CustomReportItem controls), those particular reports will be processed with the old Yukon engine in a "transparent backwards-compatibility mode". It seems like it falls back to its previous version mode (2005) and attempts to render it. So I tried using a 2005-based barcode CustomReportItem and deployed to a SSRS 2008 R2 report server, but it shows the same result as before though it suppressed the barcode. This would be because SSRS 2008 R2 finds a way to suppress part of the report output and displays the rest. It would be great to find a 2005-based CRI that makes SSRS 2008 R2 process it with its old Yukon engine. Please note that quite possibly, even if it uses the "old Yukon processing engine", it might still use the new CSV renderer hence it shows the same output. If that is true, then this option is moot. b) Using XML renderer We can use a custom XML renderer and then use XSLT to convert the xml to appropriate CSV but this would mean that we need to convert all our 200 reports. Hence this is not feasible. Please note that we do not have the option of having SSRS 2005 and SSRS 2008 R2 deployed side by side.

    Read the article

  • How to support both HTTP and HTTPS channels in Flex/BlazeDS?

    - by digitalsanctum
    I've been trying to find the right configuration for supporting both http/s requests in a Flex app. I've read all the docs and they allude to doing something like the following: <default-channels> <channel ref="my-secure-amf"> <serialization> <log-property-errors>true</log-property-errors> </serialization> </channel> <channel ref="my-amf"> <serialization> <log-property-errors>true</log-property-errors> </serialization> </channel> This works great when hitting the app via https but get intermittent communication failures when hitting the same app via http. Here's an abbreviated services-config.xml: <channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel"> <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amf" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <!-- HTTPS requests don't work on IE when pragma "no-cache" headers are set so you need to set the add-no-cache-headers property to false --> <add-no-cache-headers>false</add-no-cache-headers> <!-- Use to limit the client channel's connect attempt to the specified time interval. --> <connect-timeout-seconds>10</connect-timeout-seconds> </properties> </channel-definition> <channel-definition id="my-secure-amf" class="mx.messaging.channels.SecureAMFChannel"> <!--<endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.SecureAMFEndpoint"/>--> <endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <add-no-cache-headers>false</add-no-cache-headers> <connect-timeout-seconds>10</connect-timeout-seconds> </properties> </channel-definition> I'm running with Tomcat 5.5.17 and Java 5. The BlazeDS docs say this is the best practice. Is there a better way? With this config, there seems to be 2-3 retries associated with each channel defined in the default-channels element so it always takes ~20s before the my-amf channel connects via a http request. Is there a way to override the 2-3 retries to say, 1 retry for each channel? Thanks in advance for answers.

    Read the article

  • Problem using ‘useLegacyV2RuntimeActivationPolicy’ & supportedRuntime in an application

    - by Notre
    Hello, I've modified a couple of different applications' .config file like this: <startup useLegacyV2RuntimeActivationPolicy="true"> <supportedRuntime version="v4.0"/> </startup> When I did this for devenv.exe.config (VS 2005 - don't ask :) ), things work great - most of the Visual Studio used .NET 2.0 but I was able to make use of an assembly targeting .NET 4.0 framework. I tried to do the same thing for a custom .exe, which happens to be based on MS CAB (slightly modified) and has a hybrid mix of WPF and WinForms content. As soon as I modified this application's app config file, I started getting this exception, sometime during application startup: The Undo operation encountered a context that is different from what was applied in the corresponding Set operation. The possible cause is that a context was Set on the thread and not reverted(undone). System.InvalidOperationException: The Undo operation encountered a context that is different from what was applied in the corresponding Set operation. The possible cause is that a context was Set on the thread and not reverted(undone). There's a big long stack trace that doesn't show anything in my application code directly (just a bunch of MS assemblies). If I modify the application's .config file to this: <startup useLegacyV2RuntimeActivationPolicy="true"> </startup> i.e.I remove the supportedRuntime element, then the application doesn't throw this exception. But when I go to the point in my code where I try to load my .NET 4 assembly, if fails with this: System.BadImageFormatException: Could not load file or assembly '' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. I guess this is expected. I have two questions: 1) Any idea why I'm getting the System.InvalidOperationException exception when I modify this application's configuration file to include the supportedRuntime element, adding .NET 4 support and any suggestions on what I can do about it? 2) If the answer is "no idea why/don't know what you can do about it", then is possible for my .NET 3.5 SP1 code (C#) to provide more fine grain support for conditionally adding .NET 4 runtime support for a certain assembly(ies) without converting my entire application to target .NET 4, or without using the declarative config file approach? At some point I would convert the entire application to target .NET 4, but for the short term this is daunting task and I'm hope for some short term solution/hack. Thank you very much for any advice you can give!

    Read the article

  • How to avoid open-redirect vulnerability and safely redirect on successful login (HINT: ASP.NET MVC)

    - by Brad B.
    Normally, when a site requires that you are logged in before you can access a certain page, you are taken to the login screen and after successfully authenticating yourself, you are redirected back to the originally requested page. This is great for usability - but without careful scrutiny, this feature can easily become an open redirect vulnerability. Sadly, for an example of this vulnerability, look no further than the default LogOn action provided by ASP.NET MVC 2: [HttpPost] public ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { if (MembershipService.ValidateUser(model.UserName, model.Password)) { FormsService.SignIn(model.UserName, model.RememberMe); if (!String.IsNullOrEmpty(returnUrl)) { return Redirect(returnUrl); // open redirect vulnerability HERE } else { return RedirectToAction("Index", "Home"); } } else { ModelState.AddModelError("", "User name or password incorrect..."); } } return View(model); } If a user is successfully authenticated, they are redirected to "returnUrl" (if it was provided via the login form submission). Here is a simple example attack (one of many, actually) that exploits this vulnerability: Attacker, pretending to be victim's bank, sends an email to victim containing a link, like this: http://www.mybank.com/logon?returnUrl=http://www.badsite.com Having been taught to verify the ENTIRE domain name (e.g., google.com = GOOD, google.com.as31x.example.com = BAD), the victim knows the link is OK - there isn't any tricky sub-domain phishing going on. The victim clicks the link, sees their actual familiar banking website and is asked to logon Victim logs on and is subsequently redirected to http://www.badsite.com which is made to look exactly like victim's bank's website, so victim doesn't know he is now on a different site. http://www.badsite.com says something like "We need to update our records - please type in some extremely personal information below: [ssn], [address], [phone number], etc." Victim, still thinking he is on his banking website, falls for the ploy and provides attacker with the information Any ideas on how to maintain this redirect-on-successful-login functionality yet avoid the open-redirect vulnerability? I'm leaning toward the option of splitting the "returnUrl" parameter into controller/action parts and use "RedirectToRouteResult" instead of simply "Redirect". Does this approach open any new vulnerabilities? Side note: I know this open-redirect may not seem to be a big deal compared to the likes of XSS and CSRF, but us developers are the only thing protecting our customers from the bad guys - anything we can do to make the bad guys' job harder is a win in my book. Thanks, Brad
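
    One answer to the leaning mentioned above: rather than parsing returnUrl into controller/action parts, only follow it when it is application-local. Later MVC versions ship this check as Url.IsLocalUrl(); on MVC 2 a tiny helper along these lines (a sketch - it accepts only app-relative paths such as "/orders/1" and rejects absolute and protocol-relative URLs like "//badsite.com") keeps the redirect-on-login behaviour without the open redirect:

        using System;

        public static class RedirectSafety
        {
            public static bool IsLocalUrl(string url)
            {
                if (String.IsNullOrEmpty(url)) return false;

                // Accept "/something" but not "//host" or "/\host"; everything else
                // (absolute URLs, "javascript:", etc.) is rejected.
                return url[0] == '/'
                    && (url.Length == 1 || (url[1] != '/' && url[1] != '\\'));
            }
        }

        // Inside the LogOn action, the redirect branch then becomes:
        //
        //     if (RedirectSafety.IsLocalUrl(returnUrl))
        //         return Redirect(returnUrl);
        //     return RedirectToAction("Index", "Home");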

    Read the article

  • How to troubleshoot a 'System.Management.Automation.CmdletInvocationException'

    - by JamesD
    Does anyone know how best to determine the specific underlying cause of this exception? Consider a WCF service that is supposed to use PowerShell 2.0 remoting to execute MSBuild on remote machines. In both cases the scripting environments are being called in-process (via C# for PowerShell and via PowerShell for MSBuild), rather than 'shelling out' - this was a specific design decision to avoid command-line hell as well as to enable passing actual objects into the PowerShell script. The PowerShell script that calls MSBuild is shown below:

        function Run-MSBuild
        {
            [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Build.Engine")

            $engine = New-Object Microsoft.Build.BuildEngine.Engine
            $engine.BinPath = "C:\Windows\Microsoft.NET\Framework\v3.5"

            $project = New-Object Microsoft.Build.BuildEngine.Project($engine, "3.5")
            $project.Load("deploy.targets")
            $project.InitialTargets = "DoStuff"

            #
            # Set some initial Properties & Items
            #

            # Optionally setup some loggers (have also tried it without any loggers)
            $consoleLogger = New-Object Microsoft.Build.BuildEngine.ConsoleLogger
            $engine.RegisterLogger($consoleLogger)
            $fileLogger = New-Object Microsoft.Build.BuildEngine.FileLogger
            $fileLogger.Parameters = "verbosity=diagnostic"
            $engine.RegisterLogger($fileLogger)

            # Run the build - this is the line that throws a CmdletInvocationException
            $result = $project.Build()

            $engine.Shutdown()
        }

    When running the above script from a PS command prompt it all works fine. However, as soon as the script is executed from C# it fails with the above exception. The C# code being used to call PowerShell is shown below (remoting functionality removed for simplicity's sake):

        // Build the DTO object that will be passed to PowerShell
        var dto = SetupDTO();

        RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create();
        using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig))
        {
            runspace.Open();
            IList errors;
            using (var scriptInvoker = new RunspaceInvoke(runspace))
            {
                // The PowerShell script lives in a file that gets compiled as an embedded resource
                TextReader tr = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream("MyScriptResource"));
                string script = tr.ReadToEnd();

                // Load the script into the Runspace
                scriptInvoker.Invoke(script);

                // Call the function defined in the script, passing the DTO as an input object
                var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
            }
        }

    Assuming that the issue was related to MSBuild outputting something that the PowerShell runspace can't cope with, I have also tried the following variations of the second .Invoke() call:

        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-String", dto, out errors);
        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-Null", dto, out errors);
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String");
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-Null");

    I've also looked at using a custom PSHost (based on this sample: http://blogs.msdn.com/daiken/archive/2007/06/22/hosting-windows-powershell-sample-code.aspx), but during debugging I was unable to see any 'interesting' calls being made to it. Do the great and the good of Stack Overflow have any insight that might save my sanity?
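    As a starting point for the troubleshooting itself, here is a hedged sketch (not from the original post) of how the underlying failure can usually be surfaced: wrap the Invoke call shown above in a try/catch, walk the CmdletInvocationException's InnerException chain, and dump whatever came back through the "out errors" parameter.

        try
        {
            var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
        }
        catch (System.Management.Automation.CmdletInvocationException ex)
        {
            // The real failure (e.g. from the MSBuild engine) is typically nested inside.
            for (Exception inner = ex.InnerException; inner != null; inner = inner.InnerException)
            {
                Console.Error.WriteLine(inner.GetType().FullName + ": " + inner.Message);
            }
            throw;
        }

        if (errors != null)
        {
            foreach (object error in errors)
            {
                // Each entry is usually a PSObject wrapping an ErrorRecord; ToString() yields the message.
                Console.Error.WriteLine(error);
            }
        }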

    Read the article

  • DataForm commit button is not enabled when data is changed.

    - by Grayson Mitchell
    This is a weird problem. I am using a DataForm, and when I edit the data the save button is enabled, but the cancel button is not. After looking around a bit I have found that I have to implement IEditableObject in order to cancel an edit. Great, I did that (and it all works), but now the commit button (Save) is grayed out, lol. Anyone have any ideas why the commit button will not activate any more?

    XAML:

        <df:DataForm x:Name="_dataForm" AutoEdit="False" AutoCommit="False" CommandButtonsVisibility="All">
            <df:DataForm.EditTemplate>
                <DataTemplate>
                    <StackPanel Name="rootPanel" Orientation="Vertical" df:DataField.IsFieldGroup="True">
                        <!-- No fields here. They will be added at run-time. -->
                    </StackPanel>
                </DataTemplate>
            </df:DataForm.EditTemplate>
        </df:DataForm>

    Binding:

        DataContext = this;
        _dataForm.ItemsSource = _rows;
        ...
        TextBox textBox = new TextBox();
        Binding binding = new Binding();
        binding.Path = new PropertyPath("Data");
        binding.Mode = BindingMode.TwoWay;
        binding.Converter = new RowIndexConverter();
        binding.ConverterParameter = col.Value.Label;
        textBox.SetBinding(TextBox.TextProperty, binding);
        dataField.Content = textBox;
        // add DataField to layout container
        rootPanel.Children.Add(dataField);

    Data class definition:

        public class Row : INotifyPropertyChanged, IEditableObject
        {
            public void BeginEdit()
            {
                foreach (var item in _data)
                {
                    _cache.Add(item.Key, item.Value);
                }
            }

            public void CancelEdit()
            {
                _data.Clear();
                foreach (var item in _cache)
                {
                    _data.Add(item.Key, item.Value);
                }
                _cache.Clear();
            }

            public void EndEdit()
            {
                _cache.Clear();
            }

            private Dictionary<string, object> _cache = new Dictionary<string, object>();
            private Dictionary<string, object> _data = new Dictionary<string, object>();

            public object this[string index]
            {
                get { return _data[index]; }
                set
                {
                    _data[index] = value;
                    OnPropertyChanged("Data");
                }
            }

            public object Data
            {
                get { return this; }
                set
                {
                    PropertyValueChange setter = value as PropertyValueChange;
                    _data[setter.PropertyName] = setter.Value;
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string property)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(property));
                }
            }
        }
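    One small, hedged hardening of the BeginEdit shown above (an assumption on my part, not a known fix for the disabled Save button): if BeginEdit were ever raised twice for the same edit session, the Dictionary.Add calls would throw on duplicate keys, so snapshotting defensively avoids that failure mode while preserving the original values.

        public void BeginEdit()
        {
            // Keep the first snapshot if an edit session is already in progress.
            if (_cache.Count > 0)
            {
                return;
            }

            foreach (var item in _data)
            {
                _cache[item.Key] = item.Value; // indexer assignment never throws on duplicate keys
            }
        }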

    Read the article

  • List of freely available programming books

    - by Karan Bhangui
    I'm trying to amass a list of programming books with opensource licenses, like Creative Commons, GPL, etc. The books can be about a particular programming language or about computers in general. Hoping you guys could help:

    Languages

    BASH
    - Advanced Bash-Scripting Guide (An in-depth exploration of the art of shell scripting)

    C
    - The C book

    C++
    - Thinking in C++
    - C++ Annotations
    - How to Think Like a Computer Scientist

    C#
    - .NET Book Zero: What the C or C++ Programmer Needs to Know About C# and the .NET Framework
    - Illustrated C# 2008 (Dead Link)
    - Data Structures and Algorithms with Object-Oriented Design Patterns in C#
    - Threading in C#

    Common Lisp
    - Practical Common Lisp
    - On Lisp

    Java
    - Thinking in Java
    - How to Think Like a Computer Scientist
    - Java Thin-Client Programming

    JavaScript
    - Eloquent JavaScript

    Haskell
    - Real world Haskell
    - Learn You a Haskell for Great Good!

    Objective-C
    - The Objective-C Programming Language

    Perl
    - Extreme Perl (license not specified - home page is saying "freely available")
    - The Mason Book (Open Publication License)
    - Practical mod_perl (CreativeCommons Attribution Share-Alike License)
    - Higher-Order Perl
    - Learning Perl the Hard Way

    PHP
    - Practical PHP Programming
    - Zend Framework: Survive the Deep End

    PowerShell
    - Mastering PowerShell

    Prolog
    - Building Expert Systems in Prolog
    - Adventure in Prolog
    - Prolog Programming A First Course
    - Logic, Programming and Prolog (2ed)
    - Introduction to Prolog for Mathematicians
    - Learn Prolog Now!
    - Natural Language Processing Techniques in Prolog

    Python
    - Dive Into Python
    - Dive Into Python 3
    - How to Think Like a Computer Scientist
    - A Byte of Python
    - Python for Fun
    - Invent Your Own Computer Games With Python

    Ruby
    - Why's (Poignant) Guide to Ruby
    - Programming Ruby - The Pragmatic Programmer's Guide
    - Mr. Neighborly's Humble Little Ruby Book

    SQL
    - Practical PostgreSQL

    x86 assembly
    - Paul Carter's tutorial

    Lua
    - Programming In Lua (for v5 but still largely relevant)

    Algorithms and Data Structures
    - Algorithms
    - Data Structures and Algorithms with Object-Oriented Design Patterns in Java
    - Planning Algorithms

    Frameworks/Projects
    - The Django Book
    - The Pylons Book
    - Introduction to Design Patterns in C++ with Qt 4 (Open Publication License)

    Version control
    - The SVN Book
    - Mercurial: The Definitive Guide
    - Pro Git

    UNIX / Linux
    - The Art of Unix Programming
    - Linux Device Drivers, Third Edition

    Others
    - Structure and Interpretation of Computer Programs
    - The Little Book of Semaphores
    - Mathematical Logic - an Introduction
    - An Introduction to the Theory of Computation
    - Developers Developers Developers Developers
    - Linkers and loaders
    - Beej's Guide to Network Programming
    - Maven: The Definitive Guide

    I will expand on this list as I get comments or when I think of more :D

    Related:
    - Programming texts and reference material for my Kindle
    - What are some good free programming books?
    - Can anyone recommend a free software engineering book?

    Edit: Oh I didn't notice the community wiki feature. Feel free to edit your suggestions right in!

    Read the article

< Previous Page | 491 492 493 494 495 496 497 498 499 500 501 502  | Next Page >