Search Results

Search found 9982 results on 400 pages for 'state monad'.

Page 334/400 | < Previous Page | 330 331 332 333 334 335 336 337 338 339 340 341  | Next Page >

  • protocol parsing in c

    - by nomad.alien
    I have been playing around with implementing some protocol decoders, but each time I run into a "simple" problem where I feel the way I am solving it is not optimal and there must be a better way. I'm using C. Currently I'm using some canned data and reading it in from a file, but later on it would arrive via TCP or UDP.

    Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits and, using a switch/case, I call a function to read in the rest of the packet, since at that point I know its size/structure. BUT... some of these packets have nested packets inside them, so when I encounter such a packet I then have to read another 8-16 bits and have another switch/case to see what the next packet type is, and on and on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing.

    I guess this can also be a more general question: how much data do you read at a time from the socket? As much as possible? As much as is common to the protocol headers?

    So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there: is this the way to go, or is there a better way I just haven't figured out yet? Any elegant solution to this problem?

    Read the article

  • How do I make VC++'s debugger break on exceptions?

    - by Mason Wheeler
    I'm trying to debug a problem in a DLL written in C that keeps causing access violations. I'm using Visual C++ 2008, but the code is straight C. I'm used to Delphi, where if an exception occurs while running under the debugger, the program will immediately break to the debugger and it will give you a chance to examine the program state. In Visual C++, though, all I get is a message in the Output tab: First-chance exception at blah blah blah: Access violation reading location 0x04410000. No breaks, nothing. It just goes and unwinds the stack until it's back in my Delphi EXE, which recognizes something's wrong and alerts me there, but by that point I've lost several layers of call stack and I don't know what's going on. I've tried other debugging techniques, but whatever it's doing is taking place deep within a nested loop inside a C macro that's getting called more than 500 times, and that's just a bit beyond my skill (or my patience) to trace through. I figure there has to be some way to get the "first-chance" exception to actually give me a "chance" to handle it. There's probably some "break immediately on first-chance exceptions" configuration setting I don't know about, but it doesn't seem to be all that discoverable. Does anyone know where it is and how to enable it?

    Read the article

  • question about jQuery droppable/draggable.

    - by FALCONSEYE
    I modified a sample photo manager application. Instead of photos, I have employee records coming from a query. My version lets managers mark employees as on vacation or at work. One of the things I did is include employee ids like <a href="123">. I get the ids from event.target. This works for the click function but not for the "droppable" function. This is what I have for the click function:

        $('ul.gallery > li').click(function(ev) {
            var $item = $(this);
            var $unid = ev.target;
            var $target = $(ev.target);
            if ($target.is('a.ui-icon-suitcase')) {
                deleteImage($item, $unid);
            } else if ($target.is('a.ui-icon-arrowreturnthick-1-w')) {
                recycleImage($item, $unid);
            }
            return false;
        });

    ev.target correctly gives the employee id. When I try the same in one of the droppable functions:

        $gallery.droppable({
            accept: '#suitcase li',
            activeClass: 'custom-state-active',
            drop: function(ev, ui) {
                var $unid = ev.target;
                alert($unid);
                recycleImage(ui.draggable, $unid);
            }
        });

    the alert gives me [object]. What's in this object? How do I get the href out of it? Thanks

    Read the article

  • Spring MVC: should service layer be returning operation specific DTO's ?

    - by arrages
    In my Spring MVC application I am using DTOs in the presentation layer in order to encapsulate the domain model in the service layer. The DTOs are being used as the Spring form backing objects, hence my services look something like this:

        userService.storeUser(NewUserRequestDTO req);

    The service layer will translate DTO -> domain object and do the rest of the work. Now my problem is that when I want to retrieve a DTO from the service to perform, say, an Update or Display, I can't seem to find a better way to do it than to have multiple lookup methods that return different DTOs, like:

        EditUserRequestDTO userService.loadUserForEdit(int id);
        DisplayUserDTO userService.loadUserForDisplay(int id);

    but something does not feel right about this approach. The reason to have separate DTOs is that DisplayUserDTO is strongly typed to be read only, and also many properties of the user are entities from a lookup table in the db (like city and state), so the DisplayUserDTO would have the string description of the properties while the EditUserRequestDTO would have the ids that back the select drop-down lists in the forms. What do you think? Thanks

    Read the article

  • ReaderWriterLockSlim and Pulse/Wait

    - by Jono
    Is there an equivalent of Monitor.Pulse and Monitor.Wait that I can use in conjunction with a ReaderWriterLockSlim? I have a class where I've encapsulated multi-threaded access to an underlying queue. To enqueue something, I acquire a lock that protects the underlying queue (and a couple of other objects), then add the item and Monitor.Pulse the locked object to signal that something was added to the queue.

        public void Enqueue(ITask task)
        {
            lock (mutex)
            {
                underlying.Enqueue(task);
                Monitor.Pulse(mutex);
            }
        }

    On the other end of the queue, I have a single background thread that continuously processes messages as they arrive on the queue. It uses Monitor.Wait when there are no items in the queue, to avoid unnecessary polling. (I consider this to be good design, but any flames (within reason) are welcome if they help me learn otherwise.)

        private void DequeueForProcessing(object state)
        {
            while (true)
            {
                ITask task;
                lock (mutex)
                {
                    while (underlying.Count == 0)
                    {
                        Monitor.Wait(mutex);
                    }
                    task = underlying.Dequeue();
                }
                Process(task);
            }
        }

    As more operations are added to this class (requiring read-only access to the lock-protected underlying), someone suggested using ReaderWriterLockSlim. I've never used the class before, and assuming it can offer some performance benefit, I'm not against it, but only if I can keep the Pulse/Wait design.
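
    Since ReaderWriterLockSlim has no built-in wait/notify support, here is a minimal sketch of one alternative that keeps the blocking-consumer behaviour without Monitor.Pulse/Wait, using BlockingCollection<T> (available from .NET 4). ITask is assumed to be the interface from the question; this is an illustrative sketch, not a ReaderWriterLockSlim equivalent.

        using System.Collections.Concurrent;

        public class TaskQueue
        {
            // Thread-safe queue; Take() blocks until an item is available.
            private readonly BlockingCollection<ITask> underlying = new BlockingCollection<ITask>();

            public void Enqueue(ITask task)
            {
                underlying.Add(task);               // no explicit lock or Pulse needed
            }

            private void DequeueForProcessing(object state)
            {
                while (true)
                {
                    ITask task = underlying.Take(); // blocks here instead of Monitor.Wait
                    Process(task);
                }
            }

            private void Process(ITask task) { /* existing processing logic */ }
        }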

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing.

    So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: Here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
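
    One option worth knowing about here is LINQ to SQL's DataLoadOptions, which eager-loads an association in the same round trip instead of lazy-loading it per parent row. The sketch below is only an illustration under assumed names (MyDataContext, Customers, Orders); the generated context and association names in the real project may differ.

        using System.Data.Linq;
        using System.Linq;

        var options = new DataLoadOptions();
        options.LoadWith<Customer>(c => c.Orders);   // fetch each customer's orders up front

        using (var db = new MyDataContext(connectionString))
        {
            db.LoadOptions = options;                // must be assigned before the first query
            var customers = db.Customers.ToList();   // orders come back along with the customers
        }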

    Read the article

  • How to perform Rails model validation checks within model but outside of filters using ledermann-rails-settings and extensions

    - by user1277160
    Background: I'm using ledermann-rails-settings (https://github.com/ledermann/rails-settings) on a Rails 2/3 project to virtually extend the model with certain attributes that don't necessarily need to be placed into the DB in a wide table, and it's working out swimmingly for our needs. An additional reason I chose this gem is the post "How to create a form for the rails-settings plugin", which ties ledermann-rails-settings more closely to the model for the purpose of clean form_for usage for administrator GUI support. It's a perfect solution for addressing form_for support, although... Something that I'm running into now is properly validating the dynamic getters/setters before values are passed to the ledermann-rails-settings module. At the moment they are saved immediately, regardless of whether the model validation has actually fired - I can see through script/console that validation errors are being raised.

    Example: For instance, I would like to validate that the attribute :foo is within the range 0..100 for decimal usage (or even a regex). I've found from the previous post that I can use standard Rails validators (surprise, surprise), but I want to halt on actually saving any values until those are addressed - to ensure that the user of the GUI has given 61.43 as a numerical value. The following code has been borrowed from the quoted post.

        class User < ActiveRecord::Base
          has_settings

          validates_inclusion_of :foo, :in => 0..100

          def self.settings_attr_accessor(*args)
            >> SOME SORT OF UNLESS MODEL.VALID? CHECK HERE
            args.each do |method_name|
              eval "
                def #{method_name}
                  self.settings.send(:#{method_name})
                end

                def #{method_name}=(value)
                  self.settings.send(:#{method_name}=, value)
                end
              "
            end
            >> END UNLESS
          end

          settings_attr_accessor :foo
        end

    Anyone have any thoughts on pulling the state of the model at this point, outside of having to put this into a before filter? The goal here is to be able to use the standard validations and avoid rolling custom validation checks for each new settings_attr_accessor that is added. Thanks!

    Read the article

  • Controls in UIActionSheet are positioning wrong

    - by Dave
    I need to display a popup with a UISwitch and a Done button when the user taps a button, and save the state of the switch when the Done button in the popup is tapped. So I use UIActionSheet to do this:

        sheet = [[UIActionSheet alloc] initWithTitle:@"Switch Setting"
                                            delegate:self
                                   cancelButtonTitle:nil
                              destructiveButtonTitle:nil
                                   otherButtonTitles:@"Done", nil];
        theSwitch = [[UISwitch alloc] init];
        [sheet addSubview:theSwitch];
        [sheet showInView:self.view];

        - (void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex
        {
            if (buttonIndex == actionSheet.firstOtherButtonIndex) {
                theSwitch.on ? NSLog(@"Switch is on") : NSLog(@"Switch is off");
            }
        }

    All this works great. I have a problem with positioning. Unfortunately, I don't have a screen shot at the moment to post here, but the switch appears at the very top on the same line as the Action Sheet title, and below that is the Done button. Please, someone help me position the switch below the title and increase the Action Sheet size so it looks neat. I need this asap. Thanks.

    Read the article

  • What is the best way to handle the Connections to MySql from c#

    - by srk
    I am working on a C# application which connects to a MySql server. There are about 20 functions which connect to the database. This application will be deployed on over 200 machines. I am using the code below to connect to my database, and it is identical for all the functions. The problem is, I can see some connections are not closed and are still alive when the application is deployed on over 200 machines.

    Connection string:

        <add key="Con_Admin" value="server=test-dbserver; database=test_admindb; uid=admin; password=1Password; Use Procedure Bodies=false;" />

    Declaration of the connection globally in the application [Global.cs]:

        public static MySqlConnection myConn_Instructor =
            new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);

    Function to query the database:

        public static DataSet CheckLogin_Instructor(string UserName, string Password)
        {
            DataSet dsValue = new DataSet();
            //MySqlConnection myConn = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);
            try
            {
                string Query = "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password`," +
                               " FROM accounts " +
                               " WHERE accounts.str_nric = '" + UserName + "' AND accounts.str_password = '" + Password + "\'";
                MySqlCommand cmd = new MySqlCommand(Query, Global.myConn_Instructor);
                MySqlDataAdapter da = new MySqlDataAdapter();
                if (Global.myConn_Instructor.State == ConnectionState.Closed)
                {
                    Global.myConn_Instructor.Open();
                }
                cmd.ExecuteScalar();
                da.SelectCommand = cmd;
                da.Fill(dsValue);
                Global.myConn_Instructor.Close();
            }
            catch (Exception ex)
            {
                Global.myConn_Instructor.Close();
                ExceptionHandler.writeToLogFile(System.Environment.NewLine + "Target : " + ex.TargetSite.ToString() +
                    System.Environment.NewLine + "Message : " + ex.Message.ToString() +
                    System.Environment.NewLine + "Stack : " + ex.StackTrace.ToString());
            }
            return dsValue;
        }
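
    For comparison, here is a minimal sketch of the per-call connection pattern usually recommended instead of one shared global connection: each call opens its own connection inside using blocks (relying on ADO.NET connection pooling) and passes values as parameters. The connection-string key is reused from the question; the table and column names are copied from the snippet above, and this is an illustration, not the poster's code.

        using System.Configuration;
        using System.Data;
        using MySql.Data.MySqlClient;

        public static DataSet CheckLogin_Instructor(string userName, string password)
        {
            var result = new DataSet();
            string connStr = ConfigurationManager.AppSettings["Con_Admin"];
            const string query =
                "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password` " +
                "FROM accounts WHERE accounts.str_nric = @user AND accounts.str_password = @pass";

            using (var conn = new MySqlConnection(connStr))
            using (var cmd = new MySqlCommand(query, conn))
            using (var da = new MySqlDataAdapter(cmd))
            {
                cmd.Parameters.AddWithValue("@user", userName);
                cmd.Parameters.AddWithValue("@pass", password);
                da.Fill(result);   // Fill opens and closes the connection itself when it was closed
            }
            return result;         // the physical connection goes back to the pool, not left open
        }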

    Read the article

  • Checking if a console application is still running using the Process class

    - by Ced
    I'm making an application that will monitor the state of another process and restart it when it stops responding, exits, or throws an error. However, I'm having trouble making it reliably check whether the process (being a C++ console window) has stopped responding. My code looks like this:

        public void monitorserver()
        {
            while (true)
            {
                server.StartInfo = new ProcessStartInfo(textbox_srcdsexe.Text, startstring);
                server.Start();
                log("server started");
                log("Monitor started.");

                while (server.Responding)
                {
                    if (server.HasExited)
                    {
                        log("server exited, restarting.");
                        break;
                    }
                    log("server is running: " + server.Responding.ToString());
                    Thread.Sleep(1000);
                }

                log("Server stopped responding, terminating..");
                try
                {
                    server.Kill();
                }
                catch (Exception) { }
            }
        }

    The application I'm monitoring is Valve's Source Dedicated Server, running Garry's Mod, and I'm over-stressing the physics engine to simulate it stopping responding. However, the Process class never recognizes it as 'stopped responding'. I know there are ways to directly query the Source server using their own protocol, but I'd like to keep it simple and universal (so that I can maybe use it for different applications in the future). Any help appreciated
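
    Two details are worth keeping in mind here: Process property values are cached snapshots, so Responding and HasExited keep returning stale values unless Refresh() is called, and Responding is based on the process's main window message queue, so it may not be meaningful for a pure console process. A minimal hedged sketch of a loop that refreshes before each check (the restart logic is left out):

        using System.Diagnostics;
        using System.Threading;

        static void WatchUntilDeadOrHung(Process server)
        {
            while (!server.HasExited)
            {
                server.Refresh();           // discard cached property values before re-reading them
                if (!server.Responding)     // only meaningful if the process owns a window/message pump
                {
                    break;                  // treat as hung; kill/restart logic would go here
                }
                Thread.Sleep(1000);
            }
        }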

    Read the article

  • RhinoMocks: how to test if method was called when using PartialMock

    - by epitka
    I have a class that is something like this:

        public class MyClass
        {
            public virtual string MethodA(Command cmd)
            {
                //some code here
            }

            public void MethodB(SomeType obj)
            {
                // do some work
                MethodA(command);
            }
        }

    I mocked MyClass as a PartialMock (mocks.PartialMock<MyClass>) and I set up an expectation for MethodA:

        var cmd = new Command();
        //set the cmd to expected state
        Expect.Call(MyClass.MethodA(cmd)).Repeat.Once();

    The problem is that MethodB calls the actual implementation of MethodA instead of the mocked one. I must be doing something wrong (not very experienced with RhinoMocks). How do I force it to mock MethodA? Here is the actual code:

        var cmd = new SetBaseProductCoInsuranceCommand();
        cmd.BaseProduct = planBaseProduct;

        var insuredType = mocks.DynamicMock<InsuredType>();
        Expect.Call(insuredType.Code).Return(InsuredTypeCode.AllInsureds);

        cmd.Values.Add(new SetBaseProductCoInsuranceCommand.CoInsuranceValues()
        {
            CoInsurancePercent = 0,
            InsuredType = insuredType,
            PolicySupplierType = ppProvider
        });

        Expect.Call(() => service.SetCoInsurancePercentages(cmd)).Repeat.Once();

        mocks.ReplayAll();

        //act
        service.DefaultCoInsurancesFor(planBaseProduct);

        //assert
        service.AssertWasCalled(x => x.SetCoInsurancePercentages(cmd), x => x.Repeat.Once());
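
    For what it's worth, a hedged sketch of the record/replay flow that usually makes a partial mock intercept a virtual method: the expectation has to be declared on the mocked instance before ReplayAll(), and because MethodB builds its own Command internally, the argument constraint generally needs to be relaxed with IgnoreArguments(), otherwise the argument mismatch lets the real MethodA run. The type names are taken from the question; the stubbed return value and someObj are assumptions.

        var mocks = new MockRepository();
        var myClass = mocks.PartialMock<MyClass>();

        Expect.Call(myClass.MethodA(null))
              .IgnoreArguments()            // match whatever Command MethodB constructs
              .Return("stubbed result")
              .Repeat.Once();

        mocks.ReplayAll();                  // switch from record mode to replay mode

        myClass.MethodB(someObj);           // act: someObj stands in for the SomeType argument

        mocks.VerifyAll();                  // fails if the stubbed MethodA was not called exactly once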

    Read the article

  • ASP.NEt MVC 2 application error on IIS7 works fine on local machine

    - by aspCoolguy
    My ASP.NET MVC2 application is developed using:

    1. VS 2010
    2. LINQ to SQL for the models

    Here is the Call controller code:

        namespace CallTrackMVC.Controllers
        {
            public class CallController : Controller
            {
                private CallTrackRepository repository;

                public CallController() : this(new CallTrackRepository())
                {
                }

                public CallController(CallTrackRepository newRepository)
                {
                    repository = newRepository;
                }
            }
        }

    The error on IIS7 when browsing the Call Create page is:

        [NullReferenceException: Object reference not set to an instance of an object.]
           CallTrackMVC.Models.ExecOfficeDataContext..ctor() in C:\ClearCase\rartadi_view\STS_Dev_TEST\CallTrackMVC\Models\ExecOffice.designer.cs:71
           CallTrackMVC.Controllers.CallController..ctor() in C:\ClearCase\rartadi_view\STS_Dev_TEST\CallTrackMVC\Controllers\CallController.cs:16

        [TargetInvocationException: Exception has been thrown by the target of an invocation.]
           System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck) +0
           System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache) +117
           System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean skipCheckThis, Boolean fillCache) +247
           System.Activator.CreateInstance(Type type, Boolean nonPublic) +106
           System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +102

        [InvalidOperationException: An error occurred when trying to create a controller of type 'CallTrackMVC.Controllers.CallController'. Make sure that the controller has a parameterless public constructor.]
           System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +541
           System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +85
           System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +165
           System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +80
           System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +389
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +371

    The code in Global.asax is:

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
        }

    Any suggestion would be a great help.
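
    A common cause of this pattern (works in Visual Studio, NullReferenceException inside the designer-generated DataContext constructor on IIS) is a named connection string or app setting that exists in the development config but not in the web.config actually deployed to the server. Purely as a hedged illustration, a defensive check in the repository's constructor makes that failure explicit; the dataContext field and the connection-string name below are assumptions, not taken from the project:

        public CallTrackRepository()
        {
            var cs = System.Configuration.ConfigurationManager
                         .ConnectionStrings["ExecOfficeConnectionString"];   // hypothetical name
            if (cs == null)
                throw new InvalidOperationException(
                    "Connection string 'ExecOfficeConnectionString' is missing on this server.");

            dataContext = new ExecOfficeDataContext(cs.ConnectionString);    // assumes the generated ctor overload
        }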

    Read the article

  • Ajax with Jsf 1.1 implementation

    - by Rohan Ved
    I am using JSF 1.1, and in it I have this code:

        <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h"%>
        <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f"%>
        <%@ taglib uri="http://www.azureworlds.org" prefix="azure"%>
        <%@ taglib uri="http://myfaces.apache.org/tomahawk" prefix="x"%>
        <%@ taglib uri="http://www.asifqamar.com/jsf/asif" prefix="a"%>
        ...
        <x:selectOneMenu value="#{hotelBean.state}">
            <f:selectItem itemLabel="Select One" itemValue="" />
            <f:selectItem value="#{hotelBean.mapStates }" />
            <x:ajax update="city" listener="#{hotelBean.handleCityChange}" />
        </x:selectOneMenu>
        <h:outputText value="City* " />
        <x:selectOneMenu id="city" value="#{hotelBean.city}">
            <f:selectItem itemLabel="Select One" itemValue="" />
            <f:selectItem value="#{hotelBean.mapCities }" />
        </x:selectOneMenu>

    The line x:ajax update="city" listener="#{hotelBean.handleCityChange}" is not working. I searched and found that JSF 1.1 does not support Ajax. What can I do about this? I have little knowledge of JS. Thanks

    Read the article

  • Cheap cloning/local branching in Mercurial

    - by Zack
    Hi, I just started working with Mercurial a few days ago and there's something I don't understand. I have an experimental thing I want to do, so the normal approach would be to clone my repository, work on the clone, and if I eventually want to keep those changes, push them to my main repository. The problem is that cloning my repository takes a lot of time (we have a lot of code) and just compiling the cloned copy would take up to an hour. So I need to somehow work on a different repository but still in my original working copy.

    Enter local branches. The problem is that just creating a local branch takes forever, and working with them isn't all that fun either. Because moving between local branches doesn't "revert" to the target branch state, I have to issue a hg purge (to remove files that were added in the moved-from branch) and then hg update -c (to revert modified files in the moved-from branch). (Note: I did try PK11's fork of the local branch extension; a simple local branch creation crashes with an exception.)

    At the end of the day, this is just too complex. What are my options?

    Read the article

  • OptimisticLockException in inner transaction ruins outer transaction

    - by Pace
    I have the following code (OLE = OptimisticLockException)...

        public void outer() {
            try {
                middle();
            } catch (OLE) {
                updateEntities();
                outer();
            }
        }

        @Transactional
        public void middle() {
            try {
                inner();
            } catch (OLE) {
                updateEntities();
                middle();
            }
        }

        @Transactional
        public void inner() {
            //Do DB operation
        }

    inner() is called by other non-transactional methods, which is why both middle() and inner() are transactional. As you can see, I deal with OLEs by updating the entities and retrying the operation. The problem I'm having is that when I designed things this way I was assuming that the only time one could get an OLE was when a transaction closed. This is apparently not the case, as the call to inner() is throwing an OLE even when the stack is outer()->middle()->inner(). Now, middle() properly handles the OLE and the retry succeeds, but when it comes time to close the transaction it has been marked rollbackOnly by Spring. When the middle() method call finally returns, the closing aspect throws an exception because it can't commit a transaction marked rollbackOnly.

    I'm uncertain what to do here. I can't clear the rollbackOnly state. I don't want to force-create a transaction on every call to inner() because that kills my performance. Am I missing something, or can anyone see a way I can structure this differently?

    EDIT: To clarify what I'm asking, let me explain my main question: is it possible to catch and handle an OLE if you are inside an @Transactional method?

    FYI: The transaction manager is a JpaTransactionManager and the JPA provider is Hibernate.

    Read the article

  • Where clause on joined table used for user defined key/value pairs

    - by Steve Wright
    Our application allows administrators to add "User Properties" so they can tailor the system to match their own HR systems. For example, if your company has departments, you can define "Departments" in the Properties table and then add values that correspond to "Departments", such as "Jewelry", "Electronics", etc. You are then able to assign a department to users. Here is the schema:

    In this schema, a User can have only one UserPropertyValue per Property, but doesn't have to have a value for the property. I am trying to build a query that will be used in SSRS 2005 and also have it use the PropertyValues as the filter for users. My query looks like this:

        SELECT UserLogin, FirstName, LastName
        FROM Users U
        LEFT OUTER JOIN UserPropertyValues UPV ON U.ID = UPV.UserID
        WHERE UPV.PropertyValueID IN (1, 5)

    When I run this, users that have ANY of the property values are returned. What I would like is for the query to return users that have the values BY PROPERTY. So if PropertyValueID = 1 is of property Department (Jewelry), and PropertyValueID = 5 is of property EmploymentType (Full Time), I want to return all users that are in department Jewelry AND have employment type Full Time. Can this be done?

    Here's a full data example:

        User A has Department (Jewelry value = 1) and EmploymentType (FullTime value = 5)
        User B has Department (Electronics value = 2) and EmploymentType (FullTime value = 5)
        User C has Department (Jewelry value = 1) and EmploymentType (PartTime value = 6)

    My query should only return User A.

    UPDATE: I should state that this query is used as a dataset in SSRS, so the parameter passed to the query will be @PropertyIDs and it is defined as a multi-value parameter in SSRS:

        WHERE UPV.PropertyValueID IN (@PropertyIDs)

    Read the article

  • SharePoint Web Part on Masterpage not displaying on SubSite

    - by madatanic
    Hi all. Scenario:

    - An out-of-the-box DataFormWebPart on the master page connected to a top-level site list.
    - A subsite using that same master page from the top-level site.
    - The error below happens when accessing the subsite.

    Stack trace:

        [InvalidOperationException: Operation is not valid due to the current state of the object.]
           Microsoft.SharePoint.SPFolder.get_ContentTypeOrder() +488
           Microsoft.SharePoint.SPContext.get_ContentTypes() +898
           Microsoft.SharePoint.SPContext.get_ContentType() +472
           Microsoft.SharePoint.SPContext.get_Fields() +271
           Microsoft.SharePoint.WebControls.FormComponent.get_Fields() +44
           Microsoft.SharePoint.WebControls.FieldMetadata.get_Field() +419
           Microsoft.SharePoint.WebControls.FormField.CreateChildControls() +596
           System.Web.UI.Control.EnsureChildControls() +87
           Microsoft.SharePoint.WebControls.BaseFieldControl.OnLoad(EventArgs e) +176
           System.Web.UI.Control.LoadRecursive() +50
           System.Web.UI.Control.LoadRecursive() +141
           System.Web.UI.Control.AddedControl(Control control, Int32 index) +265
           System.Web.UI.ControlCollection.Add(Control child) +80
           System.Web.UI.Control.AddParsedSubObject(Object obj) +41
           Microsoft.SharePoint.WebPartPages.WebPart.AddParsedSubObject(Object obj) +1149
           Microsoft.SharePoint.WebPartPages.DataFormWebPart.CreateChildControls() +1267
           System.Web.UI.Control.EnsureChildControls() +87
           System.Web.UI.Control.PreRenderRecursiveInternal() +44
           System.Web.UI.WebControls.WebParts.WebPart.PreRenderRecursiveInternal() +42
           System.Web.UI.Control.PreRenderRecursiveInternal() +171
           System.Web.UI.Control.PreRenderRecursiveInternal() +171
           System.Web.UI.Control.PreRenderRecursiveInternal() +171
           System.Web.UI.Control.PreRenderRecursiveInternal() +171
           System.Web.UI.Control.PreRenderRecursiveInternal() +171
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +842

    I have tried the solution from this site to enable the subsite to access the top-level site list. Is this a bug? Any help would be appreciated.

    Read the article

  • php form submit and the resend information screen

    - by Para
    Hello, I want to ask a best-practice question. Suppose I have a form in PHP with 3 fields, say name, email and comment, and I submit the form via POST. In PHP I try to insert the data into the database. Suppose the insertion fails. I should now show the user an error and display the form filled in with the data he previously entered, so he can correct his mistake; showing the form in its initial state won't do. So I display the form and the 3 fields are now filled in from PHP with echo or such. Now if I click refresh I get a message saying "Are you sure you want to resend information?". OK. Suppose that after I insert the data I don't carry on, but instead redirect to the same page with the necessary parameters in the query string. This makes the message go away, but I have to carry 3 parameters in the query string.

    So my question is: how is it better to do this? I want to avoid carrying around lots of parameters in the query string but also not get that message. How can this be done? Should I use cookies to store the form information?

    Read the article

  • Thread Locking CollectionViewSource

    - by Robert
    I added an event handler to my code and it broke all access to the CollectionViewSources in the SystemHTA class, saying "The calling thread cannot access this object because a different thread owns it". My class was working when "this.systemHTA = new SystemHTA();" was placed outside of the DeviceManager_StateChanged() function.

        public partial class MainWindow : Window
        {
            private DeviceManager DeviceManager = DeviceManager.Instance;
            public SystemHTA systemHTA;

            public MainWindow()
            {
                InitializeComponent();
            }

            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
                DeviceManager.StateChanged += new EventHandler<DeviceManagerStateChangedEventArgs>(DeviceManager_StateChanged);
                DeviceManager.Initialize();
            }

            void DeviceManager_StateChanged(object sender, DeviceManagerStateChangedEventArgs e)
            {
                if (e.State == DeviceManagerState.Operational)
                {
                    this.systemHTA = new SystemHTA();
                }
            }

            private void button1_Click(object sender, RoutedEventArgs e)
            {
                this.systemHTA.GetViewSourceTest();
            }
        }

        public class SystemHTA
        {
            private CollectionViewSource _deviceTestSource;

            public SystemHTA()
            {
                _deviceTestSource = new CollectionViewSource();
                _deviceTestSource.Source = CreateLoadData<HWController>.ControllerCollection;
            }

            public void GetViewSourceTest()
            {
                ListCollectionView view = (ListCollectionView)_deviceTestSource.View;
                //This creates an error saying a thread already owns _deviceTestSource
            }
        }
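
    A minimal sketch of one common fix, assuming the StateChanged event is raised on a background thread: CollectionViewSource has thread affinity, so construct SystemHTA on the UI thread by marshalling the callback through the window's Dispatcher (Invoke is the synchronous variant; BeginInvoke also works). This is an illustration, not necessarily the only workable design.

        void DeviceManager_StateChanged(object sender, DeviceManagerStateChangedEventArgs e)
        {
            if (e.State == DeviceManagerState.Operational)
            {
                this.Dispatcher.Invoke(new Action(() =>
                {
                    this.systemHTA = new SystemHTA();   // now constructed on the UI thread
                }));
            }
        }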

    Read the article

  • Detaching all entities of T to get fresh data

    - by Goran
    Let's take an example where two types of entities are loaded: Product and Category, with Product.CategoryId -> Category.Id. We have CRUD operations available on Products (not Categories). If on another screen Categories are updated (or by another user on the network), we would like to be able to reload the Categories while preserving the context we currently use, since we could be in the middle of editing data and we do not want changes to be lost (and we cannot depend on saving, since we have incomplete data).

    Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of two possible ways:

    1) Keeping Products attached to the context and Categories detached from the context. This would mean that we lose the ability to access Product.Category.Name, which we do sometimes require, so we would need to resolve it manually (for example when printing data).

    2) Detaching / attaching all Categories from the current context:

        Context.ChangeTracker.Entries().Where(x => x.Entity.GetType() == typeof(T)).ForEach(x => x.State = EntityState.Detached);

    and then reloading the Categories, which will get fresh data.

    Do you find any problem with this second approach? We understand that this will require all constraints to be put on foreign keys, and not navigation properties, since when detaching all Categories, Product.Category navigation properties would be reset to null. Also, there could be a potential performance problem, which we did not test, since a couple of thousand Products could be loaded, and all of them would need to resolve the navigation property when reloading.

    Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?
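
    A hedged sketch of option 2 in EF6: detach every tracked Category, then re-query so the context picks up added, removed and modified rows. It assumes a DbContext variable named context with a Categories set and a Category entity class; unchanged Products keep their CategoryId values, and their Category navigation properties should be fixed up again when the fresh query runs.

        using System.Data.Entity;
        using System.Linq;

        var trackedCategories = context.ChangeTracker.Entries<Category>().ToList();
        foreach (var entry in trackedCategories)
            entry.State = EntityState.Detached;               // drop stale instances from the state manager

        var freshCategories = context.Categories.ToList();    // reload the current rows from the database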

    Read the article

  • Search a string in a file and write the matched lines to another file in Java

    - by Geeta
    For searching a string in a file and writing the lines with the matched string to another file, it takes 15-20 minutes for a single zip file of 70 MB (compressed). Are there any ways to reduce this? My source code:

    Getting the zip file entries:

        zipFile = new ZipFile(source_file_name);
        entries = zipFile.entries();
        while (entries.hasMoreElements()) {
            ZipEntry entry = (ZipEntry) entries.nextElement();
            if (entry.isDirectory()) {
                continue;
            }
            searchString(Thread.currentThread(), entry.getName(),
                    new BufferedInputStream(zipFile.getInputStream(entry)),
                    Out_File, search_string, stats);
        }
        zipFile.close();

    Searching the string:

        public void searchString(Thread CThread, String Source_File, BufferedInputStream in,
                File outfile, String search, String stats) throws IOException {
            int count = 0;
            int countw = 0;
            int countl = 0;
            String s;
            String[] str;
            BufferedReader br2 = new BufferedReader(new InputStreamReader(in));
            System.out.println(CThread.currentThread());
            while ((s = br2.readLine()) != null) {
                str = s.split(search);
                count = str.length - 1;
                countw += count;                 // word count
                if (s.contains(search)) {
                    countl++;                    // line count
                    WriteFile(CThread, s, outfile.toString(), search);
                }
            }
            br2.close();
            in.close();
        }

    Writing the matched lines:

        public void WriteFile(Thread CThread, String line, String out, String search) throws IOException {
            BufferedWriter bufferedWriter = null;
            System.out.println("write thread" + CThread.currentThread());
            bufferedWriter = new BufferedWriter(new FileWriter(out, true));
            bufferedWriter.write(line);
            bufferedWriter.newLine();
            bufferedWriter.flush();
        }

    Please help me. It's really taking 40 minutes for 10 files using threads and 15-20 minutes for a single 70 MB compressed file. Any ways to reduce the time?

    Read the article

  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in unit testing, which is why the following issue is giving me pause. One area where I struggle not to use the Singleton is the Identity Map / Unit of Work pattern (which keeps tabs on domain object state).

        //Not actual code, but it should demonstrate the point
        class Monitor {  //singleton construction omitted for brevity
            static $members = array();   //keeps record of all objects
            static $dirty = array();     //keeps record of all modified objects
            static $clean = array();     //keeps record of all clean objects
        }

        class Mapper {  //queries database, maps values to object fields
            public function find($id) {
                if (isset(Monitor::$members[$id])) {
                    return Monitor::$members[$id];
                }
                $values = $this->selectStmt($id);
                //field mapping process omitted for brevity
                $Object = new Object($values);
                Monitor::$new[$id] = $Object;
                return $Object;
            }
        }

        $User = $UserMapper->find(1);   //domain object is registered in Id Map
        $User->changePropertyX();       //object is marked "dirty" in UoW

        // at this point, I can save by passing the Domain Object back to the Mapper
        $UserMapper->save($User);       //object is marked clean in UoW

        //but a nicer API would be something like this
        $User->save();

        //but if I want to do this - it has to make a call to the mapper/db somehow
        $User->getBlogPosts();

        //or else I have to generate specific collection/object graphing methods in the mapper
        $UserPosts = $UserMapper->getBlogPosts();
        $User->setPosts($UserPosts);

    Any advice on how you might handle this situation? I would be loath to pass/generate instances of the mapper/database access into the Domain Object itself to satisfy DI - at the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour, then a facility to do so is required in its construction. Perhaps it's a problem of responsibility: the Domain Object shouldn't be burdened with saving. It's just quite a neat feature of the Active Record pattern - it would be nice to implement it in some way.

    Read the article

  • Copy recordset data into multiple sheets to avoid problem of maximum rows limit in Excel VBA

    - by Sam
    I am developing a reporting application in Excel/VBA 2003. The VBA code sends a search query to a database and gets the data back through a recordset, which is then copied to one of the Excel sheets. The retrieved data looks like this:

        ProductID | DateProcessed | State
        1         | 1/1/2010      | Picked Up
        1         | 1/1/2010      | Forward To Approver
        1         | 1/2/2010      | Approver Picked Up
        1         | 1/3/2010      | Approval Completed
        2         | 1/1/2010      | Picked Up
        3         | 1/2/2010      | Picked Up
        3         | 1/2/2010      | Forward To Approver

    The problem is that the data retrieved from the search query is so large that it exceeds the Excel row limit (65536 rows in Excel 2003). So I want to split this data into two Excel sheets. While splitting the data I want to ensure that the data for the same product stays in one sheet. For example, if the last record in the above result set is the 65537th record, then I also want to move all records for product 3 into the new sheet. So sheet 1 will contain records for product ids 1 and 2 with total records = 65534, and sheet 2 will contain records for product id 3 with total records = 2.

    How can I achieve this in VBA? If it is not possible, is there any alternative solution? Thanks in advance!

    Read the article

  • GWT in web-mode throws StatusCodeException but works fine in hosted-mode

    - by broshni
    Hi to all, I am new to this forum. I have an issue regarding my application, which is a Hibernate + Struts based application. Recently we decided to integrate GWT into it. We are using the GWT 1.5.3 build. We have set up everything exactly as described in the documentation and various blogs. I am now in a very nervous state, as GWT in web-mode is treating me in a very embarrassing manner, but it compiles and works perfectly fine in hosted-mode, where everything works as we expected and designed.

    When I try to integrate GWT with our application running in Tomcat (http://localhost:8080/myapps/example.do?reqCode=takeMeToGWT&userId=12&templateId=10), the GWT page loads partially and in the midst of loading it throws an exception: StatusCodeException. This doesn't happen when I run the GWT application in hosted-mode (http://localhost:8888/com.myapp.gwt.MyApp/Home.html). I am using IntelliJ IDEA 8, Struts 1.3, Hibernate 3 and Tomcat 5.5. Expecting your responses at the earliest possible time. Thanking you, B Roshnikanta Sharma [email protected]

    Read the article

  • one two-directed tcp socket OR two one-directed? (linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1 Gbit or even 2G+). The OS is Linux. Would it be faster to use 1 TCP socket (for both send and recv) or to use 2 uni-directional TCP sockets?

    The test for this task is very much like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, with each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong, maybe). The possible benefit of 2 uni-directional connections comes from the Linux kernel: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        3847 /*
        3848  * TCP receive function for the ESTABLISHED state.
        3849  *
        3850  * It is split into a fast path and a slow path. The fast path is
        3851  * disabled when:
        ...
        3859  * - Data is sent in both directions. Fast path only supports pure senders
        3860  *   or pure receivers (this means either the sequence number or the ack
        3861  *   value must stay constant)
        ...
        3863  *
        3864  * When these conditions are not satisfied it drops into a standard
        3865  * receive procedure patterned after RFC793 to handle all cases.
        3866  * The first three cases are guaranteed by proper pred_flags setting,
        3867  * the rest is checked inline. Fast processing is turned on in
        3868  * tcp_data_queue when everything is OK.

    All the other conditions for disabling the fast path are false; only a socket that is not unidirectional stops the kernel from taking the fast path on receive.

    Read the article
