Search Results

Search found 4681 results on 188 pages for 'handling'.


  • SignalR Server Error "The ConnectionId is in the incorrect format." with SignalR-ObjC Library

    - by ozzotto
    Before asking a separate question I've done lots of googling about it and added a comment in the already existing stackoverflow question. I have a SignalR Hub (tried both v. 1.1.3 and 2.0.0-rc) in my server with the below code: [HubName("TestHub")] public class TestHub : Hub { [Authorize] public void TestMethod(string test) { //some stuff here Clients.Caller.NotifyOnTestCompleted(); } } The problem persists if I remove the Authorize attribute. And in my iOS client I try to call it with the below code: SRHubConnection *hubConnection = [SRHubConnection connectionWithURL:_baseURL]; SRHubProxy *hubProxy = [hubConnection createHubProxy:@"TestHub"]; [hubProxy on:@"NotifyOnTestCompleted" perform:self selector:@selector(stopConnection)]; hubConnection.started = ^{ [hubProxy invoke:@"TestMethod" withArgs:@[@"test"]]; }; //received, error handling [hubConnection start]; When the app starts the user is not logged in and there is no open SignalR connection. The users logs in by calling a Login service in the server which makes use of WebSecurity.Login method. If the login service returns success I then make the above call to SignalR Hub and I get the server error 500 with description "The ConnectionId is in the incorrect format.". The full server stacktrace is the following: Exception information: Exception type: InvalidOperationException Exception message: The ConnectionId is in the incorrect format. at Microsoft.AspNet.SignalR.PersistentConnection.GetConnectionId(HostContext context, String connectionToken) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.Hubs.HubDispatcher.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(IDictionary`2 environment) at Microsoft.Owin.Mapping.MapMiddleware.<Invoke>d__0.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.IntegratedPipelineContext.EndFinalWork(IAsyncResult ar) at System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Request information: Request URL: http://myserverip/signalr/signalr/connect?transport=webSockets&connectionToken=axJs EQMZxpmUopL36owSUkdhNs85E0fyB2XvV5R5znZfXYI/CiPbTRQ3kASc3 mq60cLkZU7coYo1P fbC0U1LR2rI6WIvCNIMOmv/mHut/Unt9mX3XFkQb053DmWgCan5zHA==&connectionData=[{"Name":"testhub"}] Request path: /signalr/signalr/connect User host address: User: Is authenticated: False Authentication Type: Thread account name: IIS APPPOOL\DefaultAppPool Thread information: Thread ID: 14 Thread account name: IIS APPPOOL\DefaultAppPool Is impersonating: True Stack trace: at Microsoft.AspNet.SignalR.PersistentConnection.GetConnectionId(HostContext context, String connectionToken) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.Hubs.HubDispatcher.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(IDictionary`2 environment) at Microsoft.Owin.Mapping.MapMiddleware.<Invoke>d__0.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.IntegratedPipelineContext.EndFinalWork(IAsyncResult ar) at 
System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) I understand this is some kind of authentication and user identity mismatch, but up to now I have found no way of solving it. All the other questions suggest stopping the open connection when the user identity changes, but as I mentioned above I have no open connection before the user logs in successfully. Any help would be much appreciated. Thank you.
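    A hedged diagnostic sketch, assuming the SignalR 2.0 OWIN startup class is in play (the Startup/IAppBuilder wiring below is a placeholder, not code from the question): turning on detailed errors makes the hub return the underlying exception text instead of the bare 500, which usually narrows down why GetConnectionId rejects the token.

        using Microsoft.AspNet.SignalR;
        using Owin;

        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                var hubConfiguration = new HubConfiguration
                {
                    // surface the real exception to the client instead of a generic 500
                    EnableDetailedErrors = true
                };
                app.MapSignalR(hubConfiguration);
            }
        }

    Worth noting that the request information above shows "Is authenticated: False" on the /connect request, so one hedged guess is that the forms-authentication cookie issued by WebSecurity.Login is not being attached to the later SignalR requests from the iOS client, which would make the identity baked into the connection token fail validation.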

    Read the article

  • Deserializing JSON data to C# using JSON.NET

    - by Derek Utah
    I'm relatively new to working with C# and JSON data and am seeking guidance. I'm using C# 3.0, with .NET3.5SP1, and JSON.NET 3.5r6. I have a defined C# class that I need to populate from a JSON structure. However, not every JSON structure for an entry that is retrieved from the web service contains all possible attributes that are defined within the C# class. I've been being doing what seems to be the wrong, hard way and just picking out each value one by one from the JObject and transforming the string into the desired class property. JsonSerializer serializer = new JsonSerializer(); var o = (JObject)serializer.Deserialize(myjsondata); MyAccount.EmployeeID = (string)o["employeeid"][0]; What is the best way to deserialize a JSON structure into the C# class and handling possible missing data from the JSON source? My class is defined as: public class MyAccount { [JsonProperty(PropertyName = "username")] public string UserID { get; set; } [JsonProperty(PropertyName = "givenname")] public string GivenName { get; set; } [JsonProperty(PropertyName = "sn")] public string Surname { get; set; } [JsonProperty(PropertyName = "passwordexpired")] public DateTime PasswordExpire { get; set; } [JsonProperty(PropertyName = "primaryaffiliation")] public string PrimaryAffiliation { get; set; } [JsonProperty(PropertyName = "affiliation")] public string[] Affiliation { get; set; } [JsonProperty(PropertyName = "affiliationstatus")] public string AffiliationStatus { get; set; } [JsonProperty(PropertyName = "affiliationmodifytimestamp")] public DateTime AffiliationLastModified { get; set; } [JsonProperty(PropertyName = "employeeid")] public string EmployeeID { get; set; } [JsonProperty(PropertyName = "accountstatus")] public string AccountStatus { get; set; } [JsonProperty(PropertyName = "accountstatusexpiration")] public DateTime AccountStatusExpiration { get; set; } [JsonProperty(PropertyName = "accountstatusexpmaxdate")] public DateTime AccountStatusExpirationMaxDate { get; set; } [JsonProperty(PropertyName = "accountstatusmodifytimestamp")] public DateTime AccountStatusModified { get; set; } [JsonProperty(PropertyName = "accountstatusexpnotice")] public string AccountStatusExpNotice { get; set; } [JsonProperty(PropertyName = "accountstatusmodifiedby")] public Dictionary<DateTime, string> AccountStatusModifiedBy { get; set; } [JsonProperty(PropertyName = "entrycreatedate")] public DateTime EntryCreatedate { get; set; } [JsonProperty(PropertyName = "entrydeactivationdate")] public DateTime EntryDeactivationDate { get; set; } } And a sample of the JSON to parse is: { "givenname": [ "Robert" ], "passwordexpired": "20091031041550Z", "accountstatus": [ "active" ], "accountstatusexpiration": [ "20100612000000Z" ], "accountstatusexpmaxdate": [ "20110410000000Z" ], "accountstatusmodifiedby": { "20100214173242Z": "tdecker", "20100304003242Z": "jsmith", "20100324103242Z": "jsmith", "20100325000005Z": "rjones", "20100326210634Z": "jsmith", "20100326211130Z": "jsmith" }, "accountstatusmodifytimestamp": [ "20100312001213Z" ], "affiliation": [ "Employee", "Contractor", "Staff" ], "affiliationmodifytimestamp": [ "20100312001213Z" ], "affiliationstatus": [ "detached" ], "entrycreatedate": [ "20000922072747Z" ], "username": [ "rjohnson" ], "primaryaffiliation": [ "Staff" ], "employeeid": [ "999777666" ], "sn": [ "Johnson" ] }
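    A hedged sketch of the direct route: deserialize straight into MyAccount and let the serializer skip whatever the JSON does not supply, instead of picking values out of a JObject by hand. Two caveats from the sample data itself: the scalar values arrive wrapped in one-element arrays (e.g. "employeeid": [ "999777666" ]), and the dates use a "20091031041550Z" format, so the string and DateTime properties may still need small custom JsonConverters; the settings below only cover the missing-data side.

        using Newtonsoft.Json;

        JsonSerializerSettings settings = new JsonSerializerSettings
        {
            MissingMemberHandling = MissingMemberHandling.Ignore, // JSON members with no matching property are skipped
            NullValueHandling = NullValueHandling.Ignore          // JSON nulls leave properties at their defaults
        };
        MyAccount account = JsonConvert.DeserializeObject<MyAccount>(myjsondata, settings);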

    Read the article

  • Usage of IcmpSendEcho2 with an asynchronous callback

    - by Ben Voigt
    I've been reading the MSDN documentation for IcmpSendEcho2 and it raises more questions than it answers. I'm familiar with asynchronous callbacks from other Win32 APIs such as ReadFileEx... I provide a buffer which I guarantee will be reserved for the driver's use until the operation completes with any result other than IO_PENDING, I get my callback in case of either success or failure (and call GetCompletionStatus to find out which). Timeouts are my responsibility and I can call CancelIo to abort processing, but the buffer is still reserved until the driver cancels the operation and calls my completion routine with a status of CANCELLED. And there's an OVERLAPPED structure which uniquely identifies the request through all of this. IcmpSendEcho2 doesn't use an OVERLAPPED context structure for asynchronous requests. And the documentation is unclear excessively minimalist about what happens if the ping times out or fails (failure would be lack of a network connection, a missing ARP entry for local peers, ICMP destination unreachable response from an intervening router for remote peers, etc). Does anyone know whether the callback occurs on timeout and/or failure? And especially, if no response comes, can I reuse the buffer for another call to IcmpSendEcho2 or is it forever reserved in case a reply comes in late? I'm wanting to use this function from a Win32 service, which means I have to get the error-handling cases right and I can't just leak buffers (or if the API does leak buffers, I have to use a helper process so I have a way to abandon requests). There's also an ugly incompatibility in the way the callback is made. It looks like the first parameter is consistent between the two signatures, so I should be able to use the newer PIO_APC_ROUTINE as long as I only use the second parameter if an OS version check returns Vista or newer? Although MSDN says "don't do a Windows version check", it seems like I need to, because the set of versions with the new argument aren't the same as the set of versions where the function exists in iphlpapi.dll. Pointers to additional documentation or working code which uses this function and an APC would be much appreciated. Please also let me know if this is completely the wrong approach -- i.e. if either using raw sockets or some combination of IcmpCreateFile+WriteFileEx+ReadFileEx would be more robust.

    Read the article

  • VHDL - Problem with std_logic_vector

    - by wretrOvian
    Hi, i'm coding a 4-bit binary adder with accumulator: library ieee; use ieee.std_logic_1164.all; entity binadder is port(n,clk,sh:in bit; x,y:inout std_logic_vector(3 downto 0); co:inout bit; done:out bit); end binadder; architecture binadder of binadder is signal state: integer range 0 to 3; signal sum,cin:bit; begin sum<= (x(0) xor y(0)) xor cin; co<= (x(0) and y(0)) or (y(0) and cin) or (x(0) and cin); process begin wait until clk='0'; case state is when 0=> if(n='1') then state<=1; end if; when 1|2|3=> if(sh='1') then x<= sum & x(3 downto 1); y<= y(0) & y(3 downto 1); cin<=co; end if; if(state=3) then state<=0; end if; end case; end process; done<='1' when state=3 else '0'; end binadder; The output : -- Compiling architecture binadder of binadder ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): No feasible entries for infix operator "xor". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): Type error resolving infix expression "xor" as type std.standard.bit. ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in left operand of infix expression "or". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Type error resolving infix expression "or" as type std.standard.bit. ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): No feasible entries for infix operator "&". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): Type error resolving infix expression "&" as type ieee.std_logic_1164.std_logic_vector. ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(39): VHDL Compiler exiting I believe i'm not handling std_logic_vector's correctly. Please tell me how? :(

    Read the article

  • Troubleshooting .NET "Fatal Execution Engine Error"

    - by JYelton
    Summary: I periodically get a .NET Fatal Execution Engine Error on an application which I cannot seem to debug. The dialog that comes up only offers to close the program or send information about the error to Microsoft. I've tried looking at the more detailed information but I don't know how to make use of it. Error: The error is visible in Event Viewer under Applications and is as follows: .NET Runtime version 2.0.50727.3607 - Fatal Execution Engine Error (7A09795E) (80131506) The computer running it is Windows XP Professional SP 3. (Intel Core2Quad Q6600 2.4GHz w/ 2.0 GB of RAM) Other .NET-based projects that lack multi-threaded downloading (see below) seem to run just fine. Application: The application is written in C#/.NET 3.5 using VS2008, and installed via a setup project. The app is multi-threaded and downloads data from multiple web servers using System.Net.HttpWebRequest and its methods. I've determined that the .NET error has something to do with either threading or HttpWebRequest but I haven't been able to get any closer as this particular error seems impossible to debug. I've tried handling errors on many levels, including the following in Program.cs: // handle UI thread exceptions Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException); // handle non-UI thread exceptions AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException); Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); // force all windows forms errors to go through our handler Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException); More Notes and What I've Tried... Installed Visual Studio 2008 on the target machine and tried running in debug mode, but the error still occurs, with no hint as to where in source code it occurred. When running the program from its installed version (Release) the error occurs more frequently, usually within minutes of launching the application. When running the program in debug mode inside of VS2008, it can run for hours or days before generating the error. Reinstalled .NET 3.5 and made sure all updates are applied. Broke random cubicle objects in frustration. Rewritten parts of code that deal with threading and downloading in attempts to catch and log exceptions, though logging seemed to aggravate the problem (and never provided any data). Question: What steps can I take to troubleshoot or debug this kind of error? Memory dumps and the like seem to be the next step, but I'm not experienced at interpreting them. Perhaps there's something more I can do in the code to try and catch errors... It would be nice if the "Fatal Execution Engine Error" was more informative, but internet searches have only told me that it's a common error for a lot of .NET-related items.
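    A hedged sketch of the two handlers wired up in the Program.cs snippet above, assuming a Log() helper exists somewhere in the project; note that a Fatal Execution Engine Error is a failure in the runtime itself, so it can bypass managed handlers entirely, which would explain why they stay silent.

        static void Application_ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e)
        {
            Log(e.Exception.ToString());   // exceptions thrown on the UI thread
        }

        static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            Exception ex = e.ExceptionObject as Exception;
            Log(ex != null ? ex.ToString() : e.ExceptionObject.ToString());   // worker-thread exceptions
        }

    When the handlers capture nothing, the usual next step is a native crash dump (ADPlus or DebugDiag attached to the release build) opened in WinDbg with the SOS extension, which shows the managed call stack that was active when the engine faulted.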

    Read the article

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance. Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355. Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side. Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems. Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
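    A hedged sketch of one common redesign, not the actual server code: replace the blocking thread-per-client Socket.Receive loop with chained BeginReceive/EndReceive calls, so 60-70 connected clients no longer pin 60-70 threads, which is one plausible source of the intermittent 100 ms gaps. The type names and the framing logic are placeholders.

        using System.Net.Sockets;

        class ClientState
        {
            public Socket Socket;
            public byte[] Buffer = new byte[8192];
        }

        void StartReceive(ClientState client)
        {
            client.Socket.BeginReceive(client.Buffer, 0, client.Buffer.Length,
                                       SocketFlags.None, OnReceive, client);
        }

        void OnReceive(IAsyncResult ar)
        {
            ClientState client = (ClientState)ar.AsyncState;
            int bytesRead = client.Socket.EndReceive(ar);
            if (bytesRead > 0)
            {
                // parse complete messages out of client.Buffer and enqueue them
                // for the core server logic here, then post the next receive
                StartReceive(client);
            }
            else
            {
                client.Socket.Close();   // zero bytes means the client disconnected
            }
        }

    (SocketAsyncEventArgs, the lower-allocation variant of this pattern, only arrived with .NET 3.5, so the Begin/End pair is the option that fits a .NET 2.0 server.)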

    Read the article

  • ADO.NET Data Services Entity Framework request error when property setter is internal

    - by Jim Straatman
    I receive an error message when exposing an ADO.NET Data Service using an Entity Framework data model that contains an entity (called "Case") with an internal setter on a property. If I modify the setter to be public (using the entity designer), the data services works fine. I don’t need the entity "Case" exposed in the data service, so I tried to limit which entities are exposed using SetEntitySetAccessRule. This didn’t work, and service end point fails with the same error. public static void InitializeService(IDataServiceConfiguration config) { config.SetEntitySetAccessRule("User", EntitySetRights.AllRead); } The error message is reported in a browser when the .svc endpoint is called. It is very generic, and reads “Request Error. The server encountered an error processing the request. See server logs for more details.” Unfortunately, there are no entries in the System and Application event logs. I found this stackoverflow question that shows how to configure tracing on the service. After doing so, the following NullReferenceExceptoin error was reported in the trace log. Does anyone know how to avoid this exception when including an entity with an internal setter? Blockquote 131076 3 0 2 MOTOJIM http://msdn.microsoft.com/en-US/library/System.ServiceModel.Diagnostics.TraceHandledException.aspx Handling an exception. 685a2910-19-128703978432492675 System.NullReferenceException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Object reference not set to an instance of an object. at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMemberMetadata(ResourceType resourceType, MetadataWorkspace workspace, IDictionary2 entitySets, IDictionary2 knownTypes) at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMetadata(IDictionary2 knownTypes, IDictionary2 entitySets) at System.Data.Services.Providers.BaseServiceProvider.PopulateMetadata() at System.Data.Services.DataService1.CreateProvider(Type dataServiceType, Object dataSourceInstance, DataServiceConfiguration&amp; configuration) at System.Data.Services.DataService1.EnsureProviderAndConfigForRequest() at System.Data.Services.DataService1.ProcessRequestForMessage(Stream messageBody) at SyncInvokeProcessRequestForMessage(Object , Object[] , Object[] ) at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]&amp; outputs) at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc&amp; rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc&amp; rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc&amp; rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc&amp; rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc&amp; rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc&amp; rpc) at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet) </StackTrace> <ExceptionString>System.NullReferenceException: Object reference not set to an instance of an object. 
at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMemberMetadata(ResourceType resourceType, MetadataWorkspace workspace, IDictionary2 entitySets, IDictionary2 knownTypes) at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMetadata(IDictionary2 knownTypes, IDictionary2 entitySets) at System.Data.Services.Providers.BaseServiceProvider.P
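    A small hedged sketch of a first diagnostic step: IDataServiceConfiguration exposes UseVerboseErrors, which replaces the generic "Request Error" page with the actual exception text, confirming whether the internal setter on Case is what PopulateMemberMetadata is tripping over. Note that the metadata provider appears (from the stack trace) to walk every entity type in the model regardless of the access rules, which would explain why SetEntitySetAccessRule alone did not make the error go away.

        public static void InitializeService(IDataServiceConfiguration config)
        {
            config.UseVerboseErrors = true;   // put the real error message in the HTTP response
            config.SetEntitySetAccessRule("User", EntitySetRights.AllRead);
        }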

    Read the article

  • SQLite file locking and DropBox

    - by Alex Jenter
    I'm developing an app in Visual C++ that uses an SQLite3 DB for storing data. Usually it sits in the tray most of the time. I also would like to enable putting my app in a DropBox folder to share it across several PCs. It worked really well until DropBox recently updated itself. And now it says that it "can't sync the file in use". The SQLite file is open in my app, but the lock is shared. There are some prepared statements, but all are reset immediately after using step. Is there any way to enable synchronization of an open SQLite database file? Thanks! Here is the simple wrapper that I use just for testing (no error handling), in case this helps: class Statement { private: Statement(sqlite3* db, const std::wstring& sql) : db(db) { sqlite3_prepare16_v2(db, sql.c_str(), sql.length() * sizeof(wchar_t), &stmt, NULL); } public: ~Statement() { sqlite3_finalize(stmt); } public: void reset() { sqlite3_reset(stmt); } int step() { return sqlite3_step(stmt); } int getInt(int i) const { return sqlite3_column_int(stmt, i); } tstring getText(int i) const { const wchar_t* v = (const wchar_t*)sqlite3_column_text16(stmt, i); int sz = sqlite3_column_bytes16(stmt, i) / sizeof(wchar_t); return std::wstring(v, v + sz); } private: friend class Database; sqlite3* db; sqlite3_stmt* stmt; }; class Database { public: Database(const std::wstring& filename = L"") : db(NULL) { sqlite3_open16(filename.c_str(), &db); } ~Database() { sqlite3_close(db); } void exec(const std::wstring& sql) { auto_ptr<Statement> st(prepare(sql)); st->step(); } auto_ptr<Statement> prepare(const tstring& sql) const { return auto_ptr<Statement>(new Statement(db, sql)); } private: sqlite3* db; };

    Read the article

  • Bugs in Excel's ActiveX combo boxes?

    - by k.robinson
    I have noticed that I get all sorts of annoying errors when: I have ActiveX comboboxes on a worksheet (not an excel form) The comboboxes have event code linked to them (eg, onchange events) I use their listfillrange or linkedcell properties (clearing these properties seems to alleviate a lot of problems) (Not sure if this is connected) but there is data validation on the targeted linkedcell. I program a fairly complex excel application that does a ton of event handling and uses a lot of controls. Over the months, I have been trying to deal with a variety of bugs dealing with those combo boxes. I can't recall all the details of each instance now, but these bugs tend to involve pointing the listfillrange and linkedcell properties at named ranges, and often have to do with the combo box events triggering at inappropriate times (such as when application.enableevents = false). These problems seemed to grow bigger in Excel 2007, so that I had to give up on these combo boxes entirely (I now use combo boxes contained in user forms, rather than directly on the sheets). Has anyone else seen similar problems? If so, was there a graceful solution? I have looked around with Google and so far haven't spotted anyone with similar issues. Some of the symptoms I end up seeing are: Excel crashing when I start up (involves combobox_onchange, listfillrange-named range on another different sheet, and workbook_open interactions). (note, I also had some data validation on the the linked cells in case a user edited them directly.) Excel rendering bugs (usually when the combo box changes, some cells from another sheet get randomly drawn over the top of the current sheet) Sometimes it involves the screen flashing entirely to another sheet for a moment. Excel losing its mind (or rather, the call stack) (related to the first bullet point). Sometimes when a function modifies a property of the comboboxes, the combobox onchange event fires, but it never returns control to the function that caused the change in the first place. The combobox_onchange events are triggered even when application.enableevents = false. Events firing when they shouldn't (I posted another question on stack overflow related to this). At this point, I am fairly convinced that ActiveX comboboxes are evil incarnate and not worth the trouble. I have switched to including these comboboxes inside a userform module instead. I would rather inconvenience users with popup forms than random visual artifacts and crashing (with data loss).

    Read the article

  • MSSQL: How to copy a file (pdf, doc, txt...) stored in a varbinary(max) field to a file in a CLR stored procedure

    - by user193655
    I ask this question as a followup of this question. A solution that uses bcp and xp_cmdshell, that is not my desired solution, has been posted here: stackoverflow.com/questions/828749/ms-sql-server-2005-write-varbinary-to-file-system (sorry i cannot post a second hyperlink since my reputation is les than 10). I am new to c# (since I am a Delphi developer) anyway I was able to create a simple CLR stored procedures by following a tutorial. My task is to move a file from the client file system to the server file system (the server can be accessed using remote IP, so I cannot use a shared folder as destination, this is why I need a CLR stored procedure). So I plan to: 1) store from Delphi the file in a varbinary(max) column of a temporary table 2) call the CLR stored procedure to create a file at the desired path using the data contained in the varbinary(max) field Imagine I need to move C:\MyFile.pdf to Z:\MyFile.pdf, where C: is a harddrive on local system and Z: is an harddrive on the server. I provide the code below (not working) that someone can modify to make it work? Here I suppose to have a table called MyTable with two fields: ID (int) and DATA (varbinary(max)). Please note it doesn't make a difference if the table is a real temporary table or just a table where I temporarly store the data. I would appreciate if some exception handling code is there (so that I can manage an "impossible to save file" exception). I would like to be able to write a new file or overwrite the file if already existing. [Microsoft.SqlServer.Server.SqlProcedure] public static void VarbinaryToFile(int TableId) { using (SqlConnection connection = new SqlConnection("context connection=true")) { connection.Open(); SqlCommand command = new SqlCommand("select data from mytable where ID = @TableId", connection); command.Parameters.AddWithValue("@TableId", TableId); // This was the sample code I found to run a query //SqlContext.Pipe.ExecuteAndSend(command); // instead I need something like this (THIS IS META_SYNTAX!!!): SqlContext.Pipe.ResultAsStream.SaveToFile('z:\MyFile.pdf'); } } (one subquestion is: is this approach coorect or there is a way to directly pass the data to the CLR stored procedure so I don't need to use a temp table?) If the subquestion's answer is No, could you describe the approach of avoiding a temp table? So is there a better way then the one I describe above (=temp table + Stored procedure)? A way to directly pass the dataastream from the client application to the CLR stored procedure? (my files can be any size but also very big)
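    A hedged sketch of the missing piece, assuming the assembly is cataloged with PERMISSION_SET = EXTERNAL_ACCESS (file access is not allowed under SAFE): read the varbinary column back as a byte array and write it with System.IO. The hard-coded path and the whole-file buffering are illustration only; very large files would want a chunked loop over SqlDataReader.GetBytes instead, and passing the data directly as a varbinary(max) parameter (SqlBytes on the C# side) would avoid the temporary table altogether.

        [Microsoft.SqlServer.Server.SqlProcedure]
        public static void VarbinaryToFile(int TableId)
        {
            using (SqlConnection connection = new SqlConnection("context connection=true"))
            {
                connection.Open();
                SqlCommand command = new SqlCommand(
                    "select data from mytable where ID = @TableId", connection);
                command.Parameters.AddWithValue("@TableId", TableId);

                try
                {
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        if (reader.Read() && !reader.IsDBNull(0))
                        {
                            byte[] data = (byte[])reader[0];
                            System.IO.File.WriteAllBytes(@"z:\MyFile.pdf", data);   // creates or overwrites the file
                        }
                    }
                }
                catch (System.IO.IOException ex)
                {
                    SqlContext.Pipe.Send("Impossible to save file: " + ex.Message);   // the "impossible to save file" case
                }
            }
        }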

    Read the article

  • Can/should one disable namespace validation in Axis2 clients using ADB databinding?

    - by RJCantrell
    I have an document/literal Axis 1 service which I'm consuming with an Axis 2 client (ADB databinding, generated from WSDL2Java). It receives a valid XML response, but in parsing that XML, I get the error "Unexpected Subelement" for any type which doesn't have a namespace defined in the response. I can resolve the error by manually changing the (auto-generated by Axis2) type-validation line to only check the type name and not the namespace as well, but is there a non-manual way to skip this? This service is consumed properly by other Axis 1 clients. Here's the response, with bits in ALL CAPS having been anonymized. Some types have namespaces, some have empty-string namespaces, and some have none at all. Can I control this via the service's WSDL, or in Axis2's WSDL2Java handling of the WSDL? Is there a fundamental mismatch between Axis 1 and Axis2? The response below looks correct, except perhaps for where that type (anonymized as TWO below) is nested within itself. <?xml version="1.0" encoding="UTF-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soapenv:Body> <SERVICENAMEResponse xmlns="http://service.PROJECT.COMPANY.com"> <SERVICENAMEReturn xmlns=""> <ONE></ONE> <TWO><TWO> <THREE>2</THREE> <FOUR>-10</FOUR> <FIVE>6</FIVE> <SIX>1</SIX> </TWO></TWO> <fileName></fileName> </SERVICENAMEReturn></SERVICENAMEResponse> </soapenv:Body></soapenv:Envelope> I did not have to modify the validation of the "SERVICENAMEResponse" object because it has the correct namespace specified, but I have to so for all its subelements. The manual modification that I can make (one per type) to fix this is to change: if (reader.isStartElement() && new javax.xml.namespace.QName("http://dto.PROJECT.COMPANY.com","ONE").equals(reader.getName())){ to: if (reader.isStartElement() && new javax.xml.namespace.QName("ONE").equals(reader.getName())){

    Read the article

  • mysql never releases memory

    - by Ishu
    I have a production server clocking about 4 million page views per month. The server has got 8GB of RAM and mysql acts as a database. I am facing problems in handling mysql to take this load. I need to restart mysql twice a day to handle this thing. The problem with mysql is that it starts with some particular occupation, the memory consumed by mysql keeps on increasing untill it reaches the maximum it can consume and then mysql stops responding slowly or does not respond at all, which freezes the server. All my tables are indexed properly and there are no long queries. I need some one to help on how to go about debugging what to do here. All my tables are myisam. I have tried configuring the parameters key_buffer etc but to no rescue. Any sort of help is greatly appreciated. Here are some parameters which may help. mysql --version mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1 mysql> show variables; +---------------------------------+------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/ | | bdb_cache_size | 8384512 | | bdb_home | /var/lib/mysql/ | | bdb_log_buffer_size | 262144 | | bdb_logdir | | | bdb_max_lock | 10000 | | bdb_shared_data | OFF | | bdb_tmpdir | /tmp/ | | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_filesystem | binary | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 10 | | datadir | /var/lib/mysql/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | keep_files_on_create | OFF | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | ft_boolean_syntax | + -><()~*:""&| | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | NO | | have_bdb | YES | | have_blackhole_engine | NO | | have_compress | YES | | have_crypt | YES | | have_csv | NO | | have_dynamic_loading | YES | | have_example_engine | NO | | have_federated_engine | NO | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_merge_engine | YES | | have_ndbcluster | NO | | have_openssl | DISABLED | | have_ssl | DISABLED | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | | init_connect | | | init_file | | | init_slave | | | interactive_timeout | 28800 | | join_buffer_size | 131072 | | key_buffer_size | 2621440000 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | lc_time_names | en_US | | license | GPL | | 
local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | ON | | log_bin_trust_function_creators | OFF | | log_error | | | log_queries_not_using_indexes | OFF | | log_slave_updates | OFF | | log_slow_queries | ON | | log_warnings | 1 | | long_query_time | 8 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 0 | | max_allowed_packet | 8388608 | | max_binlog_cache_size | 4294963200 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 400 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 16777216 | | max_insert_delayed_threads | 20 | | max_join_size | 4294967295 | | max_length_for_sort_data | 1024 | | max_prepared_stmt_count | 16382 | | max_relay_log_size | 0 | | max_seeks_for_key | 4294967295 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 4294967295 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 2146435072 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 16777216 | | myisam_stats_method | nulls_unequal | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 2000 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /var/run/mysqld/mysqld.pid | | plugin_dir | | | port | 3306 | | preload_buffer_size | 32768 | | profiling | OFF | | profiling_history_size | 15 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | query_cache_min_res_unit | 4096 | | query_cache_size | 134217728 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 4096 | | read_buffer_size | 2097152 | | read_only | OFF | | read_rnd_buffer_size | 8388608 | | relay_log | | | relay_log_index | | | relay_log_info_file | relay-log.info | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | secure_file_priv | | | server_id | 1 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /var/lib/mysql/mysql.sock | | sort_buffer_size | 2097152 | | sql_big_selects | ON | | sql_mode | | | sql_notes | ON | | sql_warnings | OFF | | ssl_ca | | | ssl_capath | | | ssl_cert | | | ssl_cipher | | | ssl_key | | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | system_time_zone | CST | | table_cache | 256 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 8 | | thread_stack | 196608 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | /tmp/ | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.77-log | | version_bdb | Sleepycat Software: Berkeley DB 4.1.24: (January 29, 2009) | | version_comment | Source distribution | | version_compile_machine | i686 | | version_compile_os | redhat-linux-gnu | | wait_timeout | 28800 | +---------------------------------+------------------------------------------------------------+

    Read the article

  • XStream JavaBeanConverter not serializing properties

    - by Steve Foster
    Hello All, Attempting to use XStream's JavaBeanConverter and running into an issue. Most likely I'm missng something simple, or not understanding XStream's converter handling well enough. @XStreamAlias("test") public class TestObject { private String foo; public String getFoo() { return foo; } public void setFoo(String foo) { this.foo = foo; } } public void test() throws Exception { XStream x = new XStream(new XppDriver()); x.autodetectAnnotations(true); x.processAnnotations(TestObject.class); x.registerConverter(new JavaBeanConverter(x.getMapper())); TestObject o = new TestObject(); o.setFoo("bar"); String xml = x.toXML(o); System.out.println(xml); /* Expecting... <test> <foo>bar</foo> </test> But instead getting... <test> <foo/> </test> */ } I tried adding a trace on the TestObject.getFoo() method and it appears it is being called by XStream, but the data isn't being written to the output stream. After looking at the source for JavaBeanConverter, it looks like my implementation should work, which leads me to believe I haven't configured something correctly during the XStream setup. Am I just missing something simple? Thanks! Edit Also, if it helps, I'm using the following Maven deps for this... <dependency> <groupId>org.apache.servicemix.bundles</groupId> <artifactId>org.apache.servicemix.bundles.xstream</artifactId> <version>1.3_3</version> </dependency> <dependency> <groupId>org.apache.servicemix.bundles</groupId> <artifactId>org.apache.servicemix.bundles.xpp3</artifactId> <version>1.1.4c_3</version> </dependency>

    Read the article

  • Synchronization requirements for FileStream.(Begin/End)(Read/Write)

    - by Doug McClean
    Is the following pattern of multi-threaded calls acceptable to a .Net FileStream? Several threads calling a method like this: ulong offset = whatever; // different for each thread byte[] buffer = new byte[8192]; object state = someState; // unique for each call, hence also for each thread lock(theFile) { theFile.Seek(whatever, SeekOrigin.Begin); IAsyncResult result = theFile.BeginRead(buffer, 0, 8192, AcceptResults, state); } if(result.CompletedSynchronously) { // is it required for us to call AcceptResults ourselves in this case? // or did BeginRead already call it for us, on this thread or another? } Where AcceptResults is: void AcceptResults(IAsyncResult result) { lock(theFile) { int bytesRead = theFile.EndRead(result); // if we guarantee that the offset of the original call was at least 8192 bytes from // the end of the file, and thus all 8192 bytes exist, can the FileStream read still // actually read fewer bytes than that? // either: if(bytesRead != 8192) { Panic("Page read borked"); } // or: // issue a new call to begin read, moving the offsets into the FileStream and // the buffer, and decreasing the requested size of the read to whatever remains of the buffer } } I'm confused because the documentation seems unclear to me. For example, the FileStream class says: Any public static members of this type are thread safe. Any instance members are not guaranteed to be thread safe. But the documentation for BeginRead seems to contemplate having multiple read requests in flight: Multiple simultaneous asynchronous requests render the request completion order uncertain. Are multiple reads permitted to be in flight or not? Writes? Is this the appropriate way to secure the location of the Position of the stream between the call to Seek and the call to BeginRead? Or does that lock need to be held all the way to EndRead, hence only one read or write in flight at a time? I understand that the callback will occur on a different thread, and my handling of state, buffer handle that in a way that would permit multiple in flight reads. Further, does anyone know where in the documentation to find the answers to these questions? Or an article written by someone in the know? I've been searching and can't find anything. Relevant documentation: FileStream class Seek method BeginRead method EndRead IAsyncResult interface
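    A hedged reading of the docs, sketched below: a FileStream has a single file position, and the instance members are the part not guaranteed thread safe, so the Seek and the read that depends on it have to be serialized. The simplest way to guarantee that is to hold the lock for the whole positioned read rather than letting BeginRead/EndRead overlap with other seeks; offsets and buffer size follow the question's own example.

        byte[] buffer = new byte[8192];
        int bytesRead;
        lock (theFile)
        {
            theFile.Seek((long)offset, SeekOrigin.Begin);
            bytesRead = theFile.Read(buffer, 0, buffer.Length);   // synchronous, so nothing else can move Position mid-read
        }
        // hand buffer[0..bytesRead) off to the worker that needs it, outside the lock

    If the reads really need to overlap, one alternative is to give each worker thread its own FileStream over the same file, so no Position is shared between in-flight operations.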

    Read the article

  • Adding UIViewController.view to another view causes orientation problems

    - by Bob Vork
    Short version: I'm alloc/init/retaining a new UIViewController in one UIViewControllers viewDidLoad method, adding the new View to self.view. This usually works, but it seems to mess up orientation change handling of my iPad app. Longer version: I'm building a fairly complex iPad application, involving a lot of views and viewcontrollers. After running into some difficulties adjusting to the device orientation, I made a simple XCode project to figure out what the problem is. Firstly, I have read the Apple Docs on this subject (a small document called "Why won't my UIViewController rotate with the device?"), and while I do believe it has something to do with one of the reasons listed there, I'm not really sure how to fix it. In my test project I have an appDelegate, a rootViewController, and a UISplitViewController with two custom viewControllers. I use a button on the rootViewController to switch to the splitViewController, and from there I can use a button to switch back to the rootViewController. So far everything is great, i.e. all views adjust to the device orientation. However, in the right viewController of the splitViewController, I use the viewDidLoad method to initialize some other viewControllers, and add their views to its own view: self.newViewController = [[UIViewController new] autorelease]; [newViewController.view setBackgroundColor:[UIColor yellowColor]]; [self.view addSubview:newViewController.view]; This is where things go wrong. Somehow, after adding this view, adjusting to device orientation is messy. On startup everything is fine, after I switch to the splitViewController everything is still fine, but as soon as I switch back to the rootViewController it's all over. I have tried (almost) everything regarding retaining and releasing the viewcontroller, but nothing seems to fix it. As you can see from the code above, I have declared the newViewController as a property, but the same happens if I don't. Shouldn't I be adding a ViewController's view to my own view at all? That would really mess up my project, as I have a lot of viewControllers doing all sorts of things. Any help on this would be greatly appreciated...

    Read the article

  • SVN naming convention: repository, branches, tags

    - by LookitsPuck
    Hey all! Just curious what your naming conventions are for the following: Repository name Branches Tags Right now, we're employing the following standards with SVN, but would like to improve on it: Each project has its own repository Each repository has a set of directories: tags, branches, trunk Tags are immutable copies of the the tree (release, beta, rc, etc.) Branches are typically feature branches Trunk is ongoing development (quick additions, bug fixes, etc.) Now, with that said, I'm curious how everyone is not only handling the naming of their repositories, but also their tags and branches. For example, do you employ a camel case structure for the project name? So, if your project is something like Backyard Baseball for Youngins, how do you handle that? backyardBaseballForYoungins backyard_baseball_for_youngins BackyardBaseballForYoungins backyardbaseballforyoungins That seems rather trivial, but it's a question. If you're going with the feature branch paradigm, how do you name your feature branches? After the feature itself in plain English? Some sort of versioning scheme? I.e. say you want to add functionality to the Backyard Baseball app that allows users to add their own statistics. What would you call your branch? {repoName}/branches/user-add-statistics {repoName}/branches/userAddStatistics {repoName}/branches/user_add_statistics etc. Or: {repoName}/branches/1.1.0.1 If you go the version route, how do you correlate the version numbers? It seems that feature branches wouldn't benefit much from a versioning schema, being that 1 developer could be working on the "user add statistics" functionality, and another developer could be working on the "admin add statistics" functionality. How are these do branch versions named? Are they better off being: {repoName}/branches/1.1.0.1 - user add statistics {repoName}/branches/1.1.0.2 - admin add statistics And once they're merged into the trunk, the trunk might increment appropriately? Tags seem like they'd benefit the most from version numbers. With that being said, how are you correlating the versions for your project (whether it be trunk, branch, tag, etc.) with SVN? I.e. how do you, as the developer, know that 1.1.1 has admin add statistics, and user add statistics functionality? How are these descriptive and linked? It'd make sense for tags to have release notes in each tag since they're immutable. But, yeah, what are your SVN policies going forward?

    Read the article

  • My ashx stopped working (Works locally, but not online)

    - by Madi D.
    i have a simple ASHX file that returns pictures upon request (mainly a counter), previously the code simply fetched pre-made pictures(already uploaded and available) and sent them to the requester.. i just modified the code,so it would take a base picture, do some modifications on it, save it to the server, then serve it to the user.. tested it locally and it worked like a charm. however when i uploaded the code to my hosting service (Godaddy..) it didnt work. Can someone point the problem out to me? Note: ASHX worked with me before,so i know the web.config and IIS are handling them properly, however i think i am missing something .. Code: <%@ WebHandler Language="C#" Class="NWEmpPhotoHandler" %> using System; using System.Web; using System.IO; using System.Drawing; using System.Drawing.Imaging; public class NWEmpPhotoHandler : IHttpHandler { public bool IsReusable { get { return true; } } public void ProcessRequest(HttpContext ctx) { try { //Some Code to fetch # of vistors from DB int x = 10,000; // # of Vistors, fetched from DB string numberOfVistors = (1000 + x).ToString(); string filePath = ctx.Server.MapPath("counter.jpg"); Bitmap myBitmap = new Bitmap(filePath); Graphics g = Graphics.FromImage(myBitmap); g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias; StringFormat strFormat = new StringFormat(); g.DrawString(numberOfVistors , new Font("Tahoma", 24), Brushes.Maroon, new RectangleF(55, 82, 500, 500),null); string PathToSave = ctx.Server.MapPath("counter-" + numberOfVistors + ".jpg"); saveJpeg(PathToSave, myBitmap, 100); if (File.Exists(PathToSave)) { ctx.Response.ContentType = "image/jpg"; ctx.Response.WriteFile(PathToSave); //ctx.Response.OutputStream.Write(picByteArray, 0, picByteArray.Length); } } catch (ArgumentException exe) { } } private ImageCodecInfo getEncoderInfo(string mimeType) { // Get image codecs for all image formats ImageCodecInfo[] codecs = ImageCodecInfo.GetImageEncoders(); // Find the correct image codec for (int i = 0; i < codecs.Length; i++) if (codecs[i].MimeType == mimeType) return codecs[i]; return null; } private void saveJpeg(string path, Bitmap img, long quality) { // Encoder parameter for image quality EncoderParameter qualityParam = new EncoderParameter(Encoder.Quality, quality); // Jpeg image codec ImageCodecInfo jpegCodec = this.getEncoderInfo("image/jpeg"); if (jpegCodec == null) return; EncoderParameters encoderParams = new EncoderParameters(1); encoderParams.Param[0] = qualityParam; img.Save(path, jpegCodec, encoderParams); } }
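    A hedged sketch of a variant that tends to behave better on shared hosting: render the counter into memory and write it straight to the response, so the handler never needs write permission on the web folder (saving counter-NNNN.jpg to disk is the most likely thing failing silently inside that empty catch on GoDaddy). numberOfVistors and ctx are the variables already defined in the handler above; the encoder-quality step is dropped here for brevity.

        using (Bitmap myBitmap = new Bitmap(ctx.Server.MapPath("counter.jpg")))
        using (Graphics g = Graphics.FromImage(myBitmap))
        using (MemoryStream ms = new MemoryStream())
        {
            g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
            g.DrawString(numberOfVistors, new Font("Tahoma", 24), Brushes.Maroon,
                         new RectangleF(55, 82, 500, 500));

            // JPEG encoding wants a seekable stream, hence the MemoryStream buffer
            myBitmap.Save(ms, ImageFormat.Jpeg);
            ctx.Response.ContentType = "image/jpeg";
            ms.WriteTo(ctx.Response.OutputStream);
        }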

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C Michael Pilato of the core subversion team posted a mail to the subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible which is why I'm posting this here - to elicit some suggestions and contributions from you, the users of Subversion. Any comments are welcome, and I shall feedback a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on ServerFault to get feedback from the administator side of things too. So, without further ado: Vision The first thing on his "vision statement" is: Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent. There's no need to suggest distributed features for subversion. If you want a DVCS, there should be no ill-feeling if you migrate to Git, Mercurial or Bazaar. As he says, its pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targetting. The vision for Subversion is: Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations. Roadmap Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are: Obliterate Shelve/Checkpoint Repository-dictated Configuration Rename Tracking Improved Merging Improved Tree Conflict Handling Enterprise Authentication Mechanisms Forward History Searching Log Message Templates If anyone has suggestions to add, or comments on these, the subversion community would welcome all of them. Community And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • SerialPort ReadLine() after Thread.Sleep() goes crazy

    - by Mat
    Hi everybody, I've been fighting with this issue for a day and I can't find an answer for it. I am trying to read data from a GPS device through a COM port in Compact Framework C#. I am using the SerialPort class (actually my own ComPort class boxing SerialPort, but it adds only two fields I need, nothing special). Anyway.. I am running a while loop in a separate thread which reads a line from the port, analyzes the NMEA data, prints it, catches all exceptions, and then I Sleep(200) the thread, because I need the CPU for other threads... Without Sleep it works fine, but uses 100% CPU.. When I do use Sleep, after a few minutes the output from the COM port looks like this: GPGSA,A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F GSA,A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F A,A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F ,18,12,271,24,24,05,020,24,14,04,326,25,11,03,023,*76 A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F 3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F 09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F ,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F as you can see, the same message is read a few times but cut. I wonder what I'm doing wrong... My port configuration: port.ReadBufferSize = 4096; port.BaudRate = 4800; port.DataBits = 8; port.Parity = Parity.None; port.StopBits = StopBits.One; port.NewLine = "\r\n"; port.ReadTimeout = 1000; port.ReceivedBytesThreshold = 100000; And my reading function: private void processGps(){ while (!closing) { //reconnect if needed try { string sentence = port.ReadLine(); //here print the sentence //analyze the sentence (this takes some time 50-100ms) } catch (TimeoutException) { Thread.Sleep(0); } catch (IOException ioex) { //handling IO exception (some info on the screen) } Thread.Sleep(200); } } There is some more stuff in this function, like reconnection if the device is lost etc., but it is not called when the GPS is connected properly.. I have tried port.DiscardInBuffer(); after some blocks of code (in TimeoutException, after read..) Did anyone have a similar problem? I really don't know what I'm doing wrong.. The only way to get rid of it is removing the last Sleep... Thanks in advance! Best Regards, Mat
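    A hedged sketch of an alternative that drops the polling loop entirely, assuming DataReceived is available on the Compact Framework build in use: let the event accumulate characters and only hand complete NMEA sentences on, so no Sleep() is needed and the 4096-byte input buffer cannot back up while the thread is asleep (which is what produces the chopped, repeated sentences above). The handler name is a placeholder.

        private StringBuilder lineBuffer = new StringBuilder();

        // wired up once, after the port is configured:
        // port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);
        private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            lineBuffer.Append(port.ReadExisting());

            string pending = lineBuffer.ToString();
            int newline;
            while ((newline = pending.IndexOf("\r\n")) >= 0)
            {
                string sentence = pending.Substring(0, newline);
                pending = pending.Substring(newline + 2);
                // analyze the NMEA sentence here, or enqueue it for a worker thread
            }

            lineBuffer = new StringBuilder(pending);   // keep any partial sentence for the next event
        }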

    Read the article

  • Encoding::UndefinedConversionError from email body

    - by raam86
    using mail for ruby I am getting this message: mail.rb:22:in `encode': "\xC7" from ASCII-8BIT to UTF-8 (Encoding::UndefinedConversionError) from mail.rb:22:in `<main>' If I remove encode I get a message ruby /var/lib/gems/1.9.1/gems/bson-1.7.0/lib/bson/bson_ruby.rb:63:in `rescue in to_utf8_binary': String not valid utf-8: "<div dir=\"ltr\"><div class=\"gmail_quote\">l<br><br><br><div dir=\"ltr\"><div class=\"gmail_quote\"><br><br><br><div dir=\"ltr\"><div class=\"gmail_quote\"><br><br><br><div dir=\"ltr\"><div dir=\"rtl\">\xC7\xE1\xE4\xD5 \xC8\xC7\xE1\xE1\xDB\xC9 \xC7\xE1\xDA\xD1\xC8\xED\xC9</div></div>\r\n</div><br></div>\r\n</div><br></div>\r\n</div><br></div>" (BSON::InvalidStringEncoding) This is my code: require 'mail' require 'mongo' connection = Mongo::Connection.new db = connection.db("DB") db = Mongo::Connection.new.db("DB") newsCollection = db["news"] Mail.defaults do retriever_method :pop3, :address => "pop.gmail.com", :port => 995, :user_name => 'my_username', :password => '*****', :enable_ssl => true end emails = Mail.last #Checks if email is multipart and decods accordingly. Put to extract UTF8 from body plain_part = emails.multipart? ? (emails.text_part ? emails.text_part.body.decoded : nil) : emails.body.decoded html_part = emails.html_part ? emails.html_part.body.decoded : nil mongoMessage = {"date" => emails.date.to_s , "subject" => emails.subject , "body" => plain_part.encode('UTF-8') } msgID = newsCollection.insert(mongoMessage) #add the document to the database and returns it's ID puts msgID For English and Hebrew it works perfectly but it seems gmail is sending arabic with different encoding. Replacing UTF-8 with ASCII-8BIT gives a similar error. I get the same result when using plain_part for plain email messages. I am handling emails from one specific source so I can put html_part with confidence it's not causing the error. To make it extra weird Subject in Arabic is rendered perfectly. What encoding should I use?

    Read the article

  • C# Nested Property Accessing overloading OR Sequential Operator Overloading

    - by Tim
    Hey, I've been searching around for a solution to a tricky problem we're having with our code base. To start, our code resembles the following: class User { int id; int accountId; Account account { get { return Account.Get(accountId); } } } class Account { int accountId; OnlinePresence Presence { get { return OnlinePresence.Get(accountId); } } public static Account Get(int accountId) { // hits a database and gets back our object. } } class OnlinePresence { int accountId; bool isOnline; public static OnlinePresence Get(int accountId) { // hits a database and gets back our object. } } What we're often doing in our code is trying to access the account Presence of a user by doing var presence = user.Account.Presence; The problem with this is that this is actually making two requests to the database. One to get the Account object, and then one to get the Presence object. We could easily knock this down to one request if we did the following : var presence = UserPresence.Get(user.id); This works, but sort of requires developers to have an understanding of the UserPresence class/methods that would be nice to eliminate. I've thought of a couple of cool ways to be able to handle this problem, and was wondering if anyone knows if these are possible, if there are other ways of handling this, or if we just need to think more as we're coding and do the UserPresence.Get instead of using properties. Overload nested accessors. It would be cool if inside the User class I could write some sort of "extension" that would say "any time a User object's Account property's Presence object is being accessed, do this instead". Overload the . operator with knowledge of what comes after. If I could somehow overload the . operator only in situations where the object on the right is also being "dotted" it would be great. Both of these seem like things that could be handled at compile time, but perhaps I'm missing something (would reflection make this difficult?). Am I looking at things completely incorrectly? Is there a way of enforcing this that removes the burden from the user of the business logic? Thanks! Tim
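    For what it's worth, the '.' operator itself cannot be overloaded in C#. A hedged sketch of the usual workaround, layered on the User class from the question, is to expose the cheap path directly on User so callers never need to know about UserPresence:

        class User
        {
            int id;
            int accountId;

            Account account { get { return Account.Get(accountId); } }      // unchanged from the question

            public OnlinePresence Presence
            {
                get { return OnlinePresence.Get(accountId); }               // one database hit: user.Presence
            }
        }

    Callers then write user.Presence, and the fact that it skips the Account round-trip stays an implementation detail; caching the loaded Account on the User instance would be another option if user.Account.Presence has to keep working as well.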

    Read the article

  • Design question for WinForms (C#) app, using Entity Framework

    - by cdotlister
    I am planning on writing a small home budget application for myself, as a learning exercise. I have built my database (SQL Server), and written a small console application to interact with it, and test out scenarios on my database. Once I am happy, my next step would be to start building the application - but I am already wondering what the best/standard design would be. I am planning on using Entity Framework for handling my database entities... then LINQ to SQL/Objects for getting the data, all running under a WinForms (for now) application. My plan (I've never used EF... and most of my development background is Web apps) is to have my database... with Entity Framework in its own project... which has the connection to the database. This project would expose methods such as 'GetAccount()', 'GetAccount(int accountId)', etc. I'd then have a service project that references my EF project. And on top of that, my GUI project, which makes the calls to my service project. But I am stuck. Let's say I have a screen that displays a list of Account types (Debit, Credit, Loan...). Once I have selected one, the next drop-down shows a list of accounts I have that suit that account type. So, my OnChange event on my DropDown on the account type control will make a call to the serviceLayer project, 'GetAccountTypes()', and I would expect back a List<> of Account Types. However, the AccountType object ... what is that? That can't be the AccountType object from my EF project, as my GUI project doesn't have a reference to it. Would I have to have some sort of Shared Library, shared between my GUI and my Service project, with a custom-built AccountType object? The GUI can then expect back a list of these. So my service layer would have a method: public List<AccountType> GetAccountTypes() That would then make a call to a custom method in my EF project, which would probably be the same as the above method, except it returns a list of EF.Data.AccountType (the Entity Framework-generated AccountType object). The method would then have the LINQ code to get the data as I want it. Then my service layer will get that object, and transform it into my custom AccountType object, and return it to the GUI. Does that sound at all like a good plan?
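
    The shared-library idea in the question is the usual answer: a plain AccountType contract that both the GUI and the service project reference, with the service layer mapping the ORM entities onto it. A minimal Java sketch of that split, purely to show the structure (the real project would be C#/EF, and all names here are illustrative):

        import java.util.List;
        import java.util.stream.Collectors;

        // Shared contract: lives in a project both the GUI and the service layer reference.
        class AccountTypeDto {
            final int id;
            final String name;
            AccountTypeDto(int id, String name) { this.id = id; this.name = name; }
        }

        // Stand-in for the ORM-generated entity that only the data project knows about.
        class AccountTypeEntity {
            final int id;
            final String name;
            AccountTypeEntity(int id, String name) { this.id = id; this.name = name; }
        }

        // Service layer: pulls entities from the data project and maps them to the shared DTO,
        // so the GUI never takes a reference to the ORM types.
        class AccountService {
            List<AccountTypeDto> getAccountTypes(List<AccountTypeEntity> fromDataLayer) {
                return fromDataLayer.stream()
                        .map(e -> new AccountTypeDto(e.id, e.name))
                        .collect(Collectors.toList());
            }
        }

        public class LayeringDemo {
            public static void main(String[] args) {
                List<AccountTypeEntity> entities =
                        List.of(new AccountTypeEntity(1, "Debit"), new AccountTypeEntity(2, "Credit"));
                new AccountService().getAccountTypes(entities).forEach(dto -> System.out.println(dto.name));
            }
        }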

    Read the article

  • How do I use .htaccess RewriteRule to change underscores to dashes

    - by soopadoubled
    I'm working on a site, and its CMS used to save new page URLs using the underscore character as a word separator. Despite the fact that Google now treats underscore as a word separator, the SEO powers that be are demanding the site use dashes instead. This is very easy to do within the CMS, and I can of course change all existing URLs saved in the MySQL database that serves the CMS. My problem lies in writing a .htaccess rule that will 301 old-style underscore-separated links to the new-style hyphenated version. I had success using the answers to this Stack Overflow question on other sites, using: RewriteRule ^([^_]*)_([^_]*_.*) $1-$2 [N] RewriteRule ^([^_]*)_([^_]*)$ /$1-$2 [L,R=301] However this CMS site uses a lot of existing rules to produce clean URLs, and I can't get this working in conjunction with the existing rule set. .htaccess currently looks like this: Options FollowSymLinks # RewriteOptions MaxRedirects=50 RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} !^www\.mydomain\.co\.uk$ [NC] RewriteRule (.*) http://www.mydomain.co.uk/$1 [R=301,L] #trailing slash enforcement RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_URI} !# RewriteCond %{REQUEST_URI} !(.*)/$ RewriteRule ^(.*)$ http://www.mydomain.co.uk/$1/ [L,R=301] RewriteRule ^test/([0-9]+)(/)?$ test_htaccess.php?year=$1 [nc] RewriteRule ^index(/)?$ index.php RewriteRule ^department/([^/]*)/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=ecom.details&mode=$1&$2=$3 [nc] RewriteRule ^department/([^/]*)(/)?$ ecom/index.php?action=ecom.details&mode=$1 [nc] RewriteRule ^product/([^/]*)/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=ecom.pdetails&mode=$1&$2=$3 [nc] RewriteRule ^product/([^/]*)(/)?$ ecom/index.php?action=ecom.pdetails&mode=$1 [nc] RewriteRule ^content/([^/]*)(/)?$ ecom/index.php?action=ecom.cdetails&mode=$1 [nc] RewriteRule ([^/]*)/action/([^/]*)/([^/]*)/([^/]*)/([^/]*)(/)?$ $1/index.php?action=$2&mode=$3&$4=$5 [nc] RewriteRule ([^/]*)/action/([^/]*)/([^/]*)(/)?$ $1/index.php?action=$2&mode=$3 [nc] RewriteRule ([^/]*)/action/([^/]*)(/)?$ $1/index.php?action=$2 [nc] RewriteRule ^eaction/([^/]*)/([^/]*)/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=$1&mode=$2&$3=$4 [nc] RewriteRule ^eaction/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=$1&mode=$2 [nc] RewriteRule ^action/([^/]*)/([^/]*)(/)?$ index.php?action=$1&mode=$2 [nc] RewriteRule ^sid/([^/]*)(/)?$ index.php?sid=$1 [nc] ## Error Handling ## #RewriteRule ^error/([^/]*)(/)?$ index.php?action=error&mode=$1 [nc] # ----------------------------------- Content Section ------------------------------ # #RewriteRule ^([^/]*)(/)?$ index.php?action=cms&mode=$1 [nc] RewriteRule ^accessibility(/)?$ index.php?action=cms&mode=accessibility RewriteRule ^terms(/)?$ index.php?action=cms&mode=conditions RewriteRule ^privacy(/)?$ index.php?action=cms&mode=privacy RewriteRule ^memberpoints(/)?$ index.php?action=cms&mode=member_points RewriteRule ^contactus(/)?$ index.php?action=contactus RewriteRule ^sitemap(/)?$ index.php?action=sitemap ErrorDocument 404 /index.php?action=error&mode=content ExpiresDefault "access plus 3 days" All page URLs are in one of the three following formats: http://www.mydomain.com/department/some_page_address/ http://www.mydomain.com/product/some_page_address/ http://www.mydomain.com/content/some_page_address/ I'm sure I am missing something obvious, but at this level my regex and mod_rewrite skills clearly aren't up to par. Any ideas would be greatly appreciated!

    Read the article

  • Custom property editors do not work for request parameters in Spring MVC?

    - by dvd
    Hello, I'm trying to create a multi-action web controller using Spring annotations. This controller will be responsible for adding and removing user profiles and preparing reference data for the JSP page. @Controller public class ManageProfilesController { @InitBinder public void initBinder(WebDataBinder binder) { binder.registerCustomEditor(UserAccount.class,"account", new UserAccountPropertyEditor(userManager)); binder.registerCustomEditor(Profile.class, "profile", new ProfilePropertyEditor(profileManager)); logger.info("Editors registered"); } @RequestMapping("remove") public void up( @RequestParam("account") UserAccount account, @RequestParam("profile") Profile profile) { ... } @RequestMapping("") public ModelAndView defaultView(@RequestParam("account") UserAccount account) { logger.info("Default view handling"); ModelAndView mav = new ModelAndView(); logger.info(account.getLogin()); mav.addObject("account", account); mav.addObject("profiles", profileManager.getProfiles()); mav.setViewName(view); return mav; } ... } Here is the relevant part of my webContext.xml file: <context:component-scan base-package="ru.mirea.rea.webapp.controllers" /> <context:annotation-config/> <bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping"> <property name="mappings"> <value> ... /home/users/manageProfiles=users.manageProfilesController </value> </property> </bean> <bean id="users.manageProfilesController" class="ru.mirea.rea.webapp.controllers.users.ManageProfilesController"> <property name="view" value="home\users\manageProfiles"/> </bean> <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" /> However, when I open the mapped URL, I get an exception: java.lang.IllegalArgumentException: Cannot convert value of type [java.lang.String] to required type [ru.mirea.rea.model.UserAccount]: no matching editors or conversion strategy found I use Spring 2.5.6 and plan to move to Spring 3.0 in the not-too-distant future. However, according to this JIRA https://jira.springsource.org/browse/SPR-4182 it should already be possible in Spring 2.5.1. Debugging shows that the @InitBinder method is called correctly. What am I doing wrong?
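
    For reference, the conversion the binder is being asked to perform looks like the sketch below: a PropertyEditorSupport subclass that turns the raw request-parameter String into the domain object. The lookup is faked here (the poster's editor presumably delegates to userManager), and whether the 2.5.x binder actually consults editors registered with a field name for @RequestParam arguments is the open question, so this only shows the editor side:

        import java.beans.PropertyEditorSupport;

        // Hypothetical domain type standing in for ru.mirea.rea.model.UserAccount.
        class UserAccount {
            final String login;
            UserAccount(String login) { this.login = login; }
            String getLogin() { return login; }
        }

        // Converts the incoming request-parameter String into a UserAccount.
        class UserAccountPropertyEditor extends PropertyEditorSupport {
            @Override
            public void setAsText(String text) {
                // Real code would load the account by id/login (e.g. via userManager); faked for illustration.
                setValue(new UserAccount(text));
            }
        }

        public class EditorDemo {
            public static void main(String[] args) {
                UserAccountPropertyEditor editor = new UserAccountPropertyEditor();
                editor.setAsText("42");
                System.out.println(((UserAccount) editor.getValue()).getLogin());
            }
        }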

    Read the article

  • Singleton: How should it be used

    - by Loki Astari
    Edit: From another question I provided an answer that has links to a lot of questions/answers about singletons: More info about singletons here: So I have read the thread Singletons: good design or a crutch? And the argument still rages. I see Singletons as a Design Pattern (good and bad). The problem with Singleton is not the Pattern but rather the users (sorry everybody). Everybody and their father thinks they can implement one correctly (and from the many interviews I have done, most people can't). Also, because everybody thinks they can implement a correct Singleton, they abuse the Pattern and use it in situations that are not appropriate (replacing global variables with Singletons!). So the main questions that need to be answered are: When should you use a Singleton? How do you implement a Singleton correctly? My hope for this article is that we can collect together in a single place (rather than having to google and search multiple sites) an authoritative source of when (and then how) to use a Singleton correctly. Also appropriate would be a list of anti-usages and common bad implementations, explaining why they fail to work, and, for good implementations, their weaknesses. So, to get the ball rolling: I will hold my hand up and say this is what I use, but it probably has problems. I like Scott Meyers' handling of the subject in his "Effective C++" books. Good Situations to use Singletons (not many): Logging frameworks Thread recycling pools /* * C++ Singleton * Limitation: Single Threaded Design * See: http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf * For problems associated with locking in multi-threaded applications * * Limitation: * If you use this Singleton (A) within a destructor of another Singleton (B) * This Singleton (A) must be fully constructed before the constructor of (B) * is called. */ class MySingleton { private: // Private Constructor MySingleton(); // Stop the compiler generating methods to copy the object MySingleton(MySingleton const& copy); // Not Implemented MySingleton& operator=(MySingleton const& copy); // Not Implemented public: static MySingleton& getInstance() { // The only instance // Guaranteed to be lazy initialized // Guaranteed that it will be destroyed correctly static MySingleton instance; return instance; } }; OK. Let's get some criticism and other implementations together. :-)
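
    Not part of the question, but for comparison with the single-threaded C++ version above, here is a lazily initialized, thread-safe Java variant using the initialization-on-demand holder idiom (the JVM guarantees the nested holder class is initialized exactly once, on first use):

        // Lazy and thread-safe without explicit locking: the Holder class is only
        // initialized when getInstance() is first called, and class initialization
        // happens exactly once per JVM.
        final class MySingleton {
            private MySingleton() { }  // private constructor, as in the C++ version

            private static final class Holder {
                static final MySingleton INSTANCE = new MySingleton();
            }

            static MySingleton getInstance() {
                return Holder.INSTANCE;
            }
        }

        public class SingletonDemo {
            public static void main(String[] args) {
                System.out.println(MySingleton.getInstance() == MySingleton.getInstance()); // true
            }
        }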

    Read the article
