Search Results

Search found 1974 results on 79 pages for 'mfc serialization'.

Page 56 of 79

  • Thoughts about MVC

    - by ayyash
    So I figured this one out. As a newcomer to the web development scene from the telecom business, where we dealt with low-level hardware APIs, and as a C++ fanboy, I tend to be bothered by automagical code. Yes, I appreciate the effort that went into it, and I certainly appreciate the luxury it provides to get more things done, but I just don't get it. So I decided to change that and start investigating the new MVC-based web apps. At first it was like hitting a brick wall. I knew MVC from my MFC days, so I'm familiar with the pattern, but I just couldn't get my head around the web version of it, until I came to realize that routing is actually a separate feature to be inspected on its own, much like understanding how LINQ works by first understanding anonymous objects. So this article serves as an introduction to the following blogs, where I share my views on how ASP.NET routing works, then leverage that at the MVC level and play around in that field for a bit. As with most of my shared knowledge, this may seem trivial to some, but I guess a newcomer's point of view can be useful for some folks out there.

    Read the article

  • What tasks should be explicitly mentioned in a job reference? [closed]

    - by Martin
    Glossary: A job reference (see also the German version) is a letter from a (former) employer that states what the employee did and how well he did it. There are oh-so-weird rules on how to phrase things in it, but that is not what this question is about.

    Question: I hope this can be answered generally, but even if it is country/region specific, I think there is enough international know-how on this site to get useful answers for different regions. I was wondering how much detail about the tasks a programmer/developer did should be spelled out in a job reference. (After all, they can be spelled out in full detail in a CV when applying for a new job.) So how much detail is usual for a job reference?

    Example:

        Developed Windows applications in C++

    or

        Developed Windows desktop applications using C++ with MS Visual Studio 2005 and MFC, utilising Boost 1.47 and specific library xyz, focusing on subsystem abc for numerical calculations of ... etc.

    Which makes more sense?

    Read the article

  • What are Collaboration Data Objects (CDO)?

    - by Pranav
    Collaboration Data Objects, or CDO, is a component that enables messaging between applications. It's something like the MFC we have in VC++, in that it offers a simpler interface than the Win32 API, which, while very robust, still demands a lot of legwork from developers. CDO was primarily built to simplify the creation of messaging applications, and we should keep in mind that CDO is NOT a new messaging model but is BUILT ON the MAPI architecture. It is just an extended interface that collaborates with MAPI and simplifies the programming task at hand for creating messaging applications. CDO replaced Microsoft's earlier Active Messaging. CDO 1.2 lets us work with data, send and receive email, and perform a host of other functions, such as rendering Exchange functionality into HTML. If you've got some first-hand experience, a couple of tips would be great and would definitely further my knowledge base in this area and hopefully give me a more refined understanding. Some pointers on MAPI would be pretty cool.

    Read the article

  • Is Windows 7 written in Xaml? [closed]

    - by Jordan
    Is Windows 7 written using Xaml? Edit: Wow, so much hate for such a little question. I'm not an idiot. I know what XAML is; I use it every day. I was just curious whether some of the visual features were laid out with XAML, or whether XAML was incorporated in some way into the product. I'm sorry if it is a programming sin not to know what Windows 7 was written in. I mean, I know Windows XP et al. used C++/C; I've used Win32 quite a lot (and MFC and .NET).

    Read the article

  • Web technologies on GUI apps

    - by Apalala
    I developed many GUI applications for the Windows platform during my early professional career, and saw several GUI frameworks come, have whole magazines devoted to them, and then fade away. MFC is iconic. Tasked with writing yet another GUI application, I started researching cross-platform frameworks like Qt and WxWindows. I found the same steep learning curves I knew from before, and the tooling doesn't help much in building a functional and elegant user interface because it's clumsy and complicated. But people are building beautiful and functional UIs on the Web all the time (look at this site!). The standards, the libraries, and the tools are certainly there. My thought, and my question: why not write a GUI in which most of the UI is handled by an embedded browser? I already know that Qt widgets support a large part of CSS and JavaScript, and programmers with good knowledge of web development are relatively easy to find, ..., so... Have you done something like that before? What's your experience/advice?

    Read the article

  • Software requirements for replicating a Windows application

    - by gpuguy
    I developed an application using Windows Forms in C++ (IDE MSVC 2010). Some parts of the application also use MFC and OpenCV. I want to send the application to my client for interim testing on his own machine. I have not developed any installer for it, so I will be sending him the .EXE file. I want the client to face no difficulty in replicating the experiment, and thus save his time. Can somebody suggest what software (such as MSVC, .NET Framework, Windows SDK, etc.) should already be installed on the client's machine for successful testing of the application? Note: The OS (Windows 7) and hardware are exactly the same on both sides.

    Read the article

  • Strategies for porting application from Win32 API to GTK+

    - by Vitor Braga
    I have a legacy application written in C, using the raw Win32 API. The general level of abstraction is low, and raw dependency on <windows.h> is common. I would like to port this application to GTK+. Are there any guidelines or best practices on how to do this? I've previously ported an MFC application to Qt, but that application was very abstracted - it drew its own set of widgets, for example - and the initial porting was very straightforward. I've been thinking of first using Wine to build a native Linux executable and then trying to slowly refactor it into a GTK+ app. Does someone have best practices or previous experiences to share about this?

    Read the article

  • My Brother printer doesn't accept quality settings anymore - what can I do?

    - by rearlight
    I have a Brother MFC-465CN network printer. It uses the brother-cups-wrapper-bh7 and brother-lpr-drivers-bh7 drivers. Printing itself works perfectly fine, but I try to print with the "fast normal" quality setting to save some ink. I use LibreOffice and Ubuntu's default PDF viewer to print, and I set the quality manually in the print dialog. "Fast normal" is also the default printing setting in the Ubuntu GUI printing config. But the printer always prints with the "normal" or even "fine" quality setting, which takes forever and uses much more ink. So, what can I do about that? Thanks for your help in advance!

    Read the article

  • Determining an application's dependencies

    - by gpuguy
    I have developed an application using Windows Forms in C++ (IDE MS VC++ 2010). Some parts of the application also use MFC and OpenCV. I want to send the application to my client for interim testing on his own machine. I have not developed any installer for the application, so I will be sending him an .EXE file. I want the client to not face any difficulties in replicating the environment, and therefore not lose any time. Can somebody suggest what software (such as the MS VC++ runtime, .NET Framework, Windows SDK, etc.) should be installed on the client's machine for successful testing of the application? Note: The OS (Windows 7) and hardware are exactly the same on both sides.

    Read the article

  • Should I install ia32-libs or lib32stdc++ or can I just use --force-all for 32-bit printer driver

    - by William
    I have a Brother MFC-class printer. The Brother website only provides 32-bit drivers, but I installed 64-bit Ubuntu. It says I need to install "ia32-libs" or "lib32stdc++" to install the 32-bit drivers on 64-bit Ubuntu. Elsewhere I've read that I don't need to install these packages and can just use --force-all when installing, but I don't know how accurate that information is. My questions: 1) Do I HAVE to install "ia32-libs" or "lib32stdc++", or can I use --force-all to make the 32-bit drivers install on 64-bit Ubuntu? 2) If I do have to install "ia32-libs" or "lib32stdc++", which should I install? Which is better, and which is recommended by Ubuntu experts?

    Read the article

  • Will learning wxPython be worth it in the future? [on hold]

    - by user108437
    As we know, Microsoft has been pushing Windows 8.1, which relies heavily on XAML for app design, and for Windows desktop mode WPF is another framework (which some probably think has failed). In the old days, however, developers wrote Windows forms software using MFC or something similar, where they had to run their own main loop, etc. I have recently come to love Python, and learning Python is certainly worth it, since there is still IronPython out there, which uses .NET. But I am not sure whether also learning wxPython, for building Windows software that does not require .NET, is worth it. I also notice wxPython is somewhat old and still uses Python 2.7, while Python is already at version 3.3; besides that, the books are old, published in 2007, and there seems to be little interest any more in building Windows forms without .NET, because .NET is mostly preinstalled in new Windows versions. So my humble question is: should I learn Python + wxPython, or only Python? Is there any benefit I might not be noticing to being able to write Windows applications that do not use .NET?

    Read the article

  • Visual C++, CMap object save to blob column

    - by ilansch
    I have an MFC CMap object which stores ~160K entries of long data. I need to store it in an Oracle SQL database. We decided to save it as a BLOB, since we do not want to create an additional table. We also thought about saving it as a local file and pointing the SQL column to that file, but we would rather just keep it as a BLOB on the server and clear the table every couple of weeks. The table has a sequential key as its ID and two time columns. I need to add the BLOB column in order to store that CMap on every row. Can you recommend a guide for doing this (reading/writing the map to a BLOB, or maybe a CLOB)? Thanks.

    Read the article

  • What type of things can cause the sgen MSBuild task to fail intermittently with an Access Violation?

    - by Mark Allanson
    In the MSBuild file for our project, we sgen an assembly containing classes used during XML serialization. The classes are generated via xsd.exe. We use the following SGen task configuration:

        <SGen ToolPath="$(SdkPath)" ShouldGenerateSerializer="true" UseProxyTypes="false"
              BuildAssemblyName="AssemblyName.dll" BuildAssemblyPath="Outputs" ContinueOnError="false" />

    Intermittently, the following error is thrown when executing the MSBuild script on our build server. Originally this error might have occurred once out of every 50 (CI) builds; recently the frequency has been increasing, and it now occurs maybe 5-6 times out of every 10 builds. The size of the assembly being sgenned is about 410K (circa 35,000 lines of generated code), and when successful the serialization assembly is about 1.7M in size. When it fails, the output is as follows:

        Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
        E:\Path_ToBuild_Workspace\SolutionBuild.MSBuild(74,5): error MSB6006: "sgen.exe" exited with code -1073741819.

    We are using Hudson to manage our builds, so the MSBuild and sgen processes are spawned by Hudson.exe. There's not much out there on the interwebs regarding this type of error from SGen. Certainly nothing concrete.

    Read the article

  • Why does the OnDeserialization not fire for XML Deserialization?

    - by Jonathan
    I have a problem which I have been bashing my head against for the better part of three hours. I am almost certain that I've missed something blindingly obvious... I have a simple XML file:

        <?xml version="1.0" encoding="utf-8"?>
        <WeightStore xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Records>
            <Record actual="150" date="2010-05-01T00:00:00" />
            <Record actual="155" date="2010-05-02T00:00:00" />
          </Records>
        </WeightStore>

    I have a simple class structure:

        [Serializable]
        public class Record
        {
            [XmlAttribute("actual")]
            public double weight { get; set; }
            [XmlAttribute("date")]
            public DateTime date { get; set; }
            [XmlIgnore]
            public double trend { get; set; }
        }

        [Serializable]
        [XmlRoot("WeightStore")]
        public class SimpleWeightStore
        {
            [XmlArrayAttribute("Records")]
            private List<Record> records = new List<Record>();
            public List<Record> Records { get { return records; } }

            [OnDeserialized()]
            public void OnDeserialized_Method(StreamingContext context)
            {
                // This code never gets called
                Console.WriteLine("OnDeserialized");
            }
        }

    I am using these in both calling code and in the class files:

        using System.Xml.Serialization;
        using System.Runtime.Serialization;

    I have some calling code:

        SimpleWeightStore weight_store_reload = new SimpleWeightStore();
        TextReader reader = new StringReader(xml);
        XmlSerializer deserializer = new XmlSerializer(weight_store.GetType());
        weight_store_reload = (SimpleWeightStore)deserializer.Deserialize(reader);

    The problem is that I am expecting OnDeserialized_Method to get called, and it isn't. I suspect it might have something to do with the fact that it's XML deserialization rather than runtime deserialization, and perhaps I am using the wrong attribute name, but I can't find out what it might be. Any ideas, folks?
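
    One likely explanation, for what it's worth: XmlSerializer does not invoke the serialization callbacks ([OnDeserializing]/[OnDeserialized]); those attributes are honored by the runtime formatters and DataContractSerializer, not by System.Xml.Serialization. A minimal sketch of the usual workaround - run an explicit init step after Deserialize returns - is below; the RecalculateTrends call is a hypothetical stand-in for whatever the [OnDeserialized] method was meant to do.

        // Sketch: XmlSerializer ignores [OnDeserialized], so call the post-load step yourself.
        using System;
        using System.IO;
        using System.Xml.Serialization;

        public static class XmlLoader
        {
            public static T Load<T>(string xml, Action<T> afterLoad)
            {
                var serializer = new XmlSerializer(typeof(T));
                using (var reader = new StringReader(xml))
                {
                    var result = (T)serializer.Deserialize(reader);
                    if (afterLoad != null)
                        afterLoad(result);   // stand-in for the [OnDeserialized] hook
                    return result;
                }
            }
        }

        // Usage (RecalculateTrends is a hypothetical post-load method):
        // var store = XmlLoader.Load<SimpleWeightStore>(xml, s => s.RecalculateTrends());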

    Read the article

  • Implementing Model-level caching

    - by Byron
    I was posting some comments in a related question about MVC caching, and some questions about actual implementation came up. How does one implement a model-level cache that works transparently, without the developer needing to cache manually, yet still remains efficient? "I would keep my caching responsibilities firmly within the model. It is none of the controller's or view's business where the model is getting data. All they care about is that when data is requested, data is provided - this is how the MVC paradigm is supposed to work." (Source: post by Jarrod) The reason I am skeptical is that caching should usually not be done unless there is a real need, and shouldn't be done for things like search results. So somehow the model itself has to know whether or not the SELECT statement being issued to it is worth caching. Wouldn't the model have to be astronomically smart, and/or store statistics about what is most often queried over a long period of time, in order to make that decision accurately? And wouldn't the overhead of all this make the caching useless anyway? Also, how would you uniquely identify one query from another query (or, more accurately, one result set from another)? What about if you're using prepared statements, with only the parameters changing according to user input? Another poster said this: "I would suggest using the md5 hash of your query combined with a serialized version of your input arguments." This would require twice the number of serialization operations. I was under the impression that serialization was quite expensive, and for large inputs this might be even worse than just re-querying. And is the minuscule chance of collision worth worrying about? Conceptually, caching in the model seems like a good idea to me, but it seems that in practice the developer should have direct control over caching and write it into the controller. Thoughts/ideas? Edit: I'm using PHP and MySQL, if that helps to narrow your focus.
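
    The "hash of the query plus serialized arguments" idea quoted above can be sketched quite compactly. The question is about PHP/MySQL; the sketch below uses C# only to make the shape of the key construction concrete, and queryDatabase is a hypothetical stand-in for the real data access call - it illustrates the mechanism, not an answer to the cost question.

        // Minimal sketch: cache key = hash of SQL text + serialized parameters.
        using System;
        using System.Collections.Concurrent;
        using System.Security.Cryptography;
        using System.Text;

        public class QueryCache
        {
            private readonly ConcurrentDictionary<string, object> _cache =
                new ConcurrentDictionary<string, object>();

            public object GetOrAdd(string sql, object[] parameters, Func<object> queryDatabase)
            {
                string key = MakeKey(sql, parameters);
                return _cache.GetOrAdd(key, _ => queryDatabase());
            }

            private static string MakeKey(string sql, object[] parameters)
            {
                // Key = hash of the SQL text plus a cheap serialization of the scalar arguments.
                var sb = new StringBuilder(sql);
                foreach (var p in parameters)
                    sb.Append('|').Append(p);
                using (var md5 = MD5.Create())
                    return Convert.ToBase64String(
                        md5.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString())));
            }
        }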

    Read the article

  • Serializing QGraphicsScene contents

    - by Rob
    I am using the Qt QGraphicsScene class, adding pre-defined items such as QGraphicsRectItem, QGraphicsLineItem, etc., and I want to serialize the scene contents to disk. However, the base QGraphicsItem class (that the other items I use derive from) doesn't support serialization, so I need to roll my own code. The problem is that all access to these objects is via a base QGraphicsItem pointer, so the serialization code I have is horrible:

        QGraphicsScene* scene = new QGraphicsScene;
        scene->addRect(QRectF(0, 0, 100, 100));
        scene->addLine(QLineF(0, 0, 100, 100));
        ...
        QList<QGraphicsItem*> list = scene->items();
        foreach (QGraphicsItem* item, items) {
            if (item->type() == QGraphicsRectItem::Type) {
                QGraphicsRectItem* rect = qgraphicsitem_cast<QGraphicsRectItem*>(item);
                // Access QGraphicsRectItem members here
            } else if (item->type() == QGraphicsLineItem::Type) {
                QGraphicsLineItem* line = qgraphicsitem_cast<QGraphicsLineItem*>(item);
                // Access QGraphicsLineItem members here
            }
            ...
        }

    This is not good code IMHO. So, instead I could create an ABC class like this:

        class Item {
        public:
            virtual void serialize(QDataStream& strm, int version) = 0;
        };

        class Rect : public QGraphicsRectItem, public Item {
        public:
            void serialize(QDataStream& strm, int version) {
                // Serialize this object
            }
            ...
        };

    I can then add Rect objects using QGraphicsScene::addItem(new Rect(...)); but this doesn't really help me, as the following will crash:

        QList<QGraphicsItem*> list = scene->items();
        foreach (QGraphicsItem* item, items) {
            Item* myitem = reinterpret_cast<Item*>(item);
            myitem->serialize(...); // FAIL
        }

    Any way I can make this work?

    Read the article

  • Reflecting over classes in .NET produces methods only differing by a modifier

    - by mrjoltcola
    I'm a bit boggled by something; I hope the CLR gearheads can help. Apparently my gears aren't big enough. I have a reflector utility that generates assembly stubs for Cola for .NET, and I find classes have methods that differ only by a modifier, such as virtual. Example below, from Oracle.DataAccess.dll, method GetType():

        class OracleTypeException : System.SystemException {
            virtual string ToString ();
            virtual System.Exception GetBaseException ();
            virtual void set_Source (string value);
            virtual void GetObjectData (System.Runtime.Serialization.SerializationInfo info,
                                        System.Runtime.Serialization.StreamingContext context);
            virtual System.Type GetType (); // here
            virtual bool Equals (object obj);
            virtual int32 GetHashCode ();
            System.Type GetType (); // and here
        }

    What is this? I have not been able to reproduce it with C#, and it causes trouble for Cola, which thinks GetType() is a redefinition, since the signature is identical. My method reflector starts like this:

        static void DisplayMethod(MethodInfo m)
        {
            if (
                // Filter out things Cola cannot yet import, like generics, pointers, etc.
                m.IsGenericMethodDefinition || m.ContainsGenericParameters || m.ReturnType.IsGenericType ||
                !m.ReturnType.IsPublic || m.ReturnType.IsArray || m.ReturnType.IsPointer ||
                m.ReturnType.IsByRef || m.ReturnType.IsMarshalByRef || m.ReturnType.IsImport
            )
                return;
            // generate stub signature
            // [snipped]
        }
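
    For what it's worth, the duplicate most likely comes from member hiding rather than overriding: System.Exception declares its own public GetType() (hiding Object.GetType()), so reflecting over the type returns two methods with identical signatures but different DeclaringTypes. Below is a hedged sketch of one way a stub generator could collapse such pairs - group by name plus parameter types and keep the declaration closest to the reflected type. The grouping key and the Depth helper are only illustrative.

        // Sketch: de-duplicate reflected methods that differ only because a derived type
        // hides a base member (e.g. Exception.GetType() hiding Object.GetType()).
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        static class StubHelper
        {
            // Distance from the root of the inheritance chain; larger means more derived.
            static int Depth(Type t)
            {
                int d = 0;
                for (var cur = t; cur != null; cur = cur.BaseType) d++;
                return d;
            }

            public static IEnumerable<MethodInfo> DistinctBySignature(Type type)
            {
                return type.GetMethods(BindingFlags.Public | BindingFlags.Instance)
                    .GroupBy(m => m.Name + "|" + string.Join(",",
                        m.GetParameters().Select(p => p.ParameterType.FullName).ToArray()))
                    .Select(g => g.OrderByDescending(m => Depth(m.DeclaringType)).First());
            }
        }

        // Usage: feed each surviving MethodInfo to DisplayMethod.
        // foreach (var m in StubHelper.DistinctBySignature(typeof(OracleTypeException))) DisplayMethod(m);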

    Read the article

  • Rtti data manipulation and consistency in Delphi 2010

    - by Coco
    Has anyone an idea how I can make TValue use a reference to the original data? In my serialization project, I use (as suggested in XML Serialization) a generic serializer which stores TValues in an internal tree structure (similar to the MemberMap in the example). This member tree should also be used to create a dynamic setup form and manipulate the data. My idea was to define a property for the data:

        TDataModel<T> = class
          {...}
        private
          FData : TValue;
          function GetData : T;
          procedure SetData (Value : T);
        public
          property Data : T read GetData write SetData;
        end;

    The implementation of the GetData/SetData methods:

        procedure TDataModel<T>.SetData (Value : T);
        begin
          FData := TValue.From<T> (Value);
        end;

        function TDataModel<T>.GetData : T;
        begin
          Result := FData.AsType<T>;
        end;

    Unfortunately, the TValue.From method always makes a copy of the original data. So whenever the application makes changes to the data, the DataModel is not updated, and vice versa: if I change my DataModel in a dynamic form, the original data is not affected. Sure, I could always use the Data property before and after changing anything, but as I use a lot of Rtti inside my DataModel, I do not really want to do this every time. Perhaps someone has a better suggestion?

    Read the article

  • How to efficiently convert String, integer, double, datetime to binary and vice versa?

    - by Ben
    Hi, I'm quite new to C# (I'm using .NET 4.0), so please bear with me. I need to save some object properties (their properties are int, double, String, boolean, DateTime) to a file. But I want to encrypt the files using my own encryption, so I can't just use FileStream to convert to binary. Also, I don't want to use object serialization, because of performance issues. The idea is simple: first I need to somehow convert objects (their properties) to a binary array, then encrypt the array (some sort of XOR) and append it to the end of the file. When reading, first decrypt the array and then somehow convert the binary array back to object properties (from which I'll generate the objects). I know (roughly) how to convert these things by hand and I could code it, but it would be useless (too slow). I think the best way would be just to get the properties' representation in memory and save that, but I don't know how to do that in C# (maybe using pointers?). I also thought about using MemoryStream, but again I think it would be inefficient. I am thinking about the Convert class, but it does not support ToByte(DateTime) (the documentation says it always throws an exception). For converting back, I think the only option is the Convert class. Note: I know the structure of the objects and they will not change; the maximum String length is also known. Thank you for all your ideas and time. EDIT: I will be storing only parts of objects, and in some cases parts of different objects (a couple of properties from one object type and a couple from another), thus I think serialization is not an option for me.
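
    A minimal sketch of one way to do this, assuming the fixed field layout mentioned above: BinaryWriter/BinaryReader over a MemoryStream handle int, double, bool and (length-prefixed) string directly, and DateTime round-trips through ToBinary()/FromBinary() as a long, which sidesteps the Convert.ToByte(DateTime) problem. The field names here are only illustrative, and the XOR step would be applied to the returned byte array.

        using System;
        using System.IO;

        static class RecordCodec
        {
            public static byte[] Pack(int id, double value, bool flag, string name, DateTime stamp)
            {
                using (var ms = new MemoryStream())
                using (var w = new BinaryWriter(ms))
                {
                    w.Write(id);
                    w.Write(value);
                    w.Write(flag);
                    w.Write(name);                 // length-prefixed string
                    w.Write(stamp.ToBinary());     // DateTime stored as a long
                    w.Flush();
                    return ms.ToArray();           // XOR "encryption" would go here
                }
            }

            public static void Unpack(byte[] data, out int id, out double value,
                                      out bool flag, out string name, out DateTime stamp)
            {
                using (var r = new BinaryReader(new MemoryStream(data)))
                {
                    id = r.ReadInt32();
                    value = r.ReadDouble();
                    flag = r.ReadBoolean();
                    name = r.ReadString();
                    stamp = DateTime.FromBinary(r.ReadInt64());
                }
            }
        }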

    Read the article

  • C++ iterators, default initialization and what to use as an uninitialized sentinel.

    - by Hassan Syed
    The Context: I have a custom template container class put together from a map and a vector. The map resolves a string to an ordinal, and the vector resolves an ordinal to the entry (only the initial string-to-ordinal lookup goes through the map; future references go to the vector). The entries are modified intrusively to contain a bool "assigned" and an iterator_type which is a const_iterator into the container class's map. My container class will use RCF's serialization code (which models boost::serialization) to serialize my container classes to nodes in a network. Serializing iterators is not possible, or a can of worms, and I can easily regenerate them once the vectors and maps are serialized on the remote site.

    The Question: I need to default-initialize the iterator, and be able to test that it has not been assigned to (if it is assigned it is valid; if not, it is invalid). Since map iterators are not invalidated by operations performed on the map (unless, of course, items are removed :D), am I to assume that map<x,y>::end() is a valid sentinel (regardless of the state of the map -- i.e., it could be empty) to initialize to? I will always have access to the parent map; I'm just unsure whether end() stays the same as the map contents change. I don't want to use another level of indirection (--i.e., boost::optional) to achieve my goal; I'd rather forgo compiler checks and rely on correct logic, but it would be nice if I didn't need to.

    Misc: This question exists, but most of its content seems to be nonsense. Assigning NULL to an iterator is invalid according to g++ and clang++. This is another similar question, but it focuses on the common use cases of iterators, which generally tend to be using the iterator to iterate; of course, in that use case the state of the container isn't meant to change while iteration is going on.

    Read the article

  • Blocking error in celery

    - by dmitry
    I have no idea what this is. Python 2.7 + django-1.5.1 + httpd + rabbitmq + django-celery==3.0.17. Tasks are not executed because of some error. Below is celery's log. Maybe someone has faced it before.

        [2013-06-24 17:10:03,792: CRITICAL/MainProcess] Can't decode message body: AttributeError("'JoinInfo' object has no attribute '__dict__'",) (type:u'application/x-python-serialize' encoding:u'binary' raw:'\'\\x80\\x02}q\\x01(U\\x07expiresq\\x02NU\\x03utcq\\x03\\x88U\\x04argsq\\x04cdjango.contrib.auth.models\\nUser\\nq\\x05)\\x81q\\x06}q\\x07(U\\x08usernameq\\x08X\\x19\\x00\\x00\\[email protected]\\nfirst_nameq\\tX\\x05\\x00\\x00\\x00BibbyU\\tlast_nameq\\nX\\x08\\x00\\x00\\x00OffshoreU\\r_client_cacheq\\x0bccopy_reg\\n_reconstructor\\nq\\x0ccbongoregistration.models\\nClient\\nq\\rc__builtin__\\nobject\\nq\\x0eN\\x87Rq\\x0f}q\\x10(h\\nX\\x08\\x00\\x00\\x00OffshoreU\\x1bpurchase_confirmation_emailq\\x11X\\x1f\\x00\\x00\\[email protected]\\x1dpurchase_confirmation_email_1q\\x12X!\\x00\\x00\\[email protected]\\x06_stateq\\x13cdjango.db.models.base\\nModelState\\nq\\x14)\\x81q\\x15}q\\x16(U\\x06addingq\\x17\\x89U\\x02dbq\\x18U\\x07defaultq\\x19ubU\\x0buser_ptr_idq\\x1aJ\\xb4\\xa2\\x03\\x00U\\x08is_staffq\\x1b\\x89U\\x08postcodeq\\x1cX\\x08\\x00\\x00\\x00AB11 5BSU\\x0cdegree_limitq\\x1dK\\x06U\\x07messageq\\x1eX\\xd1E\\x00\\x00<table id="container" style="margin: 0px; padding: 0px; width: 100%; background-color: #ffffff;" cellspacing="0" cellpadding="0"... (22911b)'')
        Traceback (most recent call last):
          File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/messaging.py", line 556, in _receive_callback
            decoded = None if on_m else message.decode()
          File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/transport/base.py", line 147, in decode
            self.content_encoding, accept=self.accept)
          File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/serialization.py", line 187, in decode
            return decode(data)
          File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/serialization.py", line 74, in pickle_loads
            return load(BytesIO(s))
        AttributeError: 'JoinInfo' object has no attribute '__dict__'

    Read the article

  • Block cascade JSON serialization?

    - by CrazyJoe
    I have this class:

        public class User
        {
            public string id { get; set; }
            public string name { get; set; }
            public string password { get; set; }
            public string email { get; set; }
            public bool is_broker { get; set; }
            public string branch_id { get; set; }
            public string created_at { get; set; }
            public string updated_at { get; set; }
            public UserGroup UserGroup { get; set; }
            public UserAddress UserAddress { get; set; }
            public List<UserContact> UserContact { get; set; }

            public User()
            {
                UserGroup = new UserGroup();
                UserAddress = new UserAddress();
                UserContact = new List<UserContact>();
            }
        }

    I would like to serialize only the simple properties; how do I block serialization of UserGroup, UserAddress, and UserContact? This is my serialization function:

        public static string Serealize<T>(T obj)
        {
            System.Runtime.Serialization.Json.DataContractJsonSerializer serializer =
                new System.Runtime.Serialization.Json.DataContractJsonSerializer(obj.GetType());
            MemoryStream ms = new MemoryStream();
            serializer.WriteObject(ms, obj);
            return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length);
        }
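
    A minimal sketch, assuming DataContractJsonSerializer stays as the serializer: marking the navigation properties with [IgnoreDataMember] (from System.Runtime.Serialization) keeps them out of the JSON while the plain scalar properties still serialize. This applies to POCO types that are not decorated with [DataContract]; the empty stub classes below only stand in for the question's own types.

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        public class UserGroup { }     // stubs standing in for the question's types
        public class UserAddress { }
        public class UserContact { }

        public class User
        {
            public string id { get; set; }
            public string name { get; set; }
            public string email { get; set; }
            // ... the other scalar properties ...

            [IgnoreDataMember]
            public UserGroup UserGroup { get; set; }

            [IgnoreDataMember]
            public UserAddress UserAddress { get; set; }

            [IgnoreDataMember]
            public List<UserContact> UserContact { get; set; }
        }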

    Read the article

  • Handling very large lists of objects without paging?

    - by user246114
    Hi, I have a class which can contain many small elements in a list. It looks like:

        public class Farm {
            private ArrayList<Horse> mHorses;
        }

    I'm just wondering what will happen if the mHorses array grows to something crazy like 15,000 elements. I'm assuming that trying to write and read this from the datastore would be crazy, because I'd get killed on the serialization process. It's important that I can get the entire array in one shot without paging, and each Horse element only has two string properties in it, so they are pretty lightweight:

        public class Horse {
            private String mId;
            private String mName;
        }

    I don't need these horses indexed at all. Does it sound reasonable to just store the mHorses array as a raw Text field and force my clients to do the deserialization? Something like:

        public class Farm {
            private Text mHorsesSerialized;
        }

    Then whenever the client receives a Farm instance, it has to take the raw string of horses and split it in order to reinstantiate the list, something like:

        // GWT client perhaps
        Farm farm = rpcCall.getMyFarm();
        String horsesSerialized = farm.getHorses();
        String[] horseBlocks = horsesSerialized.split(",");
        for (int i = 0; i < horseBlocks.length; i++) {
            // .. continue deserializing the individual objects ...
        }

    Yeah... so hopefully it would be quick to read a Farm instance from the datastore, and the serialization penalty is paid by the client. Thanks

    Read the article

  • Which options do I have for Java process communication?

    - by Dmitriy Matveev
    We have a place in the code of this form:

        void processParam(Object param) {
            wrapperForComplexNativeObject result = jniCallWhichMayCrash(param);
            processResult(result);
        }

    processParam - a method which is called with many different arguments.
    jniCallWhichMayCrash - a native method which is intended to do some complex processing of its parameter and to create some complex object. It can crash in some cases.
    wrapperForComplexNativeObject - a wrapper type generated by SWIG.
    processResult - a method written in pure Java which processes its parameter by creating several kinds of objects (by "kinds" I don't mean classes, more like hierarchies):
    1 - Some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created from invocations of processParam() with different parameter values. Since it's costly to keep all the duplicates, it's necessary to cache them.
    2 - Some unique objects which reference each other (from the same hierarchy) and some of the objects of the 1st kind.
    After processParam is executed for each of the arguments from some set, the data created in processResult will be processed together. The problem is that the jniCallWhichMayCrash method may crash the entire JVM, and this would be very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore crashes inside the JVM and just skip some chunks of data when such crashes occur. In order to do this, we should run the processParam function inside a separate process and pass the result somehow (HOW? HOW?! This is the question) to the main process; in case of any crashes we will only lose some part of the data (that's OK) without losing everything else. So for now the main problem is the implementation of transport between the different processes. Which options do I have? I can think of serializing and transmitting binary data over streams, but serialization may not be very fast due to object complexity. Maybe I have some other options for implementing this?

    Read the article

  • Advice on Minimizing Stored Procedure Parameters

    - by RPM1984
    Hi guys, I have an ASP.NET MVC web application that interacts with a SQL Server 2008 database via Entity Framework 4.0. On a particular page, I call a stored procedure in order to pull back some results based on selections in the UI. Now, the UI has around 20 different input selections, ranging from a textbox to dropdown lists, checkboxes, etc. Each of those inputs is "grouped" into logical sections. Example:

        Search box: "Foo"
        Checkbox A1: ticked, Checkbox A2: unticked
        Dropdown A: option 3 selected
        Checkbox B1: ticked, Checkbox B2: ticked, Checkbox B3: unticked

    So I need to call the sproc like this:

        exec SearchPage_FindResults @SearchQuery = 'Foo', @IncludeA1 = 1, @IncludeA2 = 0,
            @DropDownSelection = 3, @IncludeB1 = 1, @IncludeB2 = 1, @IncludeB3 = 0

    The UI is not too important to this question - I just wanted to give some perspective. Essentially, I'm pulling back results for a search query, filtering those results based on a bunch of (optional) selections the user can filter on. Now, my questions/queries: What's the best way to pass these parameters to the stored procedure? Are there any tricks/new ways (e.g. SQL Server 2008) to do this - special "table" parameters/arrays; can we pass through user-defined types? Keep in mind I'm using Entity Framework 4.0, but I could always use classic ADO.NET for this if required. What about XML - what are the serialization/de-serialization costs here, and is it worth it? How about a parameter for each logical section, comma-separated perhaps? Just thinking out loud. This page is particularly important from a user point of view and needs to perform really well. The stored procedure is already heavy in logic, so I want to minimize the performance implications - keep that in mind. With that said, what is the best approach here?
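
    On the table-valued parameter option specifically: SQL Server 2008 lets you create a user-defined table type and pass a DataTable from ADO.NET with SqlDbType.Structured. A hedged sketch is below - the table type name, column names and parameter shape are hypothetical, and the stored procedure would need a matching READONLY table parameter. It only shows the plumbing, not a recommendation over plain scalar parameters.

        using System.Data;
        using System.Data.SqlClient;

        static class SearchGateway
        {
            // Assumes conn is already open; "dbo.CheckboxSelections" is a hypothetical
            // user-defined table type: CREATE TYPE with (FlagName nvarchar, IsChecked bit).
            public static void FindResults(SqlConnection conn, string searchQuery, bool[] includeFlags)
            {
                var selections = new DataTable();
                selections.Columns.Add("FlagName", typeof(string));
                selections.Columns.Add("IsChecked", typeof(bool));
                for (int i = 0; i < includeFlags.Length; i++)
                    selections.Rows.Add("Flag" + i, includeFlags[i]);

                using (var cmd = new SqlCommand("SearchPage_FindResults", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@SearchQuery", searchQuery);

                    var tvp = cmd.Parameters.AddWithValue("@Selections", selections);
                    tvp.SqlDbType = SqlDbType.Structured;
                    tvp.TypeName = "dbo.CheckboxSelections";   // hypothetical table type

                    using (var reader = cmd.ExecuteReader())
                    {
                        // read the result set here
                    }
                }
            }
        }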

    Read the article
