Search Results

Search found 13526 results on 542 pages for 'distributed objects'.


  • LINQ to Objects GroupBy() by object and Sum() by amount

    - by Daniil Harik
    Hello, I have a pretty simple case which I started solving using foreach(), but then I thought I could do it using LINQ. Basically I have an IList that contains PaymentTransaction objects with two properties, Dealer and Amount. I want to GroupBy() by Dealer and Sum() by Amount. I tried to accomplish this using the following code, but unfortunately it does not work: var test = paymentTransactionDao.GetAll().GroupBy(x => x.Dealer).Sum(x => x.Amount); What exactly am I doing wrong here? I'm sorry if this question is too simple. Thank you
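
    For reference, a minimal sketch of one way to shape this query, assuming GetAll() returns the IList of PaymentTransaction objects described above: GroupBy() yields groups, so the Sum() has to be taken per group rather than on the grouped sequence itself.

        // requires using System.Linq; assumes Amount is a numeric property
        var totals = paymentTransactionDao.GetAll()
            .GroupBy(x => x.Dealer)
            .Select(g => new { Dealer = g.Key, Total = g.Sum(x => x.Amount) });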

    Read the article

  • Ninject - initialise objects

    - by James Lin
    Hi guys, I am new to Ninject, and I am wondering how I can run custom initialization code when constructing the injected objects. That is, I have a Sword class which implements IWeapon, but I want to pass a hit point value to the Sword class constructor; how do I achieve that? Do I need to write my own provider? A minor question: in IKernel kernel = new StandardKernel(new Module1(), new Module2(), ...); what is the actual use of having multiple modules in the kernel? I sort of understand it, but could someone give me a formal explanation and use case? Thanks a lot! James
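
    For reference, a sketch of passing a constructor value without a custom provider, assuming Sword exposes a Sword(int hitPoints) constructor; the module name and the value 30 are hypothetical, and the string must match the constructor parameter name:

        // requires using Ninject.Modules;
        public class WeaponModule : NinjectModule
        {
            public override void Load()
            {
                // binds IWeapon to Sword and supplies the hitPoints constructor argument
                Bind<IWeapon>().To<Sword>()
                    .WithConstructorArgument("hitPoints", 30);
            }
        }

    As for multiple modules: they are simply a way of grouping related bindings by area. The kernel merges the bindings of every module passed to it, so Module1 and Module2 can each own the bindings for one part of the application.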

    Read the article

  • Objective-C: Reset tableview loaded with fetched objects (Core Data)

    - by the1nz4ne
    Hi, I have a tableview application loaded with Core Data fetched objects and I want to know if it is possible to reset the table with a simple button. Thanks. My code to add an object:

        NSManagedObjectContext *context = [fetchedResultsController managedObjectContext];
        NSEntityDescription *entity = [[fetchedResultsController fetchRequest] entity];
        NSManagedObject *newManagedObject = [NSEntityDescription insertNewObjectForEntityForName:[entity name] inManagedObjectContext:context];
        [newManagedObject setValue:string forKey:@"timeStamp"];

    My code to delete (one) object:

        NSManagedObjectContext *context = [fetchedResultsController managedObjectContext];
        [context deleteObject:[fetchedResultsController objectAtIndexPath:indexPath]];

    I want a button that resets the tableview and deletes everything. Thanks

    Read the article

  • Initialization of objects in C++

    - by Happy Mittal
    I want to know when the initialization of objects takes place in C++: at compile time or link time? For example:

        // file1.cpp
        extern int i;
        int j = 5;

        // file2.cpp (linked with file1.cpp)
        extern int j;
        int i = 10;

    Now, what does the compiler do? As I understand it, it allocates storage for the variables. What I want to know is: does it also put the initialization value into that storage, or is that done at link time?

    Read the article

  • Distributed computing for a company? Is there such a 'free' thing?

    - by Jakub
    I am new to the whole distributed computing / cloud thing, but I had an idea at work for our multimedia tasks like movie encoding and other CPU-intensive jobs (which sometimes take a few hours). Is there a 'free' (Linux?) way to use a Windows machine and offload those CPU cycles to, say, 10 servers that are generally idle (CPU-wise)? I'm just curious whether there is a way to do this or whether I am grasping at straws here. My thought is that a 'cloud' setup would achieve this, but as I stated initially, I am a total newbie when it comes to it. This is just an idea and I am looking for some thoughts. Has anyone achieved this?

    Read the article

  • When is LINQ (to objects) Overused?

    - by Mystagogue
    My career started as a hard-core functional-paradigm developer (LISP), and now I'm a hard-core .NET/C# developer. Of course I'm enamored with LINQ. However, I also believe in (1) using the right tool for the job and (2) preserving the KISS principle: of the 60+ engineers I work with, perhaps only 20% have hours of LINQ / functional-paradigm experience, and 5% have 6 to 12 months of such experience. In short, I feel compelled to stay away from LINQ unless I'm hampered in achieving a goal without it (wherein replacing 3 lines of O-O code with one line of LINQ is not a "goal"). But now one of the engineers, having 12 months of LINQ / functional-paradigm experience, is using LINQ to Objects, or at least lambda expressions anyway, in every conceivable location in production code. My various appeals to the KISS principle have not yielded any results. Therefore... What published studies can I next appeal to? What "coding standard" guidelines have others concocted with some success? Are there published LINQ performance issues I could point out? In short, I'm trying to achieve my first goal, KISS, by indirect persuasion. Of course this problem could be extended to countless other areas (such as overuse of extension methods). Perhaps there is an "uber" guide, highly regarded (e.g. published studies, etc.), that takes a broader swing at this. Anything?
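
    For concreteness, the trade-off under debate is usually of this shape (customers is a hypothetical list; both versions are correct):

        // imperative version
        var names = new List<string>();
        foreach (var c in customers)
            if (c.IsActive)
                names.Add(c.Name);

        // LINQ version (requires using System.Linq)
        var names2 = customers.Where(c => c.IsActive).Select(c => c.Name).ToList();

    The KISS argument is about which of these the whole team can read and debug, not about which is shorter.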

    Read the article

  • Qt 4.6 Adding objects and sub-objects to QWebView window object (C++ & JavaScript)

    - by Cor
    I am working with Qt's QWebView, and have been finding lots of great uses for adding to the webkit window object. One thing I would like to do is nested objects. For instance, in JavaScript I can do:

        var api = new Object;
        api.os = new Object;
        api.os.foo = function(){}
        api.window = new Object();
        api.window.bar = function(){}

    Obviously in most cases this would be done through a more OO js-framework. This results in a tidy structure of:

        >>> api
        -------------------------------------------------------
        - api Object {os=Object, more... }
          - os Object {}
              foo function()
          - win Object {}
              bar function()
        -------------------------------------------------------

    Right now I'm able to extend the window object with all of the Qt/C++ methods and signals I need, but they all seem to have to be root children of "window". This is forcing me to write a js wrapper object to get the hierarchy that I want in the DOM:

        >>> api
        -------------------------------------------------------
        - api Object {os=function, more... }
          - os_foo function()
          - win_bar function()
        -------------------------------------------------------

    This is a pretty simplified example... I want objects for parameters, etc. Does anyone know of a way to pass a child object with the object that extends the WebFrame's window object? Here's some example code of how I'm adding the object:

    mainwindow.h

        #ifndef MAINWINDOW_H
        #define MAINWINDOW_H

        #include <QtGui/QMainWindow>
        #include <QWebFrame>
        #include "happyapi.h"

        class QWebView;
        class QWebFrame;
        QT_BEGIN_NAMESPACE

        class MainWindow : public QMainWindow
        {
            Q_OBJECT

        public:
            MainWindow(QWidget *parent = 0);

        private slots:
            void attachWindowObject();
            void bluesBros();

        private:
            QWebView *view;
            HappyApi *api;
            QWebFrame *frame;
        };

        #endif // MAINWINDOW_H

    mainwindow.cpp

        #include <QDebug>
        #include <QtGui>
        #include <QWebView>
        #include <QWebPage>
        #include "mainwindow.h"
        #include "happyapi.h"

        MainWindow::MainWindow(QWidget *parent)
            : QMainWindow(parent)
        {
            view = new QWebView(this);
            view->load(QUrl("file:///Q:/example.htm"));

            api = new HappyApi(this);
            QWebPage *page = view->page();
            frame = page->mainFrame();

            attachWindowObject();
            connect(frame, SIGNAL(javaScriptWindowObjectCleared()),
                    this, SLOT(attachWindowObject()));
            connect(api, SIGNAL(win_bar()), this, SLOT(bluesBros()));

            setCentralWidget(view);
        }

        void MainWindow::attachWindowObject()
        {
            frame->addToJavaScriptWindowObject(QString("api"), api);
        }

        void MainWindow::bluesBros()
        {
            qDebug() << "foo and bar are getting the band back together!";
        }

    happyapi.h

        #ifndef HAPPYAPI_H
        #define HAPPYAPI_H

        #include <QObject>

        class HappyApi : public QObject
        {
            Q_OBJECT

        public:
            HappyApi(QObject *parent);

        public slots:
            void os_foo();

        signals:
            void win_bar();
        };

        #endif // HAPPYAPI_H

    happyapi.cpp

        #include <QDebug>
        #include "happyapi.h"

        HappyApi::HappyApi(QObject *parent)
            : QObject(parent)
        {
        }

        void HappyApi::os_foo()
        {
            qDebug() << "foo called, it wants its bar back";
        }

    I'm reasonably new to C++ programming (coming from a web and Python background). Hopefully this example will serve not only to help other new users, but as something interesting for a more experienced C++ programmer to elaborate on. Thanks for any assistance that can be provided. :)

    Read the article

  • How to return array of C++ objects from a PHP extension

    - by John Factorial
    I need to have my PHP extension return an array of objects, but I can't seem to figure out how to do this. I have a Graph object written in C++. Graph.getNodes() returns a std::map<int, Node*>. Here's the code I have currently:

        struct node_object {
            zend_object std;
            Node *node;
        };

        zend_class_entry *node_ce;

    then

        PHP_METHOD(Graph, getNodes)
        {
            Graph *graph;
            GET_GRAPH(graph, obj) // a macro I wrote to populate graph

            node_object* n;
            zval* node_zval;

            if (obj == NULL) {
                RETURN_NULL();
            }
            if (object_init_ex(node_zval, node_ce) != SUCCESS) {
                RETURN_NULL();
            }

            std::map<int, Node*> nodes = graph->getNodes();
            array_init(return_value);
            for (std::map<int, Node*>::iterator i = nodes.begin(); i != nodes.end(); ++i) {
                php_printf("X");
                n = (node_object*) zend_object_store_get_object(node_zval TSRMLS_CC);
                n->node = i->second;
                add_index_zval(return_value, i->first, node_zval);
            }
            php_printf("]");
        }

    When I run php -r '$g = new Graph(); $g->getNodes();' I get the output XX] followed by a segmentation fault, meaning the getNodes() function loops successfully through my 2-node list, returns, then segfaults. What am I doing wrong?

    Read the article

  • JavaScript function objects: this keyword points to the wrong object

    - by Rody van Sambeek
    I've got a problem concerning the JavaScript "this" keyword when used within a JavaScript functional object. I want to be able to create an object for handling a modal popup (jQuery UI Dialog). The object is called CreateItemModal, which I want to be able to instantiate and pass some config settings (one of them being the $wrapper element). When the show method is called, the dialog will be shown, but the cancel button is not functioning, because "this" refers to the DOM object instead of the CreateItemModal object. How can I fix this, or is there a better approach to putting separate behaviour in separate "classes" or "objects"? I've tried several approaches, including passing the "this" object into the events, but this does not feel like a clean solution. See (simplified) code below:

        function CreateItemModal(config) {
            // initialize some variables including $wrapper
        };

        CreateItemModal.prototype.show = function() {
            this.$wrapper.dialog({
                buttons: {
                    // this crashes because 'this' is not the current object here
                    Cancel: this.close
                }
            });
        };

        CreateItemModal.prototype.close = function() {
            this.config.$wrapper.dialog('close');
        };

    Read the article

  • C++ trouble with pointers to objects

    - by Zibd
    I have a class with a vector of pointers to objects. I've inserted some elements into this vector, and in my main file I've managed to print them and add others with no problems. Now I'm trying to remove an element from that vector and then check whether the pointer is NULL, but it is not working. I fill the vector in class Test:

        Other *a = new Other(1,1);
        Other *b = new Other(2,2);
        Other *c = new Other(3,3);
        v->push_back(a);
        v->push_back(b);
        v->push_back(c);

    And in my main file I have:

        Test t;
        (...)
        Other *pointer = t.vect->at(0);
        delete t.vect->at(0);
        t.vect->erase(t.vect->begin());
        if (pointer == NULL) {
            cout << "Nothing here..";
        } // Never enters here..

    Read the article

  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course the textbook answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One question, however, is what to do if you can't predict the growth in demand on your Commerce Platform, or need the ability to scale up during busy seasons such as Christmas in a retail environment, but are hesitant about maintaining the infrastructure on a year-round basis. The obvious answer is to utilise the many elastic cloud infrastructure providers that are establishing themselves in the ever-growing market; the problem, however, is that Commerce Server is still a product with a legacy, tightly coupled dependency on Windows and IIS components.

    Commerce Server 2009 codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience in multiple technologies and host it in multiple places, leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple endpoints.

    All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud-based computing infrastructure does not support PCI security requirements, and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away in this version of the application, it is still not fully supported to be deployed exclusively into the cloud. If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising the Azure AppFabric for the service bus and authentication services, and Windows Azure to host any online presence you may require.

    This setup allows you to construct your Commerce Services as part of your on-site infrastructure. These services contain all of the channel's custom business logic and provide the overall interface back into the underlying Commerce Server components. It is recommended that services are constructed around the specific business domains of the application, which, based on your business model, would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing. The AppFabric service bus is then used to further abstract and aggregate the services, making them available to the cloud and subsequently secured by AppFabric's authentication services. These services are now available for consumption by any client, using any supported technology, not just .NET. You are thus able to construct apps for iPhone, integrate with Java-based POS devices, and cover many other potential uses. This aggregation is useful, and forms the basis of the further strategy around diversifying and enhancing the e-Commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform.

    The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale offered by the elasticity of the cloud. Just before the launch of the Azure platform, Domino's Pizza actually managed to run their whole Super Bowl operation on the scalability of Windows Azure, and simply switched back to their traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer that needs the support of scalable infrastructure, such as a high-demand, public-facing e-Commerce portal or the promotional element of a brand. Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business-critical operations such as receiving orders and processing payments. Windows Azure supports other languages too, meaning that with this approach you can technically build a Commerce Server presentation layer in Java, PHP, or Ruby, or equally in ASP.NET or Silverlight, without having to change any of the underlying business or Commerce Server implementation. This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now with the introduction of a WCF capability in Commerce Server 2009/2009 R2, the opportunities for extensibility of both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost-effective way, I would highly recommend you consider looking at Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!
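
    For illustration, a minimal sketch of what one of the domain service contracts described above might look like in WCF terms; the names here (ICatalogueService, CatalogueItem) are hypothetical and not part of Commerce Server:

        // requires using System.Collections.Generic; using System.ServiceModel;
        [ServiceContract]
        public interface ICatalogueService
        {
            // returns a serializable commerce entity rather than a core API object
            [OperationContract]
            CatalogueItem GetItem(string itemId);

            [OperationContract]
            IList<CatalogueItem> Search(string keyword);
        }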

    Read the article

  • Finding diagonal objects of an object in 3d space

    - by samfisher
    Using Unity3d, I have an array holding 8 GameObjects arranged in a grid, with one additional, already-known object in the center, like this (K is the known object):

        1 2 3
        4 K 5
        6 7 8

    All objects are equidistant from their adjacent objects (even the diagonal ones), which means (distance between 4 & K) == (distance between K & 3) == (distance between 2 & K). I want to remove 1, 3, 6, 8 from the array (the diagonal objects). How can I check that at runtime? My problem is that the order of the objects {1-8} is not known, so I need to check each object's position against K to see whether it is a diagonal object or not. What check should I put on the GameObjects (K and the others) to verify whether an object is in a diagonal position? Regards, Sam
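
    For reference, a sketch of one runtime check, assuming the grid lies in the X/Z plane with uniform spacing (compare y instead of z if your grid is in X/Y): a neighbour is diagonal to K exactly when its offset from K is non-zero along both axes.

        // returns true when 'other' sits diagonally from 'k' in the X/Z plane
        bool IsDiagonal(GameObject other, GameObject k)
        {
            Vector3 offset = other.transform.position - k.transform.position;
            return Mathf.Abs(offset.x) > 0.001f && Mathf.Abs(offset.z) > 0.001f;
        }

        // keeping only K's orthogonal neighbours (requires using System.Linq)
        GameObject[] orthogonal = objects.Where(o => !IsDiagonal(o, k)).ToArray();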

    Read the article

  • Pooling (Singleton) Objects Against Connection Pools

    - by kolossus
    Given the following scenario: a canned enterprise application that maintains its own connection pool, and a homegrown client application for the enterprise app, built using the Spring framework with the DAO pattern. While I may have a simplistic view of this, I think the following line of thinking is sound.

    Good: having a fixed pool of DAO objects holding on to connection objects from the pool. Clearly, the pool should be capable of scaling up (or down, depending on need) and the connection objects must outnumber the DAOs by a healthy margin.

    Bad: instantiating brand-new DAOs for every request to access the enterprise app; each DAO will attempt to grab a connection from the pool and release it when it's done.

    Since these are service objects, there will be no (mutable) state held by them (reduced risk of concurrency issues). I also think that with the first approach there should be little to no resource contention, while with the second there'll almost always be a DAO waiting to be serviced. Is my thinking correct, and what could go wrong?

    Read the article

  • Memory management with Objective-C Distributed Objects: my temporary instances live forever!

    - by jkp
    I'm playing with Objective-C Distributed Objects and I'm having some problems understanding how memory management works under the system. The example below illustrates my problem.

    Protocol.h

        #import <Foundation/Foundation.h>

        @protocol DOServer
        - (byref id)createTarget;
        @end

    Server.m

        #import <Foundation/Foundation.h>
        #import "Protocol.h"

        @interface DOTarget : NSObject
        @end

        @interface DOServer : NSObject < DOServer >
        @end

        @implementation DOTarget

        - (id)init
        {
            if ((self = [super init])) {
                NSLog(@"Target created");
            }
            return self;
        }

        - (void)dealloc
        {
            NSLog(@"Target destroyed");
            [super dealloc];
        }

        @end

        @implementation DOServer

        - (byref id)createTarget
        {
            return [[[DOTarget alloc] init] autorelease];
        }

        @end

        int main()
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            DOServer *server = [[DOServer alloc] init];
            NSConnection *connection = [[NSConnection new] autorelease];
            [connection setRootObject:server];
            if ([connection registerName:@"test-server"] == NO) {
                NSLog(@"Failed to vend server object");
            }
            else [[NSRunLoop currentRunLoop] run];
            [pool drain];
            return 0;
        }

    Client.m

        #import <Foundation/Foundation.h>
        #import "Protocol.h"

        int main()
        {
            unsigned i = 0;
            for (; i < 3; i++) {
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                id server = [NSConnection rootProxyForConnectionWithRegisteredName:@"test-server" host:nil];
                [server setProtocolForProxy:@protocol(DOServer)];
                NSLog(@"Created target: %@", [server createTarget]);
                [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
                [pool drain];
            }
            return 0;
        }

    The issue is that any remote objects created by the root proxy are not released when their proxy counterparts in the client go out of scope. According to the documentation: "When an object's remote proxy is deallocated, a message is sent back to the receiver to notify it that the local object is no longer shared over the connection." I would therefore expect that as each DOTarget goes out of scope (each time around the loop), its remote counterpart would be deallocated, since there is no other reference to it being held on the remote side of the connection. In reality this does not happen: the temporary objects are only deallocated when the client application quits, or more accurately, when the connection is invalidated. I can force the temporary objects on the remote side to be deallocated by explicitly invalidating the NSConnection object I'm using each time around the loop and creating a new one, but somehow this just feels wrong. Is this the correct behaviour from DO? Should all temporary objects live as long as the connection that created them? Are connections therefore to be treated as temporary objects, to be opened and closed with each series of requests against the server? Any insights would be appreciated.

    Read the article

  • Resetting Objects vs. Constructing New Objects

    - by byronh
    Is it considered better practice and/or more efficient to create a 'reset' function for a particular object that clears/defaults all the necessary member variables to allow for further operations, or simply to construct a new object from outside? I've seen both methods employed a lot, but I can't decide which one is better. Of course, for classes that represent database connections you'd have to use a reset method rather than constructing a new object, to avoid needless connecting/disconnecting, but I'm talking more in terms of abstraction classes. Can anyone give me some real-world examples of when to use each method? In my particular case I'm thinking mostly in terms of ORM or the Model in MVC; for example, if I wanted to retrieve a bunch of database objects for display and modify them in one operation.
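
    As a concrete (hypothetical) sketch of when a reset method earns its keep: an object that owns an expensive resource but accumulates cheap per-run state.

        // requires using System.Data; using System.Collections.Generic;
        public class ReportSession
        {
            private readonly IDbConnection connection;                    // expensive: keep across runs
            private readonly List<string> warnings = new List<string>();  // cheap: clear per run

            public ReportSession(IDbConnection connection) { this.connection = connection; }

            // Reset() preserves the open connection; constructing a new
            // ReportSession would force a needless reconnect
            public void Reset()
            {
                warnings.Clear();
            }
        }

    For plain value-holding objects with no such resource, constructing a new instance is usually simpler and less error-prone, since a field forgotten in Reset() becomes a stale-state bug.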

    Read the article

  • Casting objects in C# (ASP.NET MVC)

    - by Mortanis
    I'm coming from a background in ColdFusion and finally moving onto something modern, so please bear with me. I'm running into a problem casting objects. I have two database tables that I'm using as models: Residential and Commercial. Both of them share the majority of their fields, though each has a few unique fields. I've created another class as a container, cunningly called Property, that contains the sum of all property fields. I query Residential and Commercial and stuff the results into my container. This works fine. However, I'm having problems aliasing the fields from Residential/Commercial onto Property. It's quite easy to create a method for each type, fillPropertyByResidential(Residential source) and fillPropertyByCommercial(Commercial source), and alias the variables. That also works fine, but quite obviously duplicates a bunch of code for all the fields shared between the two main models. So I'd like a generic fillPropertyBySource() that takes the object, detects whether it's Residential or Commercial, fills the particular fields of each respective type, then handles all the fields in common. Except I gather that in C# variables created inside an if are only in scope within that if, so I'm not sure how to do this:

        public property fillPropertyBySource(object source)
        {
            property prop = new property();
            if (source is Residential)
            {
                Residential o = (Residential)source;
                //Fill Residential only fields
            }
            else if (source is Commercial)
            {
                Commercial o = (Commercial)source;
                //Fill Commercial only fields
            }
            //Fill fields shared by both
            prop.price = (int)o.price;
            prop.bathrooms = (float)o.bathrooms;
            return prop;
        }

    Here "o", being a Commercial or Residential, only exists within the scope of its if. How do I detect the original type of the source object and take action? Bear with me: the shift from ColdFusion to a modern language is pretty difficult, more so since I'm used to procedural code and MVC is a massive paradigm shift. Edit: I should include the error, raised for the aliases of price and bathrooms in the shared area: The name 'o' does not exist in the current context.
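
    For reference, one common shape for this, using the as operator so each downcast stays inside its own scope while the shared fields are read through the parameter. This sketch assumes the shared members can be pulled up into a common interface (IListing here is a hypothetical name):

        // hypothetical common interface; both Residential and Commercial would implement it
        public interface IListing
        {
            decimal price { get; }
            decimal bathrooms { get; }
        }

        public property fillPropertyBySource(IListing source)
        {
            property prop = new property();

            Residential r = source as Residential;
            if (r != null)
            {
                // fill Residential-only fields from r
            }

            Commercial c = source as Commercial;
            if (c != null)
            {
                // fill Commercial-only fields from c
            }

            // shared fields come straight off the interface,
            // so no 'o' is needed outside the if blocks
            prop.price = (int)source.price;
            prop.bathrooms = (float)source.bathrooms;
            return prop;
        }

    If the two models can't share an interface or base class, the fallback is simply to do the shared assignments inside each branch, accepting the duplication.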

    Read the article

  • Synchronizing a collection of wrapped objects with a collection of unwrapped objects

    - by Kenneth Cochran
    I have two classes: Employee and EmployeeGridViewAdapter. Employee is composed of several complex types. EmployeeGridViewAdapter wraps a single Employee and exposes its members as a flattened set of system types so a DataGridView can handle displaying, editing, etc. I'm using Visual Studio's built-in support for turning a POCO into a data source, which I then attach to a BindingSource object. When I attach the DataGridView to the BindingSource it creates the expected columns, and at runtime I can perform the expected CRUD operations. All is good so far. The problem is that the collection of adapters and the collection of employees aren't being synchronized, so none of the employees I create at runtime ever get persisted. Here's a snippet of the code that generates the collection of EmployeeGridViewAdapters:

        var employeeCollection = new List<EmployeeGridViewAdapter>();
        foreach (var employee in this.employees)
        {
            employeeCollection.Add(new EmployeeGridViewAdapter(employee));
        }
        this.view.Employees = employeeCollection;

    Pretty straightforward, but I can't figure out how to synchronize changes back to the original collection. I imagine edits are already handled because both collections reference the same objects, but creating new employees and deleting employees aren't happening, so I can't be sure.
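
    A sketch of one way out, assuming the adapter can expose the Employee it wraps (the Employee property and the parameterless constructor here are assumptions, not existing code): let the grid create and remove adapters freely, then project the adapters back onto the model collection when saving.

        public class EmployeeGridViewAdapter
        {
            // lets the grid's add-new row be backed by a real Employee
            public EmployeeGridViewAdapter() : this(new Employee()) { }

            public EmployeeGridViewAdapter(Employee employee) { Employee = employee; }

            public Employee Employee { get; private set; }

            // ... flattened properties that read/write Employee's members ...
        }

        // on save: rebuild the model collection from whatever the grid now holds
        // (requires using System.Linq)
        this.employees = this.view.Employees.Select(a => a.Employee).ToList();

    Edits flow through automatically because adapter and model share the same Employee instance; additions and deletions are captured by the final projection.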

    Read the article

  • Select those objects whose related objects' IDs are *all* in given string

    - by Jannis
    Hi Django people, I want to build a frontend to a recipe database which enables the user to search for the list of recipes that are cookable with the ingredients the user supplies. I have the following models:

        class Ingredient(models.Model):
            name = models.CharField(max_length=100, unique=True)
            slug = models.SlugField(max_length=100, unique=True)
            importancy = models.PositiveSmallIntegerField(default=4)
            […]

        class Amount(models.Model):
            recipe = models.ForeignKey('Recipe')
            ingredient = models.ForeignKey(Ingredient)
            […]

        class Rezept(models.Model):
            name = models.CharField(max_length=100)
            slug = models.SlugField()
            instructions = models.TextField()
            ingredients = models.ManyToManyField(Ingredient, through=Amount)
            […]

    and a raw query which does exactly what I want: it gets all the recipes whose required ingredients are all contained in the list of strings the user supplies. If he supplies more than necessary, that's fine too.

        query = "SELECT *, COUNT(amount.zutat_id) AS selected_count_ingredients, (SELECT COUNT(*) FROM amount WHERE amount.recipe_id = amount.id) AS count_ingredients FROM amount LEFT OUTER JOIN amount ON (recipe.id = recipe.recipe_id) WHERE amount.ingredient_id IN (%s) GROUP BY amount.id HAVING count_ingredient=selected_count_ingredient" % ",".join([str(ingredient.id) for ingredient in ingredients])
        rezepte = Rezept.objects.raw(query)

    Now, what I'm looking for is a way that does not rely on .raw(), as I would like to do it purely with Django's queryset methods. Additionally, it would be awesome if you knew a way of including the ingredient's importancy in the lookup, so that a recipe is still shown as a result even if one of its ingredients (one that has an importancy of 0) is not supplied by the user.

    Read the article

  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story? Hard disk goes missing, USB thumb drive goes missing, laptop goes missing... Not a week goes by that we don't hear about our data going missing. Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property... When I have spoken at security and IT conferences, part of my message is "Why do you give your users data to lose in the first place?" I don't suggest they can't have access to it... in fact I work for the company that provides the premiere data security and desktop solutions that DO provide access. Access isn't the issue. 'Keeping the data' is the issue.

    We are all human; we all make mistakes... I fault no one for having their car stolen or for dropping a USB thumb drive (well, except the thieves; I can certainly find some fault there). Where I find fault is in policy (or, sometimes, the lack thereof) that allows users to carry around private, and important, data with them. Mr. Director of IT: it is your fault, not theirs. Ms. CSO: look in the mirror.

    It isn't as if one can't find a network to access the data from. You are on a network right now. How many wireless ones (wifi, mifi, cellular...) are there around you, right now? Allowing employees to remove data from the confines of (wait for it...) THE DATA CENTER is just plain indefensible when it isn't required. The argument that the laptop had a password and the hard disk was encrypted is ridiculous. An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is: credit card info, identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'.

    What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota. The full article is here, but the point was that a couple of laptops went missing in a couple of different cases, and... well... no one knows where the data is, and yes, they were loaded with patient info. What were you thinking?

    Obviously you can't steal data from a Sun Ray appliance, since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Mac, Linux and Windows client devices... but in all cases, there is no keeping the data unless you explicitly allow for it in your policy. Since you can get at the data securely from any network, why would you want to take personal responsibility for it? Both Sun Rays and Secure Global Desktop are widely used in healthcare... but clearly not widely enough. We need to do a better job of getting the message out: healthcare (or insert your business type here) and distributed data don't mix. Then add Hot Desking and 'follow me printing' and you have something that clinicians (and CSOs) love. Thanks for putting up my blood pressure, Denny.

    Read the article

  • Jini : single server with multiple clients

    - by user200340
    Hi all, I have a question about how to let multiple clients access a single file located on the server side while keeping the file consistent. I have a simple PhoneBook server-client Jini program running at the moment, and the server only provides some getter functions to clients, such as getName(String number) and getNumber(String name), from a PhoneBook class (serializable); phonebook data are stored in a text file (phonebook.txt) at the moment. I have tried to implement some functions for writing new records into the phonebook.txt file. If the name of the record being written already exists, an integer is appended to the new record. For example, if the existing phonebook.txt contains

        John 01-01010101

    and the record being written is "John 01-12345678", then "John_1 01-12345678" will be written into phonebook.txt. However, if I start two clients A and B (on the same machine, using localhost), and A tries to write "John 01-11111111" while B tries to write "John 01-22222222", the earlier record is overwritten by the later one. So there must be something I did completely wrong. My client and server code are just like the Jini HelloWorld example. My server-side code:

    1. LookupDiscovery with parameter new String[]{""};
    2. DiscoveryListener for LookupDiscovery;
    3. registrations are saved into a HashTable;
    4. for every discovered lookup service, I use the registrar to register the ServiceItem; the ServiceItem contains a null attributeSets, a null serviceId, and a service.

    The client code has:

    1. LookupDiscovery with parameter new String[]{""};
    2. DiscoveryListener for LookupDiscovery;
    3. a ServiceTemplate with a null attributeSets, a null serviceId and a type, where the type is the interface class;
    4. for each found ServiceRegistrar, if it can find the ServiceTemplate being looked for, the returned Object is cast into the type of the interface class.

    I have tried to google for more details, and I found JavaSpaces could be the piece I missed. But I am still not sure about it (I have only used Jini for a very short time). Any help would be greatly appreciated.

    Read the article

  • A leader election algorithm for an oriented hypercube

    - by mick
    I'm stuck on a problem where I have to design a leader election algorithm for an oriented hypercube. This should be done using a tournament with a number of rounds equal to the dimension D of the hypercube. In each stage d, with 1 <= d < D, two candidate leaders of neighbouring d-dimensional hypercubes should compete to become the single candidate leader of the (d+1)-dimensional hypercube that is the union of their respective hypercubes.
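
    For intuition, a small single-process simulation of the tournament, assuming nodes are numbered 0..2^D-1 so that the neighbour across dimension d is obtained by flipping bit d, and that the lower ID wins each duel (the real algorithm would exchange these IDs as messages along the oriented edges instead):

        // requires using System; using System.Collections.Generic; using System.Linq;
        static int ElectLeader(int dimension)
        {
            // initially every node is the candidate of its own 0-dimensional subcube
            var candidates = new HashSet<int>(Enumerable.Range(0, 1 << dimension));
            for (int d = 0; d < dimension; d++)
            {
                var winners = new HashSet<int>();
                foreach (int c in candidates)
                {
                    int rival = c ^ (1 << d);          // candidate of the neighbouring d-subcube
                    winners.Add(Math.Min(c, rival));   // lower ID leads the merged (d+1)-subcube
                }
                candidates = winners;
            }
            return candidates.Single();                // the unique leader after D rounds
        }

    Each round halves the candidate set, so after D rounds exactly one candidate remains; with the lower-ID rule that is node 0.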

    Read the article

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is the Configurable Objects functionality (also known as Task Optimization and also known as Cool Tools). The idea is that any implementation can create its own views of the base product objects and services and implement functionality against those new views. For example, in Oracle Utilities Customer Care and Billing there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product you would use the Person Maintenance screen and fill in some of the screen when you wanted to register or maintain an individual, and fill out other parts of the screen when you wanted to register or maintain a company. This can be somewhat confusing to some customers.

    Using Configurable Objects this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object covering the aspects of the Person object pertaining to an individual, and a Company business object covering the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to match what the implementation is familiar with. The business object can also restructure the base object. For example, a common identifier for an individual in the USA is the Social Security number; this value is a Person Identifier (as identifiers vary by country). In the new Human object you can remap the Person Identifier as a Social Security number.

    To define a business object you use a schema editor built into the browser user interface, along with a mapping language, to set up the business objects. The schema for the Human business object, for example, contains mapping as well as formatting and other tags. This information can be built manually or using a wizard which generates the base structure for you to alter. It is all stored as metadata when saved.

    Once a business object is built, it can be used as a basis for code or for other business objects (inheritance is supported), called by a screen (a UI Map), or even exposed as a Web Service. This is just a start with Configurable Objects, as you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens, and Data Areas to reuse definitions across multiple objects. Configurable Objects are powerful and I have only really touched on them here. Over the next few months I hope to add lots more entries about them.

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.

    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix goes out into the field (even for a single customer), they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical-devices software background myself, this is unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is starting to grow all of a sudden, with some big clients coming on board and more smaller ones.

    I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, without sacrificing the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where developers work directly in Dev for a month and changes are then merged to Test and Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, with either CruiseControl or Team Build. In my previous job, Bazaar was used on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches.

    With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down: to get a new feature out via dev-test-prod, the release would capture any unfinished work sitting in dev.

    I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released. Other feature branches can grab changes from other branches when they need or want them, so eventually all changes get absorbed into everyone else's. This fits very much a pure Bazaar model, from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, small late changes that never get pulled across to other branches, and developers complaining about merge disasters...

    What are people's thoughts on this? A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is simply about not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • Any Open Source Pregel-like framework for distributed processing of large Graphs?

    - by Akshay Bhat
    Google has described a novel framework for distributed processing on massive graphs: http://portal.acm.org/citation.cfm?id=1582716.1582723 I wanted to know whether, similar to Hadoop (MapReduce), there are any open-source implementations of this framework. I am actually in the process of writing a pseudo-distributed one using Python and the multiprocessing module, and thus wanted to know if someone else has also tried implementing it. Public information about this framework is extremely scarce (the link above and a blog post at Google Research).

    Read the article
