Search Results

Search found 9923 results on 397 pages for 'node mongodb native'.

Page 43/397 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • How to sketch out an event-driven system?

    - by Jordan
    I'm trying to design a system in Node.js (an attempt at solving one of my earlier problems using Node's concurrency), but I'm running into trouble figuring out how to draw a plan of how the thing should operate. I'm getting very tripped up thinking in terms of callbacks instead of returned values. The flow isn't linear, and it's really boggling my ability to draft it. How does one draw out an operational flow for an event-driven system? I need something I can look at and say "OK, yes, that's how it will work. I'll start it over here, and it will give me back these results over here." Pictures would be very helpful for this one. Thanks.

    Edit: I'm looking for something more granular than UML; specifically, something that will help me transition from a blocking, object-oriented programming structure, where I'm comfortable, to a non-blocking, event-driven structure, where I'm unfamiliar.
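
    One way to make that flow concrete (not from the question itself, just a minimal sketch) is to name each step as an event and wire the steps together with Node's built-in EventEmitter; the step and event names below are invented for illustration:

        // Minimal sketch: each stage of the flow is an event rather than a return value.
        // 'start', 'fetched' and 'done' are invented names, not part of any real API.
        var EventEmitter = require('events').EventEmitter;
        var flow = new EventEmitter();

        flow.on('start', function (input) {
          // Simulate an async step; nothing is returned, the result is emitted later.
          setTimeout(function () {
            flow.emit('fetched', { input: input, data: 'some data' });
          }, 100);
        });

        flow.on('fetched', function (result) {
          flow.emit('done', result.data.toUpperCase());
        });

        flow.on('done', function (output) {
          // "...it will give me back these results over here."
          console.log('results arrive here:', output);
        });

        // "I'll start it over here..."
        flow.emit('start', 'hello');

    Drawn as a diagram, each emit/on pair becomes an arrow between boxes, which is often easier to sketch than a call graph of nested callbacks.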

    Read the article

  • Can't require local CoffeeScript modules

    - by superlukas
    I'm running Node.js 0.10.21. I tried both CoffeeScript 1.6.3 and master both with and without require('coffee-script/extensions'). Compiling the two files to JavaScript and running them directly in Node works just fine of course.

        # ./folder/a.coffee
        require('../b').test()

        # ./b.coffee
        exports.test = -> console.log 'yay'

        # $ coffee folder/a.coffee
        #
        # Error: Cannot find module '../b'
        #   at Function.Module._resolveFilename (module.js:338:15)
        #   at Function.Module._load (module.js:280:25)
        #   at Module.require (module.js:364:17)
        #   at require (module.js:380:17)
        #   at Object.<anonymous> (/Users/test/folder/a.coffee:1:1)
        #   at Module._compile (module.js:456:26)

    Read the article

  • Event vs. thread programming on the server side

    - by AlxPeter
    We are planning to start a fairly complex web portal which is expected to attract good local traffic, and I've been told by my boss to consider/analyse Node.js for the server side. I think scalability and multi-core support can be handled with Nginx or Cherokee up in front.

    1) Is Node.js ready for some serious/big business?

    2) Does this 'event/asynchronous' paradigm on the server side have the potential to support heavy traffic and data operations, considering the fact that 'everything' is processed in a single thread and all the live connections would be lost if it crashed (though it's easy to restart)?

    3) What are the advantages of event-based programming compared to the thread-based style, or vice versa? (I know of the higher cost associated with thread switching, but more can be squeezed out of the hardware with the event model.)

    The following papers are interesting but contradict each other to some extent:

    1) http://www.usenix.org/events/hotos03/tech/full_papers/vonbehren/vonbehren_html

    2) http://pdos.csail.mit.edu/~rtm/papers/dabek:event.pdf
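
    As a rough illustration of the event model being asked about (this is not from the question, just a minimal sketch): a single Node.js process can service many concurrent connections without a thread per connection, because each slow operation registers a callback instead of blocking:

        // Rough sketch: one thread, many concurrent requests.
        // The timer stands in for a slow I/O operation such as a database query.
        var http = require('http');

        http.createServer(function (req, res) {
          setTimeout(function () {
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            res.end('handled without blocking the event loop\n');
          }, 200);
        }).listen(8000);

        console.log('Listening on 8000; the event loop stays free while each request waits.');

    In a thread-per-connection server the 200 ms wait would pin a thread; here it only parks a callback, which is the trade-off the two linked papers argue about.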

    Read the article

  • C++ protected pointer member to the same class and access privileges

    - by aajmakin
    Hi, example code is included at the bottom of the message. I'm puzzled about the protected access specifier in a class. I have defined a class node which has a protected string member

        string name;

    and a vector of node pointers

        vector<node*> args;

    Before, I thought that a member function of node could not do args[0]->name, but a program that does just this does compile and run. However, now I would like to inherit this class and access the name field in one of the args array pointers from this derived class (args[0]->name), but this does not compile. When I compile the example code below with the commented sections uncommented, the compiler reports:

        Compiler output:
        g++ test.cc -o test
        test.cc: In member function 'void foo::newnode::print_args2()':
        test.cc:22: error: 'std::string foo::node::name' is protected
        test.cc:61: error: within this context

        Compilation exited abnormally with code 1 at Thu Jun 17 12:40:12

    Questions:

    1) Why can I access the name field of the node pointers in args in class node? This is what I would expect from a similarly defined private field in Java.

    2) How can I access those fields from the derived class?

    Example code:

        #include <iostream>
        #include <vector>

        using namespace std;

        namespace foo
        {
          class node;
          typedef std::vector<node*> nodes;

          class node
          {
          public:
            node (string _name);
            void print_args ();
            void add_node (node* a);
          protected:
            nodes args;
            string name;
          };
        }

        foo::node::node (string _name) : args(0)
        {
          name = _name;
        }

        void foo::node::add_node (node* a)
        {
          args.push_back(a);
        }

        void foo::node::print_args ()
        {
          for (int i = 0; i < args.size(); i++)
          {
            cout << "node " << i << ": " << args[i]->name << endl;
          }
        }

        // namespace foo
        // {
        //   class newnode : public node
        //   {
        //   public:
        //     newnode (string _name) : node(_name) {}
        //     void print_args2 ();
        //   protected:
        //   };
        // }

        // void foo::newnode::print_args2 ()
        // {
        //   for (int i = 0; i < args.size(); i++)
        //   {
        //     cout << "node " << i << ": " << args[i]->name << endl;
        //   }
        // }

        int main (int argc, char** argv)
        {
          foo::node a ("a");
          foo::node b ("b");
          foo::node c ("c");

          a.add_node (&b);
          a.add_node (&c);

          a.print_args ();

          // foo::newnode newa ("newa");
          // foo::newnode newb ("newb");
          // foo::newnode newc ("newc");

          // newa.add_node (&newb);
          // newa.add_node (&newc);

          // newa.print_args2 ();

          return 0;
        }

    Read the article

  • How do you convert a string to a node in XQuery?

    - by Sixty4Bit
    I would like to convert a string into a node. I have a method that is defined to take a node, but the value I have is a string (it is hard coded). How do I turn that string into a node? So, given an XQuery method:

        define function foo($bar as node()*) as node()
        {
          (: unimportant details :)
        }

    I have a string that I want to pass to the foo method. How do I convert the string to a node so that the method will accept it?

    Read the article

  • Is RAC One Node Certified for E-Business Suite?

    - by Steven Chan
    Oracle Real Application Clusters (RAC) is a cluster database with a shared cache architecture that supports the transparent deployment of a single database across a pool of servers. RAC is certified with both Oracle E-Business Suite Release 11i and 12. We publish best-practices documentation for specific combinations of EBS + RAC versions. For example, if you were planning on implementing RAC for EBS 12, you would use this documentation:

    Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Note 823587.1)

    Many of the largest E-Business Suite users in the world run RAC today, including Oracle; see this Oracle R12 case study for details.

    A number of customers have recently asked whether RAC One Node can be used with the E-Business Suite. From the RAC website:

    Oracle RAC One Node is a new option available with Oracle Database 11g Release 2. Oracle RAC One Node is a single instance of an Oracle RAC-enabled database running on one node in a cluster.

    Read the article

  • Oracle on NFS vmdk beats native NFS!?

    - by fletch00
    Hi, my colleagues are pursuing this with NetApp and Oracle, but I thought I'd post here on the off chance someone else has seen this. We have a RedHat 5 VM (fully up2date) running Oracle 11i with data disks mounted via the VM's Linux kernel NFS client using Oracle's recommended mount options, and the performance is very inconsistent (queries that should take under 2 seconds sometimes take 60 seconds). The funny thing is that we can run the same queries perfectly consistently in under 2 seconds on a VMDK residing on the SAME NetApp NFS datastore! Makes me wish Oracle and NetApp collaborated as closely as VMware and NetApp did on the Virtual Storage Console, which we used to perfectly set the NFS options and keep them in compliance. We have tried a few Linux NFS options others have posted and not seen improvement so far. We are now creating VMDKs for the VM to replace the Linux NFS mounts and work around the issue, as our developers need consistent performance ASAP.

    Read the article

  • Gmail notification in Ubuntu's native mail icon.

    - by D Connors
    So, new Ubuntu 10.04 up and running. Playing with the interface eventually got me curious about something. Up until now, I've only used the chat and broadcasting functionality of the mail icon in Ubuntu's top panel, mostly because I take care of my mail in Firefox, so I don't need one more program (Empathy) running on my poor little netbook. I'm currently using gnome-gmail to set Gmail as my default mail client in Ubuntu. But when I click on the "Set up mail" item inside the panel's Mail icon, it simply brings up Empathy. My question is: is it possible (in any way whatsoever) to have my Gmail notifications up there, configured so that clicking them will just take me to a Gmail tab in Firefox (instead of opening up Empathy)? This is really just some frivolous interest of mine, nothing important. Right now I'm using check-gmail to notify me of new mail, but the panel is really cramped on my netbook, so it'd be nice to free up some room by getting rid of check-gmail and only using Ubuntu's mail icon. OK, hope I was clear enough.

    Read the article

  • Looking for a zsh completion file for OS X native commands

    - by Chiggsy
    I've been digging deep into what actually comes with OS X in /usr/bin and especially /usr/libexec. Quite good stuff really, although the command syntax is a bit odd. Let me direct the curious to the command that made me think of this: networksetup -printcommands. I can not think of a command that better illustrates the need for good completion. security -h perhaps, but those commands have a familiar, easy-to-read format. I beseech the community, please point me to a place where I can find such a thing. I never type them right, and I ache for tab completion for these. Anyone have any idea where I could grab something? I'd prefer to stand on the shoulders of giants instead of trying to make a zsh/bash completion script leap into the world, ready for battle, like Athena, from my forehead. I am no Zeus when it comes to compctl. Not at all.

    Read the article

  • Using PowerShell to call a native command-line app and capture STDERR

    - by crtracy
    I'm using a port of a Cygwin tool on Windows which writes normal status messages to STDERR. This produces ugly output when run from PowerShell:

        PS> dos2unix.exe -n StartApp.sh StartApp_fixed.sh
        dos2unix.exe : dos2unix: converting file StartEC3.sh to file StartEC3_fixed.sh in UNIX format ...
        At line:1 char:13
        + dos2unix.exe <<<< -n StartApp.sh StartApp_fixed.sh
            + CategoryInfo          : NotSpecified: (dos2unix: conve...UNIX format ...:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError

    Is there a better way?

    P.S. I intend to post one solution I've found and compare it to answers from others.

    Read the article

  • Disabling discrete/native graphics on MacBook Retina when using apps like VMware, VLC, etc.

    - by badkitteh
    I got a MacBook Pro Retina a few months ago and I'm really happy with it. However, since I do a lot of work in different environments/OSes, I make heavy use of VMware to have them with me when I'm on the road. The MBPr has great battery life, as long as the integrated graphics are being used. Unfortunately, as soon as I launch VMware, the MacBook switches to discrete graphics and battery life is effectively halved. I noticed this after installing gfxCardStatus, and now I'm wondering if there is a way to force the integrated graphics to be used all of the time, so I can enjoy maximum battery life. Thanks.

    Read the article

  • Safe BIOS upgrade run from an external USB HDD rather than the native OS

    - by ecoologic
    I have a Toshiba M200 laptop which came with Windows Vista. After buying it, I replaced the OS with XP, and recently I swapped the internal hard disk for a bigger one where I only have Ubuntu. So now I can boot XP from the external USB hard disk. There's a BIOS update available (2.3 against my 1.8) which I'd like to install, and there's also a version for XP. Is it "reasonably" safe to install this upgrade for Windows XP (even though the original OS was Vista, the laptop model is the right one) from my external USB hard disk with Win XP?

    Read the article

  • Wanderlust or other IMAP mail reader on native Windows Emacs

    - by radekg
    Hello, I'm trying to get my Windows-based Emacs to handle mail. Is there any Emacs-based mail reader that would run on Windows? By running, I mean fetch from IMAP, show and reply to mail without external applications. I've heard many good things about Wanderlust, but the webpage suggests it is not supported. Any suggestions? Thanks!

    Read the article

  • Can't run monitor at native 1080p resolution on Windows 7

    - by Rex
    I have a 24" ViewSonic VX2453 monitor that supports 1920x1080p, connected using HDMI. However at that resolution, the desktop goes off the screen. Using the Nvidia control panel, I have to set it to a custom resolution of 1804x1014 to display correctly. The monitor has its drivers properly installed (the correct model name shows up in control panel after installing the drivers), and I'm running 64 bit Win7 Ultimate. I have a GeForce 560 Ti card, if that helps. Why does this happen?

    Read the article

  • Deciding to use VM or native install for new hardware

    - by Billy Moon
    I have an Ubuntu 10.10 installation running on hardware. I upgraded the hardware and am planning to move the system over. While reading about the many ways to do this, I came across tools for making a virtual machine out of a hardware installation. I think this might make managing my server easier in the future if I run it as a virtual machine. Also, I will be able to easily split the responsibilities of my server, for example running MySQL on a separate virtual machine hosted on the same physical machine. Is it a good idea to install my production server as a virtual machine inside another thin server installation? What are the pros/cons and pitfalls?

    Read the article

  • Amazon releases the AWS SDK for Node.js, an open source development kit that simplifies access to its Cloud solutions

    Amazon releases the AWS SDK for Node.js, an open source development kit that gives Node.js developers easy access to its Cloud solutions. Here is some news that will certainly delight JavaScript developers who use Node.js. The AWS (Amazon Web Services) Developer Tools team has just published a development kit that lets Node.js developers access its Cloud Computing platform with ease. This new SDK provides flexible access to the features of Amazon Web Services solutions, notably the Amazon S3 (Simple Storage Service) Cloud storage platform, Amazon EC2 (Elastic Compute Cloud), and Amazon DynamoDB. The SDK is...
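
    To give a flavour of what the SDK looks like in practice, here is a minimal sketch (not taken from the announcement) that lists the account's S3 buckets; it assumes the aws-sdk npm package is installed and that credentials and a region are configured in the environment:

        // Minimal sketch using the AWS SDK for Node.js ('aws-sdk' npm package).
        // Assumes AWS credentials are available via the usual environment variables.
        var AWS = require('aws-sdk');

        AWS.config.update({ region: 'us-east-1' });  // region chosen for illustration

        var s3 = new AWS.S3();

        // Asynchronously list the S3 buckets owned by this account.
        s3.listBuckets(function (err, data) {
          if (err) {
            console.error('Could not list buckets:', err);
            return;
          }
          data.Buckets.forEach(function (bucket) {
            console.log(bucket.Name);
          });
        });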

    Read the article

  • MongoMapper and migrations

    - by Clint Miller
    I'm building a Rails application using MongoDB as the back-end and MongoMapper as the ORM tool. Suppose in version 1, I define the following model:

        class SomeModel
          include MongoMapper::Document

          key :some_key, String
        end

    Later, in version 2, I realize that I need a new required key on the model. So, in version 2, SomeModel now looks like this:

        class SomeModel
          include MongoMapper::Document

          key :some_key, String
          key :some_new_key, String, :required => true
        end

    How do I migrate all my existing data to include some_new_key? Assume that I know how to set a reasonable default value for all the existing documents. Taking this a step further, suppose that in version 3, I realize that I really don't need some_key at all. So, now the model looks like this:

        class SomeModel
          include MongoMapper::Document

          key :some_new_key, String, :required => true
        end

    But all the existing records in my database have values set for some_key, and it's just wasting space at this point. How do I reclaim that space? With ActiveRecord, I would have just created migrations to add the initial values of some_new_key (in the version 1 to version 2 migration) and to delete the values for some_key (in the version 2 to version 3 migration). What's the appropriate way to do this with MongoDB/MongoMapper? It seems to me that some method of tracking which migrations have been run is still necessary. Does such a thing exist?

    EDITED: I think people are missing the point of my question. There are times when you want to be able to run a script on a database to change or restructure the data in it. I gave two examples above: one where a new required key was added and one where a key can be removed and its space reclaimed. How do you manage running these scripts? ActiveRecord migrations give you an easy way to run these scripts and to determine which scripts have already been run and which have not. I can obviously write a Mongo script that does any update on the database, but what I'm looking for is a framework like migrations that lets me track which upgrade scripts have already been run.
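
    For reference, the kind of one-off Mongo script mentioned at the end might look something like this in the mongo shell; it only covers the data change itself, not the migration-tracking framework being asked for, and the collection name and default value are assumptions based on MongoMapper's conventions:

        // Version 1 -> 2: backfill the new required key on existing documents.
        // 'some_models' is the assumed collection name; 'default' is a placeholder value.
        db.some_models.update(
          { some_new_key: { $exists: false } },
          { $set: { some_new_key: 'default' } },
          { multi: true }
        );

        // Version 2 -> 3: remove the key that is no longer needed from existing documents.
        db.some_models.update(
          { some_key: { $exists: true } },
          { $unset: { some_key: 1 } },
          { multi: true }
        );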

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database/datastore, but the .NET app is a single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types, e.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date; another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which SharePoint used to use, and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field that it will map to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document-type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. However, I've got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions, I guess:

    1) Is a NoSQL DB a good fit?

    2) Is MongoDB the best for ASP.NET? (I saw Raven and Velocity, but they're still kinda beta.)

    3) Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would be for when a person clicks "View History" against a document.

    4) How would performance compare between the two (NoSQL DB vs denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
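
    For comparison with the int_field_1/mapping-table approach, here is a sketch of the two document types from the question as they could look in a store like MongoDB (illustrative only; field and collection names are invented, and per the question each tenant would actually have its own database/datastore):

        // Illustrative only: each document type carries its own named index fields,
        // so no generic int_field_N columns or mapping table are needed.
        db.documents.insert({
          docType: 'invoice',
          customerId: 12345,
          invoiceNumber: 'INV-0042',
          invoiceDate: new Date('2010-06-01'),
          fileKey: 'archive/inv-0042.tif'      // pointer to the stored TIFF/PDF
        });

        db.documents.insert({
          docType: 'applicationForm',
          memberNumber: 98765,
          applicationNumber: 'APP-0007',
          memberName: 'J. Smith',
          applicationDate: new Date('2010-06-02')
        });

        // Custom document-type searches can then index the real field names.
        db.documents.ensureIndex({ docType: 1, invoiceNumber: 1 });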

    Read the article

  • Use cases for NoSQL

    - by seengee
    NoSQL has been getting a lot of attention in the industry recently. I'm really interested in people's thoughts on the best use cases for it over relational database storage. What should trigger a developer into thinking that a particular dataset is more suited to a NoSQL solution like Redis, CouchDB, MongoDB, Cassandra, etc.? I would also be really interested to hear what people have ported from relational DBs to NoSQL and what improvements they have seen.

    Read the article

  • Clojure and NoSQL databases

    - by Mad Wombat
    I am currently trying to pick between different NoSQL databases for my project. The project is being written in Clojure and JavaScript. I am currently looking at three candidates for storage. What are the relative strengths and weaknesses of MongoDB, FleetDB and CouchDB? Which one is better supported in Clojure? Which one is better supported under Linux? Did I miss a better product (it has to be free and OSS)?

    Read the article

  • TG2.1: Proper location to store a database session instance?

    - by Lee Olayvar
    I am using a custom database (MongoDB) with TG 2.1 and I am wondering where the proper place to store the PyMongo connection/database instances would be. E.g., at the moment they are getting created inside my inherited instance of AppConfig. Is there a standard location to store this? Would shoving the variables into project.model.__init__ be the best location, given that under SQLAlchemy the database seems to commonly be retrieved via:

        from project.model import DBSession, metadata

    Anyway, just curious what the best practice is.

    Read the article
