Search Results

Search found 10042 results on 402 pages for 'orient db'.


  • MySQL Stored Procedures: Use a variable as the database name in a cursor declaration

    - by Justin
    I need to use a variable to indicate which database to query in the declaration of a cursor. Here is a short snippet of the code:

        CREATE PROCEDURE `update_cdrs_lnp_data`(IN dbName VARCHAR(25), OUT returnCode SMALLINT)
        cdr_records:BEGIN
            DECLARE cdr_record_cursor CURSOR FOR
                SELECT cdrs_id, called, calling FROM dbName.cdrs WHERE lrn_checked = 'N';
            # Setup logging
            DECLARE EXIT HANDLER FOR SQLEXCEPTION
            BEGIN
                #call log_debug('Got exception in update_cdrs_lnp_data');
                SET returnCode = -1;
            END;

    As you can see, I'm TRYING to use the variable dbName to indicate which database the query should run against. However, MySQL will NOT allow that. I also tried things such as:

        CREATE PROCEDURE `update_cdrs_lnp_data`(IN dbName VARCHAR(25), OUT returnCode SMALLINT)
        cdr_records:BEGIN
            DECLARE cdr_record_cursor CURSOR FOR
                SET @query = CONCAT("SELECT cdrs_id, called, calling FROM ", dbName, ".cdrs WHERE lrn_checked = 'N' ");
                PREPARE STMT FROM @query;
                EXECUTE STMT;
            # Setup logging
            DECLARE EXIT HANDLER FOR SQLEXCEPTION
            BEGIN
                #call log_debug('Got exception in update_cdrs_lnp_data');
                SET returnCode = -1;
            END;

    Of course this doesn't work either, as MySQL only allows a standard SQL statement in the cursor declaration. Can anyone think of a way to use the same stored procedure in multiple databases by passing in the name of the db that should be affected?
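
    Since MySQL will not declare a cursor over a prepared statement, a common fallback is to do the schema selection in the calling code rather than inside the procedure. Below is a minimal, illustrative Python sketch (not from the question) assuming the mysql-connector driver and a whitelist of known schema names; only the validated schema name is spliced into the statement text, everything else stays parameterized.

        import mysql.connector

        KNOWN_SCHEMAS = {"cdrs_east", "cdrs_west"}  # hypothetical schema names

        def fetch_unchecked_cdrs(conn, db_name):
            # A schema name cannot be a bind parameter, so validate it against a
            # whitelist before splicing it into the statement text.
            if db_name not in KNOWN_SCHEMAS:
                raise ValueError("unknown schema: %s" % db_name)
            query = (
                "SELECT cdrs_id, called, calling "
                "FROM `%s`.cdrs WHERE lrn_checked = %%s" % db_name
            )
            cur = conn.cursor()
            cur.execute(query, ("N",))
            return cur.fetchall()

        conn = mysql.connector.connect(user="app", password="secret", database="cdrs_east")
        rows = fetch_unchecked_cdrs(conn, "cdrs_east")

    The same idea works from any client language; the stored procedure then only has to handle the per-row processing, not the schema choice.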

    Read the article

  • OptimisticLockException in inner transaction ruins outer transaction

    - by Pace
    I have the following code (OLE = OptimisticLockException)... public void outer() { try { middle() } catch (OLE) { updateEntities(); outer(); } } @Transactional public void middle() { try { inner() } catch (OLE) { updateEntities(); middle(); } @Transactional public void inner() { //Do DB operation } inner() is called by other non-transactional methods which is why both middle() and inner() are transactional. As you can see, I deal with OLEs by updating the entities and retrying the operation. The problem I'm having is that when I designed things this way I was assuming that the only time one could get an OLE was when a transaction closed. This is apparently not the case as the call to inner() is throwing an OLE even when the stack is outer()->middle()->inner(). Now, middle() is properly handling the OLE and the retry succeeds but when it comes time to close the transaction it has been marked rollbackOnly by Spring. When the middle() method call finally returns the closing aspect throws an exception because it can't commit a transaction marked rollbackOnly. I'm uncertain what to do here. I can't clear the rollbackOnly state. I don't want to force create a transaction on every call to inner because that kills my performance. Am I missing something or can anyone see a way I can structure this differently? EDIT: To clarify what I'm asking, let me explain my main question. Is it possible to catch and handle OLE if you are inside of an @Transactional method? FYI: The transaction manager is a JpaTransactionManager and the JPA provider is Hibernate.

    Read the article

  • problems with retrieving data that was saved outside of a grails webflow

    - by callie16
    Hi, this is actually connected to an earlier question of mine here. Anyway, I have 3 domains that look like this: class A { ... static hasMany = [ b : B ] ... } class B { ... static belongsTo = [ a : A ] static hasMany = [ c : C ] ... } class C { ... static belongsTo = [ b : B ] ... } In my GSP page, I call an action in the Controller via a remote function in a javascript block (I'm using Dojo so I'm passing data to be saved this way... it's not a form per se so I use JSON for now to pass the data to the Controller). Let's say, I'm calling something like this: def someAction = { def jsonArr = [parse the JSON here] def tmpA = A.get(params.id) ... def tmpB = new B() b.someParam = jsonArr.someParam ... def tmpC = new C() tmpC.cParam = jsonArr.cParam tmpB.addToC(tmpC) tmpB.save(flush: true) //this may or may not be here but I'm adding it for the sake of completeness tmpA.addToB(tmpB) tmpA.save(flush: true) // NOTE: If I check here via println or whatnot, tmpA has a tmpB which has a tmpC... in other words, the data got saved. It's also in the DB. redirect(action: 'order' ...) } Then comes the fun part. Here's the webflow sample: def orderFlow = { ... someStateIShouldEndUpIn { on("next") { // or on previous... doesn't matter def anId = params.id def currA = A.get(anId) // this does NOT return a null value def testB = currA.b // this DOES return a null value }.to("somePage") ... } ... } Any ideas on why this happens? Moreover, when I dump the data of currA, b=null... instead of b=[] or b=[contents of tmpB]. Any help would be seriously appreciated... been at this for a couple of days now... Thanks!

    Read the article

  • Where do objects merge/join data in a 3-tier model?

    - by BerggreenDK
    It's probably a simple 3-tier problem. I just want to make sure we use the best practice for this, and I am not that familiar with the structures yet. We have the 3 tiers:

        GUI: ASP.NET for the presentation layer (first platform)
        BAL: business layer handling the logic on a web server in C#, so we can use it for both WebForms/MVC and web services
        DAL: LINQ to SQL in the data layer, returning business objects, not LINQ objects
        DB: the SQL will be Microsoft SQL Server/Express (haven't decided yet)

    Let's think of a setup where we have a database of [Persons]. They can all have multiple [Address]es, and we have a complete list of all [PostalCode]s and corresponding city names, etc. The deal is that we have joined a lot of details from other tables. {Relations}/[tables]:

        [Person]:1 --- N:{PersonAddress}:M --- 1:[Address]
        [Address]:N --- 1:[PostalCode]

    Now we want to build the DAL for Person. How should the PersonBO look, and when do the joins occur? Is it a business-layer problem to fetch all city names and possible addresses per Person, or should the DAL complete all this before returning the PersonBO to the BAL?

        Class PersonBO { public int ID {get;set;} public string Name {get;set;} public List<AddressBO> {get;set;} // Question #1 }
        // Q1: do we retrieve the objects before returning the PersonBO, and should it be an Array instead? Or is this totally wrong for n-tier/3-tier?

        Class AddressBO { public int ID {get;set;} public string StreetName {get;set;} public int PostalCode {get;set;} // Question #2 }
        // Q2: do we make the lookup or just leave the PostalCode for later lookup?

    Can anyone explain in what order to pull which objects? Constructive criticism is very welcome. :o)
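
    For what it's worth, a common approach is for the DAL to resolve the joins and hand fully populated business objects to the BAL, so nothing above the DAL ever needs to go back for city names. A rough, hypothetical Python sketch of that shape (the question itself is C#/LINQ to SQL, so this is only to illustrate the object graph and where the lookup happens):

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class AddressBO:
            id: int
            street_name: str
            postal_code: int
            city_name: str          # resolved from the PostalCode table by the DAL

        @dataclass
        class PersonBO:
            id: int
            name: str
            addresses: List[AddressBO] = field(default_factory=list)

        def load_person(person_row, address_rows):
            # The DAL joins Person -> PersonAddress -> Address -> PostalCode and
            # returns a complete PersonBO; the BAL only works with business objects.
            addresses = [
                AddressBO(a["id"], a["street"], a["postal_code"], a["city"])
                for a in address_rows
            ]
            return PersonBO(person_row["id"], person_row["name"], addresses)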

    Read the article

  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes) our current approach is using MySQL to create separate databases for every user with its own tables. So in other words, the databases our MySQL server contains look like: main-web-app-db (our web app containing tables for users account info, billing, etc) user_1_db (baseball_cards_table) user_2_db (recipes_table) .... And so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data in and then decide they want to change a column we'd do an "alter table ....". Now, the further along I get with building this out the more it seems like MySQL is poorly suited to handling this. 1) My first concern is that switching databases every request, first to our main app's database for authentication etc, and then to the user's personal database, is going to be inefficient. 2) The second concern I have is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more? 3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way so I do worry about how this affects things like replication and other methods of scaling. To me, it seems like MySQL wasn't built to be used in this way but what do I know. I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?
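
    To make the document-database alternative concrete, here is a small, hypothetical sketch using pymongo: every user-defined "database" becomes documents in one shared collection, so there is no CREATE DATABASE or ALTER TABLE per user and no per-request database switching. Collection and field names are invented for illustration.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client["main_web_app"]

        # One collection holds every user's records; the schema lives in the document.
        records = db["user_records"]

        records.insert_one({
            "user_id": 1,
            "collection_name": "baseball_cards",
            "fields": {"player": "Ken Griffey Jr.", "year": 1989, "condition": "mint"},
        })
        records.insert_one({
            "user_id": 2,
            "collection_name": "recipes",
            "fields": {"title": "Pad Thai", "prep_minutes": 30},
        })

        # "Adding a column" is just writing a new key on future documents.
        for doc in records.find({"user_id": 1, "collection_name": "baseball_cards"}):
            print(doc["fields"])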

    Read the article

  • Ajax request gets to server but page doesn't update - Rails, jQuery

    - by Jesse
    So I have a scenario where my jQuery ajax request is hitting the server, but the page won't update. I'm stumped... Here's the ajax request: $.ajax({ type: 'GET', url: '/jsrender', data: "id=" + $.fragment().nav.replace("_link", "") }); Watching the rails logs, I get the following: Processing ProductsController#jsrender (for 127.0.0.1 at 2010-03-17 23:07:35) [GET] Parameters: {"action"=>"jsrender", "id"=>"products", "controller"=>"products"} ... Rendering products/jsrender.rjs Completed in 651ms (View: 608, DB: 17) | 200 OK [http://localhost/jsrender?id=products] So, it seems apparent to me that the ajax request is getting to the server. The code in the jsrender method is being executed, but the code in the jsrender.rjs doesn't fire. Here's the method, jsrender: def jsrender @currentview = "shared/#{params[:id]}" respond_to do |format| format.js {render :template => 'products/jsrender.rjs'} end end For the sake of argument, the code in jsrender.rjs is: page<<"alert('this works!');" Why is this? I see in the params that there is no authenticity_token, but I have tried passing an authenticity_token as well with the same result. Thanks in advance.

    Read the article

  • Blocking error in celery

    - by dmitry
    I have no idea what's this. Python 2.7 + django-1.5.1 + httpd + rabbitmq + django-celery==3.0.17 Tasks are not executed because of some error. Below is celery's log. Maybe someone has faced it before. [2013-06-24 17:10:03,792: CRITICAL/MainProcess] Can't decode message body: AttributeError("'JoinInfo' object has no attribute '__dict__'",) (type:u'application/x-python-serialize' encoding:u'binary' raw:'\'\\x80\\x02}q\\x01(U\\x07expiresq\\x02NU\\x03utcq\\x03\\x88U\\x04argsq\\x04cdjango.contrib.auth.models\\nUser\\nq\\x05)\\x81q\\x06}q\\x07(U\\x08usernameq\\x08X\\x19\\x00\\x00\\[email protected]\\nfirst_nameq\\tX\\x05\\x00\\x00\\x00BibbyU\\tlast_nameq\\nX\\x08\\x00\\x00\\x00OffshoreU\\r_client_cacheq\\x0bccopy_reg\\n_reconstructor\\nq\\x0ccbongoregistration.models\\nClient\\nq\\rc__builtin__\\nobject\\nq\\x0eN\\x87Rq\\x0f}q\\x10(h\\nX\\x08\\x00\\x00\\x00OffshoreU\\x1bpurchase_confirmation_emailq\\x11X\\x1f\\x00\\x00\\[email protected]\\x1dpurchase_confirmation_email_1q\\x12X!\\x00\\x00\\[email protected]\\x06_stateq\\x13cdjango.db.models.base\\nModelState\\nq\\x14)\\x81q\\x15}q\\x16(U\\x06addingq\\x17\\x89U\\x02dbq\\x18U\\x07defaultq\\x19ubU\\x0buser_ptr_idq\\x1aJ\\xb4\\xa2\\x03\\x00U\\x08is_staffq\\x1b\\x89U\\x08postcodeq\\x1cX\\x08\\x00\\x00\\x00AB11 5BSU\\x0cdegree_limitq\\x1dK\\x06U\\x07messageq\\x1eX\\xd1E\\x00\\x00<table id="container" style="margin: 0px; padding: 0px; width: 100%; background-color: #ffffff;" cellspacing="0" cellpadding="0"... (22911b)'') Traceback (most recent call last): File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/messaging.py", line 556, in _receive_callback decoded = None if on_m else message.decode() File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/transport/base.py", line 147, in decode self.content_encoding, accept=self.accept) File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/serialization.py", line 187, in decode return decode(data) File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/serialization.py", line 74, in pickle_loads return load(BytesIO(s)) AttributeError: 'JoinInfo' object has no attribute '__dict__'
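
    The traceback appears to show a Django User (with a cached Client/JoinInfo object hanging off it) being pickled into the message body and failing to unpickle on the worker. The usual advice is to pass only primitive, serializable values (IDs) as task arguments and re-fetch the objects inside the task. A hedged sketch of that pattern, with an invented task name and import path for JoinInfo:

        from celery import Celery

        app = Celery("tasks", broker="amqp://guest@localhost//")

        @app.task
        def send_join_email(user_id, join_info_id):
            # Resolve models on the worker instead of unpickling complex objects
            # out of the message body.
            from django.contrib.auth.models import User
            from bongoregistration.models import JoinInfo  # hypothetical import path

            user = User.objects.get(pk=user_id)
            join_info = JoinInfo.objects.get(pk=join_info_id)
            # ... build and send the email ...

        # Caller passes plain integers, not model instances:
        # send_join_email.delay(user.pk, join_info.pk)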

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and others are scheduled to run (DB backups, AV scans, etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive; they don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionality (there is already an internal scheduler that does very little, and it makes no difference). If there were a 3rd-party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations, and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
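
    One low-tech version of the "external watchdog" idea is a scheduled job that pings the service during idle hours so its working set never goes completely cold. A minimal sketch in Python (the service itself is C++/VC2005; the endpoint URL and interval here are assumptions):

        import time
        import urllib.request

        SERVICE_URL = "http://appserver:8080/keepalive"  # hypothetical health endpoint
        INTERVAL_SECONDS = 15 * 60                       # every 15 minutes overnight

        def ping_forever():
            while True:
                try:
                    with urllib.request.urlopen(SERVICE_URL, timeout=120) as resp:
                        print("keepalive:", resp.status)
                except OSError as exc:
                    print("keepalive failed:", exc)
                time.sleep(INTERVAL_SECONDS)

        if __name__ == "__main__":
            ping_forever()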

    Read the article

  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending headers with a TCP window size of 0 at times when the network had congestion (at a client's site). We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput, and we would like to be able to act when it drops to 0 (nothing gets sent, users wait = no good). So, any info, links, or pointers to the Indy code are welcome... Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-) Note: it's Indy9/D2007 on Windows Server 2003 SP2. More gory details: the TCP zero window cases happen on the middle tier talking to the DB server. It happens at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not have been caused by it. We want to know when that happens and have a way to do something (logging at least) in our code. So where do we hook in (in Indy?) to know when that condition occurs?

    Read the article

  • INSERT INTO table doesn't work???

    - by Joann
    I found a tutorial on Nettuts and it has source code with it, so I tried implementing it on my site. It is working now. However, it doesn't have a registration system, so I am making one. The thing is, as I expected, my code is not working... It doesn't seem to know how to INSERT into the database. Here's the function that inserts data into the db:

        function register_User($un, $email, $pwd) {
            $query = "INSERT INTO users( username, password, email ) VALUES(:uname, :pwd, :email) LIMIT 1";
            if($stmt = $this->conn->prepare($query)) {
                $stmt->bind_param(':uname', $un);
                $stmt->bind_param(':pwd', $pwd);
                $stmt->bind_param(':email', $email);
                $stmt->execute();
                if($stmt->fetch()) {
                    $stmt->close();
                    return true;
                }
                else return "The username or email you entered is already in use...";
            }
        }

    I have debugged the connection to the database from within the class; it says it's connected. I tried using this method instead:

        function register($un, $email, $pwd) {
            $registerquery = $this->conn->query(
                "INSERT INTO users(uername, password, email) VALUES('".$un."', '".$pwd."', '".$email."')");
            if($registerquery) {
                echo "<h4>Success</h4>";
            } else {
                echo "<h4>Error</h4>";
            }
        }

    And it echoes "Error"... Can you please help me pinpoint the error in this??? :(

    Read the article

  • Spring Security ACL: NotFoundException from JDBCMutableAclService.createAcl

    - by user340202
    Hello, I've been working on this task for too long to abandon the idea of using Spring Security to achieve it, but I hope the community can provide some support that will help reduce the regret I have about choosing Spring Security. Enough ranting; now let's get to the point. I'm trying to create an ACL by using JDBCMutableAclService.createAcl as follows:

        public void addPermission(IWFArtifact securedObject, Sid recipient, Permission permission, Class clazz) {
            ObjectIdentity oid = new ObjectIdentityImpl(clazz.getCanonicalName(), securedObject.getId());
            this.addPermission(oid, recipient, permission);
        }

        @Override
        @Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_UNCOMMITTED, readOnly = false)
        public void addPermission(ObjectIdentity oid, Sid recipient, Permission permission) {
            SpringSecurityUtils.assureThreadLocalAuthSet();
            MutableAcl acl;
            try {
                acl = this.mutableAclService.createAcl(oid);
            } catch (AlreadyExistsException e) {
                acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            }
            // try {
            //     acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            // } catch (NotFoundException nfe) {
            //     acl = this.mutableAclService.createAcl(oid);
            // }
            acl.insertAce(acl.getEntries().length, permission, recipient, true);
            this.mutableAclService.updateAcl(acl);
        }

    The call throws a NotFoundException from the line:

        // Retrieve the ACL via superclass (ensures cache registration, proper retrieval etc)
        Acl acl = readAclById(objectIdentity);

    I believe this is caused by something related to Transactional, which is why I have tested with many TransactionDefinition attributes. I have also doubted the annotation and tried a declarative transaction definition, but still with no luck. One important point is that I have taken the statement used earlier in the method to insert the oid into the database and run it directly against the database, where it worked; it also threw a unique constraint exception at me when the method then tried to insert it. I'm using Spring Security 2.0.8 and ICEfaces 1.8 (which doesn't support Spring 3.0 but definitely supports 2.0.x, especially when I keep calling SpringSecurityUtils.assureThreadLocalAuthSet()). My app server is Tomcat 6.0, and my DB server is MySQL 6.0. I hope to get a reply soon because I need to get this task out of the way.

    Read the article

  • Returning data from SQL Server reporting web service call

    - by user79339
    Hi, I am generating a report that contains the version number. The version number is stored in the DB and retrieved/incremented as part of the report generation. The only problem is, I am calling SSRS via a web service call, which returns the generated report as a byte array. Is there any way to get the version number out of this generated report? For example, to display a dialog that says "You generated Status Report, Version number 3". I tried passing in an output parameter and setting it inside the stored proc. It's modified when I execute it in SQL Management Studio, but not in the reporting studio. Or at least I can't seem to bind to the modified, post-execution value (using the expression "=Parameters!ReportVersion.Value"). Of course, I could get/increment the version number from the database myself before calling the SSRS web service and pass it along as a parameter to the report, but that might cause concurrency problems. On the whole, it just seems neater for the stored proc to access/generate a version number and return it to the reporting engine, which will generate the report with the version number and return the updated version number to the web service client. Any thoughts/ideas?

    Read the article

  • Android Maven and Refresh Problem

    - by antonio Musella
    Hi all, I have a strange problem with Maven and Android. I have 3 Android Maven projects and 2 normal Java Maven projects, divided in this manner:

    Normal projects:
        Model project ... packaged as jar ... contains Java POJO beans and interfaces.
        DAO project ... packaged as jar ... contains DB logic. Depends on the model project.

    Android application Maven projects:
        ContentProvider ... packaged as apk ... contains ContentProviders only. Depends on the DAO project.
        Editors ... packaged as apk ... contains only editors. Depends on the DAO project.
        MainApp ... packaged as apk ... contains MyApp. Depends on the DAO.

    The problem is that if I modify the DAO project, then do a Maven clean and Maven install of all the APK projects, then Run As > Android Application within Eclipse, I don't see the updated app on my emulator. Strangely, if I shut down my Ubuntu workstation and restart it, I can see the updated app on my emulator. Do you know a solution for this issue? Thanks and regards.

    Read the article

  • Dynamically generate client-side HTML form control using JavaScript and server-side Python code in Google App Engine

    - by gisc
    I have the following client-side front-end HTML using Jinja2 template engine: {% for record in result %} <textarea name="remark">{{ record.remark }}</textarea> <input type="submit" name="approve" value="Approve" /> {% endfor %} Thus the HTML may show more than 1 set of textarea and submit button. The back-end Python code retrieves a variable number of records from a gql query using the model, and pass this to the Jinja2 template in result. When a submit button is clicked, it triggers the post method to update the record: def post(self): if self.request.get('approve'): updated_remark = self.request.get('remark') record.remark = db.Text(updated_remark) record.put() However, in some instances, the record updated is NOT the one that correspond to the submit button clicked (eg if a user clicks on record 1 submit, record 2 remark gets updated, but not record 1). I gather that this is due to the duplicate attribute name remark. I can possibly use JavaScript/jQuery to generate different attribute names. The question is, how do I code the back-end Python to get the (variable number of) names generated by the JavaScript? Thanks.
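
    One common way to close the loop is to have the template (or the Dojo/JavaScript layer) suffix each field name with the record's key, so the handler can recover which record a given submit belongs to. Below is a hedged sketch of the handler side only, assuming the classic webapp RequestHandler API, a hypothetical Record model, and generated field names like remark_<id> / approve_<id>:

        from google.appengine.ext import db, webapp

        class Record(db.Model):            # hypothetical model behind the view
            remark = db.TextProperty()

        class ApproveHandler(webapp.RequestHandler):
            def post(self):
                # Field names are generated per record, e.g. remark_42 / approve_42,
                # so scan the posted arguments for whichever approve button was clicked.
                for name in self.request.arguments():
                    if not name.startswith("approve_"):
                        continue
                    record_id = int(name[len("approve_"):])
                    record = Record.get_by_id(record_id)
                    record.remark = db.Text(self.request.get("remark_%d" % record_id))
                    record.put()

    On the template side this only requires naming the controls with the record key (for example name="remark_{{ record.key.id }}"), which Jinja2 can do without any JavaScript at all.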

    Read the article

  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in Unit testing, which is why the following issue is giving me pause: One area where I struggle not to use the Singleton is the Identity Map/Unit of Work pattern (Which keeps tabs on Domain Object state). //Not actual code, but it should demonstrate the point class Monitor{//singleton construction omitted for brevity static $members = array();//keeps record of all objects static $dirty = array();//keeps record of all modified objects static $clean = array();//keeps record of all clean objects } class Mapper{//queries database, maps values to object fields public function find($id){ if(isset(Monitor::members[$id]){ return Monitor::members[$id]; } $values = $this->selectStmt($id); //field mapping process omitted for brevity $Object = new Object($values); Monitor::new[$id]=$Object return $Object; } $User = $UserMapper->find(1);//domain object is registered in Id Map $User->changePropertyX();//object is marked "dirty" in UoW // at this point, I can save by passing the Domain Object back to the Mapper $UserMapper->save($User);//object is marked clean in UoW //but a nicer API would be something like this $User->save(); //but if I want to do this - it has to make a call to the mapper/db somehow $User->getBlogPosts(); //or else have to generate specific collection/object graphing methods in the mapper $UserPosts = $UserMapper->getBlogPosts(); $User->setPosts($UserPosts); Any advice on how you might handle this situation? I would be loathe to pass/generate instances of the mapper/database access into the Domain Object itself to satisfy DI - At the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour then a facility to do so is required in its construction. Perhaps it's a problem with responsibility, the Domain Object shouldn't be burdened with saving. It's just quite a neat feature from the Active Record pattern - it would be nice to implement it in some way.
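
    One way to get rid of the static Monitor is to make the unit of work an ordinary object that is injected into each mapper, and, if the domain object must expose save(), injected into it as a collaborator as well. A rough Python sketch of that shape, purely illustrative since the question is PHP and the DB gateway method is invented:

        class UnitOfWork:
            def __init__(self):
                self.clean, self.dirty, self.new = {}, {}, {}

            def register_new(self, obj):
                self.new[obj.id] = obj

            def register_dirty(self, obj):
                self.dirty[obj.id] = obj

        class UserMapper:
            def __init__(self, connection, unit_of_work):
                self.connection = connection        # injected, not a singleton
                self.uow = unit_of_work

            def find(self, user_id):
                if user_id in self.uow.clean:       # identity map lookup
                    return self.uow.clean[user_id]
                row = self.connection.select_user(user_id)   # hypothetical gateway
                user = User(row["id"], row["name"], mapper=self)
                self.uow.clean[user.id] = user
                return user

        class User:
            def __init__(self, id, name, mapper):
                self.id, self.name = id, name
                self._mapper = mapper               # lets user.save() delegate

            def save(self):
                self._mapper.uow.register_dirty(self)

    Whether the domain object should carry the mapper at all is the responsibility question raised above; the alternative is to keep save() on a service or repository and leave the domain object persistence-ignorant.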

    Read the article

  • treeview dynamically populated

    - by Laziale
    Hello everyone - I have this treeview control where I want to display the files uploaded to the server. I want to be able to create the nodes and the child nodes dynamically from the database. I am using this query to get the data from the DB:

        SELECT c.Category, d.DocumentName FROM Categories c INNER JOIN DocumentUserFile d ON c.ID = d.CategoryId WHERE d.UserId = '9rge333a-91b5-4521-b3e6-dfb49b45237c'

    The result from that query is this:

        Agendas  transactions.pdf
        Minutes  accounts.pdf

    I want to have the treeview sorted that way too. I am trying with this piece of code:

        TreeNode tn = new TreeNode();
        TreeNode tnSub = new TreeNode();
        foreach (DataRow dt in tblTreeView.Rows)
        {
            tn.Text = dt[0].ToString();
            tn.Value = dt[0].ToString();
            tnSub.Text = dt[1].ToString();
            tnSub.NavigateUrl = "../downloading.aspx?file=" + dt[1].ToString() + "&user=" + userID;
            tn.ChildNodes.Add(tnSub);
            tvDocuments.Nodes.Add(tn);
        }

    I am getting the treeview populated nicely for the first category and the document under that category, but I can't get it to work when I want to show more documents under that category, or, even more complicated, to show a new category beneath the first one with documents from that category. How can I solve this? I appreciate the answers a lot. Thanks, Laziale
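
    The likely issue is that a single tn / tnSub pair is created outside the loop and reused for every row; each category needs its own parent node and each document its own child node. The grouping step itself is easy to see in a language-neutral way. Here is a small Python sketch (illustrative only, the actual control is ASP.NET) that groups the query rows by category before building nodes:

        from collections import defaultdict

        rows = [
            ("Agendas", "transactions.pdf"),
            ("Minutes", "accounts.pdf"),
            ("Agendas", "minutes_2010.pdf"),
        ]

        # Group documents under their category: one parent node per category,
        # one child node per document.
        tree = defaultdict(list)
        for category, document in rows:
            tree[category].append(document)

        for category, documents in tree.items():
            print(category)                # would become the parent TreeNode
            for document in documents:
                print("   ", document)     # would become a new child TreeNode per file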

    Read the article

  • [MFC] What is the reciprocal of CComboBox.GetItemData?

    - by Hamish Grubijan
    Instead of associating objects with Combo Box items, I associate long ids representing choices. They come from a database, so it seems natural to do so anyway. Now, I persist the id and not the index of the user's selection, so that the choice is remembered across sessions. If id no longer exists in database - no big deal. The choice will be messed up once. If db does not change, however, then it would be a great success ;) Here is how I get the id : chosenSomethingIndex = cmbSomething.GetCurSel(); lastSomethingId = cmbSomething.GetItemData(chosenSomethingIndex); How do I reverse this? When I load the stored value for user's last choice, I need to convert that id into an index. I can do: cmbSomething.SetCurSel(chosenSomethingIndex); However, how can I attempt (it might not exist) to get an index once I have an id? I am looking for a reciprocal function to GetItemData I am using VS2008, probably latest version of MFC, whatever that is. Thank you.

    Read the article

  • How to implement message queuing and handling in AWS with NServiceBus

    - by Pete Lunenfeld
    I am creating a new ASP MVC order application in the Amazon (AWS) cloud with the persistence layer at my local datacenter. I will be using the CQRS pattern. The goal of the project is high availability, using queue(s) to store and forward writes (commands/events) that can be picked up and handled asynchronously at my local datacenter. Then, if the WAN or my local datacenter fails, my cloud MVC app can still take orders and just queue them up until processing can resume. My first thought was to use AWS SQS for the queuing and create my own queue consumer/dispatcher/handler in my own C# application to process the incoming messages/events: MVC (@ Amazon) -- Event/POCO -- SQS -- QueueReader (@ my datacenter) -- DB. Then I found NServiceBus. NSB seems to handle lots of details very nicely: message handling, retries, error handling, etc. I hate to reinvent the wheel, and NServiceBus seems like a full-featured and mature product that would be perfect for me. But on further research, it does NOT look like NServiceBus is really meant to be used over the WAN in physically separated environments (cloud to my datacenter). Google and SO don't really paint a good picture of using NServiceBus across the WAN like I need. How can I use NServiceBus across the WAN? Or is there a better solution to handle queuing and message handling between Amazon and my local datacenter?
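
    For comparison with the NServiceBus route, the original SQS store-and-forward idea is straightforward to sketch. This is a minimal, hypothetical example with boto3 (queue URL and message shape invented): the cloud web app enqueues order commands, and a reader at the datacenter drains the queue whenever the WAN link is up.

        import json
        import boto3

        sqs = boto3.client("sqs", region_name="us-east-1")
        QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical

        def enqueue_order(order):
            # Cloud-side web app: accept the order even if the datacenter is unreachable.
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

        def drain_queue(handle_order):
            # Datacenter-side reader: long-poll, handle, then delete on success.
            while True:
                resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                                           WaitTimeSeconds=20)
                for msg in resp.get("Messages", []):
                    handle_order(json.loads(msg["Body"]))
                    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    SQS delivers at-least-once, so the datacenter-side handler needs to tolerate duplicate orders regardless of which messaging product sits in front of it.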

    Read the article

  • If we make a number every millisecond, how much data would we have in a day?

    - by Roger Travis
    I'm a bit confused here... I'm being offered a chance to get into a project where there would be an array of sensors, each giving off a reading every millisecond (yes, 1,000 readings per second). Each reading would be a 3- or 4-digit number, for example 818 or 1529. These readings need to be stored in a database on a server and accessed remotely. I have never worked with such large amounts of data. What do you think: how much data, in MB, would readings from one sensor produce in a day? 4 (digits) x 1000 x 60 x 60 x 24 ... = 345,600,000 bits ... right? About 42 MB per day... doesn't seem too bad, right? Therefore a DB of, say, 1 GB would hold 23 days of info from 1 sensor, correct? I understand that MySQL & PHP probably would not be able to handle it... What would you suggest, maybe some apps? Azure? Oracle? ... Thanks!
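
    As a sanity check on the arithmetic: 345,600,000 is right as a count of digit characters per day, but a digit stored as text occupies a byte rather than a bit, so the raw volume is closer to 330 MB per sensor per day, or about half that if each reading is kept as a 2-byte integer. A quick back-of-the-envelope calculation:

        readings_per_day = 1000 * 60 * 60 * 24        # 86,400,000 readings per sensor
        as_text  = readings_per_day * 4               # 4 ASCII digits -> 4 bytes each
        as_int16 = readings_per_day * 2               # a 2-byte integer covers 0..65535

        print(as_text  / 1024.0 ** 2)   # ~329.6 MB/day stored as plain text
        print(as_int16 / 1024.0 ** 2)   # ~164.8 MB/day stored as 16-bit integers

    On those figures a 1 GB database holds roughly three days of text data, or about six days as 2-byte integers, before indexes and row overhead are counted.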

    Read the article

  • Advanced search engine or server for relational database [closed]

    - by Pawel
    In my current project we are storing a large volume of data in a relational database. One of the recent key requirements is to enrich the application with some advanced search capabilities. Performance is an important factor, due to very large tables (10+ million records) with parent-child relations (for example, a multi-level parent-child relationship where I am looking for all parents with specific children). The search engine should also be able to check these references for hits. I have found some potential engines on Stack Overflow; however, it looks like all of them are geared more toward text search than relational databases, and are hosted on Linux: Lucene, Solr, Sphinx. As I understand it, some of them use documents as the source for searching, but is it possible or efficient to programmatically create documents based on my relational data? As I am not familiar with all of their features/capabilities, can anyone make some recommendations or propose a different solution? To summarize my requirements: a framework/engine to search a relational database, including descendants; support for Microsoft SQL Server; usable from .NET applications; preferably hosted on Windows. Do any of the engines mentioned above solve my problem? Do you know of a better solution?
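
    On the "can I build documents programmatically from relational data" point: this denormalization step is the standard approach when indexing a relational schema with Lucene/Solr-style engines; each parent is flattened together with its descendants into one document at index time, so "parents with specific children" becomes a plain field query. A small, hypothetical Python sketch of the flattening step (table and field names invented):

        from collections import defaultdict

        parents  = [{"id": 1, "name": "Order 1001"}, {"id": 2, "name": "Order 1002"}]
        children = [{"parent_id": 1, "sku": "A-17"}, {"parent_id": 1, "sku": "B-02"},
                    {"parent_id": 2, "sku": "C-55"}]

        children_by_parent = defaultdict(list)
        for child in children:
            children_by_parent[child["parent_id"]].append(child["sku"])

        # One denormalized document per parent; the search engine indexes these.
        documents = [
            {"id": p["id"], "name": p["name"], "child_skus": children_by_parent[p["id"]]}
            for p in parents
        ]
        print(documents)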

    Read the article

  • Is there a better way to write this Frankenstein LINQ query that searches for values in a child table?

    - by MRV
    I have a table of Users and a one to many UserSkills table. I need to be able to search for users based on skills. This query takes a list of desired skills and searches for users who have those skills. I want to sort the users based on the number of desired skills they posses. So if a users only has 1 of 3 desired skills he will be further down the list than the user who has 3 of 3 desired skills. I start with my comma separated list of skill IDs that are being searched for: List<short> searchedSkillsRaw = skills.Value.Split(',').Select(i => short.Parse(i)).ToList(); I then filter out only the types of users that are searchable: List<User> users = (from u in db.Users where u.Verified == true && u.Level > 0 && u.Type == 1 && (u.UserDetail.City == city.SelectedValue || u.UserDetail.City == null) select u).ToList(); and then comes the crazy part: var fUsers = from u in users select new { u.Id, u.FirstName, u.LastName, u.UserName, UserPhone = u.UserDetail.Phone, UserSkills = (from uskills in u.UserSkills join skillsJoin in configSkills on uskills.SkillId equals skillsJoin.ValueIdInt into tempSkills from skillsJoin in tempSkills.DefaultIfEmpty() where uskills.UserId == u.Id select new { SkillId = uskills.SkillId, SkillName = skillsJoin.Name, SkillNameFound = searchedSkillsRaw.Contains(uskills.SkillId) }), UserSkillsFound = (from uskills in u.UserSkills where uskills.UserId == u.Id && searchedSkillsRaw.Contains(uskills.SkillId) select uskills.UserId).Count() } into userResults where userResults.UserSkillsFound > 0 orderby userResults.UserSkillsFound descending select userResults; and this works! But it seems super bloated and inefficient to me. Especially the secondary part that counts the number of skills found. Thanks for any advice you can give. --r

    Read the article

  • How to make write operation idempotent?

    - by Morgan Cheng
    I'm reading an article about the recently released Gizzard sharding framework from Twitter (http://engineering.twitter.com/2010/04/introducing-gizzard-framework-for.html). It mentions that all write operations must be idempotent to ensure high reliability. According to Wikipedia, "Idempotent operations are operations that can be applied multiple times without changing the result." But, IMHO, in Gizzard's case, idempotent write operations should really be operations whose sequence doesn't matter (i.e., commutative). Now, my question is: how do you make a write operation idempotent? The only thing I can imagine is to have a version number attached to each write. For example, in a blog system, each blog must have a $blog_id and $content. At the application level, we always write blog content like this: write($blog_id, $content, $version). The $version is guaranteed to be unique at the application level. So, if the application first tries to set one blog to "Hello world" and then wants it to be "Goodbye", the writes are idempotent. We have these two write operations: write($blog_id, "Hello world", 1); write($blog_id, "Goodbye", 2); These two operations are supposed to change two different records in the DB. So, no matter how many times and in what sequence these two operations are executed, the results are the same. This is just my understanding. Please correct me if I'm wrong.
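
    The version-per-write idea can be made concrete as a conditional "insert if absent" keyed on (blog_id, version): replaying the same write any number of times leaves the store unchanged, and the highest version wins regardless of arrival order. A small illustrative sketch (in-memory stand-in for the real store):

        class BlogStore:
            def __init__(self):
                self.writes = {}    # (blog_id, version) -> content

            def write(self, blog_id, content, version):
                # Idempotent: replaying an already-applied (blog_id, version) is a no-op.
                self.writes.setdefault((blog_id, version), content)

            def current(self, blog_id):
                versions = [v for (b, v) in self.writes if b == blog_id]
                return self.writes[(blog_id, max(versions))] if versions else None

        store = BlogStore()
        store.write(7, "Hello world", 1)
        store.write(7, "Goodbye", 2)
        store.write(7, "Hello world", 1)     # duplicate delivery, no effect
        print(store.current(7))              # -> "Goodbye", in any delivery order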

    Read the article

  • How do I tweak columns in a Flat File Destination in SSIS?

    - by theog
    I have an OLE DB Data source and a Flat File Destination in the Data Flow of my SSIS project. The goal is simply to pump data into a text file, and it does that. Where I'm having problems is with the formatting. I need to be able to rtrim() a couple of columns to remove trailing spaces, and I have a couple more that need their leading zeros preserved. The current process is losing all the leading zeros. The rtrim() can be done by simple truncation and ignoring the truncation errors, but that's very inelegant and error-prone. I'd like to find a better way, like actually doing the rtrim() function where needed. Exploring similar SSIS questions and answers on SO, the thing to do seems to be "Use a Script Task", but that's usually just thrown out there with no details, and it's not at all an intuitive thing to set up. I don't see how to use scripting to do what I need. Do I use a Script Task on the Control Flow, or a Script Component in the Data Flow? Can I do rtrim() and pad strings where needed in a script? Anybody got an example of doing this or similar things? Many thanks in advance.

    Read the article

  • Interesting LinqToSql behaviour

    - by Ben Robinson
    We have a database table that stores the location of some wave files plus related metadata. There is a foreign key (employeeid) on the table that links to an employee table. However, not all wav files relate to an employee; for these records employeeid is null. We are using LINQ to SQL to access the database, and the query to pull out all non-employee-related wav file records is as follows: var results = from Wavs in db.WaveFiles where Wavs.employeeid == null; Except this returns no records, despite the fact that there are records where employeeid is null. On profiling SQL Server, I discovered the reason no records are returned is that LINQ to SQL is turning it into SQL that looks very much like: SELECT Field1, Field2 //etc FROM WaveFiles WHERE 1=0. Obviously this returns no rows. However, if I go into the DBML designer, remove the association, and save, all of a sudden the exact same LINQ query turns into: SELECT Field1, Field2 //etc FROM WaveFiles WHERE EmployeeID IS NULL. In other words, if there is an association then LINQ to SQL assumes that all records have a value for the foreign key (even though it is nullable and the property appears as a nullable int on the WaveFile entity) and as such deliberately constructs a where clause that will return no records. Does anyone know if there is a way to keep the association in LINQ to SQL but stop this behaviour? A workaround I can think of quickly is to have a calculated field called IsSystemFile and set it to 1 if employeeid is null and 0 otherwise. However, this seems like a bit of a hack to work around strange behaviour of LINQ to SQL, and I would rather do something in the DBML file or define something on the foreign key constraint that will prevent this behaviour.

    Read the article

  • Why does Rails screw up timezones when I am editing a resource?

    - by DJTripleThreat
    Steps to reproduce this:

        prompt> rails test_app
        prompt> cd test_app
        prompt> script/generate scaffold date_test my_date:datetime
        prompt> rake db:migrate

    Now edit your app/views/date_tests/edit.html.erb:

        <h1>Editing date_test</h1>
        <% form_for(@date_test) do |f| %>
          <%= f.error_messages %>
          <p>
            RIGHT!<br/>
            <%= text_field_tag @date_test, f.object.my_date %>
          </p>
          <p>
            WRONG!<br />
            <%= f.text_field :my_date %>
          </p>
          <p>
            <%= f.submit 'Update' %>
          </p>
        <% end %>
        <%= link_to 'Show', @date_test %> |
        <%= link_to 'Back', date_tests_path %>

    Now edit your config/environment.rb:

        # add this
        config.time_zone = 'Central Time (US & Canada)'

    This recreates the problem I am having in my actual app. The problem with my app is that I'm storing a date in a hidden field and rendering a "user friendly" version. Creating a resource works fine, but as soon as I try to edit it the time changes (it adds the difference between my current time zone configuration and UTC). Go to http://localhost:3000/date_tests/new and save the time, then go to re-edit it, and you will have two different representations of the date/time, one of which will save incorrectly and the other of which will save correctly.

    Read the article
