Search Results

Search found 7957 results on 319 pages for 'production databases'.


  • How Can I Find What's Causing My Transaction to Get Promoted?

    - by Damian Powell
    I have a web site serving web services (a mixture of .asmx and WCF) that mostly uses LINQ to SQL and System.Transactions. Occasionally we see a transaction get promoted to a distributed transaction, which causes problems because our web servers are isolated from our databases in such a way that it is not possible for us to use MSDTC. I have configured tracing for System.Transactions by adding the following to my web.config:

    ```xml
    <system.diagnostics>
      <sources>
        <source name="System.Transactions" switchValue="Information">
          <listeners>
            <add name="tx"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="tx.log" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>
    ```

    It's very interesting and shows me when the transaction is promoted, but I find that it doesn't really help me discover why. Is there an equivalent tracing mechanism for ADO.NET that will show me when connections are created, including the variables that affect pooling (user, connection string, transaction scope)?
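
    ADO.NET 2.0 does have an ETW-based data access tracing facility, but it is fiddly to set up; a quicker way to correlate connections with promotions is to funnel connection creation through a logging helper. A minimal sketch (the factory class below is hypothetical, not part of ADO.NET); DistributedIdentifier stays Guid.Empty until the transaction is actually promoted, so the log shows exactly which connection tipped it over:

    ```csharp
    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;
    using System.Transactions;

    static class TracedConnectionFactory
    {
        // Open every connection through here so each one is logged together
        // with the ambient System.Transactions state.
        public static SqlConnection Open(string connectionString)
        {
            var cnn = new SqlConnection(connectionString);
            cnn.Open();
            Transaction tx = Transaction.Current;
            Trace.WriteLine(string.Format(
                "Opened connection to {0}; local tx = {1}; distributed tx = {2}",
                cnn.DataSource,
                tx == null ? "(none)" : tx.TransactionInformation.LocalIdentifier,
                tx == null ? "(none)" : tx.TransactionInformation.DistributedIdentifier.ToString()));
            return cnn;
        }
    }
    ```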


  • Access ControllerContext/TempData from business objects

    - by thanikkal
    I am trying to build a session/tempdata provider that can be swapped out. The default provider will work on top of ASP.NET MVC, and it needs to access the MVC TempData from a business object class. I know TempData is available through the controller context, but I can't seem to find whether that is exposed through HttpContext or something similar. I don't really want to pass the ControllerContext as an argument, as that would dilute my interface definition: only the ASP.NET-based session provider needs it; others (using a NoSQL DB etc.) don't care about the ControllerContext. To clarify further, here is a little more code. My ISession interface looks like this; when this code goes to production, the session/tempdata is expected to work using a NoSQL DB, but I would also like another implementation that works on top of the ASP.NET MVC session/tempdata for my dev testing etc.:

    ```csharp
    public interface ISession
    {
        T GetTempData<T>(string key);
        void PutTempData<T>(string key, T value);
        T GetSessiondata<T>(string key);
        void PutSessiondata<T>(string key, T value);
    }
    ```
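
    For the dev-time implementation, one way to avoid taking a ControllerContext dependency at all is to back both halves of the interface with HttpContext.Current.Session, emulating TempData's read-once semantics with a key prefix. A rough sketch under that assumption (class name and prefix are illustrative, and this is an emulation, not real MVC TempData):

    ```csharp
    public class AspNetSession : ISession
    {
        private const string TempPrefix = "__tempdata_";

        public T GetTempData<T>(string key)
        {
            var bag = System.Web.HttpContext.Current.Session;
            var value = (T)bag[TempPrefix + key];
            bag.Remove(TempPrefix + key); // TempData semantics: gone after one read
            return value;
        }

        public void PutTempData<T>(string key, T value)
        {
            System.Web.HttpContext.Current.Session[TempPrefix + key] = value;
        }

        public T GetSessiondata<T>(string key)
        {
            return (T)System.Web.HttpContext.Current.Session[key];
        }

        public void PutSessiondata<T>(string key, T value)
        {
            System.Web.HttpContext.Current.Session[key] = value;
        }
    }
    ```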


  • PHPMailer attachments

    - by Matthew
    I have been using this script to send emails to certain staff, but because of changes to my system I now have to send attachments with the emails. I have tried multiple pieces of code to accomplish this but have been unsuccessful: I still receive the email, but without the attachment, which is quite pointless in this case. I have placed the script I am using below (I have removed the real addresses and SMTP server):

    ```php
    require("PHPMailer/class.phpmailer.php");

    $mail = new PHPMailer();
    $mail->IsSMTP();                            // set mailer to use SMTP
    $mail->Host = "SMTP.SErver.com";
    $mail->From = "[email protected]";
    $mail->FromName = "HCSC";
    $mail->AddAddress("[email protected]", "Example");
    $mail->AddReplyTo("[email protected]", "Hcsc");
    $mail->WordWrap = 50;
    $mail->IsHTML(false);
    $mail->Subject = "AuthSMTP Test";
    $mail->Body = "AuthSMTP Test Message!";
    // Basically what I am trying to attach as a test; this will be Excel
    // spreadsheets in production.
    $mail->AddAttachment("matt.txt");

    if (!$mail->Send()) {
        echo "Message could not be sent. <p>";
        echo "Mailer Error: " . $mail->ErrorInfo;
        exit;
    }
    echo "Message has been sent";
    ```

    I have also tried a few other methods of attaching the file, but none seem to work. Any help is greatly appreciated.
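
    Two things are worth checking before blaming PHPMailer: whether the file is readable from the script's working directory, and the return value of AddAttachment(), which is false when the file cannot be read. A small debugging sketch (the path is illustrative):

    ```php
    $path = dirname(__FILE__) . '/matt.txt';   // absolute path rules out cwd issues
    if (!file_exists($path)) {
        die("Attachment not found: $path");
    }
    if (!$mail->AddAttachment($path, 'matt.txt')) {
        die('AddAttachment failed: ' . $mail->ErrorInfo);
    }
    ```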


  • Do database engines other than SQL Server behave this way?

    - by Yishai
    I have a stored procedure that goes something like this (pseudo-code):

    ```sql
    storedprocedure param1, param2, param3, param4
    begin
        if (param4 = 'Y')
        begin
            select * from SOME_VIEW order by somecolumn
        end
        else if (param1 is null)
        begin
            select * from SOME_VIEW
            where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
              and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
            order by somecolumn
        end
        else
            select somethingcompletelydifferent
    end
    ```

    All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this:

    ```sql
    storedprocedure param1, param2, param3, param4
    begin
        if (param4 = 'Y')
        begin
            set param2 = null
            set param3 = null
        end
        if (param1 is null)
        begin
            select * from SOME_VIEW
            where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
              and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
            order by somecolumn
        end
        else
            select somethingcompletelydifferent
    end
    ```

    it runs again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question is whether this particular "feature" is specific to SQL Server, or common among RDBMSs in general: queries that ran fine for two years just stop working as the data grows, and the "new" execution plan destroys the ability of the database server to execute the query even though a logically equivalent alternative runs just fine. This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know if others experience this kind of reality with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious whether others with large complex databases have similar experience in other products.
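
    For what it's worth, the symptom described is the classic shape of parameter sniffing against a cached plan, and it is not unique to SQL Server (Oracle's bind-variable peeking can misfire the same way). One common mitigation on SQL Server 2005, sketched against the query from the question, is to ask for a fresh plan on the catch-all branch:

    ```sql
    select * from SOME_VIEW
    where (@param2 is null or @param2 = SOME_VIEW.Somecolumn2)
      and (@param3 is null or @param3 = SOME_VIEW.SomeColumn3)
    order by somecolumn
    option (recompile)  -- plan compiled for the actual parameter values each run
    ```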


  • How do people handle foreign keys on clients when synchronizing to a master db?

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master DB every so often. I'm using UUIDs on both the client and server side. When syncing up to the server, the server will return a map of local UUIDs (luids) to server UUIDs (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values. However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server DB with luids rather than the suids the server is using. My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and, if so, how they have solved it. Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys (5)": someone seemed to suggest my current solution as one option, composite keys using suids and auto-incrementing sequences as another, and a third option using negative ids for client ids and then updating all negative ids with the suids. Both of those other options seem like a lot more work. Thanks, Saimon
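
    To make the current solution concrete, the server-side pass can be a single rewrite of each incoming command before it is applied. A minimal sketch (data shapes assumed; a real implementation would know its foreign-key columns explicitly rather than guessing by suffix):

    ```python
    def remap_foreign_keys(command, luid_to_suid):
        """command: dict of column -> value for one client record;
        luid_to_suid: the per-client mapping the server already keeps."""
        remapped = dict(command)
        for column, value in command.items():
            # Any FK still carrying a client-local uuid gets rewritten.
            if column.endswith('_id') and value in luid_to_suid:
                remapped[column] = luid_to_suid[value]
        return remapped
    ```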


  • How to design data storage for partitioned tagging system?

    - by Morgan Cheng
    How should one design data storage for a huge tagging system (like Digg or Delicious)? There is already discussion about this, but it is about a centralized database. Since the data is supposed to grow, we'll need to partition the data into multiple shards sooner or later. So, the question becomes: how to design data storage for a partitioned tagging system? The tagging system basically has 3 tables:

        Item       (item_id, item_content)
        Tag        (tag_id, tag_title)
        TagMapping (map_id, tag_id, item_id)

    That works fine for finding all items for a given tag and finding all tags for a given item, if the table is stored in one database instance. If we need to partition the data into multiple database instances, it is not that easy. For table Item, we can partition its content by its key item_id. For table Tag, we can partition its content by its key tag_id. For example, to partition table Tag into K databases, we can simply choose database number (tag_id % K) to store a given tag. But how to partition table TagMapping? The TagMapping table represents a many-to-many relationship. I can only imagine duplication: the same content of TagMapping gets two copies, one partitioned by tag_id and the other partitioned by item_id. To find items for a given tag, we use the copy partitioned by tag_id; to find tags for a given item, we use the copy partitioned by item_id. As a result, there is data redundancy, and the application level has to keep all tables consistent. That looks hard. Is there any better solution to this many-to-many partitioning problem?
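
    A sketch of the duplicated-mapping scheme the question arrives at (shard routing and storage shapes are purely illustrative); the pain point is the dual write, which is where the consistency burden lands:

    ```python
    K = 8  # number of shards (illustrative)

    def add_mapping(by_tag_shards, by_item_shards, map_id, tag_id, item_id):
        # Copy 1: partitioned by tag_id, serves "all items for a given tag".
        by_tag_shards[tag_id % K].append((map_id, tag_id, item_id))
        # Copy 2: partitioned by item_id, serves "all tags for a given item".
        by_item_shards[item_id % K].append((map_id, tag_id, item_id))
        # If the process dies between the two writes, the copies diverge -
        # hence the need for idempotent writes or an async repair job.
    ```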


  • Building out a well-structured service layer

    - by Chris Stewart
    First, I want to say that it has been a while since I've gotten into the kind of detail I am at currently. Lately, I've been very much in the SharePoint world, and my entire thought process was focused there for quite some time. I'm very glad to be creating databases again, writing "lower level" code to deal with data access, and so forth. I'm working on a very simple web application and taking the opportunity to reacquaint myself with the way I used to structure my projects and various layers of code. For instance, I might have created something like this the last time I went about building something basic from scratch:

        MyProject/
            Domain/
                Impl/
                    Person
            Model/
                IPersonRepository
                Impl/
                    PersonRepository : IPersonRepository
            Services/
                IPersonService
                Impl/
                    PersonService : IPersonService

    That would have been the project I did the real work in, and then referenced in the ASP.NET project. My approach was very much inspired by what I saw in the CodeCampServer project, as at that time ASP.NET MVC was still very new and it was the only open project I could find actively being developed, and by solid people at that. What ways are you going about structuring your projects and code when it comes to a general problem you're working on? Certainly various problems can put constraints on this, but assume it's a basic problem without specific needs that affect the structure and layout of your code.


  • MySQL " identify storage engine statement"

    - by sammysmall
    This IS NOT a homework question! While building my current student database project I realized that I may want to identify comprehensive information about a database design in the future. More so if I am fortunate enough to get a job in this field and am handed a database project: how could I break down certain elements for identification? In all of my previous designs I have been using MySQL Community Server (GPL) 5.1.42. Based on most of my textbook instruction and "MySQL 5.0 Reference Manual :: 13 Storage Engines :: 13.1 The MyISAM Storage Engine" I thought (duh) that I was using MyISAM; using SHOW ENGINES at the console I determined that this was in fact incorrect for this version. No problem - I figured out why they have "versions", the need to pay attention to which version is being used, and the need for a means to determine what I am about to mess up "if" I do not pay attention to detail.

    Q1. Specifically, what statement will identify the version used by someone else's initial database creation? (Since I created my own databases, I know what version I used.)

    Q2. Specifically, what statement will identify the storage engine that the developer used when creating the database? I specified a particular database in my collection, then tried SHOW ENGINE, which did not work, then tried to just get the metadata from one table in that database:

    ```sql
    SELECT duck_cust, table_type, engine
    FROM INFORMATION_SCHEMA.tables
    WHERE table_schema = 'tp'
    ORDER BY table_type ASC, table_name DESC;
    ```

    As this was not really what I wanted (and did not work), I am looking for some direction from the pros.

    Q3. (If you really have the inclination to continue helping) If I were to access a database from an earlier/later "version", are there backward/forward compatibility issues for maintaining/updating data between versions?

    Please and thank you in advance for your time and efforts! sammysmall
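
    For Q1 and Q2 specifically, these statements report the metadata in question (schema and table names taken from the post):

    ```sql
    -- Q1 (approximately): the version of the server currently running.
    SELECT VERSION();

    -- Q2: the storage engine of every table in schema 'tp' ...
    SELECT TABLE_NAME, ENGINE
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = 'tp';

    -- ... or of one table:
    SHOW TABLE STATUS FROM tp LIKE 'duck_cust';
    ```

    Note that, as far as I know, MySQL does not record which server version originally created a database, so VERSION() only answers Q1 for the running instance.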


  • Python: Does one of these examples waste more memory?

    - by orokusaki
    In a Django view function which uses manual transaction committing, I have:

    ```python
    context = RequestContext(request, data)
    transaction.commit()
    # render_to_response returns a Django ``HttpResponse`` object,
    # which is similar to a dictionary.
    return render_to_response('basic.html', data, context)
    ```

    I think it is a better idea to do this:

    ```python
    context = RequestContext(request, data)
    response = render_to_response('basic.html', data, context)
    transaction.commit()
    return response
    ```

    If the page isn't rendered correctly in the second version, the transaction is rolled back. This seems like the logical way of doing it, albeit there won't likely be many exceptions at that point in the function once the application is in production. But... I fear that this might cost more, and the pattern will be repeated through a number of functions since the application is heavy with custom transaction handling, so now is the time to figure it out. If the HttpResponse instance is in memory already (at the point of render_to_response()), then what does another reference cost? When the function ends, doesn't the reference (the response variable) go away, so that when Django is done converting the HttpResponse into a string for output, Python can immediately garbage-collect it? Is there any reason I would want to use the first version (other than "It's 1 less line of code")?


  • Passenger won't spawn more than 6 instances despite passenger_max_pool_size = 30

    - by mrD
    I have some problems with Passenger + nginx and hope someone might be able to help me and point me in the right direction. I've set passenger_max_pool_size to 30, but Passenger never spawns more than 6 instances. I'm loading a web page that uses AJAX to load 30 sub-pages from the server, but because Passenger only spawns 6 instances they are queued. What makes me confused is that "Waiting on global queue" is 0, yet I can see in my browser that everything gets queued: when the first 6 AJAX requests are done, the next 6 start loading. What am I missing? :) This is the output from passenger-status (I had about 24 requests in the browser waiting for a response from the server when I checked this status):

    ```
    ----------- General information -----------
    max      = 30
    count    = 6
    active   = 6
    inactive = 0
    Waiting on global queue: 0

    ----------- Domains -----------
    /srv/rails/production/current:
      PID: 28428   Sessions: 1   Processed: 42   Uptime: 5m 43s
      PID: 28424   Sessions: 1   Processed: 23   Uptime: 5m 43s
      PID: 28422   Sessions: 1   Processed: 7    Uptime: 5m 43s
      PID: 28420   Sessions: 1   Processed: 22   Uptime: 6m 0s
      PID: 28426   Sessions: 1   Processed: 39   Uptime: 5m 43s
      PID: 28430   Sessions: 1   Processed: 7    Uptime: 5m 43s
    ```

    These are my Passenger-related settings in nginx.conf:

    ```nginx
    http {
        passenger_root /opt/ruby/lib/ruby/gems/1.8/gems/passenger-2.2.11;
        passenger_ruby /opt/ruby/bin/ruby;
        passenger_max_pool_size 30;
        # ... rest of the http block
    ```


  • REST from ASP.NET 2.0

    - by weslleywang
    Hi, I just built an ASP.NET 2.0 web site. Now I need to add a REST web service so I can communicate with another web application. I've worked on 2 SOAP web service projects before, but have no expertise with REST at all; I guessed a couple of weeks would be enough. After googling, I found it's not that easy. This is what I found:

    - There is NO REST support out of the box in ASP.NET.
    - WCF REST Starter Kit on CodePlex: Preview 2, based on .NET 3.5 and still in beta.
    - "REST Web Services in ASP.NET 2.0 (C#)" example.
    - Exyus.
    - Handling POST and PUT methods with Lullaby.
    - ADO.NET Data Services.
    - ...

    Now my questions: a) Is there a REST solution for .NET 2.0? If yes, which one is the best? b) If I have to, how hard is it to migrate my ASP.NET site from 2.0 to 3.5? Is it as simple as recompiling, or do I have to change a lot of code? c) Is the WCF REST Starter Kit good enough to use in production? d) Do I have to learn WCF first, then the WCF REST Starter Kit? Where is the best place to start? I appreciate any help here. Thanks, Wes
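
    For option (a), plain ASP.NET 2.0 can serve REST-style endpoints with nothing more than an IHttpHandler dispatching on the HTTP verb; no WCF required. A minimal sketch (handler name and payloads are illustrative):

    ```csharp
    using System.Web;

    public class CustomersHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            switch (context.Request.HttpMethod)
            {
                case "GET":
                    context.Response.ContentType = "application/xml";
                    context.Response.Write("<customers />"); // placeholder payload
                    break;
                case "POST":
                    context.Response.StatusCode = 201; // Created
                    break;
                default:
                    context.Response.StatusCode = 405; // Method Not Allowed
                    break;
            }
        }
    }
    ```

    Routing is then a matter of mapping a path to the handler in web.config's httpHandlers section.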


  • Maven + SSDM Build and Runtime Environment Automation

    - by Randy
    Preface: My company, like most, has several run-time environments and several release versions, which are themselves composed of different versions of various jars. For example, let us consider release versions 1.1, 1.2, and 1.3 of Software X, which may be deployed to a developer computer, testing, or production. Software-x-1.1 is composed of jarA-0.9.1 and jarB-0.7.5, but software-x-1.3 is composed of jarA-1.7.31 and jarB-0.8.1. Currently we use Spring's PropertyPlaceholderConfigurer to configure run-time variables (such as database credentials); however, properties also change with release versions. We also use Maven 2 POM version 4 to specify which versions of our code need to be used. We place the version numbers of our jars as properties within profiles (dev, test, prod) inside the parent POM and then reference those version numbers in all project POMs. As of right now, we have no way to specify which project versions pertain to a given release other than the most current one. Moreover, we deploy our run-time configurations to the SSDM pickup, which then configures and creates the services defined by the built versions of our software. Questions:

    1. Is there any procedure/tool we can use to build our product by merely providing the run-time environment and version number, i.e. "build 1.1 dev"?
    2. Is there any way we can store the required jar versions for each release build? We are currently versioning all files, including the parent POM, but merely versioning the parent POM does not record which release version pertains to that parent POM.
    3. What else can we do to further automate the build process? For example, if we could manage run-time configurations within the parent POM, that would be a step in the right direction, but it seems like a violation of scope. Any tool outside of our framework is inconceivable at this point, but not in the far future.

    Summary: How can we automate our build process to the fullest extent without being error-prone?
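
    On question 1, one Maven-native approach is to pin each release's jar versions in its own profile, so a build is selected by profile ids alone. A sketch (property names assumed, versions taken from the example above):

    ```xml
    <profiles>
      <profile>
        <id>release-1.1</id>
        <properties>
          <jarA.version>0.9.1</jarA.version>
          <jarB.version>0.7.5</jarB.version>
        </properties>
      </profile>
      <profile>
        <id>release-1.3</id>
        <properties>
          <jarA.version>1.7.31</jarA.version>
          <jarB.version>0.8.1</jarB.version>
        </properties>
      </profile>
    </profiles>
    ```

    Combined with the existing dev/test/prod profiles, "build 1.1 dev" becomes `mvn -P release-1.1,dev package`, and the release-to-versions mapping lives in version control alongside the POM.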


  • MySQL random rows

    - by n00b
    Please read the whole question... 90% of you don't seem to do that, and some of you obviously only read the title... and if you don't know the solution, don't answer - then I won't have to downvote you -.-'' I'm entertaining the idea of getting random rows directly from MySQL. What I found was:

    ```sql
    SELECT * FROM tablename WHERE somefield = 'something' ORDER BY RAND() LIMIT 5
    ```

    but even I can see how slow that would be. Is the only way to do this something like

    ```sql
    -- pseudocode: one row at a random offset, repeated 5 times
    SELECT * FROM tablename WHERE somefield = 'something' LIMIT RAND(autoincrementvalue - 5), 1
    ```

    or is there a way that I, with my little knowledge of databases, can't come up with? (No, I don't want random indexes; I hate the idea of them.) @commenters - please first look, then think, then look again, think again and then post. I won't point fingers, but I dislike stupid comments. Why do I think random indexes are a nasty hack? They don't give you random results: you get x results from a random index in a predefined order - it's like a gapless id, only in the wrong order. If you fetch by 1 row to get true randomness, you fall back to my method, but with an additional junk field. Finally, the field exists only as a helper for something that can be done without it at almost the same performance (but with better quality/randomness), so it is a nasty hack ;) I solved it - look at my answer... if you think it's incorrect, please tell me :)
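
    One pattern that avoids both ORDER BY RAND() and helper columns is a cheap COUNT followed by random offsets chosen in application code - essentially the second query above, made concrete (the offset 123 is illustrative):

    ```sql
    -- step 1: how many candidate rows are there?
    SELECT COUNT(*) FROM tablename WHERE somefield = 'something';

    -- step 2 (run with 5 distinct random offsets in [0, count-1]):
    SELECT * FROM tablename WHERE somefield = 'something' LIMIT 123, 1;
    ```

    Each offset query still scans up to the offset, so it costs O(offset) per row fetched, but there is no filesort over the whole result set as with ORDER BY RAND().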


  • Using a CMS with an external database

    - by George Reith
    I am looking at building an external site with a CMS, probably Drupal or ExpressionEngine. The problem is that our company already has a membership database that is designed to work with our existing enterprise software. Migrating data from the database manually is not an option, as modifications and new data must be accessible in real time. Because the design of the external database will differ from the CMS's own, I have decided the best way forward is to use two databases and force the CMS to use the external one to read user information (it cannot write to it) and a local one for everything else the CMS needs to do (read + write). Is this feasible with Drupal or ExpressionEngine? Ideally I need to be able to use hooks, as I do not want to modify core CMS files. Sifting through the docs, I am not able to find what I would hook into for either CMS. (Note: I know it is possible, but I want to know if it's feasible.) Finally, if there is a better way of handling this situation, please also chime in. Perhaps there is something at the database level to reference a field or table in an external database? I'm clutching at straws; I'm sure someone can point me in the right direction.
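
    On the "something at the database level" idea: if both databases happen to be MySQL, the FEDERATED storage engine can expose the external membership table as a local (read-only by policy) table, keeping the CMS oblivious. A sketch with placeholder column names and connection details:

    ```sql
    CREATE TABLE members_remote (
        member_id INT NOT NULL,
        email     VARCHAR(255),
        PRIMARY KEY (member_id)
    ) ENGINE=FEDERATED
      CONNECTION='mysql://cms_read:secret@membership-host:3306/members/members';
    ```

    The FEDERATED engine must be enabled on the CMS's MySQL server, and it performs poorly on joins, so it fits best for simple per-user lookups.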


  • Seeking suggestions on redesigning the interface

    - by ratkok
    As part of maintaining a large piece of legacy code, we need to change part of the design, mainly to make it more testable (unit testing). One of the issues we need to resolve is the existing interface between components. The interface between two components is a class that contains static methods only. Simplified example:

    ```cpp
    class ABInterface {
        static methodA();
        static methodB();
        ...
        static methodZ();
    };
    ```

    The interface is used by component A so that different methods can call ABInterface::methodA() in order to prepare some input data and then invoke appropriate functions within component B. Now we are trying to redesign this interface for various reasons:

    - Extending our unit test coverage: we need to break this dependency between the components, and stubs/mocks are to be introduced.
    - The interface between these components has diverged from the original design (i.e. a lot of the newer functions used for the inter-component interface were created outside this interface class).
    - The code is old, has changed a lot over time, and needs to be refactored.

    The change should not be disruptive for the rest of the system. We try to limit leaving many test-required artifacts in the production code. Performance is very important, and there should be no (or very minimal) degradation after the redesign. The code is OO in C++. I am looking for some ideas on what approach to take. Any suggestions on how to do this efficiently?
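
    One low-risk shape for this is to put the operations behind an abstract interface that component A receives, with a thin adapter delegating to the legacy statics; tests substitute a mock, and production pays only one virtual dispatch per call. A sketch (names assumed):

    ```cpp
    class ABGateway {
    public:
        virtual ~ABGateway() {}
        virtual void methodA() = 0;
        virtual void methodB() = 0;
        // ... one pure virtual per legacy static
    };

    // Production implementation: forwards to the existing static interface,
    // so component B and all legacy call paths stay untouched.
    class RealABGateway : public ABGateway {
    public:
        virtual void methodA() { ABInterface::methodA(); }
        virtual void methodB() { ABInterface::methodB(); }
    };
    ```

    Component A then takes an ABGateway reference (or pointer) at construction, and the unit tests hand it a stub derived from ABGateway.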


  • Simple App Engine Sessions Implementation

    - by raz0r
    Here is a very basic class for handling sessions on App Engine:

    ```python
    """Lightweight implementation of cookie-based sessions for Google App Engine.

    Classes: Session
    """

    import os
    import random
    import Cookie

    from google.appengine.api import memcache

    _COOKIE_NAME = 'app-sid'
    _COOKIE_PATH = '/'
    _SESSION_EXPIRE_TIME = 180 * 60


    class Session(object):
        """Cookie-based session implementation using Memcached."""

        def __init__(self):
            self.sid = None
            self.key = None
            self.session = None
            cookie_str = os.environ.get('HTTP_COOKIE', '')
            self.cookie = Cookie.SimpleCookie()
            self.cookie.load(cookie_str)
            if self.cookie.get(_COOKIE_NAME):
                self.sid = self.cookie[_COOKIE_NAME].value
                self.key = 'session-' + self.sid
                self.session = memcache.get(self.key)
            if self.session:
                self._update_memcache()
            else:
                self.sid = str(random.random())[5:] + str(random.random())[5:]
                self.key = 'session-' + self.sid
                self.session = dict()
                memcache.add(self.key, self.session, _SESSION_EXPIRE_TIME)
                self.cookie[_COOKIE_NAME] = self.sid
                self.cookie[_COOKIE_NAME]['path'] = _COOKIE_PATH
                print self.cookie

        def __len__(self):
            return len(self.session)

        def __getitem__(self, key):
            if key in self.session:
                return self.session[key]
            raise KeyError(str(key))

        def __setitem__(self, key, value):
            self.session[key] = value
            self._update_memcache()

        def __delitem__(self, key):
            if key in self.session:
                del self.session[key]
                self._update_memcache()
                return None
            raise KeyError(str(key))

        def __contains__(self, item):
            try:
                self.__getitem__(item)
            except KeyError:
                return False
            return True

        def _update_memcache(self):
            memcache.replace(self.key, self.session, _SESSION_EXPIRE_TIME)
    ```

    I would like some advice on how to improve the code for better security. Note: in the production version it will also save a copy of the session in the datastore. Note': I know there are much more complete implementations available online, but I would like to learn more about this subject, so please don't answer the question with "use that" or "use the other" library.
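
    On the security point, the most immediate hardening is the session id itself: random.random() is predictable, so sids generated this way can be guessed. A sketch of a stronger generator (a drop-in for the two str(random.random()) slices):

    ```python
    import binascii
    import os

    def make_sid():
        # 16 bytes from the OS CSPRNG -> 32 hex chars, unguessable in practice.
        return binascii.hexlify(os.urandom(16))
    ```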


  • Orbited exception: "Data must not be unicode"

    - by Sid
    I am working with Orbited, and once I switch Orbited on in production mode it throws the following error on my screen:

    ```
    --- <exception caught here> ---
      File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 150, in process
        self.render(resrc)
      File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 157, in render
        body = resrc.render(self)
      File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 21, in render
        self.conn.transportOpened(self)
      File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/cometsession.py", line 322, in transportOpened
        self.cometTransport.flush()
      File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 45, in flush
        self.write(self.packets)
      File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/htmlfile.py", line 42, in write
        self.request.write(payload);
      File "/usr/lib/python2.6/dist-packages/twisted/web/http.py", line 862, in write
        self.transport.write(data)
      File "/usr/lib/python2.6/dist-packages/twisted/internet/tcp.py", line 420, in write
        abstract.FileDescriptor.write(self, bytes)
      File "/usr/lib/python2.6/dist-packages/twisted/internet/abstract.py", line 170, in write
        raise TypeError("Data must not be unicode")
    exceptions.TypeError: Data must not be unicode
    ```

    I have absolutely no clue as to what the problem could be. Could anyone point me in the right direction?
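
    Reading the trace from the bottom up, Twisted's transport.write() only accepts byte strings, and the payload Orbited hands it here is a unicode object. The usual workaround is to encode to bytes wherever the payload is produced - shown here generically, not as a verified Orbited patch:

    ```python
    if isinstance(payload, unicode):
        payload = payload.encode('utf-8')  # Twisted requires byte strings
    self.request.write(payload)
    ```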


  • How to get a unique WindowRef in a dockable Qt application on Mac

    - by Robin
    How do I get a unique WindowRef from a Qt application that includes docked windows on the Mac? My code boils down to:

    ```cpp
    int main(int argc, char* argv[])
    {
        QApplication* qtApp = new QApplication(argc, argv);
        MyQMainWindow mainwin;
        mainwin.show();
        return qtApp->exec(); // exec() added here so the snippet is complete
    }

    class MyQMainWindow : public QMainWindow {
        //...
        QDockWidget* mDock;
        MyQWidget* mDrawArea;
        QStackedWidget* mCentralStack;
    };

    MyQMainWindow::MyQMainWindow()
    {
        mDock = new QDockWidget(tr("Docked Widget"), this);
        mDock->setMaximumWidth(180);
        //...
        addDockWidget(Qt::RightDockWidgetArea, mDock);
        mDrawArea = new MyQWidget(this);
        mCentralStack = new QStackedWidget();
        mCentralStack->addWidget(mDrawArea); // Other widgets added to the stack in production code.
        setCentralWidget(mCentralStack);
        //...
    }
    ```

    (Apologies if the above isn't syntactically correct; it's just easier to illustrate than to describe.) I added the following temporary code at the end of the above constructor:

    ```cpp
    HIViewRef view1 = (HIViewRef) mDrawArea->winId();
    HIViewRef view2 = (HIViewRef) mDock->winId();
    WindowRef win1 = HIViewGetWindow(view1);
    WindowRef win2 = HIViewGetWindow(view2);
    ```

    My problem is that view1 and view2 are different, but win1 and win2 are the same! I tried the following equivalent on Windows:

    ```cpp
    HWND win1 = (HWND)(mCentralDrawArea->winId());
    HWND win2 = (HWND)(mDock1->winId());
    ```

    This time win1 and win2 are different. I need the window handle to pass on to a 3rd-party SDK so that it can draw into the central area only. BTW, I appreciate that the winId() method comes with lots of portability warnings, but a substantial refactor is out of the question for me. The same goes for using Carbon instead of Cocoa. Thanks.


  • I have created a PHP script but cannot extract the primary key; I have given the flow below, please help

    - by Parth
    I am using a MySQL DB, working with Joomla. My requirement is tracking activity like insert/update/delete on any table and storing it in another audit table using triggers, i.e. I am doing auditing. DB table structure: a few tables have neither a primary key nor an auto-increment key. The flow of my script is:

    1. I fetch all tables from the DB.
    2. I check whether each table has a trigger or not.
    3. If yes, it moves on to check the next table, and so on.
    4. If it doesn't find any trigger, it creates the triggers for the table, such that:
       - it first checks whether the table has a primary key or not (for inserting into the tracking audit table for every change made);
       - if the table has a primary key, it uses it in the creation of the trigger;
       - if it doesn't find any PK, it proceeds to create the trigger without inserting any id into the audit table.

    Now here is my problem: I need the PK every time so that I can record the id of the row in which the insert/update/delete was performed, so that I can later use this audit track table to replicate changes into the production DB. Since some tables have no PK and no auto-increment key, what should I do to get the particular id in which a change was done? Please guide me... GEEKS!!!
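
    If altering the schema is acceptable, the usual way out is to give the PK-less tables a surrogate key that the triggers can record. A sketch (table and column names illustrative; the single-statement form works precisely because the table currently has no PK or auto-increment column):

    ```sql
    ALTER TABLE jos_some_table
        ADD COLUMN audit_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
    ```

    The triggers can then log NEW.audit_id / OLD.audit_id uniformly for every table.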


  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts); b determines ts uniquely.

    ```python
    A = pd.DataFrame(
        [('a', 'x', 4),
         ('a', 'y', 6),
         ('a', 'z', 5),
         ('b', 'x', 4),
         ('b', 'z', 5),
         ('c', 'y', 6)],
        columns=['a', 'b', 'ts']).set_index(['a', 'b'])
    AA = A.reset_index()
    ```

    Table B is another one-column (ts) table with a non-unique index (a). The ts values are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

    ```python
    B = pd.DataFrame(
        dict(a=list('aaaaabbcccccc'),
             ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')
    ```

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type at or after the timestamp specified in A, for each value of b. In other words, I would like a table with the same shape as A that, instead of ts, contains the "minimum value occurring at or after ts" as specified by table B. So, my goal would be:

    ```
    C:
    ('a', 'x')  4
    ('a', 'y')  7
    ('a', 'z')  5
    ('b', 'x')  7
    ('b', 'z')  7
    ('c', 'y')  8
    ```

    I have some working code, but it is terribly slow:

    ```python
    C = AA.apply(lambda row: (
        row[0], row[1],
        B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1
        ).set_index(['a', 'b'])
    ```

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that I now have 1000 a's; assume the average number of b's per a is constant (probably 100-200), and consider that the number of observations per a is probably of the order of 300. In production I will have 1000 more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
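
    One way to cut the cost without a full merge is to vectorize per event type: a single np.searchsorted call per value of a, instead of one Python-level lookup per row of A. A sketch (it reproduces C for the sample data above, and assumes B.ts is sorted within each group, as stated):

    ```python
    import numpy as np
    import pandas as pd

    def first_at_or_after(AA, B):
        pieces = []
        for a, group in AA.groupby('a'):
            ts = B.ts[a].values                            # sorted per group
            idx = np.searchsorted(ts, group['ts'].values)  # first value >= ts
            pieces.append(pd.Series(ts[idx], index=group.index))
        result = AA[['a', 'b']].copy()
        result['ts'] = pd.concat(pieces)  # aligns on AA's row index
        return result.set_index(['a', 'b'])
    ```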


  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

    1. Invoice arrives, in the form of an email attachment (Excel spreadsheet).
    2. Monkey opens invoice, copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550.
    3. Monkey sends this new spreadsheet to the boss (email if lucky, printer otherwise), who sets the retail price.
    4. Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing.
    5. Monkey fires up HyperTerminal, types in "AT", disconnects.
    6. Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time.

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist:

    - Monkey asks Thunderbird (mail server perhaps?) for the attachment.
    - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something.
    - Monkey parses the output, does the complex calculations. // TODO: find a way to get the boss-generated prices with minimal manual labor involved
    - Monkey connects to the database, inserts data.
    - Monkey spams customers.

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
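
    For the Excel-to-CSV item of the wishlist, one starting point (library choice assumed: xlrd, which reads .xls files; the path and column positions are guesses) is a few lines that dump the relevant columns to CSV:

    ```python
    import csv
    import xlrd

    book = xlrd.open_workbook('invoice.xls')       # path illustrative
    sheet = book.sheet_by_index(0)
    with open('invoice.csv', 'wb') as f:           # Python 2: binary mode for csv
        writer = csv.writer(f)
        for row in range(1, sheet.nrows):          # skip the header row
            writer.writerow([sheet.cell_value(row, col) for col in (0, 1, 2)])
    ```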


  • Memcache error: Failed reading line from stream (0) Array

    - by daviddripps
    I get some variation of the following error when our server is put under any significant load. I've Googled for hours about it and tried everything (including upgrading to the latest versions and doing clean installs). I've read all the posts about it here on SA, but can't figure it out. A lot of people are having the same problem, but no one seems to have a definitive answer. Any help would be greatly appreciated. Thanks in advance.

    ```
    Fatal error: Uncaught exception 'Zend_Session_Exception' with message
    'Zend_Session::start() - /var/www/trunk/library/Zend/Cache/Backend/Memcached.php(Line:180):
    Error #8 Memcache::get() [memcache.get]: Server localhost (tcp 11211) failed with:
    Failed reading line from stream (0) Array
    ```

    We have a copy of our production environment for testing, and everything works great until we start load-testing. I think the biggest object stored is about 170KB, but it will probably be about 500KB when all is said and done (well below the 1MB limit). Just FYI: memcache gets hit about 10-20 times per page load. Here are the memcached settings:

    ```
    PORT="11211"
    USER="memcached"
    MAXCONN="1024"
    CACHESIZE="64"
    OPTIONS=""
    ```

    I'm running memcached 1.4.5 with version 2.2.6 of the PHP memcache module. PHP is version 5.2.6. memcache details from php -i:

    ```
    memcache support = enabled
    Active persistent connections = 0
    Version = 2.2.6
    Revision = $Revision: 303962 $

    Directive                       Local Value   Master Value
    memcache.allow_failover         1             1
    memcache.chunk_size             8192          8192
    memcache.default_port           11211         11211
    memcache.default_timeout_ms     1000          1000
    memcache.hash_function          crc32         crc32
    memcache.hash_strategy          standard      standard
    memcache.max_failover_attempts  20            20
    ```

    Thanks everyone


  • Question about the benefit of using an ORM

    - by johnny
    I want to use an ORM for learning purposes and am trying NHibernate. I am using the tutorial, and then I have a real project. I can go the "old way" or use an ORM, but I'm not sure I totally understand the benefit. On the one hand, I can create my abstractions in code such that I can change my databases and be database-independent. On the other, it seems that if I actually change the database columns I have to change all my code anyway. Why wouldn't I build my application without the ORM, change my database and change my code, instead of changing my database, ORM, and code? Is it that the database structure doesn't change that much? I believe there are real benefits, because ORMs are used by so many people; I'm just not sure I get it yet. Thank you. EDIT: In the tutorial (http://www.hibernate.org/362.html) they have many files that are used to make the ORM work. In the event of an application change, it seems like a lot of extra work just to say that I have "proper" abstraction layers. Because I'm new at it, it doesn't look that easy to maintain, and again it seems like extra work, not less.
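
    One way to see the benefit concretely: with NHibernate, the column names live in mapping files, so a column rename touches one line of XML rather than every query in the codebase. A sketch of such a mapping (entity and column names assumed):

    ```xml
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
      <class name="Customer" table="customers">
        <id name="Id" column="customer_id">
          <generator class="native" />
        </id>
        <!-- a rename of the DB column is absorbed on this one line;
             code using customer.Name never changes -->
        <property name="Name" column="cust_name" />
      </class>
    </hibernate-mapping>
    ```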


  • Apache Solr: sum of data resulting from a group by

    - by Terance Dias
    Hi, we have a requirement where we need to group our records by a particular field and take the sum of a corresponding numeric field, e.g.:

    ```sql
    select userid, sum(click_count) from user_action group by userid;
    ```

    We are trying to do this using Apache Solr and found that there are 2 ways of doing it:

    1. Using the field-collapsing feature (http://blog.jteam.nl/2009/10/20/result-grouping-field-collapsing-with-solr/), but we found 2 problems with this:
       - It is not part of a release and is only available as a patch, so we are not sure we can use it in production.
       - We do not get the sum back, only individual counts, so we need to sum them on the client side.
    2. Using the Stats Component along with faceted search (http://wiki.apache.org/solr/StatsComponent). This meets our requirement, but it is not fast enough for very large data sets.

    I just wanted to know if anybody knows of any other way to achieve this. Appreciate any help. Thanks, Terance.
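
    For reference, the StatsComponent variant of this query is a single request; the per-userid sums come back under the stats.facet section of the response (host and core are illustrative):

    ```
    http://localhost:8983/solr/select?q=*:*&rows=0
        &stats=true
        &stats.field=click_count
        &stats.facet=userid
    ```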


  • WebKit browsers rendering CSS differently than Mozilla Firefox... Why?

    - by JAG2007
    I'm styling a form that was already marked up (I made some markup changes), and I normally work in Firefox when styling so I can use Firebug and the Web Developer toolbar. On this project, I noticed that my styles display quite differently for one particular area (several elements) in the WebKit-based browsers Chrome and Safari than in Firefox (we won't even get into Internet Explorer, although it is siding with the Firefox display). I can't figure out why the styles display so differently. Normally there is some rule that I'm neglecting that Firefox just takes for granted and the others need specified; here, though, I'm not getting why. In particular I'm referring to the bottom area of the form where users can enter their contact info, then submit the form. I'll attach screenshots for reference as to the discrepancy. Here's the URL, so feel free to check it out on your own - although be advised that this is a production page (already released), so if you try out the form you WILL BE added to CURE's contact database: http://www.helpcurenow.org/survey2010

    Screenshots: Firefox (the way I intend it to look); Chrome, and then Safari - strange change to the submit button.

    As a bonus, if anybody wants to help me figure out why on earth IE7 refuses to show the background behind the questions only, and how to fix that, I would be much obliged! Thanks very much.

