Search Results

Search found 2100 results on 84 pages for 'disney stores'.


  • Why so much stack space used for each recursion?

    - by Harvey
    I have a simple recursive function RCompare() that calls a more complex function Compare(), which returns before the recursive call. Each recursion level uses 248 bytes of stack space, which seems like far more than it should. Here is the recursive function:

        void CMList::RCompare(MP n1)  // RECURSIVE and looping compare function
        {
            auto MP ne = n1->mf;
            while (StkAvl() && Compare(n1 = ne->mb))
                RCompare(n1);  // Recursive call!
        }

    StkAvl() is a simple stack-space check function that compares the address of an auto variable to the value of an address near the end of the stack stored in a static variable. It seems to me that the only things added to the stack in each recursion are two pointer variables (MP is a pointer to a structure) and the stuff that one function call stores: a few saved registers, the base pointer, the return address, etc., all 32-bit (4-byte) values. There's no way that adds up to 248 bytes, is there? I don't know how to actually look at the stack in a meaningful way in Visual Studio 2008. Thanks

    Read the article

  • PostgreSQL: Auto-partition a table

    - by Adam Matan
    Hi, I have a huge database which holds pairs of numbers (A, B), each ranging from 0 to 10,000 and stored as floats, e.g. (1, 9984.4), (2143.44, 124.243), (0.55, 0), ... Since the PostgreSQL table which stores these pairs grew quite large, I have decided to partition it into inheriting sub-tables. I intend to create 100 such tables, each storing a range of 1000x1000. The problem is that these numbers tend to come in large chunks of nearby values, which means that in the future some tables will be nearly empty and some will hold a very large portion of the database. Unfortunately, the distribution of future pairs is not yet known. I am looking for a way to automatically repartition my table: if a certain subtable holds more than a specific number of pairs, it will be automatically partitioned into four sub-sub-tables, and so on. My questions are: Is recursive partitioning and inheritance possible in PostgreSQL 8.3? Will indexes and query plans understand it? What's the best way to split a subtable once it grows too large? I should point out that this isn't a live database, so a downtime of a few hours every week is totally acceptable. Thanks in advance, Adam

    Read the article

  • should this database table be normalized?

    - by oo
    I have taken over a database that stores fitness information, and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables. Today there is one table called workouts that has the following fields: id, exercise_id, reps, weight, date, person_id. So if I did 2 sets of 3 different exercises on one day, I would have 6 records in that table for that day. For example:

        id, exercise_id, reps, weight, date,     person_id
        1,  1,           10,   100,    1/1/2010, 10
        2,  1,           10,   100,    1/1/2010, 10
        3,  1,           10,   100,    1/1/2010, 10
        4,  2,           10,   100,    1/1/2010, 10
        5,  2,           10,   100,    1/1/2010, 10
        6,  2,           10,   100,    1/1/2010, 10

    So the question is: given that there is some redundant data (date, person_id, exercise_id) in multiple records, should this be normalized into three tables?

        WorkoutSummary:
        - id
        - date
        - person_id

        WorkoutExercise:
        - id
        - workout_id (foreign key into WorkoutSummary)
        - exercise_id

        WorkoutSets:
        - id
        - workout_exercise_id (foreign key into WorkoutExercise)
        - reps
        - weight

    I would guess the downside is that queries would be slower after this refactoring, since we would now need to join three tables to do the same query that had no joins before. The benefit of the refactoring is that it allows us in the future to add new fields at the workout-summary level or the exercise level without adding more duplication. Any feedback on this debate?

    Read the article

  • choosing row (from group) with max value in a SQL Server Database

    - by sriehl
    I have a large database and am putting together a report of the data. I have aggregated and summed the data from many tables to get two tables that look like the following:

        Table 1:             Table 2:
        id | code | value    id | code | value
        13 | AA   | 0.5      13 | AC   | 2.0
        13 | AB   | 1.0      14 | AB   | 1.5
        14 | AA   | 2.0      13 | AA   | 0.5
        15 | AB   | 0.5      15 | AB   | 3.0
        15 | AD   | 1.5      15 | AA   | 1.0

    I need to get a list of IDs, each with the code (summed from both tables) that has the largest value:

        13 | AC
        14 | AA
        15 | AB

    There are 4-6 thousand records and it is not possible to change the original tables. I'm not too worried about performance as I only need to run it a few times a year. Edit: Let me see if I can explain a bit more clearly. Imagine the id is the customer ID, the code is who they ordered from, and the value is how much they spent there. I need a list of all the customer IDs and the store each customer spent the most money at (and if they spent the same at two different stores, put a value such as 'ZZ' in for the store name). A sketch of the grouping logic follows after this item.

    Read the article
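
    In case it helps to pin down the intended logic before writing the SQL, here is a minimal Python sketch of the required aggregation: pool the rows from both summary tables, sum the values per (id, code), then keep the top code per id, with 'ZZ' on ties. The row data is copied from the question; everything else is illustrative, not the asker's code.

        from collections import defaultdict

        rows = [  # (id, code, value) rows pooled from both tables
            (13, 'AA', 0.5), (13, 'AB', 1.0), (14, 'AA', 2.0),
            (15, 'AB', 0.5), (15, 'AD', 1.5),
            (13, 'AC', 2.0), (14, 'AB', 1.5), (13, 'AA', 0.5),
            (15, 'AB', 3.0), (15, 'AA', 1.0),
        ]

        totals = defaultdict(float)  # (id, code) -> summed value
        for cust, code, value in rows:
            totals[(cust, code)] += value

        best = {}  # id -> (code, value); 'ZZ' marks a tie
        for (cust, code), value in totals.items():
            if cust not in best or value > best[cust][1]:
                best[cust] = (code, value)
            elif value == best[cust][1]:
                best[cust] = ('ZZ', value)

        for cust in sorted(best):
            print(cust, best[cust][0])  # 13 AC, 14 AA, 15 AB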

  • automated email downloading and threading similar messages

    - by Michael
    Okay, here it is: I have built a C# console app that downloads email, saves attachments, and stores the subject, from, to, and body to a MS SQL database. I use the aspNetPOP3 component to do this. I have built a front-end ASP.NET application to search and view the messages. Works great. Next steps (this is where I need help): Now I want my users (of the ASP.NET app) to reply to a message, send the email to the originator, and thread any additional replies back and forth from that original message (like Basecamp). This would allow my end users not to have to log in to a system; they just continue using email (our users can as well). The question is: what should I use to determine whether messages are related? The subject line, I think, is a bad approach. The best method I've seen so far is the way Basecamp does it, but I'm not sure how that is done. Here is a real example of the reply-to address from a Basecamp email (I've changed the host name): [email protected] Basecamp is obviously prepending a tracking ID to the email address; however, when I try this with my mail service, it's rejected. Is this the best approach? Is there a way I can accomplish this? Is there a better approach, or even a better email component tool? (A sketch of the address-tagging idea follows below.) Thanks, Mike

    Read the article
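
    A minimal sketch of the address-tagging ("plus addressing") idea described above. The mailbox and domain names are made-up placeholders, and this assumes the receiving mail server accepts user+tag@domain addresses; servers that don't would explain the rejection the question mentions. (Threading via the standard Message-ID/In-Reply-To headers is the usual alternative when tagged addresses are rejected.)

        def make_reply_address(thread_id, mailbox="replies", domain="example.com"):
            # Encode the conversation id into the local part of the address.
            return f"{mailbox}+{thread_id}@{domain}"

        def thread_id_from(address):
            # Recover the conversation id when the reply comes back in.
            local_part = address.split("@", 1)[0]
            return local_part.split("+", 1)[1] if "+" in local_part else None

        addr = make_reply_address("a1b2c3")
        print(addr)                  # replies+a1b2c3@example.com
        print(thread_id_from(addr))  # a1b2c3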

  • LINQ Normalizing data

    - by Brennan Mann
    I am using an OMS that stores up to three line items per record in the database. Below is an example of an order containing five line items:

        Order Header
            Order Detail
                Prod 1
                Prod 2
                Prod 3
            Order Detail
                Prod 4
                Prod 5

    One order-header record and two detail records. My goal is to have a one-to-one relation for detail records (i.e., one detail record per line item). In the past, I used a UNION ALL SQL statement to extract the data. Is there a better approach to this problem using LINQ? Below is my first attempt at using LINQ. Any feedback, suggestions, or recommendations would be greatly appreciated. From what I have read, a UNION statement can tax the process.

        var orderdetail = (from o in context.ORDERSUBHEADs
                           select new {
                               edpNo = o.EDPNOS_001,
                               price = o.EXTPRICES_001,
                               qty = o.ITEMQTYS_001
                           }
        ).Union(from o in context.ORDERSUBHEADs
                select new {
                    edpNo = o.EDPNOS_002,
                    price = o.EXTPRICES_002,
                    qty = o.ITEMQTYS_002
                }
        ).Union(from o in context.ORDERSUBHEADs
                select new {
                    edpNo = o.EDPNOS_003,
                    price = o.EXTPRICES_003,
                    qty = o.ITEMQTYS_003
                });

    Read the article

  • NSKeyedArchiver on NSArray has large size overhead

    - by redguy
    I'm using NSKeyedArchiver in a Mac OS X program which generates data for an iPhone application. I found out that by default, the resulting archives are much bigger than I expected. Example:

        NSMutableArray *ar = [NSMutableArray arrayWithCapacity:10];
        for (int i = 0; i < 100000; i++) {
            NSString *s = [NSString stringWithFormat:@"item%06d", i];
            [ar addObject:s];
        }
        [NSKeyedArchiver archiveRootObject:ar toFile:@"NSKeyedArchiver.test"];

    This stores 10 * 100000 = 1M bytes of useful data, yet the size of the resulting file is almost three megabytes. The overhead seems to grow with the number of items in the array; in this case, for 1000 items, the file was about 22k. "file" reports that it is an "Apple binary property list" (not the XML format). Is there a simple way to prevent this huge overhead? I wanted to use NSKeyedArchiver for the simplicity it provides. I could write the data in my own non-generic binary format, but that's not very elegant. Also, aggregating the data into large chunks and feeding those to the NSKeyedArchiver should work, but again, that kind of defeats the point of using a simple, easy, ready-to-use archiver. Am I missing some method call or usage pattern that would reduce this overhead?

    Read the article

  • Problem with custom Equality and GetHashCode in a mutable object

    - by Shimmy
    Hello! I am using Entity Framework in my application. In a partial class of an entity I implemented the IEquatable(Of T) interface:

        Partial Class Address : Implements IEquatable(Of Address)
            'Other part generated

            Public Overloads Function Equals(ByVal other As Address) As Boolean _
                Implements System.IEquatable(Of Address).Equals
                If ReferenceEquals(Me, other) Then Return True
                Return AddressId = other.AddressId
            End Function

            Public Overrides Function Equals(ByVal obj As Object) As Boolean
                If obj Is Nothing Then Return MyBase.Equals(obj)
                If TypeOf obj Is Address Then
                    Return Equals(DirectCast(obj, Address))
                Else
                    Return False
                End If
            End Function

            Public Overrides Function GetHashCode() As Integer
                Return AddressId.GetHashCode
            End Function
        End Class

    Now in my code I use it this way:

        Sub Main()
            Using e As New CompleteKitchenEntities
                Dim job = e.Job.FirstOrDefault
                Dim address As New Address()
                job.Addresses.Add(address)

                Dim contains1 = job.Addresses.Contains(address) 'True
                e.SaveChanges()
                Dim contains2 = job.Addresses.Contains(address) 'False

                'The problem is that I can't remove it:
                Dim removed = job.Addresses.Remove(address) 'False
            End Using
        End Sub

    Note (I checked in the debugger visualizer) that the EntityCollection class stores its entities in a HashSet, so this has to do with the GetHashCode function; I want it to depend on the ID so entities are compared by their IDs. The problem is that when I hit save, the ID changes from 0 to its db value. So the question is how I can have an equatable object that is properly hashed. Please help me find what's wrong in the GetHashCode function (by ID) and what I can change to make it work. (A small illustration of the underlying pitfall follows below.) Thanks a lot.

    Read the article
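
    The pitfall generalizes beyond Entity Framework: hash-based collections locate an object by the hash it had at insertion time, so a hash code derived from a mutable ID breaks containment once the ID changes. A quick Python illustration of the same effect (illustrative only, not the asker's code):

        class Entity:
            def __init__(self, entity_id):
                self.entity_id = entity_id
            def __eq__(self, other):
                return isinstance(other, Entity) and self.entity_id == other.entity_id
            def __hash__(self):
                return hash(self.entity_id)  # hashing on a mutable field: dangerous

        e = Entity(0)       # like an entity whose ID is 0 before saving
        s = {e}
        print(e in s)       # True
        e.entity_id = 42    # like the database assigning the real ID on save
        print(e in s)       # False: the object now sits in the wrong hash bucket
        s.discard(e)        # cannot find it, so nothing is removed
        print(len(s))       # still 1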

  • Prerequisites for Account management via an IPhone App?

    - by Icky
    Hello. I have been reading a couple of threads on this topic on this site. I want to create an app which communicates with a server and has the following features:

        - the user can create/manage an account on the server
        - the app communicates with the server via a secure connection
        - the user is updated about important news through messages

    From what I understood so far, I need to take care of the following:

        - establish a secure connection with the server
        - send account information (user data, password) to the server and authenticate the client side
        - management and encryption of account data/information is handled by the server, so the app only sends data and the server stores/encrypts it (no need for me to take care of anything)

    So far, I think, I have covered the most important features. I have read that NSURLConnection can be used to send the authentication data. But how is further communication ensured? And how is the encryption managed? Are there any useful tutorials on this? This is the first time I have delved into this topic, and any guidance is greatly appreciated! Also, if I have missed anything important (e.g. with managing accounts), please tell me.

    Read the article

  • Single Sign On for WebServices, WCS, WFS, WMS (Geoserver), etc.

    - by lajuette
    I'm trying to implement Single Sign-On (SSO) for a web application. Maybe you can help me find a proper solution, give me a direction, or tell me that solutions already exist. The scenario: a GeoExt (ExtJS for geodata/map-based apps) webapp (JavaScript only) has a login form to let the user authenticate himself. The authentication is not implemented yet, so I'm flexible with respect to that. After authenticating to the main webapp, the user indirectly accesses multiple third-party services like web services, GeoServer WFS, Google Maps, etc. These services might require additional authentication like credentials or keys. We are (so far) not talking about additional login screens, just some kind of "machine to machine" authentication. The main problem: I'm unable to modify the third-party services (e.g. Google) to add an SSO mechanism. I'd like to have a solution that allows the user to log in once to have access to all the services required. My first idea was a repository that stores all the required credentials or keys; the user logs in and can retrieve all the information required to access the other services. Does anybody know of existing implementations or papers about such services? Other requirements: the communication between the JS application and the repository must be secure, and the credentials must be stored in a secure manner, yet the JS app must be able to use them to access the services (no chance to store a decryption key in a JS app securely, eh? *g).

    Read the article

  • GAE modeling relationship options

    - by Sway
    Hi there, I need to model the following situation and I can't seem to find a consistent example of how to do it "correctly" for the Google App Engine. Suppose I've got a simple situation like the following:

        [Company] 1 ----- M [Store]

    A company has one to many stores. Each store has an address made up of an address line 1, city, state, country, postcode, etc. OK. Let's say we need to create an "Audit". An audit is for a company and can span one to many stores. So something like:

        [Audit] 1 ------ 1 [Company] 1 ------ M [Store]

    Now we need to query all of the audits based on the store addresses in order to send the auditors to the right locations. There seem to be numerous articles like this one: http://code.google.com/appengine/articles/modeling.html which give examples of creating a "ContactCompany" model class. However, they also say that you should use this kind of relationship only when you "really need to" and with "care" for performance. I've also read, frequently, that you should denormalize as much as possible, thereby moving all of the queryable data into the Audit class. So what would you suggest as the best way to solve this? I've seen that there is an Expando class, but I'm not sure if that is the "best" option for this. Any help or thoughts on this would be totally appreciated. (A sketch of one possible modeling is below.) Thanks in advance, Matt

    Read the article
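
    One way to read the denormalization advice in concrete terms, using the old google.appengine.ext.db API that the linked article covers. The class and property names are guesses for illustration; the key idea is storing the store keys plus a queryable copy of the address fields directly on the audit (a sketch, not a tested schema):

        from google.appengine.ext import db

        class Company(db.Model):
            name = db.StringProperty()

        class Store(db.Model):
            company = db.ReferenceProperty(Company, collection_name='stores')
            city = db.StringProperty()
            state = db.StringProperty()

        class Audit(db.Model):
            company = db.ReferenceProperty(Company)
            store_keys = db.ListProperty(db.Key)    # the stores this audit spans
            store_cities = db.StringListProperty()  # denormalized, so audits are queryable by address

        # A list-property filter matches if any element matches, so this finds
        # every audit that involves at least one store in the given city:
        audits = Audit.all().filter('store_cities =', 'Springfield').fetch(20)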

  • excel:mysql: rs.Update crashes

    - by every_answer_gets_a_point
    I am connecting to MySQL from Excel and updating a table. As soon as I get to rs.Update, Excel crashes. Am I doing something wrong?

        Option Explicit
        Dim oConn As ADODB.Connection

        Private Sub ConnectDB()
            Set oConn = New ADODB.Connection
            oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                "SERVER=localhost;" & _
                "DATABASE=employees;" & _
                "USER=root;" & _
                "PASSWORD=pas;" & _
                "Option=3"
        End Sub

        Function esc(txt As String)
            esc = Trim(Replace(txt, "'", "\'"))
        End Function

        Private Sub InsertData()
            Dim dpath, atime, rtime, lcalib, aname, rname, bstate, instrument As String
            Dim rs As ADODB.Recordset
            Set rs = New ADODB.Recordset
            ConnectDB
            With wsBooks
                rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable
                Worksheets.Item("Report 1").Select
                dpath = Trim(Range("B2").Text)
                atime = Trim(Range("B3").Text)
                rtime = Trim(Range("B4").Text)
                lcalib = Trim(Range("B5").Text)
                aname = Trim(Range("B6").Text)
                rname = Trim(Range("B7").Text)
                bstate = Trim(Range("B8").Text)
                ' instrument = GetInstrFromXML(wbBook.FullName)
                With rs
                    .AddNew ' create a new record
                    ' add values to each field in the record
                    .Fields("datapath") = dpath
                    .Fields("analysistime") = atime
                    .Fields("reporttime") = rtime
                    .Fields("lastcalib") = lcalib
                    .Fields("analystname") = aname
                    .Fields("reportname") = rname
                    .Fields("batchstate") = bstate
                    ' .Fields("instrument") = "NA"
                    .Update ' stores the new record
                End With
                ' get the last id
                Set rs = oConn.Execute("SELECT @@identity", , adCmdText)
                'MsgBox capture_id
                rs.Close
                Set rs = Nothing
            End With
        End Sub

    Read the article

  • How to bulk insert data from ref cursor to a temporary table in PL/SQL

    - by Sambath
    Could anyone tell me how to bulk insert data from a ref cursor into a temporary table in PL/SQL? I have a procedure in which one of the parameters stores a result set; this result set is then inserted into a temporary table in another stored procedure. This is my sample code:

        CREATE OR REPLACE PROCEDURE get_account_list
        (
            type_id in account_type.account_type_id%type,
            acc_list out sys_refcursor
        )
        is
        begin
            open acc_list for
                select account_id, account_name, balance
                from account
                where account_type_id = type_id;
        end get_account_list;

        CREATE OR REPLACE PROCEDURE proc1
        (
            ...
        )
        is
            accounts sys_refcursor;
        begin
            get_account_list(1, accounts);
            -- How to bulk insert the data in accounts into a temporary table?
        end proc1;

    In SQL Server, I can write code like the following:

        CREATE PROCEDURE get_account_list
            type_id int
        as
            select account_id, account_name, balance
            from account
            where account_type_id = type_id;

        CREATE PROCEDURE proc1
        (
            ...
        )
        as
            ...
            insert into #tmp_data(account_id, account_name, balance)
            exec get_account_list 1

    How can I write something similar to the SQL Server code? Thanks.

    Read the article

  • New Perl user: using a hash of arrays

    - by Zach H
    I'm doing a little data-mining project where a Perl script grabs info from a SQL database and parses it. The data consists of several timestamps. I want to find how many of a particular type of timestamp exist on any particular day. Unfortunately, this is my first Perl script, and the nature of Perl when it comes to hashes and arrays is confusing me quite a bit. Code segment:

        my %values = (); # A hash of the total values of each type of data on each day.
        # The key is the day, and each key stores an array of each of the values I need.
        my @proposal;
        # [drafted timestamp(0), submitted timestamp(1), attorney approved timestamp(2),
        #  organization approved timestamp(3), other approval timestamp(4), approved timestamp(5)]
        while (@proposal = $sqlresults->fetchrow_array()) {
            # TODO: check to make sure proposal is valid

            # Increment the number of timestamps of each type on each particular date
            my $i;
            for ($i = 0; $i <= 5; $i++)
                $values{$proposal[$i]}[$i]++;

            # Update rolling average of daily
            # TODO: To check total load, increment total load on all dates between
            # attorney approve date and accepted date
            for ($i = $proposal[1]; $i <= $proposal[2]; $i++)
                $values{$i}[6]++;
        }

    I keep getting syntax errors inside the for loops incrementing values. Also, considering that I'm using strict and warnings, will Perl auto-create arrays of the right values when I'm accessing them inside the hash, or will I get out-of-bounds errors everywhere? Thanks for any help, Zach

    Read the article

  • QT: I've inherited from QTreeView. I've inherited from QStandardItem. How do i Style the items?

    - by San Jacinto
    My Google skills must be failing me today. I've inherited from QTreeView to create a tree view that stores a QStandardItemModel instead of a QAbstractItemModel. I have also inherited from QStandardItem to create a class to store my data in an item, as is necessary. I've successfully inserted my derived QStandardItem into my derived QTreeView's QStandardItemModel. Now the trouble is, I can't figure out how to style it. I know that QTreeView has a setStyleSheet(QString) member, but I can't seem to get it working. It may be as simple as my not styling the correct attribute. Any pointers would be appreciated. Thanks. For clarity, here are my class defs:

        class SurveyTreeItem : public QStandardItem
        {
        public:
            SurveyTreeItem();
            SurveyTreeItem( const QString & text );
            ~SurveyTreeItem();
        };

        class StandardItemModelTreeView : public QTreeView
        {
        public:
            StandardItemModelTreeView(QWidget* parent = 0);
            ~StandardItemModelTreeView();
            QStandardItemModel* getStandardItemModel();
        };

    I've tried the following style sheets:

        StandardTreeView::Item { font: 87 12pt 'Arial Black'; }
        StandardTreeView::QStandardItem { font: 87 12pt 'Arial Black'; }
        QTreeView::QStandardItem { font: 87 12pt 'Arial Black'; }
        QTreeView::Item { font: 87 12pt 'Arial Black'; }
        QTreeView::SurveyTreeItem { font: 87 12pt 'Arial Black'; }
        StandardTreeView::SurveyTreeItem { font: 87 12pt 'Arial Black'; }

    Read the article

  • CIE XYZ colorspace: is it RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of xy-plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut, therefore allowing all colors to be reproduced. I have seen the three points being referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore. My pixel format stores the intensity of each of those three components the same way as RGB does, together with an alpha value. This allows using pretty much any image-manipulation operation designed for RGBA without modifying the code. What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA, and RGBA will get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

    Read the article

  • RDP through TCP Proxy

    - by johng100
    Hi, this is my first time on Stack Overflow and I'm hoping someone can help me. I'm looking at a proof of concept to pass RDP traffic through a TCP proxy/tunnel which will pass through firewalls using HTTPS. The problem has to do with deploying images to machines, so it can't be assumed that the .NET framework will be present; C++ is being used at the deployment end of the connection. The basic system I have at present is a program which listens for client connections on a port, then passes any data to a WCF service which stores it as a byte array. A deployment machine (using gSOAP and C++) polls the WCF service for messages and, if it finds them, passes the data on to the target server process via sockets. I know this sounds horrible, but it works for simple test client and server programs passing data back and forth via this WCF/C++/C# proxy layer. But I have to support traffic from RDP, VNC and possibly others, so I need a transparent proxy to do this, and I'm wondering whether the above approach is worth pursuing. I've read up on SSH tunneling and that seems a possibility. My basic question: is it possible to tunnel RDP traffic over HTTPS using custom code? Thanks, John

    Read the article

  • Core-data: when accessing a relationship, the count method on NSSet fails

    - by lordsandwich
    I'm trying to access a relationship (one-to-many) programmatically. My data model contains an NSManagedEntity called Language (with two string attributes) and a one-to-many relationship to an entity called WordCategory. I use an NSFetchRequest to get all the Language entities; that works fine. I get valueForKey: for the relationship, and that works fine too; I can work with its objects. However, when I try to send the count message to the NSSet that stores the WordCategory objects, I get a crash. In other words, this line works:

        NSLog(@"word category count %@", [[wordCategory anyObject] valueForKey:@"name"]);

    This one doesn't:

        NSLog(@"word category count %@", [wordCategory count]);

    I get the message EXC_BAD_ACCESS in the debugger. Here's the rest of the code:

        NSManagedObjectContext *moc = [myAppDelegate managedObjectContext];
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Language" inManagedObjectContext:moc]];
        NSError *error = nil;
        NSArray *results = [moc executeFetchRequest:request error:&error];
        if (error) {
            [NSApp presentError:error];
            return;
        }
        NSManagedObject *obj = [results objectAtIndex:0];
        NSSet *wordCategory = [obj valueForKey:@"category"];
        NSLog(@"word category count %@", [wordCategory count]);

    I'll appreciate any light that anybody can shed on this mystery. Thanks for your help!

    Read the article

  • Is there a way to effect user defined data types in MySQL?

    - by Dancrumb
    I have a database which stores (among other things) the following pieces of information:

        - Hardware IDs: BIGINTs
        - Storage Capacities: BIGINTs
        - Hardware Names: VARCHARs
        - World Wide Port Names: VARCHARs

    I'd like to be able to capture a more refined definition of these datatypes. For instance, the hardware IDs have no numerical significance, so I don't care how they are formatted when displayed. The storage capacities, however, are cardinal numbers and, at a user's request, I'd like to present them with thousands and decimal separators, e.g. 123,456.789. Thus, I'd like to refine BIGINT into, say, ID_NUMBER and CARDINAL. The same with hardware names, which are simple text, and WWPNs, which are hex strings, e.g. 24:68:AC:E0. Thus, I'd like to refine VARCHAR into ENGLISH_WORD and HEXSTRING. The specific datatypes I made up are just for illustrative purposes. I'd like to keep all this information in one place, and I'm wondering if anybody knows of a good way to hold it all in my MySQL table definitions. I could use the Comment field of the table definition, but that smells fishy to me. One approach would be to define the data structure elsewhere and use that definition to generate my CREATE TABLEs, but that would be a major rework of the code that I currently have, so I'm looking for alternatives. Any suggestions? The application language in use is Perl, if that helps.

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components: Apache 2.2.3-31 on Centos 5.4 PHP 5.2.10 Xdebug 2.0.5 with Remote Debugging enabled APC 3.0.19 Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC MySQL 5.0.77 using Query Caching I've noticed that when I start up Apache, I eventually end up 10 child processes. As time goes on, each process will grow in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1471 apache 16 0 626m 201m 18m S 0.0 10.2 1:11.02 httpd 1470 apache 16 0 622m 198m 18m S 0.0 10.1 1:14.49 httpd 1469 apache 16 0 619m 197m 18m S 0.0 10.0 1:11.98 httpd 1462 apache 18 0 622m 197m 18m S 0.0 10.0 1:11.27 httpd 1460 apache 15 0 622m 195m 18m S 0.0 10.0 1:12.73 httpd 1459 apache 16 0 618m 191m 18m S 0.0 9.7 1:13.00 httpd 1461 apache 18 0 616m 190m 18m S 0.0 9.7 1:14.09 httpd 1468 apache 18 0 613m 190m 18m S 0.0 9.7 1:12.67 httpd 7919 apache 18 0 116m 75m 15m S 0.0 3.8 0:19.86 httpd 9486 apache 16 0 97.7m 56m 14m S 0.0 2.9 0:13.51 httpd I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that). My hunch is that it could be APC, since it stores data between requests, but at the same time, it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?

    Read the article

  • Doctesting functions that receive and display user input - Python (tearing my hair out)

    - by GlenCrawford
    Howdy! I am currently writing a small application with Python (3.1), and like a good little boy, I am doctesting as I go. However, I've come across a method that I can't seem to doctest. It contains an input(), and because of that, I'm not entirely sure what to place in the "expecting" portion of the doctest. Example code to illustrate my problem follows:

        """
        >>> getFiveNums()
        Howdy. Please enter five numbers, hit <enter> after each one
        Please type in a number:
        Please type in a number:
        Please type in a number:
        Please type in a number:
        Please type in a number:
        """

        import doctest

        numbers = list()  # stores 5 user-entered numbers (strings, for now) in a list

        def getFiveNums():
            print("Howdy. Please enter five numbers, hit <enter> after each one")
            for i in range(5):
                newNum = input("Please type in a number:")
                numbers.append(newNum)
            print("Here are your numbers: ", numbers)

        if __name__ == "__main__":
            doctest.testmod(verbose=True)

    When running the doctests, the program stops executing immediately after printing the "Expecting" section, waits for me to enter five numbers one after another (without prompts), and then continues. I don't know what, if anything, I can place in the expecting section of my doctest to be able to test a method that receives and then displays user input. So my question (finally) is: is this function doctestable? (One common workaround is sketched below.)

    Read the article
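
    One common way to make a function like this doctestable: let the input function be a parameter that defaults to the built-in, then substitute a fake one inside the doctest. A minimal sketch (renamed and simplified, not the asker's exact code):

        def get_five_nums(read=input):
            """Collect five numbers via the supplied read function.

            >>> fake = iter(['1', '2', '3', '4', '5'])
            >>> get_five_nums(read=lambda prompt: next(fake))
            ['1', '2', '3', '4', '5']
            """
            return [read("Please type in a number: ") for _ in range(5)]

        if __name__ == "__main__":
            import doctest
            doctest.testmod()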

  • Is it possible to convert a 40-character SHA1 hash to a 20-character SHA1 hash?

    - by ewitch
    My problem is a bit hairy, and I may be asking the wrong questions, so please bear with me... I have a legacy MySQL database which stores the user passwords and salts for a membership system. Both of these values have been hashed using the Ruby framework, roughly like this:

        hashedsalt = Digest::SHA1.hexdigest("--#{Time.now.to_s}--#{login}--")
        hashedpassword = Digest::SHA1.hexdigest("#{hashedsalt}:#{password}")

    So both values are stored as 40-character strings (varchar(40)) in MySQL. Now I need to import all of these users into the ASP.NET membership framework for a new web site, which uses a SQL Server database. It is my understanding that, the way I have ASP.NET membership configured, the user passwords and salts are also stored in the membership database (in table aspnet_Membership) as SHA1 hashes, which are then Base64 encoded (see here for details) and stored as nvarchar(128) data. But from the length of the Base64-encoded strings that are stored (28 characters), it seems that the SHA1 hashes that ASP.NET membership generates are only 20 characters long, rather than 40. From some other reading I have been doing, I am thinking this has to do with the number of bits per character/character set/encoding or something related. So is there some way to convert the 40-character SHA1 hashes to 20-character hashes which I can then transfer to the new ASP.NET membership data table? I'm pretty familiar with ASP.NET membership by now, but I feel like I'm just missing this one piece. However, it may also be known that SHA1 in Ruby and SHA1 in .NET are incompatible, so I'm fighting a losing battle... (See the sketch below for how the two lengths relate.) Thanks in advance for any insight.

    Read the article
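
    For what it's worth, the two lengths describe the same 20 raw bytes: a SHA1 digest is always 20 bytes, which Ruby's hexdigest renders as 40 hex characters and a Base64 encoding renders as 28 characters. A quick Python check of the round trip (illustrative; the input string is made up):

        import base64, binascii, hashlib

        digest = hashlib.sha1(b"somesalt:somepassword").digest()
        hex_form = digest.hex()                       # 40 hex chars, like the Ruby hexdigest
        b64_form = base64.b64encode(digest).decode()  # 28 chars, like aspnet_Membership

        # An existing 40-char hex hash converts without re-hashing anything:
        assert base64.b64encode(binascii.unhexlify(hex_form)).decode() == b64_form
        print(len(hex_form), len(b64_form))  # 40 28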

  • Transaction Isolation Level of Serializable not working for me

    - by Shahriar
    I have a website which is used by all branches of a store; what it does is record customer purchases into a table called myTransactions. The myTransactions table has a column named SerialNumber. For each purchase I create a record in the transactions table and assign a serial to it. The stored procedure that does this calls a UDF to get a new serial number before inserting the record, like below:

        Create Procedure mytransaction_Insert
        as
        begin
            insert into myTransactions(column1, column2, column3, ... SerialNumber)
            values(Value1, Value2, Value3, ...., getTransactionNSerialNumber())
        end

        Create Function getTransactionNSerialNumber()
        as
        begin
            RETURN isnull((SELECT TOP (1) SerialNumber
                           FROM myTransactions WITH (READUNCOMMITTED)
                           ORDER BY SerialNumber DESC), 0) + 1
        end

    The website is being used by many users in different stores at the same time, and it is creating many duplicate SerialNumbers (the same SerialNumbers). So I added a SQL transaction with the READ COMMITTED level to the procedure, and I still got duplicate transaction numbers. I changed it to SERIALIZABLE in order to lock the resources, and I not only got duplicate transaction numbers (!!HOW!!) but also sporadic deadlocks between calls to the same stored procedure. This is what I tried (with omission of the try/catch blocks and rollbacks):

        Create Procedure mytransaction_Insert
        as
        begin
            SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
            BEGIN TRANSACTION ins

            insert into myTransactions(column1, column2, column3, ... SerialNumber)
            values(Value1, Value2, Value3, ...., getTransactionNSerialNumber())

            COMMIT TRANSACTION ins
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        end

    I even copied the function that gets the serial number directly into the stored procedure instead of calling the UDF, and still got duplicate SerialNumbers. So, how can a stored procedure line create something like the C# lock() {} block? By the way, I have to implement the transaction serial number using the same pattern; I can't change the SerialNumber to an identity field or whatever, and for some reasons I need to generate the SerialNumber inside the database, not at the application level. Thank you.

    Read the article

  • Rails: Custom template for email "deliver_" method?

    - by neezer
    I'm building an email system that stores my different emails in the database and calls the appropriate "deliver_" method via method_missing (since I can't explicitly declare methods, because they're user-generated). My problem is that my Rails app still tries to render the template for whatever the generated email is, though those templates don't exist. I want to force all emails to use the same template (views/test_email.html.haml), which will be set up to draw its formatting from my database records. How can I accomplish this? I tried adding render :template => 'test_email' in the test_email method in emailer_controller, with no luck.

    models/emailer.rb:

        class Emailer < ActionMailer::Base
          def method_missing(method, *args)
            # not been implemented yet
            logger.info "method missing was called!!"
          end
        end

    controller/emailer_controller.rb:

        class EmailerController < ApplicationController
          def test_email
            @email = Email.find(params[:id])
            Emailer.send("deliver_#{@email.name}")
          end
        end

    views/emails/index.html.haml:

        %h1 Listing emails
        %table{ :cellspacing => 0 }
          %tr
            %th Name
            %th Subject
          - @emails.each do |email|
            %tr
              %td=h email.name
              %td=h email.subject
              %td= link_to 'Show', email
              %td= link_to 'Edit', edit_email_path(email)
              %td= link_to 'Send Test Message', :controller => 'emailer', :action => 'test_email', :params => { :id => email.id }
              %td= link_to 'Destroy', email, :confirm => 'Are you sure?', :method => :delete
        %p= link_to 'New email', new_email_path

    Error I'm getting with the above:

        Template is missing
        Missing template emailer/name_of_email_in_database.erb in view path app/views

    Read the article

  • priority queue with limited space: looking for a good algorithm

    - by SigTerm
    This is not homework. I'm using a small "priority queue" (implemented as an array at the moment) for storing the last N items with the smallest values. This is a bit slow: O(N) item insertion time. The current implementation keeps track of the largest item in the array and discards any items that wouldn't fit, but I would still like to reduce the number of operations further. I'm looking for a priority queue algorithm that matches the following requirements:

        - The queue can be implemented as an array, which has a fixed size and cannot grow. Dynamic memory allocation during any queue operation is strictly forbidden.
        - Anything that doesn't fit into the array is discarded, but the queue keeps all the smallest elements ever encountered.
        - O(log(N)) insertion time (i.e., adding an element into the queue should take up to O(log(N))).
        - (Optional) O(1) access to the largest item in the queue (the queue stores the smallest items, so the largest item will be discarded first, and I'll need it to reduce the number of operations).
        - Easy to implement/understand. Ideally something similar to binary search: once you understand it, you remember it forever.

    Elements need not be sorted in any way. I just need to keep the N smallest values ever encountered. When I need them, I'll access all of them at once. So technically it doesn't have to be a queue; I just need the N smallest values to be stored. I initially thought about using binary heaps (they can be easily implemented via arrays), but apparently they don't behave well when the array can't grow anymore. Linked lists and arrays will require extra time for moving things around. The STL priority_queue grows and uses dynamic allocation (I may be wrong about that, though). So, any other ideas? (A heap-based sketch follows below.)

    Read the article
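
    A fixed-capacity max-heap actually meets most of these requirements directly: the backing array never grows, insertion is O(log N), and the root gives O(1) access to the largest retained item (the next eviction candidate). A sketch in Python using heapq with negated values to simulate a max-heap; the class name and API are made up for illustration:

        import heapq

        class NSmallest:
            """Keeps the N smallest values seen, in a fixed-size array."""

            def __init__(self, capacity):
                self.capacity = capacity
                self._heap = []  # stores negated values: a max-heap of the kept items

            def push(self, value):
                if len(self._heap) < self.capacity:
                    heapq.heappush(self._heap, -value)     # O(log N)
                elif value < -self._heap[0]:               # beats the current largest
                    heapq.heapreplace(self._heap, -value)  # evict it, O(log N)
                # otherwise discard: value cannot be among the N smallest

            def largest(self):
                return -self._heap[0]  # O(1): the next item to be discarded

            def items(self):
                return sorted(-v for v in self._heap)  # all kept values, on demand

        q = NSmallest(3)
        for v in [9, 1, 7, 3, 8, 2]:
            q.push(v)
        print(q.items())  # [1, 2, 3]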
