Search Results

Search found 17924 results on 717 pages for 'z order'.

Page 594/717

  • Append json data to html class name

    - by user2898514
    I have a problem with my JSON code. I want each value in the JSON data to be appended to the HTML element whose class name matches the key. Here's my live demo: if you look at the result there, only the last record is appended. Is it possible to make it show all records in order?

    json

        var json = '[{"castle":"big","commercial":"large","common":"sergio","cultural":"2009"},' +
                   '{"castle":"big2","commercial":"large2","common":"sergio2","cultural":"20092"}]';

    html

        <div class="castle"></div>
        <div class="commercial"></div>
        <div class="common"></div>
        <div class="cultural"></div>

    javascript

        var data = $.parseJSON(json);
        $.each(data, function(l, v) {
            $.each(v, function(k, o) {
                $('.' + k).attr('id', k + o);
                console.log($('#' + k + o).attr('id'));
                $('#' + k + o).text(o);
            });
        });

    For more illustration, I want the result in the live demo to look like this:

        big large sergio 2009,
        big2 large2 sergio2 20092
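
    One possible fix (not from the original post, and assuming jQuery is already loaded): the snippet above sets the same element's id and text on every pass, so only the last record survives. Appending one child per record keeps them all, in order:

        var data = $.parseJSON(json);
        $.each(data, function(i, record) {
            $.each(record, function(key, value) {
                // append one span per record instead of overwriting the element
                $('.' + key).append('<span>' + value + '</span> ');
            });
        });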

  • How to get the size of a binary tree?

    - by Andrei Ciobanu
    I have a very simple binary tree structure, something like:

        struct nmbintree_s {
            unsigned int size;
            int (*cmp)(const void *e1, const void *e2);
            void (*destructor)(void *data);
            nmbintree_node *root;
        };

        struct nmbintree_node_s {
            void *data;
            struct nmbintree_node_s *right;
            struct nmbintree_node_s *left;
        };

    Sometimes I need to extract a 'tree' from another, and I need to get the size of the 'extracted tree' in order to update the size of the initial 'tree'. I was thinking of two approaches:

    1) Using a recursive function, something like:

        unsigned int nmbintree_size(struct nmbintree_node *node) {
            if (node == NULL) {
                return 0;
            }
            return nmbintree_size(node->left) + nmbintree_size(node->right) + 1;
        }

    2) A preorder / inorder / postorder traversal done in an iterative way (using a stack / queue) + counting the nodes.

    Which approach do you think is more 'memory failure proof' / performant? Any other suggestions / tips?

    NOTE: I am probably going to use this implementation in the future for small projects of mine, so I don't want it to fail unexpectedly :).
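
    For reference, a minimal sketch of option 2, reusing the node fields from the post: an explicit stack kept on the heap, so a very deep tree cannot overflow the call stack the way deep recursion in option 1 can. Allocation error handling is omitted for brevity.

        #include <stdlib.h>   /* malloc, realloc, free */

        unsigned int nmbintree_size_iter(struct nmbintree_node_s *root)
        {
            if (root == NULL)
                return 0;

            size_t cap = 64, top = 0;
            struct nmbintree_node_s **stack = malloc(cap * sizeof *stack);
            unsigned int count = 0;

            stack[top++] = root;
            while (top > 0) {
                struct nmbintree_node_s *n = stack[--top];
                count++;
                if (top + 2 > cap) {                 /* make room for two children */
                    cap *= 2;
                    stack = realloc(stack, cap * sizeof *stack);
                }
                if (n->left)  stack[top++] = n->left;
                if (n->right) stack[top++] = n->right;
            }
            free(stack);
            return count;
        }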

  • Multiple usage of MenuItems declared once (WPF)

    - by Alex Kofman
    Is it possible in WPF to define some menu structure once and then use it in multiple contexts? For example, I'd like to use a set of menu items from resources in a ContextMenu, a Window's menu and a ToolBar (the ToolBar with icons only, without headers). So the item order, commands, icons and separators must be defined just once. I'm looking for something like this:

    Declaration in resources:

        <MenuItem Command="MyCommands.CloneObject" CommandParameter="{Binding SelectedObject}" Header="Clone">
            <MenuItem.Icon>
                <Image Source="Images\Clone.png" Height="16" Width="16"></Image>
            </MenuItem.Icon>
        </MenuItem>
        <MenuItem Command="MyCommands.RemoveCommand" CommandParameter="{Binding SelectedObject}" Header="Remove">
            <MenuItem.Icon>
                <Image Source="Images\Remove.png" Height="16" Width="16"></Image>
            </MenuItem.Icon>
        </MenuItem>
        <Separator/>
        <MenuItem Command="MCommands.CreateChild" CommandParameter="{Binding SelectedObject}" Header="Create child">
            <MenuItem.Icon>
                <Image Source="Images\Child.png" Height="16" Width="16"></Image>
            </MenuItem.Icon>
        </MenuItem>

    Usage:

        <ToolBar MenuItems(?)="{Reference to set of items}" ShowText(?)="false" />

    and

        <ContextMenu MenuItems(?)="{Reference to set of items}" />
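
    One direction worth sketching (not from the original post): a framework element can only have one parent, but marking the resource with x:Shared="False" hands every resource lookup its own MenuItem instance, so one declaration can feed a ContextMenu, a Menu and a ToolBar. A rough illustration only, using names from the post:

        <Window.Resources>
            <!-- x:Shared="False": each lookup creates a fresh MenuItem, so the same
                 resource can appear in several menus (compiled XAML only) -->
            <MenuItem x:Key="CloneItem" x:Shared="False"
                      Command="MyCommands.CloneObject"
                      CommandParameter="{Binding SelectedObject}"
                      Header="Clone">
                <MenuItem.Icon>
                    <Image Source="Images\Clone.png" Height="16" Width="16"/>
                </MenuItem.Icon>
            </MenuItem>
        </Window.Resources>

        <ContextMenu>
            <StaticResource ResourceKey="CloneItem"/>
        </ContextMenu>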

  • Open-sourcing a web site with active users?

    - by Lars Yencken
    I currently run several research-related web sites with active users, and these sites use some personally identifying information about those users (their email address, IP address, and query history).

    Ideally I'd release the code to these sites as open source, so that other people could easily run similar sites and, more importantly, scrutinise and replicate my work, but I haven't been comfortable doing so, since I'm unsure of the security implications. For example, I wouldn't want my users' details to be accessed or distributed by a third party who found some flaw in my site, something which might be easy to do with full source access.

    I've tried going halfway by refactoring the (Django) site into more independent modules and releasing those, but this is very time consuming, and in practice I've never gotten around to releasing enough that a third party can replicate the site(s) easily. I also feel that maybe I'm kidding myself, and that this process is really no different from releasing the full source.

    What would you recommend in cases like this? Would you open-source the site and take the risk? As an alternative, would you advertise the source as "available upon request" to other researchers, so that you at least know who has the code? Or would you just apologise to them and keep it closed in order to protect users?

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can.

    Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server.

    The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL'
            WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work.

    My Question (TL;DR)

    How can I shrink my transaction log file without disabling the mirror?
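
    One commonly suggested approach for this situation (a sketch, not from the original post): a mirrored database has to stay in the FULL recovery model, so the log can only be reused after a real log backup. Backing the log up to an actual backup file, rather than the 'NULL' device, and then shrinking usually reclaims the space without touching the mirror. The path below is an example, not a real one:

        -- back the log up to a real backup file (example path), then shrink
        BACKUP LOG dbname TO DISK = N'D:\Backups\dbname_log.trn';
        GO
        DBCC SHRINKFILE (N'dbname_Log', 2048);
        GO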

  • drupal themes: .info file: how do I add more than 1 css file / js file to my theme?

    - by egarcia
    I'm creating a new Drupal theme. Until now, I only needed to include a single CSS file and a single JS file, so my theme.info file had something like this:

        stylesheets[all][] = css/style.css
        scripts[] = js/script.js

    Now I must include jQuery and jQuery UI in order to use a calendar date picker. These come with 2 new JavaScript files and 1 additional CSS file that I must add to the site. The calendar input form is going to be used on all pages (in a side block), so it is OK for me to load the extra CSS/JavaScript on all pages. I think the easiest thing would be to reference them in the .info file itself.

    At first I tried to just put them there, separated by spaces:

        stylesheets[all][] = css/style.css css/ui-lightness/jquery-ui-1.8.1.custom.css
        scripts[] = js/jquery-1.4.2.min.js js/jquery-ui-1.8.1.custom.min.js js/reservations.js

    I emptied Drupal's cache and... none of them loaded. I then tried separating each file with a comma and flushing the cache again. Same result. I've browsed some Drupal pages, but could not find how to add several JavaScript/CSS files to one theme (they always seem to add just 1 of each).

    So, how do I include several CSS/JavaScript files in the .info file?
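
    For what it's worth, the usual .info convention is one file per line, repeating the stylesheets[all][] / scripts[] keys so each entry is appended to the array. A sketch using the file names from the post:

        stylesheets[all][] = css/style.css
        stylesheets[all][] = css/ui-lightness/jquery-ui-1.8.1.custom.css
        scripts[] = js/jquery-1.4.2.min.js
        scripts[] = js/jquery-ui-1.8.1.custom.min.js
        scripts[] = js/reservations.js

    As with any .info change, the theme cache needs to be cleared again before the new entries are picked up.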

  • $().buttonset not working on dialog modal box JQUERY

    - by vikitor
    Hello everyone! I've made a dialog box that contains a form, and I would like to add some fancy jQuery UI widgets to it. I've been trying with $().buttonset(), as I've done with most of the radio buttons in the application, in order to get a coherent UI. The thing is that, even when following the rules specified, the buttons remain as normal radio buttons and don't get the fancy appearance. Do you know what the problem could be?

    This is the part of the form where I want the fancy radio buttons:

        <div id="Replace">
            <input type="radio" name="Replace" value="true" id="ReplaceYes" onclick="setReplace(this)" />
            <label for="ReplaceYes">Yes</label>
            <input type="radio" name="Replace" value="false" checked="checked" id="ReplaceNo" onclick="setReplace(this)" />
            <label for="ReplaceNo">No</label>
        </div>

    And then, as the previous part of the code is in a partial view that is loaded when showing the modal box, this is how I try to convert the buttons' appearance:

        $("#Replace").buttonset();

    The thing is that, debugging it, I've seen that it goes through that part of the code, but it doesn't do what it's meant to do. Any clue?

    Thanks everyone for your attention,
    vikitor
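
    One thing worth checking (a sketch, not from the original post): buttonset() only styles elements that are already in the DOM, so when the markup comes from a partial view the call has to run after that content has been inserted, for example in the dialog's open callback. The dialog selector below is made up:

        $('#myDialog').dialog({
            modal: true,
            open: function() {
                // the partial view is in the DOM by now, so the widget can attach
                $('#Replace').buttonset();
            }
        });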

  • Using Windows PowerShell 1.0 or 2.0 to evaluate performance of executable files.

    - by Andry
    Hello! I am writing a simple script in Windows PowerShell in order to evaluate the performance of executable files. The important hypothesis is the following: I have an executable file; it can be an application written in any possible language (.NET or not, Visual Prolog, C++, C, anything that can be compiled as an .exe file). I want to profile it by getting execution times. I did this:

        Function Time-It {
            Param ([string]$ProgramPath, [string]$Arguments)
            $Watch = New-Object System.Diagnostics.Stopwatch
            $NsecPerTick = (1000 * 1000 * 1000) / [System.Diagnostics.Stopwatch]::Frequency
            Write-Output "Stopwatch created! NSecPerTick = $NsecPerTick"
            $Watch.Start() # Starts the timer
            [System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
            $Watch.Stop() # Stops the timer
            # Collecting timings
            $Ticks = $Watch.ElapsedTicks
            $NSecs = $Watch.ElapsedTicks * $NsecPerTick
            Write-Output "Program executed: time is: $Nsecs ns ($Ticks ticks)"
        }

    This function uses a stopwatch. Well, the function accepts a program path, the stopwatch is started, the program runs and the stopwatch is then stopped. Problem: System.Diagnostics.Process.Start is asynchronous, and the next instruction (stopping the watch) is not executed when the application finishes; a new process is simply created... I need to stop the timer once the program ends. I thought about the Process class, thinking it held some info regarding the execution times... no luck... How do I solve this?
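
    A minimal sketch of one fix: Process.Start returns a System.Diagnostics.Process object, so the script can block on WaitForExit() before stopping the stopwatch (once the process has exited, it also exposes timings of its own, such as TotalProcessorTime):

        $Watch.Start()
        $Process = [System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
        $Process.WaitForExit()      # block until the launched program finishes
        $Watch.Stop()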

  • How to solve generic algebra using solver/library programmatically? Matlab, Mathematica, Wolfram etc?

    - by DevDevDev
    I'm trying to build an algebra trainer for students. I want to construct a representative problem, define constraints and relationships on the parameters, and then generate a bunch of LaTeX-formatted problems from that representation.

    As an example, a specific question might be:

        If y < 0 and (x+3)(y-5) = 0, what is x?   Answer: x = -3

    I would like to encode this as a LaTeX-formatted problem like:

        If $y<0$ and $(x+constant_1)(y+constant_2)=0$, what is the value of x?   Answer = -constant_1

    and plug into my problem solver:

        constant_1 > 0, constant_1 < 60, constant_1 = INTEGER
        constant_2 < 0, constant_2 > -60, constant_2 = INTEGER

    Then it will randomly construct pairs of (constant_1, constant_2) that I can feed into my LaTeX generator. Obviously this is an extremely simple example with no real "solving", but hopefully it gets the point across.

    Things I'm looking for, ideally in priority order:

        * Solves algebra problems
        * Definition of relationships is relatively straightforward
        * Rich support for LaTeX formatting (not just writing encoded strings)

    Thanks!
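
    The post asks about Matlab/Mathematica/Wolfram; purely as an illustration of the workflow (SymPy is my assumption, it is not mentioned in the post), a computer-algebra library can generate the constants, solve the instance and emit the LaTeX in a few lines:

        # a rough sketch with SymPy: generate constants under the stated
        # constraints, solve the resulting equation, and format it as LaTeX
        import random
        import sympy as sp

        x, y = sp.symbols('x y', real=True)
        c1 = random.randint(1, 59)       # constant_1: integer, 0 < constant_1 < 60
        c2 = -random.randint(1, 59)      # constant_2: integer, -60 < constant_2 < 0

        problem = sp.Eq((x + c1) * (y + c2), 0)
        latex_problem = sp.latex(problem)          # goes into the worksheet template
        answer = sp.solve(sp.Eq(x + c1, 0), x)     # y < 0 rules out the (y + c2) factor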

  • BinaryFormatter in C# a good way to read files?

    - by mr-pac
    I want to read a binary file which was created outside of my program. One obvious way in C# is to define a class representing the file, then use a BinaryReader, read from the file via the Read* methods and assign the return values to the class properties. What I don't like about this approach is that I have to manually write the code that reads the file, even though the defined structure already represents how the file is stored. I also have to keep the order correct when I read.

    After looking around a bit I came across the BinaryFormatter, which can automatically serialize and deserialize objects in binary format. One great advantage would be that I could read and also write the file without writing additional code. However, I wonder if this approach is suitable for files created by other programs, and not just for serialized .NET objects. Take for example a graphics format like BMP. Would it be a good idea to read the file with a BinaryFormatter, or is it better to read and write manually via BinaryReader and BinaryWriter? Or are there other approaches which suit this better? I'm not looking for concrete examples, just advice on the best way to implement this.
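
    For context (an illustration, not from the post): BinaryFormatter can only read streams that were written by .NET's own serialization, so an externally defined format such as BMP has to be mapped field by field with BinaryReader, following the published layout. A minimal sketch of reading the 14-byte BMP file header, with a made-up file name:

        using System.IO;

        using (var reader = new BinaryReader(File.OpenRead("image.bmp")))
        {
            ushort magic      = reader.ReadUInt16(); // 0x4D42, the "BM" signature
            uint   fileSize   = reader.ReadUInt32();
            reader.ReadUInt16();                     // reserved
            reader.ReadUInt16();                     // reserved
            uint   dataOffset = reader.ReadUInt32(); // where the pixel data starts
        }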

  • How to call IronPython function from C#/F#?

    - by prosseek
    This is kind of a follow-up to this question: http://stackoverflow.com/questions/2969194/integration-of-c-f-ironpython-and-ironruby

    In order to use a C/C++ function from Python, SWIG is the easiest solution. The reverse way is also possible with the Python C API. For example, if we have a Python function as follows:

        def add(x, y):
            return (x + 10*y)

    we can come up with a wrapper in C to use this Python function as follows:

        double Add(double a, double b)
        {
            PyObject *X, *Y, *pValue, *pArgs;
            double res;
            pArgs = PyTuple_New(2);
            X = Py_BuildValue("d", a);
            Y = Py_BuildValue("d", b);
            PyTuple_SetItem(pArgs, 0, X);
            PyTuple_SetItem(pArgs, 1, Y);
            pValue = PyEval_CallObject(pFunc, pArgs);
            res = PyFloat_AsDouble(pValue);
            Py_DECREF(X);
            Py_DECREF(Y);
            Py_DECREF(pArgs);
            return res;
        }

    How about IronPython and C#/F#? How do I call a C#/F# function from IronPython? Or is there any SWIG-equivalent tool for IronPython/C#? And how do I call an IronPython function from C#/F#? I guess I could use "engine.CreateScriptSourceFromString" or similar, but I need to find a way to make an IronPython function look like a C#/F# function.
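
    For the "call IronPython from C#" direction, a minimal sketch using the DLR hosting API (IronPython.Hosting), with the add function from above inlined as a string; the variable names are illustrative:

        using IronPython.Hosting;
        using Microsoft.Scripting;
        using Microsoft.Scripting.Hosting;

        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        var source = engine.CreateScriptSourceFromString(
            "def add(x, y):\n    return x + 10*y", SourceCodeKind.Statements);
        source.Execute(scope);

        dynamic add = scope.GetVariable("add");   // the DLR makes this callable
        double result = add(2.0, 3.0);            // 32.0

    The reverse direction (calling C#/F# from IronPython) mostly just works: add a reference to the assembly with clr.AddReference and import the namespace from the IronPython side.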

  • How to encapsulate a WinAPI application into a C++ class

    - by Semen Semenych
    There is a simple WinAPI application. All it does currently is this:

        * register a window class
        * register a tray icon with a menu
        * create a value in the registry in order to autostart
        * and finally, check that it is the only instance running, using a mutex

    As I'm used to writing code mainly in C++, and no MFC is allowed, I'm forced to encapsulate this into C++ classes somehow. So far I've come up with this design:

        * there is a class that represents the application
        * it keeps all the wndclass, hinstance, etc. variables, where the hinstance is passed as a constructor parameter, as well as iCmdShow and the others (see the WinMain prototype)
        * it has functions for registering the window class, the tray icon and the registry information
        * it encapsulates the message loop in a function

    In WinMain, the following is done:

        Application app(hInstance, szCmdLine, iCmdShow);
        return app.exec();

    and the constructor does the following:

        registerClass();
        registerTray();
        registerAutostart();

    So far so good. Now the question is: how do I create the window procedure (it must be static, as it's a C-style pointer to a function) AND keep track of what the application object is, that is, keep a pointer to an Application around? The main question is: is this how it's usually done? Am I complicating things too much? Is it fine to pass hInstance as a parameter to the Application constructor? And where does the WndProc go? Maybe WndProc should be outside the class and the Application pointer be global? Then WndProc invokes Application methods in response to various events.
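
    A sketch of the usual pattern that avoids a global pointer (handleMessage is a placeholder member name): pass this as the lpParam of CreateWindowEx, stash it in the window's user data during WM_NCCREATE, and have the static WndProc recover it and forward to a member function:

        // declared inside the class as:
        // static LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
        LRESULT CALLBACK Application::WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
        {
            Application* self = NULL;
            if (msg == WM_NCCREATE) {
                // lpCreateParams is the 'this' pointer passed to CreateWindowEx
                self = static_cast<Application*>(
                    reinterpret_cast<CREATESTRUCT*>(lp)->lpCreateParams);
                SetWindowLongPtr(hwnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(self));
            } else {
                self = reinterpret_cast<Application*>(
                    GetWindowLongPtr(hwnd, GWLP_USERDATA));
            }
            return self ? self->handleMessage(hwnd, msg, wp, lp)
                        : DefWindowProc(hwnd, msg, wp, lp);
        }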

  • Speed of QHash lookups using QStrings as keys.

    - by Ryan R.
    I need to draw a dynamic overlay on a QImage. The component parts of the overlay are defined in XML and parsed out into a QHash<QString, QPicture>, where the QString is the name (such as "crosshairs") and the QPicture is the resolution-independent drawing. I then draw components of the overlay as they are needed, at a position determined at runtime.

    Example: I have 10 pictures in my QHash composing every possible element of a HUD. During a particular frame of video I need to draw 6 of them at different positions on the image. During the next frame something has changed and now I only need to draw 4 of them, but 2 of those positions have changed.

    Now to my question: if I am trying to do this quickly, should I redefine my QHash as QHash<int, QPicture> and enumerate the keys to avoid the overhead of string comparisons, or are the comparisons not going to make a very big impact on performance? I can easily make the conversion to integer keys, as the XML parser and the overlay composer are completely separate classes, but I would like to use a consistent data structure across the application. Should I overcome my desire for consistency and re-usability in order to increase performance? Will it even matter very much if I do?
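
    One way to settle it is to measure the two key types directly; a rough QElapsedTimer sketch (the hash contents, key lists and loop counts are placeholders):

        #include <QHash>
        #include <QList>
        #include <QPicture>
        #include <QString>
        #include <QElapsedTimer>
        #include <QDebug>

        void compareKeyTypes(const QHash<QString, QPicture>& byName,
                             const QHash<int, QPicture>& byId,
                             const QList<QString>& names, const QList<int>& ids)
        {
            uint sink = 0;                   // keeps the lookups from being optimised away
            QElapsedTimer timer;

            timer.start();
            for (int pass = 0; pass < 100000; ++pass)
                foreach (const QString& name, names)
                    sink += byName.value(name).size();   // lookup cost with string keys
            qDebug() << "QString keys:" << timer.elapsed() << "ms";

            timer.restart();
            for (int pass = 0; pass < 100000; ++pass)
                foreach (int id, ids)
                    sink += byId.value(id).size();       // lookup cost with int keys
            qDebug() << "int keys:" << timer.elapsed() << "ms" << sink;
        }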

  • Rails paginate existing array of ActiveRecord results

    - by SaoiseK
    Hello, I generally use will_paginate for pagination in my app, but have hit a stumbling block with my search feature. I'm using Thinking Sphinx for full-text search, which returns results already paginated. The problem I'm having is that after I've received the results from Thinking Sphinx, I need to merge them with some other results and re-order them. Once I've finished processing them I have an Array of results that is very different from the original one from TS.

    As there could be 1000+ results in this Array, pagination is a necessity. The problem is that I can't figure out how to get will_paginate to work with an existing array. I've done some research, and it seems the only solutions to this problem are from several years ago and are based around the old built-in Paginator class. The most recent one I could find that makes use of will_paginate was from devchix from mid-2007: http://www.devchix.com/2007/07/23/will_paginate-array/comment-page-1/ - I've given this a go but it doesn't seem to do anything for me.

    Are there any current methods for applying pagination (preferably via will_paginate) to existing arrays of AR results?
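
    One option worth noting (an assumption about the gem version, since the post doesn't state it): more recent releases of will_paginate ship an Array extension, so a merged, re-sorted array can be paginated directly. score below is a stand-in for whatever the real sort key is:

        require 'will_paginate/array'

        merged = sphinx_results.to_a + other_results
        sorted = merged.sort_by { |r| -r.score }        # whatever re-ordering is needed
        @results = sorted.paginate(:page => params[:page], :per_page => 20)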

  • Inorder tree traversal in binary tree in C

    - by srk
    In the code below, I'm creating a binary tree using the insert function and trying to display the inserted elements using the inorder function, which follows the logic of in-order traversal. When I run it, numbers are getting inserted, but when I try the inorder function (input 3), the program continues to the next input without displaying anything. I guess there might be a logical error. Please help me clear it. Thanks in advance...

        #include<stdio.h>
        #include<stdlib.h>
        int i;

        typedef struct ll {
            int data;
            struct ll *left;
            struct ll *right;
        } node;

        node *root1 = NULL; // the root node

        void insert(node *root, int n)
        {
            if (root == NULL) // for the first (root) node
            {
                root = (node *)malloc(sizeof(node));
                root->data = n;
                root->right = NULL;
                root->left = NULL;
            }
            else
            {
                if (n < (root->data)) {
                    root->left = (node *)malloc(sizeof(node));
                    insert(root->left, n);
                }
                else if (n > (root->data)) {
                    root->right = (node *)malloc(sizeof(node));
                    insert(root->right, n);
                }
                else {
                    root->data = n;
                }
            }
        }

        void inorder(node *root)
        {
            if (root != NULL) {
                inorder(root->left);
                printf("%d ", root->data);
                inorder(root->right);
            }
        }

        main()
        {
            int n, choice = 1;
            while (choice != 0) {
                printf("Enter choice--- 1 for insert, 3 for inorder and 0 for exit\n");
                scanf("%d", &choice);
                switch (choice) {
                case 1:
                    printf("Enter number to be inserted\n");
                    scanf("%d", &n);
                    insert(root1, n);
                    break;
                case 3:
                    inorder(root1);
                    break;
                default:
                    break;
                }
            }
        }
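
    For what it's worth, a sketch of one likely culprit and fix (my reading, not from the post): insert() receives the root pointer by value, so root1 itself is never updated and the tree stays empty; in addition, the malloc before each recursive call overwrites any existing child with a node whose fields are never initialised. Passing a node ** avoids both problems:

        /* call as: insert(&root1, n); inorder(root1) then prints the values in order */
        void insert(node **rootp, int n)
        {
            if (*rootp == NULL) {
                node *fresh = (node *)malloc(sizeof(node));
                fresh->data = n;
                fresh->left = NULL;
                fresh->right = NULL;
                *rootp = fresh;                  /* links the new node into the real tree */
            } else if (n < (*rootp)->data) {
                insert(&(*rootp)->left, n);
            } else if (n > (*rootp)->data) {
                insert(&(*rootp)->right, n);
            }
            /* duplicates fall through and are ignored */
        }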

  • Why is PHP discriminating between .php and .abc extensions for caching?

    - by Sam
    There seems to be a problem with how the PHP engine handles identical files that differ only in their file extension.

    Problem: "An If-Modified-Since conditional request returned the full content unchanged." Also, I measured that the .php version loads much faster than its identical twin with the .xxx extension, even though the file contents are identical and they differ only in their file extension.

    "HTTP allows clients to make conditional requests to see if a copy that they hold is still valid. Since this response has a Last-Modified header, clients should be able to use an If-Modified-Since request header for validation. RED has done this and found that the resource sends a full response even though it hadn't changed, indicating that it doesn't support Last-Modified validation."

    (Test pages: the homepage ending with .php, and the exact same file ending with .ast.)

    Given: A home.php file is copied as home.xxx and this extension is added to .htaccess so it is recognized as a PHP file. The .php file follows php.ini, where freshness is set to 3 hrs; the non-.php files have to follow .htaccess, where freshness is set to 2 hrs, according to:

        AddType application/x-httpd-php .php .ast .abc .xxx .etc
        <IfModule mod_headers.c>
            ExpiresActive On
            ExpiresDefault M2419200
            Header unset ETag
            FileETag None
            Header unset Pragma
            Header set Cache-Control "max-age=2419200"

            ##### DYNAMIC PAGES
            <FilesMatch "\\.(ast|php|abc|xxx)$">
                ExpiresDefault M7200
                Header set Cache-Control "public, max-age=7200"
            </FilesMatch>
        </IfModule>

    So far so good, and everything loads, except that the non-PHP file doesn't cache properly (or, to be more specific, it caches well but doesn't validate well; see images enclosed). Only the non-PHP file extension causes the error and loads slower. The entire page.php loads faster, as somehow all the elements in it then load properly from cache, while page.abc has the full request returned when it ought to be cached, meaning the entire page is slower.

    Bottom line: what should be changed in order to eliminate the If-Modified-Since conditional request returning the full content unchanged?
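
    One angle to investigate (a sketch, not a confirmed diagnosis for this setup): for output generated by PHP, Apache does not answer If-Modified-Since by itself the way it does for static files, so the script has to emit Last-Modified and the 304 itself. The following illustrates the idea:

        <?php
        // send a Last-Modified header, then answer conditional requests with 304
        $lastModified = filemtime(__FILE__);
        header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');

        if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
            strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $lastModified) {
            header('HTTP/1.1 304 Not Modified');
            exit;
        }
        ?>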

  • Architecture for database analytics

    - by David Cournapeau
    Hi,

    We have an architecture where we provide each customer Business Intelligence-like services for their website (internet merchant). Now I need to analyze those data internally (for algorithmic improvement, performance tracking, etc.), and they are potentially quite heavy: we have up to millions of rows per customer per day, and I may want to know how many queries we had in the last month, compared week by week, etc. That is on the order of billions of entries, if not more.

    The way it is currently done is quite standard: daily scripts scan the databases and generate big CSV files. I don't like this solution for several reasons:

        * as is typical with those kinds of scripts, they fall into the write-once-and-never-touched-again category
        * tracking things in "real time" is necessary (we have a separate toolset to query the last few hours at the moment)
        * this is slow and non-"agile"

    Although I have some experience in dealing with huge datasets for scientific usage, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of problem.

  • gcc -finline-functions behaviour?

    - by user176168
    I'm using gcc with the -finline-functions optimization for release builds. To combat code bloat, because I work on an embedded system, I want to be able to say "don't inline this particular function". The obvious way to do this would be through function attributes, i.e. __attribute__((noinline)). The problem is that this doesn't seem to work when I switch on the global -finline-functions optimization, which is part of the -O3 switch. It also has something to do with the function being templated, as a non-templated version of the same function does not get inlined, which is as expected. Does anybody have any idea how to control inlining when this global switch is on?

    Here's the code:

        #include <cstdlib>
        #include <iostream>

        using namespace std;

        class Base
        {
        public:
            template<typename _Type_>
            static _Type_ fooT( _Type_ x, _Type_ y ) __attribute__ (( noinline ));
        };

        template<typename _Type_>
        _Type_ Base::fooT( _Type_ x, _Type_ y )
        {
            asm("");
            return x + y;
        }

        int main(int argc, char *argv[])
        {
            int test = Base::fooT( 1, 2 );
            printf( "test = %d\n", test );
            system("PAUSE");
            return EXIT_SUCCESS;
        }

  • EditText items in a scrolling list lose their changes when scrolled off the screen

    - by ianww
    I have a long scrolling list of EditText items created by a SimpleCursorAdapter and pre-populated with values from an SQLite database. I create it like this:

        cursor = db.rawQuery("SELECT _id, criterion, localweight, globalweight FROM "
                + dbTableName + " ORDER BY criterion", null);
        startManagingCursor(cursor);
        mAdapter = new SimpleCursorAdapter(this, R.layout.weight_edit_items, cursor,
                new String[]{"criterion", "localweight", "globalweight"},
                new int[]{R.id.criterion_edit, R.id.localweight_edit, R.id.globalweight_edit});
        this.setListAdapter(mAdapter);

    The scrolling list is several emulator screens long. The items display OK: scrolling through them shows that each has the correct value from the database. I can make an edit to any of the EditTexts and the new text is accepted and displayed in the box. But... if I then scroll the list far enough to take the edited item off the screen, when I scroll back to look at it again its value has returned to what it was before I made the changes, i.e. my edits have been lost.

    In trying to sort this out, I've used getText to look at what's in the EditText after I've made my edits (and before a scroll), and getText returns the original text, even though the EditText is displaying my new text. It seems that the EditText has only accepted my edits superficially and they haven't been bound to the EditText, meaning they get dropped when scrolled off the screen.

    Can anyone please tell me what's going on here and what I need to do to force the EditText to retain its edits? Thanks, Ian
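
    A sketch of the usual explanation and workaround (an assumption about the cause, not a confirmed diagnosis): ListView recycles row views, and a recycled row is re-bound from the cursor, so text typed into an EditText has to be written to some backing store as it is typed and restored when the row is bound again. Roughly, in a SimpleCursorAdapter subclass; the edits map is made up, the R.id name comes from the post, and saving back to the database is a separate step:

        private final SparseArray<String> edits = new SparseArray<String>();

        @Override
        public void bindView(View row, Context context, Cursor cursor) {
            super.bindView(row, context, cursor);   // let SimpleCursorAdapter fill the row first
            final int position = cursor.getPosition();
            EditText local = (EditText) row.findViewById(R.id.localweight_edit);

            // detach the watcher left over from this row view's previous use
            TextWatcher old = (TextWatcher) local.getTag();
            if (old != null) local.removeTextChangedListener(old);

            String pending = edits.get(position);
            if (pending != null) local.setText(pending);   // restore an unsaved edit

            TextWatcher watcher = new TextWatcher() {
                public void afterTextChanged(Editable s) { edits.put(position, s.toString()); }
                public void beforeTextChanged(CharSequence s, int a, int b, int c) {}
                public void onTextChanged(CharSequence s, int a, int b, int c) {}
            };
            local.addTextChangedListener(watcher);
            local.setTag(watcher);
        }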

  • Combining Data from two MySQL tables.

    - by Nick
    I'm trying to combine data from two tables in MySQL with PHP. I want to select all the data (id, title, post_by, content and created_at) from the "posts" table. Then I would like to select the COUNT of comment_id from the "comments" table where the comment_id equals the post's id. Finally, I would like to echo/print something in this order:

        <? echo $row->title; ?> Posted by <? echo $row->post_by; ?> on <? echo $row->created_at; ?> CST
        <? echo $row->content; ?>
        <? echo $row->comment_id; ?> comments | <a href="comment.php?id=<? echo $row->id; ?>">view/post comments</a>

    I'm uncertain how to "combine" the data from the two tables. I have tried numerous things, have spent several evenings on it, and have had no luck. Any help would be greatly appreciated!
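
    A sketch of one way to do it in a single query, using the table and column names from the post (the comment_count alias is made up): a LEFT JOIN keeps posts that have no comments, and COUNT over the joined column gives the per-post total.

        SELECT p.id, p.title, p.post_by, p.content, p.created_at,
               COUNT(c.comment_id) AS comment_count
        FROM posts p
        LEFT JOIN comments c ON c.comment_id = p.id
        GROUP BY p.id, p.title, p.post_by, p.content, p.created_at
        ORDER BY p.created_at DESC;

    The PHP side would then print $row->comment_count instead of $row->comment_id.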

  • How to add a WHERE clause on the second table of a 1-to-1 join in Fluent NHibernate?

    - by daddywoodland
    I'm using a legacy database that was 'future proofed' to keep track of historical changes. It turns out this feature is never used, so I want to map the tables into a single entity. My tables are:

        CodesHistory (CodesHistoryID (pk), CodeID (fk), Text)
        Codes (CodeID (pk), CodeName)

    To add an additional level of complexity, these tables hold the content for the drop-down lists throughout the application. So I'm trying to map a Title entity (Mr, Mrs etc.) as follows, in the Title ClassMap:

        Public Sub New()
            Table("CodesHistory")
            Id(Function(x) x.TitleID, "CodesHistoryID")
            Map(Function(x) x.Text)

            ' Call into the other half of the 1-2-1 join in order to merge them in
            ' this single domain object
            Join("Codes", AddressOf AddTitleDetailData)

            Where("CodeName like 'C.Title.%'")
        End Sub

        ' Method to merge two tables with a 1-2-1 join into a single entity in VB.Net
        Public Sub AddTitleDetailData(ByVal m As JoinPart(Of Title))
            m.KeyColumn("CodeID")
            m.Map(Function(x) x.CodeName)
        End Sub

    From the above, you can see that my 'CodeName' field represents the select list in question (C.Title, C.Age etc.). The problem is that the WHERE clause only applies to the 'CodesHistory' table, but the 'CodeName' field is in the 'Codes' table. As I'm sure you can guess, there's no scope to change the database. Is it possible to apply the WHERE clause to the Codes table?

  • Storing header and data sections in a CSV file

    - by morpheous
    This should be relatively easy to do, but after several hours of straight programming my mind seems a bit frazzled and could do with some help.

    I have a C++ class which I am currently using to read/write data to file. I was initially using binary data, but have decided to store the data as CSV in order to let programs written in other languages load the data. The C++ class looks a bit like this:

        class BinaryData {
        public:
            BinaryData();

            void serialize(std::ostream& output) const;
            void deserialize(std::istream& input);

        private:
            Header m_hdr;
            std::vector<Row> m_rows;
        };

    I am simply rewriting the serialize/deserialize methods to write to a CSV file. I am not sure of the "best" way to store a header section and a "data" section in a "flat" CSV file, though. Any suggestions on the most sensible way to do this?
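
    One common convention, sketched below (the Header and Row field names are invented, since the post doesn't show them): write the header section as '#'-prefixed key=value lines, then a column-name row, then the data rows; readers in any language can skip the '#' lines or parse them first.

        // a rough sketch only; field names are placeholders
        void BinaryData::serialize(std::ostream& output) const
        {
            output << "# version=" << m_hdr.version << '\n';
            output << "# rows=" << m_rows.size() << '\n';
            output << "id,name,value\n";                      // column header row
            for (std::vector<Row>::const_iterator it = m_rows.begin(); it != m_rows.end(); ++it)
                output << it->id << ',' << it->name << ',' << it->value << '\n';
        }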

  • How do I implement aasm in Rails 3 for what I want it to do?

    - by marcamillion
    I am a Rails n00b and have been advised that in order to keep track of the status of my users' accounts (i.e. paid, unpaid (and therefore disabled), free trial, etc.) I should use an 'AASM' gem. So I found one that seems to be the most popular: https://github.com/rubyist/aasm

    But the instructions are pretty vague. I have a User model and a Plan model. The User model manages everything you might expect (username, password, first name, etc.). The Plan model manages the subscription plan that users should be assigned to (with the restrictions).

    So I am trying to figure out how to use the AASM gem to do what I want, but have no clue where to start. Do I create a new model? Then do I set up a relationship between my User model and the model for AASM? How do I set up a relationship? As in, a user 'has_many' states? That doesn't seem to make much sense to me. Any guidance would be really appreciated. Thanks.

    Edit: If anyone else is confused by AASMs like I was, here is a nice explanation of their function in Rails by the fine folks at Envy Labs: http://blog.envylabs.com/2009/08/the-rails-state-machine/

    Edit 2: How does this look?

        include AASM

        aasm_column :current_state

        aasm_state :paid
        aasm_state :free_trial
        aasm_state :disabled   # this is for accounts that have exceeded the free trial and have not paid
        # aasm_state :free_acct

        aasm_event :pay do
          transitions :to => :paid, :from => [:free_trial, :disabled]
          transitions :to => :disabled, :from => [:free_trial, :paid]
        end
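
    On the "do I create a new model?" part: with AASM the states usually live on the User model itself as a plain string column (no extra model or has_many is needed), so the only schema change is something along the lines of the migration line below, matching the aasm_column above:

        # a sketch only: one string column on users is all the schema needs
        add_column :users, :current_state, :string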

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

        * Invoice arrives, in the form of an email attachment, an Excel spreadsheet
        * Monkey opens the invoice and copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550
        * Monkey sends this new spreadsheet to the boss (email if lucky, printer otherwise), who sets the retail price
        * Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing
        * Monkey fires up HyperTerminal, types in "AT", disconnects
        * Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist:

        * Monkey asks Thunderbird (mail server perhaps?) for the attachment
        * Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something (one way to do this is sketched after the list)
        * Monkey parses the output, does the complex calculations
        * // TODO: find a way to get the boss-generated prices with minimal manual labor involved
        * Monkey connects to the database, inserts data
        * Monkey spams customers

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
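
    As one tiny illustration of the Excel-to-CSV step (purely an assumption: the library, file names and column range below are all made up, since the post doesn't pick a language), a short script can pull the three columns out of the invoice without Excel being involved at all:

        import csv
        from openpyxl import load_workbook   # third-party library, one of several options

        wb = load_workbook("invoice.xlsx", data_only=True)   # data_only: values, not formulas
        sheet = wb.active
        with open("invoice.csv", "w", newline="") as out:
            writer = csv.writer(out)
            # columns B-D are a guess at "the relevant part of three columns"
            for row in sheet.iter_rows(min_row=2, min_col=2, max_col=4, values_only=True):
                writer.writerow(row)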

  • Deploying a Rails app on an Ubuntu server using Git

    - by NudeCanalTroll
    I'm completely new to Linux, but today I find myself setting up a server (Ubuntu 10.04 LTS Lucid) from scratch to host a Rails application.

    Anyway, I managed to get a Rails app up and running on the server itself, but I had to scrap that because I want to use Git. So I set up a Git repository on the server, then pushed all the code from my local machine to the repository. Buuuut, of course Git doesn't actually store the files themselves in the repository; all the code for my Rails app is now only on my local machine. How am I supposed to tell the server to host that?

    Right now my solution is to have the server use Git to pull the code from its own repository. That's the code I'll host for all the world to see. In order to update the code, I guess I'll have to do something like this:

        1. Update the code on my local machine.
        2. Do some git adds, git commits, and a git push.
        3. On the server, do a git pull to update the code.

    So my question is, am I doing this the right way?
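
    That workflow works; a common refinement (a sketch with example paths, none taken from the post) is to let a bare repository on the server perform step 3 automatically with a post-receive hook, stored at something like /home/deploy/myapp.git/hooks/post-receive and made executable, so a plain git push publishes the new code:

        #!/bin/sh
        # check the pushed branch out into the directory the web server actually serves
        GIT_WORK_TREE=/var/www/myapp git checkout -f master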
