Search Results

Search found 8380 results on 336 pages for 'manage py'.

Page 290 of 336

  • How do I pass a lot of parameters to views in Django?

    - by Mark
    I'm very new to Django and I'm trying to build an application to present my data in tables and charts. So far my learning process has gone very smoothly, but now I'm a bit stuck. My page view retrieves large amounts of data from a database and puts it in the context. The template then generates different HTML tables. So far so good.

    Now I want to add different charts to the template. I manage to do this by defining <img src="..." /> tags. The Matplotlib chart is generated in my chart view and returned via:

        response = HttpResponse(content_type='image/png')
        canvas.print_png(response)
        return response

    Now I have different questions:

    1. The data is retrieved twice from the database: once in the page view to render the tables, and again in the chart view to make the charts. What is the best way to pass the data, already in the context of the page, to the chart view?
    2. I need a lot of charts, each with different datasets. I could make a chart view for each chart, but there is probably a better way. How do I pass the different dataset names to the chart view? Some charts have 20 datasets, so I don't think that passing these dataset parameters via the URL (like <img src="chart/dataset1/dataset2/.../dataset20/chart.png" />) is the right way.

    Any advice?
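
    One way to avoid a separate view per chart (a hedged sketch, not something from the original post) is to have a single Django view read dataset names from the query string; the DATASETS dict below stands in for whatever query or cache the real application uses:

        import math

        from django.http import HttpResponse
        from matplotlib.backends.backend_agg import FigureCanvasAgg
        from matplotlib.figure import Figure

        # Stand-in data; a real view would reuse the queryset or cache the page view built.
        DATASETS = {
            'sales': [math.sin(x / 10.0) for x in range(100)],
            'costs': [math.cos(x / 10.0) for x in range(100)],
        }

        def chart(request):
            # e.g. <img src="/chart/?dataset=sales&dataset=costs" /> -- one view, many charts
            names = request.GET.getlist('dataset') or list(DATASETS)
            fig = Figure()
            ax = fig.add_subplot(111)
            for name in names:
                ax.plot(DATASETS[name], label=name)
            ax.legend()
            response = HttpResponse(content_type='image/png')
            FigureCanvasAgg(fig).print_png(response)
            return response

    For the first question, the same view could read from a shared cache (for example Django's cache framework) instead of re-querying, though that is beyond this sketch.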


  • Final classes in Python 3.x- something Guido isn't telling me?

    - by GlenCrawford
    This question is built on top of many assumptions; if one assumption is wrong, then the whole thing falls over. I'm still relatively new to Python and have just entered the curious/exploratory phase.

    It is my understanding that Python does not support the creation of classes that cannot be subclassed (final classes). However, it seems to me that the bool class in Python cannot be subclassed. This makes sense when the intent of the bool class is considered (bool is only supposed to have two values: True and False), and I'm happy with that. What I want to know is how this class was marked as final. So my question is: how exactly did Guido manage to prevent subclassing of bool?

        >>> class TestClass(bool): pass

        Traceback (most recent call last):
          File "<pyshell#2>", line 1, in <module>
            class TestClass(bool):
        TypeError: type 'bool' is not an acceptable base type

    Related question: http://stackoverflow.com/questions/2172189/why-i-cant-extend-bool-in-python
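
    As far as I know, the C-level answer is that bool's type object simply does not set the Py_TPFLAGS_BASETYPE flag, so the interpreter refuses to use it as a base. A rough pure-Python imitation of that effect with a metaclass (an illustration only, not how bool itself is implemented):

        class Final(type):
            """Metaclass that refuses to be used as a base -- a rough stand-in for
            what CPython does in C by omitting Py_TPFLAGS_BASETYPE on bool."""
            def __new__(mcls, name, bases, namespace):
                for base in bases:
                    if isinstance(base, Final):
                        raise TypeError("type '%s' is not an acceptable base type" % base.__name__)
                return super().__new__(mcls, name, bases, namespace)

        class Sealed(metaclass=Final):
            pass

        try:
            class Sub(Sealed):          # blows up, just like subclassing bool
                pass
        except TypeError as exc:
            print(exc)                  # type 'Sealed' is not an acceptable base type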


  • How to terminate a particular Azure worker role instance

    - by Oliver Bock
    Background: I am trying to work out the best structure for an Azure application. Each of my worker roles will spin up multiple long-running jobs. Over time I can transfer jobs from one instance to another by switching them to a read-only mode on the source instance, spinning them up on the target instance, and then spinning the originals down on the source instance.

    If I have too many jobs I can tell Azure to spin up extra role instances and use them for new jobs. Conversely, if my load drops (e.g. during the night) I can consolidate outstanding jobs onto a few machines and tell Azure to give me fewer instances. The trouble is that (as I understand it) Azure provides no mechanism to let me decide which instance to stop. Thus I cannot know which servers to consolidate onto, and some of my jobs will die when their instance stops, causing delays for users while I restart those jobs on surviving instances.

    Idea 1: I decide which instance to stop and return from its Run(). I then tell Azure to reduce my instance count by one, and hope it concludes that the broken instance is a good candidate. Has anyone tried anything like this?

    Idea 2: I predefine a whole bunch of different worker roles with identical contents. I can individually stop and start them by switching their instance count from zero to one and back again. I think this idea would work, but I don't like it because it seems to go against the natural Azure way of doing things, and because it involves a lot of extra bookkeeping to manage the extra worker roles.

    Idea 3: Live with it.

    Any better ideas?


  • Right way of making a multi-site and multi-lingual website on CodeIgniter

    - by DR.GEWA
    Hi there. Let me thank you all beforehand; you really help a lot. When I finish my web site and have plenty of time to watch the user base grow, I will come back here again and again to answer other people's questions (if I can).

    So here is the problem. I made a web site on CodeIgniter: a social network engine, something like phpfox, classmates_com or facebook. Right now it is not really multilingual, so the UI strings are in the view files; the next step will be to move them to the language files. I want the user to have the ability to change the language, so I assume the users table will have a column "lang_local" which is set to "en" by default and can then be changed to any other language.

    What is eating my nerves and energy is the following. I will build several demographic social networks on this engine, and I would like to manage these web sites in a centralized manner with one backend. Whenever I want to create a new network, I just add the domain settings, install the script in a new folder, and add it to a "sites" table in the database.

    I see it like this: every table in the database (users, comments, messages, categories, etc.) has a column site_id, and every add/update/delete query gets a WHERE SITE_ID = XXX. A table sites(site_id, site_name, domain_name) holds all the domains, so that in the backend I can filter data by website.

    Is this a good way? What if I later need to go multi-server; what about load balancing? Can anyone tell me what the right, professional way would be? My maximum user limit per database is something like 10,000 for the start, and 100,000 users in one to two years.


  • PHP Switch and Login

    - by Steve Rivera
    I'm fairly new to PHP and I am messing around with a login/registration system. I set up my sample website using a PHP-switch script I found a while back:

        <?php
        switch($_GET['id']) {
            default:
                include('home.php');
                /* LOGIN PAGES */
                break;
            case "register_form":
                include('includes/user_system/register_form.php');
        }
        ?>

    On the registration page the form posts to my "register.php", which checks the validity of the form, looks for blank fields, and so on. "register.php" is supposed to refresh the page and add a reason for what the user did wrong when submitting the form.

    My "register_form.php" page holds the actual form. This field is hidden until the user makes a mistake:

        <?php if (isset($reg_error)) { ?>
            <?php echo $reg_error; ?>, please try again.
        <?php } ?>

    My "register.php" checks the form for all the errors. Here's the bit of code that should refresh the page with the reason for the error:

        // Check if any of the fields are missing
        if (empty($_POST['username']) || empty($_POST['password']) || empty($_POST['confirmpass'])) {
            // Reshow the form with an error
            $reg_error = 'One or more fields missing';
            include 'register_form.php';
        }

    Now, after I submit the form without any fields filled out, I get the error, but it refreshes to the actual "register_form.php". The problem with this is that because of my PHP-switch script (which helps me manage the site a lot more easily) I don't have any formatting on that page. The actual URL to my "register_form.php" would be "index.php?id=register_form.php". I have tried several different things, such as changing it to:

        include 'index.php?id=register_form.php';

    and also changing it to:

        header('Location: index.php?id=register_form.php');

    Unfortunately all this does is refresh the page without the reason for the error. I know this could be easily solved by just adding a JavaScript validator, but I'd like to know whether it is possible to refresh the page with the error using either "include" or "header()" while having a PHP-switch script on the website.


  • php user authentication libraries / frameworks ... what are the options?

    - by es11
    I am using PHP and the CodeIgniter framework for a project I am working on, and require a user login/authentication system. For now I'd rather not use SSL (it might be overkill, and the fact that I am using shared hosting discourages it). I have considered using OpenID but decided that, since my target audience is generally not technical, it might scare users away (not to mention that it requires mirroring of login information, etc.).

    I know that I could write hash-based authentication (such as SHA-1) since there is no sensitive data being passed (I'd compare the level of sensitivity to that of Stack Overflow). That being said, before making a custom solution, it would be nice to know if there are any good libraries or packages out there that you have used to provide semi-secure authentication. I am new to CodeIgniter, but something that integrates well with it would be preferable. Any ideas? (I'm open to criticism on my approach and open to suggestions as to why I might be crazy not to just use SSL.) Thanks in advance.

    Update: I've looked into some of the suggestions. I am curious to try out Zend_Auth since it seems well supported and well built. Does anyone have experience using Zend_Auth in CodeIgniter (is it too bulky?), and do you have a good reference on integrating it with CI? I do not need any complex authentication schemes, just a simple login/logout/password-management authorization system. Also, dx_auth seems interesting, but I am worried that it is too buggy. Has anybody else had success with it? I realized that I would also like to manage guest users (i.e. users that do not login/register) in a similar way to Stack Overflow, so any suggestions that have this functionality would be great.


  • How do I find Microsoft APIs?

    - by Stephen
    I'm a Java programmer, and if I see something that:

    - I don't know about, or
    - I just want to find a method description for without opening an IDE, or
    - I am on support,

    I type "java [classname]" into Google, and there it is. If I try this crazy stunt for C# I'll come up with a whole heap of tutorials ("how do I use it", etc.). If I manage to get to MSDN, I have to wade through a page describing every .NET technology to see how their syntax references the same object, and then I have to find the appropriate page from there ("[class name] Constructor", for example). This is even more pronounced because I don't have Visual Studio, so I've got nothing to make it easier. There must be something I'm missing or don't know. So:

    - How does this situation work for Microsoft developers?
    - How can I make my life easier and my searches better?
    - Are there techniques that work no matter what computer I'm on (i.e. that require no computer setup/downloads)?

    Notes: It could be thought that Java is just "Java", but it's really that the Java APIs are only referenced/defined in the core language; for all the other languages on the JVM, it's assumed that you will just learn the correct syntax to use the Java APIs. I presume that .NET lists a whole heap of languages because the API classes are actually different and have different interfaces/capabilities (or some approximation of this presumption).

    Edit: While searching MSDN works, in the Java space I can type "java [anyclass]" and it will generally be found, whether it's a Java core API or a third-party library.


  • Polymorphic Queue

    - by metdos
    Hello everyone, I'm trying to implement a polymorphic queue. Here is my attempt:

        QQueue<Request *> requests;

        while(...) {
            QString line = QString::fromUtf8(client->readLine()).trimmed();
            if(...) {
                Request *request = new Request();
                request->tcpMessage = line.toUtf8();
                request->decodeFromTcpMessage();  // this initializes variables in request using tcpMessage

                if(request->requestType == REQUEST_LOGIN) {
                    LoginRequest loginRequest;
                    request = &loginRequest;
                    request->tcpMessage = line.toUtf8();
                    request->decodeFromTcpMessage();
                    requests.enqueue(request);
                }

                // Here the pointers in "requests" do not point to the objects I created above,
                // and I noticed that their destructors are also called.
                LoginRequest *loginRequest2 = dynamic_cast<LoginRequest *>(requests.dequeue());
                loginRequest2->decodeFromTcpMessage();
            }
        }

    Unfortunately, I could not manage to make the polymorphic queue work with this code, for the reason I mentioned in the second comment. I guess I need to use smart pointers, but how? I'm open to any improvement of my code or to a new implementation of a polymorphic queue. Thanks.


  • [iphone, twitter] Accessing the Twitter API through a proxy using NSURLConnections, OAuth problem

    - by akaii
    I'm having no problems sending an update directly via hxxps://api.twitter.com/, but the app I'm working on (for the iPhone, using NSURLConnections) is supposed to allow the user to select a preferred proxy (e.g. hxxps://twitter-proxy.appspot.com/api/ or hxxps://nest.onedd.net/api/), and I keep getting a 401 error ("Failed to validate oauth signature and token") whenever I try to get an access token via these proxies. Even though I send my POST request to the proxy, I am still using the direct URL for the API (https:// api.twitter.com/[rest api path]) in the base string.

    Despite the 401 error message above, the status code I'm actually getting from connection:didReceiveResponse: is 200, probably because the proxy itself was contacted successfully. Is there anything else that I need to consider when using a proxy to access the API? Should anything in the authorization header change, for example? Or the base string? I can manage to connect via Basic Auth without issue, but support for that will be dropped in a month.

    On a somewhat unrelated note: what are the possible causes of Twitter's error 403, and how do you distinguish between them? Is the only way to differentiate an error due to exceeding the hourly status update limit (150 per hour) from the daily one (1000 per day) to check the string reply returned in the response? Is there any way for me to simulate a status-update-limit error without going through the motions of actually sending 150/1000 tweets?
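
    For reference, the OAuth 1.0a signing rule being discussed can be sketched outside Objective-C. In the Python sketch below (an illustration only; the parameter values are placeholders), the signature base string is built from the canonical api.twitter.com URL even though the request itself would be POSTed to the proxy host:

        import base64
        import hashlib
        import hmac
        import urllib.parse

        def oauth_signature(method, api_url, params, consumer_secret, token_secret=""):
            """HMAC-SHA1 signature per OAuth 1.0a (RFC 5849)."""
            enc = lambda s: urllib.parse.quote(str(s), safe="~")
            param_str = "&".join("%s=%s" % (enc(k), enc(v)) for k, v in sorted(params.items()))
            # The base string uses the canonical API URL, *not* the proxy URL the
            # request is physically sent to.
            base_string = "&".join([method.upper(), enc(api_url), enc(param_str)])
            key = "%s&%s" % (enc(consumer_secret), enc(token_secret))
            digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
            return base64.b64encode(digest).decode()

        # e.g. sign against the real endpoint while POSTing to the chosen proxy host
        print(oauth_signature("POST", "https://api.twitter.com/oauth/request_token",
                              {"oauth_consumer_key": "key", "oauth_nonce": "abc",
                               "oauth_signature_method": "HMAC-SHA1", "oauth_timestamp": "0",
                               "oauth_version": "1.0"}, "consumer-secret"))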


  • Extend legacy site with another server-side programming platform: best practice

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have one week to put dynamic data from one intranet company database onto this site: 2-3 pages.

    I contacted the company's site administrator and she showed me the administrative part. The CMS only allows inserting HTML blocks and managing the site map (the site is deployed on a machine inside the company and is fully accessible and upgradable). I'm not a PHP guy and I don't want to dive into a legacy CMS engine hardly anyone has ever heard of. I also don't want to contact the original developer team, because I'm not sure they are still around and capable of extending this old site, and it would take too much time anyway.

    I am about to deploy a helper ASP.NET site on IIS with the 2-3 pages required and refer to the helper site via an iframe from the present site. The new pages will also allow downloading some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?


  • transactions and delete using fluent nhibernate

    - by Will I Am
    I am starting to play with (Fluent) NHibernate and I am wondering if someone can help with the following. I'm sure it's a total noob question. I want to do:

        delete from TABX where name = 'abc'

    where table TABX is defined as:

        ID    int
        name  varchar(32)
        ...

    I built the code based on internet samples:

        using (ITransaction transaction = session.BeginTransaction())
        {
            IQuery query = session.CreateQuery("FROM TABX WHERE name = :uid")
                                  .SetString("uid", "abc");
            session.Delete(query.List<Person>()[0]);
            transaction.Commit();
        }

    but alas, it generates two queries (one SELECT and one DELETE). I want to do this in a single statement, as in my original SQL. What is the correct way of doing this?

    Also, I noticed that in most samples on the internet, people tend to always wrap all queries in transactions. Why is that? If I'm only running a single statement, that seems overkill. Do people tend to just mindlessly cut and paste, or is there a reason beyond that? For example, in my query above, if I do manage to get it from two queries down to one, I should be able to remove the begin/commit transaction, no?

    If it matters, I'm using PostgreSQL for experimenting.


  • Change the content type of a pop up window.

    - by Oscar Reyes
    This question brought up a new one: I have an HTML page and I need it to change the content type when the user presses the "Save" button, so the browser prompts to save the file to disk. I've been doing this on the server side to offer "Excel" versions of the page (which is basically an HTML table):

        <c:if test="${page.asExcelAction}">
            <% response.setContentType("application/vnd.ms-excel"); %>

    What I'm trying to do now is the same thing, but on the client side with JavaScript, and I can't manage to do it. This is what I've got so far:

        <html>
        <head>
        <script>
            function saveAs(){
                var sMarkup = document.getElementById('content').innerHTML;
                //var oNewDoc = document.open('application/vnd.ms-excel');
                var oNewDoc = document.open('text/html');
                oNewDoc.write( sMarkup );
                oNewDoc.close();
            }
        </script>
        </head>
        <body>
            <div id='content'>
                <table>
                    <tr>
                        <td>Stack</td>
                        <td>Overflow</td>
                    </tr>
                </table>
            </div>
            <input type="button" value="Save as" onClick="saveAs()"/>
        </body>
        </html>


  • Bidirectional replication update record problem

    - by Mirek
    Hi, I would like to present a problem related to SQL Server 2005 bidirectional replication.

    What do I need? My team leader wants to solve one of our problems using bidirectional replication between two databases, each used by a different application. One application creates records in table A, and those changes should replicate to a copy of table A in the second database. When data on the second server is changed, those changes have to be propagated back to the first server.

    I am trying to achieve bidirectional transactional replication between two databases on one server running SQL Server 2005. I have managed to set this up using scripts, and established 2 publications and 2 read-only subscriptions with loopback detection. The distribution database is created and publishing is enabled on both databases. The Distributor and Publisher are up. We use some rules to control which records will be replicated, so we need to call our custom stored procedures during replication; the articles are therefore set to use custom update, insert and delete stored procedures.

    So far so good, but... Everything works fine and changes replicate, until updates are done on both tables simultaneously or before the changes have been replicated (which takes about 3-6 seconds). Both records then end up with different values:

        UPDATE db1.dbo.TestTable SET Col = 4 WHERE ID = 1
        UPDATE db2.dbo.TestTable SET Col = 5 WHERE ID = 1

    results in:

        db1.dbo.TestTable  COL = 5
        db2.dbo.TestTable  COL = 4

    But we want last-change-wins replication. Please, is there a way to solve my problem? How can I ensure the same values in both records? Or is there an easier solution than this kind of replication? I can provide the sample replication script I am using.

    I am looking forward to your ideas,
    Mirek


  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array that stores the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object. The big downside is that this would mean only one thread could run the algorithm at any one time.
    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.
    5. Allocate extremely large arrays (e.g. double[10000000]) and also provide the algorithm with offsets into the array, so that different threads use different areas of the array independently. This will obviously require some code to manage the offsets and the allocation of array ranges.

    Any thoughts on which approach would be best (and why)?
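
    Option 3 above is essentially a per-thread scratch buffer, and the pattern is language-agnostic. A small Python sketch of the same idea using threading.local (an illustration only, not the poster's Java; the buffer type and the grow-only sizing policy are assumptions):

        import array
        import threading

        _scratch = threading.local()

        def scratch_buffer(n):
            """Return a per-thread array of at least n doubles, reused across calls."""
            buf = getattr(_scratch, "buf", None)
            if buf is None or len(buf) < n:
                buf = array.array("d", bytes(8 * n))   # allocated once per thread (and on growth)
                _scratch.buf = buf
            return buf

        def run_my_algorithm(input_data):
            tmp = scratch_buffer(len(input_data))      # temporary workspace, no per-call garbage
            for i, x in enumerate(input_data):
                tmp[i] = x * x
            return [tmp[i] for i in range(len(input_data))]

        print(run_my_algorithm([1.0, 2.0, 3.0]))       # [1.0, 4.0, 9.0]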


  • How do I get ASP.NET login status controls to display a Log In option?

    - by Greg McNulty
    I have the following login status controls at the top of my master page. They display the "Logged in as" text, the manager login, and the log-out option. However, when a user is not logged in, nothing is displayed there.

    When the user is NOT logged in, is there a way to display a "Login" text link that takes you to the login page and then "disappears" once the user is logged in? Any help is appreciated. Thanks!

        <asp:LoginName ID="LoginName1" runat="server" FormatString="Logged in as {0}" ForeColor="Aqua" />
        <asp:LoginView ID="LoginView1" runat="server">
            <RoleGroups>
                <asp:RoleGroup Roles="Managers">
                    <ContentTemplate>
                        <asp:HyperLink ID="HyperLink1" runat="server"
                            NavigateUrl="~/Management/management.aspx">Manage Site</asp:HyperLink>
                        or
                        <asp:LoginStatus id="LoginStatus1" runat="server" />
                    </ContentTemplate>
                </asp:RoleGroup>
            </RoleGroups>
            <LoggedInTemplate>
                (<asp:LoginStatus id="LoginStatus1" runat="server" />)
            </LoggedInTemplate>
        </asp:LoginView>

    ASP.NET 3.5, VWD 2008, C#


  • The rightCalloutAccessory button is not shown

    - by Luca
    I try to manage annotations, and to display an info button on the right of the view when a pin gets selected. My relevant code is this:

        - (MKAnnotationView *)mapView:(MKMapView *)map viewForAnnotation:(id <MKAnnotation>)annotation {
            MKPinAnnotationView *newAnnotation = [[MKPinAnnotationView alloc] initWithAnnotation:annotation
                                                                                 reuseIdentifier:@"greenPin"];
            if ([annotation isKindOfClass:[ManageAnnotations class]]) {
                static NSString* identifier = @"ManageAnnotations";
                MKPinAnnotationView *newAnnotation = [[MKPinAnnotationView alloc] initWithAnnotation:annotation
                                                                                     reuseIdentifier:identifier];
                if (newAnnotation == nil) {
                    newAnnotation = [[MKPinAnnotationView alloc] initWithAnnotation:annotation
                                                                    reuseIdentifier:identifier];
                } else {
                    newAnnotation.annotation = annotation;
                }
                newAnnotation.pinColor = MKPinAnnotationColorGreen;
                newAnnotation.animatesDrop = YES;
                newAnnotation.canShowCallout = YES;
                newAnnotation.rightCalloutAccessoryView = [UIButton buttonWithType:UIButtonTypeInfoLight];
                return newAnnotation;
            } else {
                newAnnotation.pinColor = MKPinAnnotationColorGreen;
                newAnnotation.animatesDrop = YES;
                newAnnotation.canShowCallout = YES;
                return newAnnotation;
            }
        }

    ManageAnnotations.m:

        @implementation ManageAnnotations

        @synthesize pinColor;
        @synthesize storeName=_storeName;
        @synthesize storeAdress=_storeAdress;
        @synthesize coordinate=_coordinate;

        -(id)initWithTitle:(NSString*)storeName adress:(NSString*)storeAdress coordinate:(CLLocationCoordinate2D)coordinate{
            if((self=[super init])){
                _storeName=[storeName copy];
                _storeAdress=[storeAdress copy];
                _coordinate=coordinate;
            }
            return self;
        }

        -(NSString*)title{
            return _storeName;
        }

        -(NSString*)subtitle{
            return _storeAdress;
        }

    ManageAnnotations.h:

        @interface ManageAnnotations : NSObject<MKAnnotation>{
            NSString *_storeName;
            NSString *_storeAdress;
            CLLocationCoordinate2D _coordinate;
        }
        //
        @property(nonatomic,assign)MKPinAnnotationColor pinColor;
        @property(nonatomic, readonly, copy)NSString *storeName;
        @property(nonatomic, readonly, copy)NSString *storeAdress;
        @property(nonatomic,readonly)CLLocationCoordinate2D coordinate;
        //
        -(id)initWithTitle:(NSString*)storeName adress:(NSString*)storeAdress coordinate:(CLLocationCoordinate2D)coordinate;
        //

    The pins are shown correctly on the map, but without the info button on the right of the view. Am I missing something?


  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing too fast, so I wrote a little test script. The script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 writes, the database runs a compaction step and the size is measured again. At the end the script plots the database size against the revision numbers.

    The benchmark is run twice: the first time with the default number of document revisions (_revs_limit = 1000), the second time with the number of document revisions set to 1. (Each run produced a size-versus-revision plot; the plots are not reproduced here.)

    For me this is quite unexpected behaviour. In the first run I would have expected linear growth, as every change produces a new revision. When the 1000 revisions are reached, the size should stay constant, as older revisions are discarded. After compaction the size should fall significantly. In the second run, the first revision should result in a certain database size that is then kept during the following writes, as every new revision leads to the deletion of the previous one.

    I could understand a little bit of overhead being needed to manage the changes, but this growth behaviour seems weird to me. Can anybody explain this phenomenon or correct the assumptions that led to my wrong expectations?
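
    For anyone trying to reproduce the measurement, both knobs involved are plain HTTP calls against CouchDB; a minimal sketch using the Python requests library (the database URL and the absence of authentication are assumptions, and compaction normally needs admin rights):

        import requests

        DB = "http://127.0.0.1:5984/sizetest"   # assumed local, unauthenticated CouchDB

        # Limit the number of revisions kept per document (the default is 1000).
        requests.put(DB + "/_revs_limit", data="1",
                     headers={"Content-Type": "application/json"})

        # Old revision bodies are only reclaimed from disk when the database is compacted.
        requests.post(DB + "/_compact", headers={"Content-Type": "application/json"})

        # Database size as reported by CouchDB (the field name differs between 1.x and 2.x+).
        info = requests.get(DB).json()
        print(info.get("sizes", {}).get("file", info.get("disk_size")))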


  • How to use generic (NSObject) controller with subviews of a UIViewController?

    - by wanderlust
    I have a UIViewController that loads several subviews at different times based on user interaction. I originally built all of these subviews in code, with no nib files. Now I am moving to nib files with custom UIView subclasses. Some of these subviews display static data, and I am using loadNibNamed:owner:options: to load them into the view controller. Others contain controls that I need to access.

    I (sort of) understand the reasons Apple says to use one view controller per screen of content, using generic controller objects (NSObjects) to manage subsections of a screen. So I need a view controller, a generic controller, a view class and a nib. How do I put this all together? My working assumptions and subsequent questions:

    - I will associate the view class with the nib in the 'class identity' drop-down in IB.
    - The view controller will coordinate overall screen interactions. When necessary, it will create an instance of the generic controller.
    - Does the generic controller load the nib? How?
    - Do I define the outlets and actions in the view class, or should they be in the generic controller?
    - How do I pass messages between the view controller and the generic controller?

    If anyone can point me to some sample code using a controller in this way, it will go a long way towards helping me understand. None of the books or Stack Overflow posts I've read have quite hit the spot yet.


  • Custom data forms in CakePHP

    - by Affian
    I'm building a controller to manage group-based ACL in CakePHP, and when I create or edit a group I want to be able to select what permissions it has. The group data table only stores a group ID and a group name, as the permissions are stored in the ACO/ARO tables.

    I have an array from the ACO that I want to turn into a set of checkboxes, so you can check them to allow access from that group to that ACO. So first off, how do I turn this array into a set of checkboxes? The array looks like this:

        array(
            [0] => array(
                [Aco] => array(
                    [alias] => 'alias'
                    [id] => 1
                )
                [children] => array(
                    [0] => array(
                        [Aco] => ...etc
                    )
                )
            )
            [1] => array( ...etc )
        )

    My next question is: how can I check these once the form has been submitted to the controller, in order to allow the selected actions?

    Update: OK, changing the angle of my question: how can I use the Form helper to create forms that are not based on any model?


  • Using game of life or other virtual environment for artificial (intelligence) life simulation? [closed]

    - by Berlin Brown
    One of my interests in AI focuses not so much on data but more on biological computing: neural networks, mapping the brain, cellular automata, virtual life and environments.

    Described below is an exciting project that involves developing a virtual environment for bots to evolve in. "Polyworld is a cross-platform (Linux, Mac OS X) program written by Larry Yaeger to evolve Artificial Intelligence through natural selection and evolutionary algorithms." http://en.wikipedia.org/wiki/Polyworld Polyworld is a promising project for studying virtual life, but it is still far from creating an "intelligent autonomous" agent.

    Here is my question: in theory, what parameters would you use to create an AI environment? Possibly a brain environment? Possibly multiple self-contained life organisms that have their own "brain" or life structures?

    I would like to create a spin on the Game of Life simulation. Say you have a 64x64 Game of Life grid, but instead of one grid you have N grids. The N grids are your "life force": if all of the Game of Life entities die in a particular grid, then that entire grid dies, and a group of grids makes up a life form.

    I don't have an immediate goal. First, I want to simulate an environment, visualize what is going on in it with OpenGL, and see if the environment has any interesting properties. I then want to add "scarce resources" and see if the AI environment can manage resources adequately.
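
    A minimal NumPy sketch of the "N grids per life form" idea just described, purely illustrative (the 64x64 size and the grid-dies-when-empty rule come from the post; the random seeding, grid count and API are assumptions, and the OpenGL/resource parts are left out):

        import numpy as np

        def step(grid):
            """One Game of Life generation on a 2-D 0/1 array (toroidal wrap-around)."""
            neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                             if (dy, dx) != (0, 0))
            return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

        class LifeForm:
            """A 'life form' is N grids; a grid whose cells all die drops out."""
            def __init__(self, n_grids=8, size=64, rng=None):
                rng = rng or np.random.default_rng()
                self.grids = [rng.integers(0, 2, (size, size), dtype=np.uint8)
                              for _ in range(n_grids)]

            def tick(self):
                self.grids = [step(g) for g in self.grids]
                self.grids = [g for g in self.grids if g.any()]   # dead grids are removed
                return len(self.grids) > 0                        # does the life form survive?

        # e.g. run until every grid of the life form has emptied out:
        # form = LifeForm()
        # while form.tick():
        #     pass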


  • Prevent comment form re-submission

    - by Rob
    I've got a comment form for an article and I'd like to prevent re-submission. I notice that WordPress handles this very well (going back doesn't cause the browser to request a form re-submission), but I can't figure out how they do it, even though our methods are very similar.

    My script:

    1. The user visits mydomain.com/article/1/article_title.html
    2. They fill in a form which posts to mydomain.com/addnewcomment/1.html
    3. I then do a 302 redirect back to mydomain.com/article/1/article_title.html

    Now if I press back from this position it doesn't request a re-submission. However, if I go to another page, e.g. mydomain.com/tag/1/my_tag.html, and press back, it does resubmit the form. Obviously I want to prevent this.

    What WordPress does:

    1. The user visits mydomain.com/?p=1
    2. They fill in a form which posts to mydomain.com/wp-comments-post.php
    3. This then does a 302 redirect back to mydomain.com/?p=1

    Pressing back, or visiting another page and pressing back, doesn't cause a re-submission. I've had a look through the WP code but I can't see how they manage this, and obviously it's something I'd like to achieve. Does anyone have any thoughts on where I may be going wrong? (I'm only using WordPress as an example to prove that it's possible; obviously I'm not trying to exactly duplicate WP, that would be pointless.)
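
    Both flows above are instances of the Post/Redirect/Get pattern. A tiny Flask sketch of that pattern, offered only as a language-neutral illustration (the routes and in-memory storage are made up, and this is not the poster's PHP): the POST handler never renders a page itself, it always answers with a 302 to a plain GET URL.

        from flask import Flask, redirect, request, url_for

        app = Flask(__name__)
        COMMENTS = {1: []}                      # stand-in storage for article 1

        @app.route("/article/<int:article_id>")
        def article(article_id):
            items = "".join(f"<li>{c}</li>" for c in COMMENTS.get(article_id, []))
            return (f"<ul>{items}</ul>"
                    f"<form method='post' action='/addnewcomment/{article_id}'>"
                    f"<input name='comment'><button>Post</button></form>")

        @app.route("/addnewcomment/<int:article_id>", methods=["POST"])
        def add_comment(article_id):
            COMMENTS.setdefault(article_id, []).append(request.form["comment"])
            # 302 back to the GET page, so "back" and "reload" never replay the POST
            return redirect(url_for("article", article_id=article_id), code=302)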


  • Best practises for Magento Deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage I can't see any requirement for changing the DB, so the three databases will be maintained manually. Specifically:

    - How do you apply Magento upgrades? (Individually in each environment, or on dev and then roll out, or just give up on upgrades?)
    - What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)?
    - Do you restrict developers to editing specific files/folders?
    - Do you restrict theme designers to editing specific files/folders?
    - How do you manage database changes?

    Theme designer files/folders. Designers can be restricted to editing the following folders:

        app/design/frontend/your_interface/your_theme/layout/
        app/design/frontend/your_interface/your_theme/template/
        app/design/frontend/your_interface/your_theme/locale/
        skin/frontend/your_interface/your_theme/

    Extension developer files/folders. Extension developers can edit the following folders/files:

        /app/code/local
        /app/etc/modules/<Namespace>_<Module>.xml

    Database environment management. As the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:

    - Overriding the base URL in PHP. (Blog article on setting up dev and staging databases)
    - Changing the base URL in the database after copying. (Where is this stored?)
    - Doing a mysqldump or backup, then doing a replace on the URL in the SQL file.


  • SQL Server database change workflow best practices

    - by kubi
    The background: My group has four SQL Server databases: Production, UAT, Test and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs), I make a request of my manager, who promotes to Test. After testing, she submits a request to an admin who promotes to UAT. After successful user testing, the same admin promotes to Production.

    The problem: The entire process is awkward, for a few reasons.

    1. Each person must manually track their changes. If I update, add or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain, and it's a waste of the tester's time anyway.
    2. Many of the changes I make are iterative and done through a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know).

    We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type of a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.

    The question: People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is to be able to run a diff between two databases to see how the structures differ, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process?

    For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.


  • How to handle authenticated user access to resources in document oriented system?

    - by Jeremy Raymond
    I'm developing a document-oriented application and need to manage user access to the documents. I have a module that handles user authentication, and another module that handles document CRUD operations on the data store. Once a user is authenticated, I need to enforce which operations the user can and cannot perform on documents, based on the user's permissions.

    The best option I could think of to integrate these two pieces is to create another module that duplicates the data API but also takes the authenticated user as a parameter. The module would delegate the authorization check to the auth module and delegate the document operation to the data access module. Something like:

        -module(auth_data_access).
        -export([save_doc/2]).

        %% User is authenticated (logged into the system).
        %% save_doc validates whether the user is allowed to save the given document;
        %% if so it saves it and returns ok, otherwise it returns {error, permission_denied}.
        save_doc(Doc, User) ->
            case auth:save_allowed(Doc, User) of
                ok -> data_access:save_doc(Doc);
                denied -> {error, permission_denied}
            end.

    Is there a better way I can handle this?


  • rails semi-complex STI with ancestry data model planning the routes and controllers

    - by ere
    I'm trying to figure out the best way to manage my controller(s) and models for a particular use case. I'm building a review system where a User may build a review of several distinct types with a polymorphic Reviewable:

    - Country (has_many reviews & cities)
    - Subdivision/State (optional, sometimes it doesn't exist; also reviewable; has_many cities)
    - City (has places & reviews)
    - Burrow (optional, also reviewable, e.g. Brooklyn)
    - Neighborhood (optional & reviewable, e.g. Williamsburg)
    - Place (belongs to a city)

    I'm also wondering about adding more complexity. I want to include subdivisions occasionally: for the US I might add Texas, or for Germany, Bavaria, and have it be reviewable as well. But not every country has regions, and even those that do might never be reviewed, so it's not at all strict. I would like it to be as simple and flexible as possible.

    It would be nice if the user could land on one form and select either a city or a country, and then drill down using data from, say, Foursquare to find a particular place in a city and make a review. I'm really not sure which route I should take. For example, what happens if I have a Country and a City, and then I decide to add a Burrow? Could I give places tags (e.g. Williamsburg, Brooklyn) that belong_to NY City, with the tags belonging to NY? Tags are more flexible and can optionally describe what areas a place might be in; the tags belong to a city, but could also have places and be reviewable.

    So I'm looking for suggestions from anyone who's done something related. Using Rails 3.2 and Mongoid.

