Search Results

Search found 12753 results on 511 pages for 'small'.


  • how to intercept processing when Session.IsNewSession is true

    - by Cen
    I have a small 4-page application for my customers to work through. They fill out information. If they let it sit too long and the Session times out, I want to pop up a JavaScript alert that their session has expired and that they need to start over, and then redirect them to the first page of the application. I'm getting some strange behavior. I'm stepping through code, forcing Session.IsNewSession to be true. At this point, I write out a call to JavaScript via a Literal control placed at the bottom of the page. The JavaScript is called, and the redirection occurs. However, what actually happens is this: I press a button which is more or less a "Next Page" button, triggering this code. The next page is displayed, and only then do the alert and redirection occur. What I expected was to stay on the same page where the timeout was detected, have the alert pop up over it, and then be redirected. I'm checking Session.IsNewSession in a base class for these pages, overriding the OnInit event. Any ideas why I am getting this behavior? Thanks!
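
    For reference, a minimal sketch of the setup described above (control and page names are invented, not taken from the post): the base page overrides OnInit, checks Session.IsNewSession, and writes the alert/redirect script into a Literal sitting at the bottom of the page.

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        // Hypothetical base page; "litScript" and "Page1.aspx" are assumptions.
        public class TimeoutAwareBasePage : Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);

                if (Session != null && Session.IsNewSession)
                {
                    // The Literal is declared at the bottom of each derived page's markup.
                    var lit = FindControl("litScript") as Literal;
                    if (lit != null)
                    {
                        lit.Text = "<script type='text/javascript'>" +
                                   "alert('Your session has expired. Please start over.');" +
                                   "window.location = 'Page1.aspx';" +
                                   "</script>";
                    }
                }
            }
        }

    In this shape the script only reaches the browser on whatever page the next postback renders, which matches the behaviour being described: the "Next Page" request completes first, and the alert fires on the newly displayed page.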

    Read the article

  • 'Forward-Compatible' Program Design

    - by Jeffrey Kern
    Most of the questions I've asked here on StackOverflow so far have been about how to implement individual concepts and techniques towards developing a software-based NES clone in the XNA environment. The small samples I've thrown together on my PC work well enough on their own. But I've hit a brick wall: how do I merge all of these samples together? Having a proof of concept is great, except when you need it to go beyond just that. I now have samples strewn about that I'm trying to merge, some of them incomplete. I'm stuck in a chicken-and-egg situation where I would like to combine these samples to make sure they work together, but I can't without test data, and I don't have tools to create test data because those tools would need to be built on the very pieces that still need to be put together. In my mind, it's a nightmare of circular references. For my sample data, I'm hoping to save it in XML and write a specification, then make the sample data by hand, but I'm paranoid about manually creating an XML file full of incorrect data and blaming my code for the result, or vice versa. It doesn't help that the end result of my work is graphics-oriented, which makes it interesting to figure out how a graphic on the screen should be represented in XML nodes. I guess my question is this: what design patterns and disciplines exist that address this kind of concern? I've always relied on brute-force coding and restarting a project with a whole new code base to push my goals along, but I doubt that is the best way to do it. In my college career, most of my programming was on simple projects that came out of a book, or came with a given correct data set and a verifiable result. I don't have that here, as the design documents I'm working from could themselves be terribly wrong.

    Read the article

  • What is the performance penalty of XML data type in SQL Server when compared to NVARCHAR(MAX)?

    - by Piotr Owsiak
    I have a DB that is going to keep log entries. One of the columns in the log table contains serialized (to XML) objects, and a guy on my team proposed going with the XML data type rather than NVARCHAR(MAX). This table will keep logs "forever" (archiving some very old entries may be considered in the future). I'm a little worried about the CPU overhead, but I'm even more worried that the DB can grow faster (FoxyBOA, from the question referenced below, got a 70% bigger DB when using XML). I have read this question http://stackoverflow.com/questions/514827/microsoft-sql-server-2005-2008-xml-vs-text-varchar-data-type and it gave me some ideas, but I am particularly interested in clarification on whether the DB size increases or decreases. Can you please share your insight/experience in this matter? BTW, I don't currently have any need for the XML features within SQL Server (there's nearly zero advantage for me in this specific case). Occasionally log entries will be extracted, but I prefer to handle the XML in .NET (either by writing a small client or by using a function defined in a .NET assembly).
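
    One way to get a concrete answer for this particular payload is to load a sample of the real log XML into both column types and compare storage directly; a rough sketch, with invented table names:

        -- Hypothetical comparison table: the same payload stored both ways.
        CREATE TABLE dbo.LogSizeTest
        (
            Id          int IDENTITY PRIMARY KEY,
            PayloadXml  xml           NULL,
            PayloadText nvarchar(max) NULL
        );

        INSERT INTO dbo.LogSizeTest (PayloadXml, PayloadText)
        SELECT SerializedObject, SerializedObject      -- assumes some existing sample source
        FROM   dbo.ExistingLogSample;

        -- xml is stored in an internal binary format, so the two totals can differ noticeably
        -- in either direction depending on the shape of the documents.
        SELECT SUM(DATALENGTH(PayloadXml))  AS XmlBytes,
               SUM(DATALENGTH(PayloadText)) AS NvarcharBytes
        FROM   dbo.LogSizeTest;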

    Read the article

  • Single Responsibility Principle vs Anemic Domain Model anti-pattern

    - by Niall Connaughton
    I'm on a project that takes the Single Responsibility Principle pretty seriously. We have a lot of small classes and things are quite simple. However, we have an anemic domain model - there is no behaviour in any of our model classes; they are just property bags. This isn't a complaint about our design - it actually seems to work quite well. During design reviews, SRP is brought out whenever new behaviour is added to the system, and so new behaviour typically ends up in a new class. This keeps things very easy to unit test, but I am perplexed sometimes because it feels like pulling behaviour out of the place where it's relevant. I'm trying to improve my understanding of how to apply SRP properly. It seems to me that SRP is in opposition to adding business-modelling behaviour that shares the same context to one object, because the object inevitably ends up either doing more than one related thing, or doing one thing but knowing multiple business rules that change the shape of its outputs. If that is so, then it feels like the end result is an Anemic Domain Model, which is certainly the case in our project. Yet the Anemic Domain Model is an anti-pattern. Can these two ideas coexist? EDIT: A couple of context-related links: SRP - http://www.objectmentor.com/resources/articles/srp.pdf Anemic Domain Model - http://martinfowler.com/bliki/AnemicDomainModel.html I'm not the kind of developer who likes to find a prophet and follow what they say as gospel, so I don't provide these links as a way of stating "these are the rules", just as a source of definitions for the two concepts.
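
    A schematic of the shape being described, with invented names: the model is a property bag, and each new piece of behaviour gets its own single-responsibility class.

        // Anemic model: data only.
        public class Order
        {
            public decimal Subtotal { get; set; }
            public string CustomerTier { get; set; }
        }

        // Behaviour lives elsewhere, one responsibility per class; easy to unit test,
        // but it "knows" the Order from the outside.
        public class OrderDiscountCalculator
        {
            public decimal CalculateDiscount(Order order)
            {
                return order.CustomerTier == "Gold" ? order.Subtotal * 0.1m : 0m;
            }
        }

        // The rich-domain alternative would put CalculateDiscount on Order itself.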

    Read the article

  • how often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me, and not everyone in this forum belongs to the other forums too. I was looking at a large, decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests. That's about 1% of the total tests that have been written to date. MORE INFORMATION: The ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest, even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus of those tests get run, not just a few. QUESTION: given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)

    Read the article

  • OO vs Simplicity when it comes to user interaction

    - by Oetzi
    Firstly, sorry if this question is rather vague, but it's something I'd really like an answer to. As a project over the summer, while I have some downtime from uni, I am going to build a Monopoly game. However, this question is about the general idea of the problem rather than the specific task I'm trying to carry out. I decided to build this with a bottom-up approach, creating just movement around a forty-space board and then moving on to interaction with spaces. I realised that I was quite unsure of the best way of proceeding, and I am torn between two design ideas: (1) giving every space its own object, all subclasses of a Space object, so the interaction can be defined by the space object itself; I could do this by implementing a different land() method for each type of space. (2) Only giving the Properties and Utilities objects (as each property has unique features) and creating methods for dealing with buying/renting etc. in the main class of the program (or Board, as I'm calling it); spaces like Go and Super Tax could then be handled by a small set of conditionals checking whether the player is on a special space. Option 1 is obviously the OO (and, I feel, the correct) way of doing things, but I'd like to handle user interaction only from the program's main class. In other words, I don't want the space objects to be interacting with the player. Why? Errr. A lot of the coding I've done so far has had this simplicity, but I'm not sure whether that is a pipe dream for larger projects. Should I really be handling user interaction in an entirely separate class? As you can see, I am quite confused about this situation. Is there some way round this? And does anyone have any advice on practical OO design that could help in general?
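
    A minimal sketch of option (1), with invented names: each space type overrides land(), and the Board only forwards the player to whatever space it lands on.

        // Hypothetical sketch of option (1): behaviour lives on the space itself.
        class Player {
            int position;
            int cash;
            void credit(int amount) { cash += amount; }
        }

        abstract class Space {
            abstract void land(Player player);
        }

        class GoSpace extends Space {
            @Override void land(Player player) { player.credit(200); }   // salary for landing on Go
        }

        class SuperTaxSpace extends Space {
            @Override void land(Player player) { player.credit(-100); }  // flat deduction, amount made up
        }

        class Board {
            private final Space[] spaces = new Space[40];

            Board() {
                spaces[0] = new GoSpace();
                for (int i = 1; i < 40; i++) spaces[i] = new SuperTaxSpace();  // placeholder layout
            }

            void move(Player player, int roll) {
                player.position = (player.position + roll) % 40;
                spaces[player.position].land(player);   // the Board forwards, the Space decides
            }
        }

    Option (2) would instead leave land() off the spaces entirely and put the buy/rent/tax branching inside Board.move().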

    Read the article

  • is there evidence that offshoring is causing developer salaries to go down? [closed]

    - by jcollum
    I realize this is a controversial and political topic. I'm trying to decide whether offshoring is something that is affecting our industry in any substantial way or whether it's just some bugaboo. I've read various posts on SO about it, but none addressed the idea of evidence for offshoring: studies, papers, opinions of people who know about such things, etc. I hear a lot about offshoring and its effect on our job market. However, it all seems to be hearsay and conjecture. It does seem like some people are genuinely worried about it. This offshoring thing has been going on for quite some time, which should be enough time for some real data to come out. If I had to pick a number I'd say it started during the dotcom boom -- a time when the need for developers far outweighed the local talent pool. We're now in a time when the talent pool is expensive and corporate wallets are tight, which seems like an ideal time to find a good cheap developer in some other country. But is that actually happening? From reading some posts here on SO, I've concluded that offshoring is a really tough thing to do right. There are a lot of companies who think (or say) they can do it right, but only a small percentage of them are actually able to pull it off. Is offshoring affecting the job market in any measurable way? Is offshoring measurable at all? Do we need to stop worrying about this?

    Read the article

  • Appropriate SQL Server Permissions for Developers

    - by BJ Safdie
    After a couple of Google searches and a quick look at questions here, I cannot seem to find what I thought would be a cookbook answer for SQL Server permissions. As I often see in small shops, most developers here were using an admin account for SQL Server while developing. I want to set up roles and permissions that I can assign to developers so that we can get our jobs done, but also do so with the minimum permissions required. Can anyone offer advice on what SQL Server permissions to assign? Components: SQL Server 2008, SQL Server Reporting Services (SSRS) 2008, and SQL Server Integration Services (SSIS) 2008. Platforms: Production, Staging/QA, and Development/Integration. We are running "Mixed Mode" security because of some legacy apps and networks, but are moving to Windows Auth. I am not sure if that really affects the role setup. I plan to set up access for developers to the Prod and Staging/QA DBs as read-only. However, I still want developers to retain the ability to run profiling. We need deployment accounts with higher privilege levels. We are currently trying to figure out exactly what privileges we need for SSIS package deployments. Within the development server, developers need broad privileges. However, I am not sure that just making them all admins is really the best choice. It's hard to believe that no one has published a decent example script that sets up these kinds of roles with a good set of appropriate permissions for developers and deployers. We can probably figure this all out by locking things down and then adding permissions as we discover the need, but that will be way too big a PITA for everyone. Can anyone point me to, or provide, a good exemplar for permissions for these kinds of roles on these kinds of platforms?
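
    A hedged sketch of the read-only-plus-Profiler piece for the Prod and Staging servers; the Windows group name is invented, and this is only the shape of such a script, not a complete permission model:

        USE master;
        CREATE LOGIN [CONTOSO\Developers] FROM WINDOWS;

        -- ALTER TRACE is the server-level permission SQL Server 2008 requires for running Profiler.
        GRANT ALTER TRACE TO [CONTOSO\Developers];

        -- Repeat in each production / staging database: read-only access only.
        USE ProdDb;
        CREATE USER [CONTOSO\Developers] FOR LOGIN [CONTOSO\Developers];
        EXEC sp_addrolemember 'db_datareader', 'CONTOSO\Developers';

    On the Development/Integration server the same group could be added to broader database roles (db_owner on the dev databases, for instance) without being granted sysadmin on the instance.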

    Read the article

  • Improving the performance of XSL

    - by Rachel
    I am using the XSL 2.0 code below to find the IDs of the text nodes that contain the list of indices I give as input. The code works correctly, but in terms of performance it takes a long time on huge files. Even on huge files, if the index values are small, the result comes back in a few milliseconds. I am using the Saxon 9 HE Java processor to execute the XSL.
        <xsl:variable name="insert-data" as="element(data)*">
          <xsl:for-each-group select="doc($insert-file)/insert-data/data" group-by="xsd:integer(@index)">
            <xsl:sort select="current-grouping-key()"/>
            <data index="{current-grouping-key()}"
                  text-id="{generate-id(
                    $main-root/descendant::text()[
                      sum((preceding::text(), .)/string-length(.)) ge current-grouping-key()
                    ][1]
                  )}">
              <xsl:copy-of select="current-group()/node()"/>
            </data>
          </xsl:for-each-group>
        </xsl:variable>
    With this solution, if the index value is large, say 270962, the XSL takes 83427 ms to execute. On huge files with large index values such as 4605415 or 4605431 it takes several minutes. It seems the computation of the variable "insert-data" is what takes the time, even though it is a global variable and is computed only once. Should the XSL be addressed, or the processor? How can I improve the performance of the XSL?

    Read the article

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation: I have a set of utility functions, say 5 to 10 files. Technically they are a static library, cross-platform: SConscript/SConstruct plus a Visual Studio project (not a solution). Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link to one central place. Sometimes a project uses one file, sometimes two, and some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler). The problem arises when, while working on one project, you modify those utility function files. Because each project has its own copy of the file, this introduces a new version, which leads to a mess when you later (a week later, for example) try to guess which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork). How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic links/hard links, but it isn't perfect: if you delete the one central storage, it all goes to hell, and hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do that, or if it is possible with git. So, what do you think?
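
    One concrete arrangement git already supports for the "stored once, referenced by many projects" idea floated above is a submodule; the URL and path below are invented:

        # Publish the utility functions as their own repository, then in each project:
        git submodule add https://example.com/git/utils.git libs/utils
        git commit -m "Reference shared utils as a submodule"

        # After cloning any project that uses it:
        git submodule update --init

        # A fix made inside libs/utils is committed and pushed once, in the utils repository;
        # each project then updates its recorded submodule revision when it is ready to pick it up.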

    Read the article

  • How to pass Event as a parameter in JQuery function

    - by Manas Saha
    Hi, I am learning jQuery and I have written a small function that attaches a handler to a button's click event. This is in the head element of the HTML:
        <script type="text/javascript">
            $(pageLoaded);
            function pageLoaded() {
                $("#Button1").bind("click", { key1: "value1", key2: "value2" },
                    function buttonClick(event) {
                        $("#displayArea").text(event.data.key1);
                    }
                );
            }
        </script>
    This is the body of the HTML:
        <input id="Button1" type="button" value="button" />
        <div id="displayArea" style="border:2px solid black; width:300px; height:200px">
    This code works fine. But when I try to write the buttonClick function outside the anonymous method, it does not work any more. I tried to call it this way:
        $("#Button1").bind("click", { key1: "value1", key2: "value2" }, buttonClick(event));
        function buttonClick(var event) {
            $("#displayArea").text(event.data.key1);
        }
    This is not working. Am I making a mistake in how I pass the Event as a parameter? What is the correct way to do it without using anonymous methods?
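
    For reference, a sketch of the named-handler form being asked about: bind() expects the function itself rather than the result of calling it, and JavaScript parameters are declared without var.

        <script type="text/javascript">
            function buttonClick(event) {              // no "var" in the parameter list
                $("#displayArea").text(event.data.key1);
            }

            $(function () {
                // pass the function reference; writing buttonClick(event) would call it immediately
                $("#Button1").bind("click", { key1: "value1", key2: "value2" }, buttonClick);
            });
        </script>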

    Read the article

  • How to prevent session hijacking with SID (CGI perl)

    - by Gnippots
    I have a web app used by a small number of people (internal only) and am using a randomised session ID that is stored under the user record and placed in various links. I have had a problem where users send links to each other, which allows the recipient to hijack the sender's session. What are some ways of preventing this from happening while still letting users send links to one another? Edit: The session ID in the link (which also contains $username) is just compared to what is stored in the User table. &incorrectLogin just prints an error followed by die;
        if ($sid) {
            $sth = $dbh->prepare("SELECT * FROM tbl_User WHERE UserID = '$username'");
            $sth->execute();
            $ref = $sth->fetchrow_hashref();
            $session_chk = $ref->{'usr_sessionID'};
            unless ($sid eq $session_chk) { &incorrectLogin; }
        }
    The problem is that if someone uses a link that was created by someone else, the page loads as that other person. I am not using cookies, and I recall being told in the past that CGI Perl cookie handling is quite poor.

    Read the article

  • Delphi Client-Server Application using Firebird 2.5 embedded connection error

    - by Japster
    I have a lengthy question to ask. First of all, I'm still very new to Delphi programming, and my experience has been mostly in developing small single-user database applications using ADO and an Access database. I now need to make the transition to a client-server application, and this is where the problem starts. I decided to use Firebird 2.5 embedded as my database, as it is open source, can be used with the InterBase components in Delphi, and lets multiple clients access the database simultaneously. So I followed the InterBase tutorial in Delphi. I managed to connect the client to the server and see the data in the example (while both were running on my PC), but when I moved the client to another PC, keeping the server on mine, and ran it to see if I could connect to the server, it gave me the following error: Exception EIdSocketError in module clientDemo.exe at 0029DCAC. Socket Error # 10061 Connection refused. I understand that this might be because the host is defined as localhost in the client. But here is my first question. In the TSQLConnection component you can set the hostname under Driver-Hostname. What I want to know is how to do this at run time, as I cannot get at the property when I try to make an edit box that lets the user enter the value and then set it in code, for example: SQLConnection1.Driver.Hostname := edtHost.Text; It cannot be done this way, and the only way I can see to set the hostname is with the Object Inspector, but that is not available at runtime, and I need to set the hostname/IP address on the client the first time the program runs. So how do you set the hostname/IP address at runtime? I'm using Delphi XE2. There are still a lot of questions to come, especially when it comes to deployment, but I will take this piece by piece, and I appreciate the advice.

    Read the article

  • Accessing a Class Member from a First-Class Function

    - by dbyrne
    I have a case class which takes a list of functions:
        case class A(q:Double, r:Double, s:Double, l:List[(Double)=>Double])
    I have over 20 functions defined. Some of these functions have their own parameters, and some of them also use the q, r, and s values from the case class. Two examples are:
        def f1(w:Double) = (d:Double) => math.sin(d) * w
        def f2(w:Double, q:Double) = (d:Double) => d * q * w
    The problem is that I then need to reference q, r, and s twice when instantiating the case class:
        A(0.5, 1.0, 2.0, List(f1(3.0), f2(4.0, 0.5))) //0.5 is referenced twice
    I would like to be able to instantiate the class like this:
        A(0.5, 1.0, 2.0, List(f1(3.0), f2(4.0))) //f2 already knows about q!
    What is the best technique to accomplish this? Can I define my functions in a trait that the case class extends? EDIT: The real world application has 7 members, not 3. Only a small number of the functions need access to the members. Most of the functions don't care about them.
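
    One direction, sketched under the assumption that the element type of l may change: let each function take the enclosing A first, so q, r and s come from the instance rather than from the call site.

        // Hypothetical variation: the list holds A => Double => Double instead of Double => Double.
        case class A(q: Double, r: Double, s: Double, l: List[A => (Double => Double)])

        object Funcs {
          def f1(w: Double): A => Double => Double = _ => d => math.sin(d) * w
          def f2(w: Double): A => Double => Double = a => d => d * a.q * w   // q comes from the instance
        }

        object Demo extends App {
          val a = A(0.5, 1.0, 2.0, List(Funcs.f1(3.0), Funcs.f2(4.0)))       // 0.5 appears only once
          val fs: List[Double => Double] = a.l.map(f => f(a))                // close over the instance
          println(fs.map(_(2.0)))
        }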

    Read the article

  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: We have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a client, which is a Windows Forms-based .NET app. Typically the service receives 50-100 inbound calls per minute to add or query jobs that are stored in a table in SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs, and starts processing them. A job moves through the following states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked. Another responsibility of this service is to update all clients on every state change of every job, which means roughly 200+ callbacks to clients per second. Question: This whole implementation is done using WCF duplex bindings and works perfectly fine with a small number of parallel jobs. The problem arises when we scale up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!

    Read the article

  • E-Commerce Security: Only Credit Card Fields Encrypted?!

    - by bizarreunprofessionalanddangerous
    I'd like your opinions on how a major bricks-and-mortar company is running the security for its shopping Web site. After a recent update, when you are logged into your shopping account, the session is no longer secured. No 'https', no browser 'lock'. All the personal contact info, shopping history -- and, if I'm not mistaken, password submission and change -- are being sent unencrypted. There is a small frame around the credit card fields that is https. There's a little notice: "Our website is secure. Our website uses frames and because of this the secure icon will not appear in your browser." On top of this, the most prominent login fields for the site are broken and haven't been fixed for a week or longer (giving the distinct impression they have no clue what's going on and can't be trusted with anything). Now, is it just me -- or is this simply incomprehensible for a billion-dollar company and significant shopping site in the year 2010? No lock. "We use frames" (maybe they forgot "Best viewed in IE4"). Customers complaining, as you can see from their FAQ "explaining" why you aren't seeing https. I'm getting nowhere trying to convince customer service that they REALLY need to do something about this, and am about to head for the CEO. But I just want to make sure this is as BIZARRE and unprofessional and dangerous a situation as I think it is. (I'm trying to visualize what their Web technical team consists of. I'm getting A) some customer service reps who were given a 3-hour training course on Web site maintenance, B) a 14-year-old boy in his bedroom masquerading as a major technical services company, C) a guy in a hut in a jungle with an e-commerce book from 1996.)

    Read the article

  • TSQL - make a literal float value

    - by David B
    I understand the host of issues in comparing floats, and lament their use in this case - but I'm not the table author and have only a small hurdle to climb... Someone has decided to use floats as you'd expect GUIDs to be used. I need to retrieve all the records with a specific float value.
        sp_help MyTable
        -- Column_name      Type   Computed  Length  Prec
        -- RandomGrouping   float  no        8       53
    Here's my naive attempt:
        --yields no results
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping = 0.867153569942739
    And here's an approximately working attempt:
        --yields 2 records
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping BETWEEN 0.867153569942739 - 0.00000001
                                 AND 0.867153569942739 + 0.00000001
        -- 0.867153569942739
        -- 0.867153569942739
    In my naive attempt, is that literal a floating point literal? Or is it really a decimal literal that gets converted later? If my literal is not a floating point literal, what is the syntax for making a floating point literal? EDIT: Another possibility has occurred to me... it may be that a more precise number than is displayed is stored in this column. It may be impossible to create a literal that represents this number. I will accept answers that demonstrate that this is the case. EDIT: response to DVK. TSQL is MS SQL Server's dialect of SQL. This script works, and so equality can be performed deterministically between float types:
        DECLARE @X float
        SELECT top 1 @X = RandomGrouping FROM MyTable
        WHERE RandomGrouping BETWEEN 0.839110948199148 - 0.000000000001
                                 AND 0.839110948199148 + 0.000000000001
        --yields two records
        SELECT * FROM MyTable WHERE RandomGrouping = @X
    I said "approximately" because that method tests for a range. With that method I could get values that are not equal to my intended value. The linked article doesn't apply because I'm not (intentionally) trying to straddle the boundary between decimal and float. I'm trying to work with only floats. This isn't about the non-convertibility of decimals to floats.
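
    For what it is worth, a sketch of the two usual ways to write an approximate-numeric (float) literal in T-SQL; a numeric literal written without an exponent is typed as decimal:

        -- Scientific notation makes the literal float rather than decimal.
        SELECT RandomGrouping FROM MyTable
        WHERE  RandomGrouping = 0.867153569942739E0;

        -- Or convert the decimal literal explicitly.
        SELECT RandomGrouping FROM MyTable
        WHERE  RandomGrouping = CAST(0.867153569942739 AS float);

        -- If the stored value carries more binary precision than the 15 digits displayed,
        -- neither form will match exactly, which is the possibility raised in the first edit above.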

    Read the article

  • DataGrid calculate difference between values in two databound cells

    - by justMe
    Hello! In my small application I have a DataGrid (see screenshot) that's bound to a list of Measurement objects. A Measurement is just a data container with two properties: Date and CounterGas (float). Each Measurement object represents my gas consumption at a specific date. The list of measurements is bound to the DataGrid as follows:
        <DataGrid ItemsSource="{Binding Path=Measurements}" AutoGenerateColumns="False">
            <DataGrid.Columns>
                <DataGridTextColumn Header="Date" Binding="{Binding Path=Date, StringFormat={}{0:dd.MM.yyyy}}" />
                <DataGridTextColumn Header="Counter Gas" Binding="{Binding Path=ValueGas, StringFormat={}{0:F3}}" />
            </DataGrid.Columns>
        </DataGrid>
    Well, and now my question :) I'd like to have another column right next to the "Counter Gas" column which shows the difference between the current counter value and the previous one. E.g. this additional column should calculate the difference between the values of Feb. 13th and Feb. 6th = 199.789 - 187.115 = 15.674. What is the best way to achieve this? I'd like to avoid any calculation in the Measurement class, which should just hold the data. I'd rather have the DataGrid handle the calculation. So is there a way to add another column that just calculates the difference between two values? Maybe using some kind of converter and extreme binding? ;D P.S.: Maybe someone with a better reputation could embed the screenshot. Thanks :)

    Read the article

  • Transform only one axis to log10 scale with ggplot2

    - by daroczig
    I have the following problem: I would like to visualize a discrete and a continuous variable on a boxplot, where the latter has a few extremely high values. This makes the boxplot meaningless (the points and even the "body" of the chart are too small), which is why I would like to show it on a log10 scale. I am aware that I could leave the extreme values out of the visualization, but I do not intend to. Let's see a simple example with the diamonds data:
        m <- ggplot(diamonds, aes(y = price, x = color))
    The problem is not serious here, but I hope you can imagine why I would like to see the values on a log10 scale. Let's try it:
        m + geom_boxplot() + coord_trans(y = "log10")
    As you can see, the y axis is log10 scaled and looks fine, but there is a problem with the x axis, which makes the plot very strange. The problem does not occur with scale_y_log10, but that is not an option for me, as I cannot use a custom formatter that way. E.g.:
        m + geom_boxplot() + scale_y_log10()
    My question: does anyone know a solution for plotting the boxplot with a log10 scale on the y axis whose labels can be freely formatted with a formatter function, like in this thread? Editing the question to help answerers, based on answers and comments. What I am really after: one log10-transformed axis (y) with non-scientific labels. I would like to label it like dollar (formatter=dollar) or any custom format. If I try @hadley's suggestion I get the following warnings:
        > m + geom_boxplot() + scale_y_log10(formatter=dollar)
        Warning messages:
        1: In max(x) : no non-missing arguments to max; returning -Inf
        2: In max(x) : no non-missing arguments to max; returning -Inf
        3: In max(x) : no non-missing arguments to max; returning -Inf
    With the y axis labels unchanged:

    Read the article

  • CALayer and Off-Screen Rendering

    - by Luke Mcneice
    I have a paging UIScrollView with a contentSize large enough to hold a number of small UIScrollViews for zooming. The viewForZoomingInScrollView is a view controller that holds a CALayer for drawing a PDF page onto. This allows me to navigate through a PDF much like the iBooks PDF reader. The code that draws the PDF (tiled layers) is located in: - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx; and simply adding a 'page' to the visible screen calls this method automatically. When I change page there is some delay before all the tiles are drawn, even though the object (page) has already been created. What I want to be able to do is render the next page before the user scrolls to it, thus preventing the visible tiling effect. However, I have found that if the layer is located off-screen, adding it to the scroll view doesn't call drawLayer. Any ideas/common gotchas here? I have tried: [viewController.view.layer setNeedsLayout]; [viewController.view.layer setNeedsDisplay]; NB: The fact that this replicates the iBooks functionality is irrelevant in the context of the full app.

    Read the article

  • Designing a general database interface in PHP

    - by lamas
    I'm creating a small framework for my web projects in PHP so I don't have to do the basic work over and over again for every new website. It is not my goal to create a second CakePHP or CodeIgniter, and I'm not planning to build my websites with any of the available frameworks either, as I generally prefer to use things I've created myself. I have no problem designing the framework when it comes to parts like the core structure, request handling, and so on, but I'm getting stuck designing the database interface for my modules. I've already thought about using the MVC pattern but felt it would be a bit of overkill. So the exact problem I'm facing is how my framework's modules (viewCustomers could be a module, for example) should interact with the database. Is it a good idea to write SQL directly in PHP (mysql_query( 'SELECT firstname, lastname(.....))? How could I abstract a query like SELECT firstname, lastname FROM customers WHERE id=X? Would MySQL helper functions like $this->db->get( array('firstname', 'lastname'), array('id'=>X) ) be a good idea? I suspect not, because they make everything more complicated by requiring arrays to be created and passed around. Is the Model pattern from MVC my only real option?
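
    A small sketch of the helper-function option mentioned above, written against PDO with prepared statements; the class and call shape are illustrative only:

        <?php
        // Hypothetical thin wrapper: not a full ORM, just enough to stop repeating SQL boilerplate.
        class Db
        {
            private $pdo;

            public function __construct(PDO $pdo)
            {
                $this->pdo = $pdo;
            }

            // Usage: $db->get(array('firstname', 'lastname'), 'customers', array('id' => 42));
            public function get(array $columns, $table, array $where)
            {
                $conditions = array();
                $params = array();
                foreach ($where as $col => $value) {
                    $conditions[] = "$col = :$col";
                    $params[":$col"] = $value;
                }
                $sql = 'SELECT ' . implode(', ', $columns)
                     . ' FROM ' . $table
                     . ' WHERE ' . implode(' AND ', $conditions);

                $stmt = $this->pdo->prepare($sql);
                $stmt->execute($params);        // values are bound, not interpolated
                return $stmt->fetchAll(PDO::FETCH_ASSOC);
            }
        }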

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter whether the inserts were in a transaction, or issued via plain insert, multi-row insert, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, and the disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL in the 180 s case; I was testing in psql directly. In Django, add ~50% overhead for WAL logging. I tried @commit_on_success: same slowness. I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10 x 16 MB log segments? Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys, to: a small table (198 rows, 16 kB on disk); a large table (1.2M rows, 59 MB data + 89 MB index on disk); and a large table (2.2M rows, 198 MB + 210 MB). So, am I doomed to either drop the foreign keys manually, or use the table in a very un-Django way by defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.

    Read the article

  • Passing IDisposable objects through constructor chains

    - by Matt Enright
    I've got a small hierarchy of objects that in general gets constructed from data in a Stream, but for some particular subclasses, can be synthesized from a simpler argument list. In chaining the constructors from the subclasses, I'm running into an issue with ensuring the disposal of the synthesized stream that the base class constructor needs. It's not escaped me that using IDisposable objects this way is possibly just dirty pool (please advise) for reasons I've not considered, but, this issue aside, it seems fairly straightforward (and good encapsulation). Code:
        abstract class Node
        {
            protected Node (Stream raw)
            {
                // calculate/generate some base class properties
            }
        }

        class FilesystemNode : Node
        {
            public FilesystemNode (FileStream fs) : base (fs)
            {
                // all good here; disposing of fs not our responsibility
            }
        }

        class CompositeNode : Node
        {
            public CompositeNode (IEnumerable some_stuff) : base (GenerateRaw (some_stuff))
            {
                // rogue stream from GenerateRaw now loose in the wild!
            }

            static Stream GenerateRaw (IEnumerable some_stuff)
            {
                var content = new MemoryStream ();
                // molest elements of some_stuff into proper format, write to stream
                content.Seek (0, SeekOrigin.Begin);
                return content;
            }
        }
    I realize that not disposing of a MemoryStream is not exactly a world-stopping case of bad CLR citizenship, but it still gives me the heebie-jeebies (not to mention that I may not always be using a MemoryStream for other subtypes). The stream is not in scope, so I can't explicitly Dispose() it later in the constructor, and adding a using statement in GenerateRaw() is self-defeating since I need the stream returned. Is there a better way to do this? Preemptive strikes: yes, the properties calculated in the Node constructor should be part of the base class, and should not be calculated by (or accessible in) the subclasses. I won't require that a stream be passed into CompositeNode (its format should be irrelevant to the caller). The previous iteration had the value calculation in the base class as a separate protected method, which I then just called at the end of each subtype constructor, and moved the body of GenerateRaw() into a using statement in the body of the CompositeNode constructor. But the repetition of requiring that call for each constructor, and not being able to guarantee that it is run for every subtype ever (a Node is not a Node, semantically, without these properties initialized), gave me heebie-jeebies far worse than the (potential) resource leak here does.
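
    One direction, sketched with the classes from the snippet above and assuming it is acceptable to route construction of CompositeNode through a static factory: the synthesized stream then lives inside a using block, because the base constructor has finished with it by the time the factory returns.

        class CompositeNode : Node
        {
            // private: the stream's lifetime is owned by Create, not by callers
            CompositeNode (Stream raw) : base (raw) { }

            public static CompositeNode Create (IEnumerable someStuff)
            {
                // the base constructor consumes the stream before Create returns,
                // so it is safe to dispose of it here
                using (var raw = GenerateRaw (someStuff))
                {
                    return new CompositeNode (raw);
                }
            }

            static Stream GenerateRaw (IEnumerable someStuff)
            {
                var content = new MemoryStream ();
                // write the formatted elements of someStuff to content...
                content.Seek (0, SeekOrigin.Begin);
                return content;
            }
        }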

    Read the article

  • How to setup Continuous Integration and Continuous Deployment for Django projects?

    - by ycseattle
    Hello, I am researching how to set up CI and continuous deployment for a small-team project: a Django-based web application. Here are the needs: developers check code into a hosted SVN server (unfuddle.com); a CI server detects the new check-in, checks out the source, builds, and runs functional tests; if all tests pass, it deploys the code to the web server on Amazon EC2. For now, the CI server is also responsible for running the functional tests. I figured out that I can use Hudson as the CI server, Selenium to run functional tests, and Fabric to deploy the build to the remote web server in the Amazon cloud. I am new to Django development and not very familiar with open-source tools. My questions are: I can find some information on integrating Hudson with Selenium, but I couldn't find much on how to integrate Fabric with Hudson as well. Is this setup viable? Do you see problems? How do I integrate and deploy database changes? Most likely in the early stages we will change the database schema very often along with code changes. I used to use Visual Studio, and its database project made deployment very simple. I wonder if there is an "established, well-supported" way to do that. Thanks!!

    Read the article

  • Need help with a PHP proxy script

    - by blabus
    I’m currently working on a small web app that allows people to search for music album covers. However, I’m having a problem getting my PHP proxy to work. Right now I’m using a simple PHP proxy script to grab the HTML from Google’s image result page, and parse it to find the first image result (I can’t use the image search API because it doesn’t offer the parameters that I need to use). I know that the PHP script works in some way, because I’ve used it to grab XML data across domains, so I also assume that since I’m trying to grab HTML that’s what’s causing the error. So anyway, here is the site where you can see it: http://www.5byfive.net/beta/index.html Just type something into the textbox, and it will come up with the error that’s returned (I’ve placed the HTML into a file so you can see what it actually looks like here: http://www.5byfive.net/beta/error.html ). Now, I’ve tried searching for info on the web, but unfortunately I basically know nothing about PHP, so I really don’t know how to fix it, and was wondering if anyone here could lend any info or help. Thanks!

    Read the article
