Search Results

Search found 912 results on 37 pages for 'massive'.


  • Why is Reporting Services report vastly slower than its query?

    - by Telos
    I have a query that takes roughly 2 minutes to run. It's not terribly complex in terms of parameters or anything, and the report itself doesn't do any truly extensive processing. Basically just spits the data straight out in a nice format. (Actually one of the reports doesn't format the data at all, just returns a flat table meant to be manipulated in excel.) It's not returning a massive set of data either. Yet the report takes upwards of 30 minutes to run. What could cause this? This is SSRS 2005 against a SQL 2005 database btw. EDIT: OK, I found that with the addition of WITH (NOLOCK) in the report it takes the same time as the query does through SSMS. Why would the query be handled differently if it's coming from reporting services (or visual studio on my local machine) than if coming from SSMS on my local machine? I saw the query running in Activity Monitor a couple times in SLEEP_WAIT mode, but not blocked by anything... EDIT2: The connection string is: Data Source=SERVERNAME;Initial Catalog=DBName
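    For anyone hitting the same symptom: scattering WITH (NOLOCK) over every table works, but an equivalent (and equally dirty-read-prone) alternative is to set the isolation level once at the top of the report's dataset query. This is standard T-SQL rather than anything SSRS-specific; the table names below are hypothetical stand-ins for the report's real query, and whether dirty reads are acceptable is a judgment call for the data in question.

        -- Applies to the rest of the batch, so per-table NOLOCK hints become unnecessary.
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        -- hypothetical stand-in for the report's existing dataset query
        SELECT o.OrderID, o.OrderDate, c.CustomerName
        FROM dbo.Orders AS o
        JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;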

    Read the article

  • Why do ASP.NET timers/UpdatePanels leak memory and can it be fixed/worked around?

    - by KallDrexx
    I have built a suite of internal websites for our company to manage some of our processes. I have been noticing that these pages have massive memory leaks that cause them to use well over 150 MB of memory, which is ridiculous for a webpage that consists of a single form and a GridView displaying 7-10 rows of data at a time, sometimes with the data not changing for a whole day. This data does need to be refreshed on a semi-regular basis so that we always see the latest results and can act on them. After some testing it appears that the memory leak is extremely easy to reproduce, and very noticeable. I created a page with the following ASP.NET markup:

        <body>
          <form id="form1" runat="server">
            <div>
              <asp:scriptmanager ID="Scriptmanager1" runat="server"></asp:scriptmanager>
              <asp:Timer ID="timer1" runat="server" Interval="1000" />
              <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                <ContentTemplate>
                </ContentTemplate>
              </asp:UpdatePanel>
            </div>
          </form>
        </body>

    There is absolutely no code-behind for this; this is the entirety of the page. Running this site in Chrome shows the memory usage shoot up to 25 MB in the span of 20-30 seconds. Leaving it running for a few minutes makes the memory go up to 70 MB and beyond. Am I using timers and update panels wrong, or is this a pure ASP.NET issue with no workaround?

    Read the article

  • How to express inter-project dependencies in Eclipse PDE

    - by Roland Tepp
    I am looking for the best practice for handling inter-project dependencies between mixed project types, where some of the projects are Eclipse plug-in/OSGi bundle projects (an RCP application) and others are plain old Java projects (web services modules). A few of the Eclipse plug-ins have dependencies on the Java projects. My problem is that, at least as far as I've looked, there is no way of cleanly expressing such a dependency in the Eclipse PDE environment. I can have plug-in projects depend on other plug-in projects (via Import-Package or Require-Bundle manifest headers), but not on the plain Java projects. I seem to be able to have a project declare a dependency on a jar from another project in the workspace, but these jar files get picked up by neither the export nor the launch configuration (although Java code editing sees the libraries just fine). The "Java projects" are used for building services to be deployed on a J2EE container (JBoss 4.2.2 for the moment) and in some cases produce multiple jars - one for deploying to the JBoss EAR and another for use by client code (an RCP application). The way we've "solved" this problem for now is that we have 2 more external tools launcher configurations - one for building all the jars and another for copying these jars to the plug-in projects. This works (sort of), but the "whole build" and "copy jars" targets incur quite a large build step, bypassing the whole Eclipse incremental build feature, and by copying the jars instead of just referencing the projects I am decoupling the dependency information and requesting quite a massive workspace refresh that eats up the development time like it was candy. What I would like to have is a much more "natural" workspace setup that would manage dependencies between projects and request incremental rebuilds only as they are needed, be able to use client code from service libraries in RCP application plug-ins, and be able to launch the RCP application with all the necessary classes where they are needed. So can I have my cake and eat it too? ;) NOTE: To be clear, this is not so much about dependency management and module management at the moment as it is about Eclipse PDE configuration. I am well aware of products like Maven, Ivy and Buckminster, and they solve a quite different problem (once I've solved the workspace configuration issue, these products can actually come in handy for materializing the workspace and building the product).

    Read the article

  • Method for finding memory leak in large Java heap dumps

    - by Rickard von Essen
    I have to find a memory leak in a Java application. I have some experience with this but would like advice on a methodology/strategy for it. Any reference and advice is welcome. About our situation:
    - Heap dumps are larger than 1 GB.
    - We have heap dumps from 5 occasions.
    - We don't have any test case to provoke this. It only happens in the (massive) system test environment after at least a week's usage.
    - The system is built on an internally developed legacy framework with so many design flaws that it is impossible to count them all.
    - Nobody understands the framework in depth. It has been transferred to one guy in India who barely keeps up with answering e-mails.
    - We have done snapshot heap dumps over time and concluded that there is not a single component increasing over time. It is everything that grows slowly.
    - The above points us in the direction that it is the framework's homegrown ORM system that increases its usage without limits. (This system maps objects to files?! So not really an ORM.)
    Question: What is the methodology that has helped you succeed in hunting down leaks in an enterprise-scale application?

    Read the article

  • What is the most platform- and Python-version-independent way to make a fast loop for use in Python?

    - by Statto
    I'm writing a scientific application in Python with a very processor-intensive loop at its core. I would like to optimise this as far as possible, at minimum inconvenience to end users, who will probably use it as an uncompiled collection of Python scripts, and will be using Windows, Mac, and (mainly Ubuntu) Linux. It is currently written in Python with a dash of NumPy, and I've included the code below. Is there a solution which would be reasonably fast and would not require compilation? This would seem to be the easiest way to maintain platform-independence. If using something like Pyrex, which does require compilation, is there an easy way to bundle many modules and have Python choose between them depending on detected OS and Python version? Is there an easy way to build the collection of modules without needing access to every system with every version of Python? Does one method lend itself particularly to multi-processor optimisation? (If you're interested, the loop is to calculate the magnetic field at a given point inside a crystal by adding together the contributions of a large number of nearby magnetic ions, treated as tiny bar magnets. Basically, a massive sum of these.)

        # calculate_dipole
        # -------------------------
        # calculate_dipole works out the dipole field at a given point within the crystal unit cell
        # ---
        # INPUT
        # mu    = position at which to calculate the dipole field
        # r_i   = array of atomic positions
        # mom_i = corresponding array of magnetic moments
        # ---
        # OUTPUT
        # B = the B-field at this point
        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            # 4pi / mu0 (at the front of the dipole eqn)
            A = 1e-7
            # initialise dipole field
            B = zeros(3, float)
            for i in range(len(relative)):
                # work out the dipole field and add it to the estimate so far
                B += A * (3 * dot(mom_i[i], r_unit[i]) * r_unit[i] - mom_i[i]) / sqrt(dot(relative[i], relative[i]))**3
            return B
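    One avenue worth trying before reaching for any compilation step is vectorising the loop with NumPy itself, so the per-ion work happens in C inside NumPy rather than in Python bytecode. The sketch below makes an assumption about unit_vectors (that it returns one unit vector per ion) and simply restates the loop body as whole-array operations; treat it as an untested starting point rather than a drop-in replacement.

        import numpy as np

        def calculate_dipole_vectorised(mu, r_i, mom_i):
            # mu: (3,) point; r_i: (N, 3) positions; mom_i: (N, 3) moments
            relative = mu - r_i                                   # (N, 3)
            dist = np.sqrt(np.sum(relative * relative, axis=1))   # (N,) distances
            r_unit = relative / dist[:, np.newaxis]               # (N, 3) unit vectors
            A = 1e-7                                              # 4pi / mu0 prefactor
            mdotr = np.sum(mom_i * r_unit, axis=1)                # (N,) dot(mom_i, r_unit)
            terms = A * (3.0 * mdotr[:, np.newaxis] * r_unit - mom_i) / (dist ** 3)[:, np.newaxis]
            return terms.sum(axis=0)                              # (3,) total B-field

    On large ion counts this kind of whole-array formulation often gives an order-of-magnitude speedup with no compiler or platform concerns at all, which may make the Pyrex question moot.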

    Read the article

  • drupal_add_css not working

    - by hfidgen
    Hiya, I need to use drupal_add_css to call stylesheets onto single D6 pages. I don't want to edit the main theme stylesheet, as there will be a set of individual pages which all need completely new styles - the main sheet would be massive if I put it all in there. My solution was to edit the page in PHP editor mode and do this:

        <?php drupal_add_css("/styles/file1.css", "theme"); ?>
        <div id="newPageContent">stuff here in html</div>

    But when I view source, there is nothing there! Not even a broken CSS link or anything; it's just refusing to add the stylesheet to the CSS package put into the page head. Variations don't seem to work either:

        drupal_add_css($path = '/styles/file1.css', $type = 'module', $media = 'all', $preprocess = TRUE)

    My template header looks like this; I've not changed anything from the default other than adding a custom JS file:

        <head>
          <?php print $head ?>
          <title><?php print $head_title ?></title>
          <?php print $styles ?>
          <?php print $scripts ?>
          <script type="text/javascript" src="<?php print base_path() ?>misc/askme.js"></script>
          <!--[if lt IE 7]>
            <?php print phptemplate_get_ie_styles(); ?>
          <![endif]-->
        </head>

    Can anyone think of a reason why this function is not working? Thanks!

    Read the article

  • Free solution for automatic updates with a .NET/C# app?

    - by a2h
    Yes, from searching I can see this has been asked time and time again. Here's a backstory. I'm an individual hobbyist developer with zero budget. A program I've been developing has been in need of constant bugfixes, and my users and I are getting tired of having to manually update. Me, because my current solution of:
    1. Manually FTP to my website
    2. Update a file "newest.txt" with the newest version
    3. Update index.html with a link to the newest version
    4. Hope for people to see the "there's an update" message
    5. Have them manually download the update
    sucks, and whenever I screw up an update, I get pitchforks. Users, because, well, "Are you ever going to implement auto-update?" "Will there ever be an auto-update feature?" In the past I have looked into:
    - WinSparkle - No in-app updates, and the DLL is 500 KB. My current solution is a few KB in the executable and has no in-app updates.
    - http://windowsclient.net/articles/appupdater.aspx - I can't comprehend the documentation.
    - http://www.codeproject.com/KB/vb/Auto_Update_Revisited.aspx - Doesn't appear to support anything other than working with files that aren't in use.
    - wyUpdate - wyBuild isn't free, and the file specification is simply too complex. Maybe if I was under a company paying me I could spend the time, but then I may as well pay for wyBuild.
    - http://www.kineticjump.com/update/default.aspx - Ditto the last sentence.
    - ClickOnce - Workarounds for implementing launching on startup are massive, horrendous and not worth it for such a simple feature. Publishing is a pain; manual FTP and replacement of all files is required for servers without FrontPage Extensions.
    I'm pretty much ready to throw in the towel right now and strangle myself. And then I think about Sparkle... EDIT: I came across SparkleDotNET just then. Looks good, though the DLL is 200 KB. Don't know if that's really that big of an issue, though.

    Read the article

  • Read a buffer of unknown size (Console input)

    - by Sanarothe
    Hi. I'm a little behind in my x86 ASM class, and the book is making me want to shoot myself in the face. The examples in the book are insufficient and, honestly, very frustrating because of their massive dependencies upon the author's link library, which I hate. I wanted to learn ASM, not how to call his freaking library, which calls more of his library. Anyway, I'm stuck on a lab that requires console input and output. So far, I've got this for my input:

        input PROC
            INVOKE ReadConsole, inputHandle, ADDR buffer, Buf - 2, ADDR bytesRead, 0
            mov eax, OFFSET buffer
            Ret
        input EndP

    I need to use the input and output procedures multiple times, so I'm trying to make them abstract. I'm just not sure how to use the data that is set to eax here. My initial idea was to take that string array and manually crawl through it by adding 8 to the offset for each possible digit (input is an integer, and there's a little bit of processing), but this doesn't work out because I don't know how big the input actually is. So, how would you turn the string array into an integer that could be used? Full code (I haven't done the integer logic or the instruction string output because I'm stuck here):

        include c:/irvine/irvine32.inc

        .data
        inputHandle  HANDLE ?
        outputHandle HANDLE ?
        buffer    BYTE BufSize DUP(?),0,0
        bytesRead DWORD ?
        str1 BYTE "Enter an integer:",0Dh, 0Ah
        str2 BYTE "Enter another integer:",0Dh, 0Ah
        str3 BYTE "The higher of the two integers is: "
        int1 WORD ?
        int2 WORD ?
        int3 WORD ?
        Buf = 80

        .code
        main PROC
            call handle
            push str1
            call output
            call input
            push str2
            call output
            call input
            push str3
            call output
            call input
        main EndP

        larger PROC
            Ret
        larger EndP

        output PROC
            INVOKE WriteConsole
            Ret
        output EndP

        handle PROC USES eax
            INVOKE GetStdHandle, STD_INPUT_HANDLE
            mov inputHandle,eax
            INVOKE GetStdHandle, STD_INPUT_HANDLE
            mov outputHandle,eax
            Ret
        handle EndP

        input PROC
            INVOKE ReadConsole, inputHandle, ADDR buffer, Buf - 2, ADDR bytesRead, 0
            mov eax,OFFSET buffer
            Ret
        input EndP

        END main
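    For the conversion itself, one common approach (sketched below in MASM-style syntax, untested, and assuming the buffer holds only ASCII digits followed by the CR/LF that ReadConsole leaves in place) is to walk the bytes one at a time, multiplying the running result by 10 and adding each digit. bytesRead tells you how many characters arrived, so you never have to guess the length.

        ; Returns EAX = unsigned integer parsed from 'buffer' (digits only assumed).
        parse_int PROC USES esi ecx
            mov   esi, OFFSET buffer     ; pointer to the characters read
            xor   eax, eax               ; running result = 0
        next_char:
            movzx ecx, BYTE PTR [esi]    ; fetch one character
            cmp   ecx, '0'
            jb    done                   ; stop on CR, LF, or the terminating 0
            cmp   ecx, '9'
            ja    done
            imul  eax, eax, 10           ; result = result * 10
            sub   ecx, '0'               ; ASCII digit -> value
            add   eax, ecx               ; result += digit
            inc   esi
            jmp   next_char
        done:
            Ret
        parse_int EndP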

    Read the article

  • Vimeo Desktop App OAuth

    - by Barry
    Hi guys, I'm currently having massive trouble with Vimeo's OAuth implementation and my desktop app. My program does the following correctly:
    1. Requests an Unauthorized Request Token with my key and secret, which returns a token and a token secret.
    2. Generates a URL for the user to go to using the token, which then shows our application's name and allows the user to authorize us to use his/her account. It then shows a verifier which the user returns and puts into our app.
    The problem is the third step: actually exchanging the tokens for the access tokens. Basically, every time we try to get them we get an "Invalid / expired token - The oauth_token passed was either not valid or has expired". I looked at the documentation and there's supposed to be a callback to a server when deployed like that, which gives the user an "authorized token", but as I'm developing a desktop app we can't do this. So I assume the token retrieved in step 1 is valid for this step (actually it seems it is: http://vimeo.com/forums/topic:22605). So I'm wondering now, am I missing something here on my actual Vimeo application account? Is it treating it as a web-hosted app with callbacks? All the elements are there for this to work, and I've used this same component to create a Twitter OAuth login in exactly the same way and it was fine. Thanks in advance, Barry

    Read the article

  • Losing 'post' requests sent to Pylons paster server

    - by Philip McDermott
    I'm sending POST requests to a Pylons server (served by paster serve), and if I send them with any frequency many don't arrive at the server. One at a time is OK, but if I fire off a few (or more) within seconds, only a small number get dealt with. If I send with no POST data, or with GET, it works fine, but putting just one character of data in the POST fields causes massive losses. For example, sending 200, 2 will come back. Sending 100 more slowly, 10 will come back. I'm making the requests from inside a Qt application. This will work OK (no data):

        QString postFields = "";
        QNetworkRequest request(QUrl("http://server.com/endpoint"));
        QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());

    And this will result in only a fraction of the requests being dealt with:

        QString postFields = "";
        QNetworkRequest request(QUrl("http://server.com/endpoint"));
        QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());

    I've played around with turning on use_threadpool and other options (threadpool_workers, threadpool_max_requests = 300), of which some combinations can alter the results slightly (best case 10 responses in 200). If I send similar requests to other (non-paster) servers, the replies come back OK, so I'm almost certain it's a paster serve config issue. Any help or advice greatly appreciated. Thanks, Philip
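    One cheap thing to rule out on the client side (an assumption on my part, not a confirmed cause): QNetworkAccessManager does not add a form content type for you, and some WSGI stacks handle bodies without one poorly. Declaring the encoding of the POST body explicitly looks like this, reusing the names from the snippet above:

        QNetworkRequest request(QUrl("http://server.com/endpoint"));
        request.setHeader(QNetworkRequest::ContentTypeHeader,
                          "application/x-www-form-urlencoded");
        QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());

    If the losses persist with the header set, that points back at the paster threadpool configuration rather than at the requests themselves.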

    Read the article

  • How to combine widget webapp framework with SEO-friendly CSS and JS files

    - by Ali
    Hi guys, I'm writing a webapp using the Zend Framework and a homebrew widget system. Every widget has a controller and can choose to render one of many views if it chooses. This really helps us modularize, reconfigure and reuse the widgets anywhere on the site. The problem is that the views of each widget contain their own JS and CSS code, which leads to very messy HTML code when the whole page is put together. You get pockets of style and script tags everywhere. This is bad for a lot of different reasons as I'm sure you know, but it has a profound effect on our SEO as well. Several solutions that I've been able to come up with:
    1. Separate the CSS and JS of every view of every widget into its own file - this has serious drawbacks for load times (many more resources have to be loaded separately) and it makes coding very difficult, as now you have to have 3-4 files open just to edit a widget.
    2. Combine all the widget CSS into a single file (same with JS) - would also lead to a massive load when someone enters the site, mixes up the CSS and the JS for all widgets so it's harder to keep track of them, and other problems that I'm sure you can think of.
    3. Create a system that uses method 1 (separate CSS and JS for every widget) and, when delivering the page, stitches all the CSS and JS together. This obviously needs more processing time, and of course the creation of such a system, etc.
    My question is what you guys think of these solutions, or if there are pre-existing solutions that you know of (or any tech that might help) to solve this problem. I really appreciate all of your thoughts and comments!! Thanks guys, Ali

    Read the article

  • MVC design question for forms

    - by kenny99
    Hi, I'm developing an app which has a large amount of related form data to be handled. I'm using a MVC structure and all of the related data is represented in my models, along with the handling of data validation from form submissions. I'm looking for some advice on a good way to approach laying out my controllers - basically I will have a huge form which will be broken down into manageable categories (similar to a credit card app) where the user progresses through each stage/category filling out the answers. All of these form categories are related to the main relation/object, but not to each other. Does it make more sense to have each subform/category as a method in the main controller class (which will make that one controller fairly massive), or would it be better to break each category into a subclass of the main controller? It may be just for neatness that the second approach is better, but I'm struggling to see much of a difference between either creating a new method for each category (which communicates with the model and outputs errors/success) or creating a new controller to handle the same functionality. Thanks in advance for any guidance!

    Read the article

  • How to get max row from table

    - by Odette
    Hi, I have the following code and a massive problem:

        WITH CALC1 AS (
            SELECT OTQUOT, OTIT01 AS ITEMS, ROUND(OQCQ01 * OVRC01,2) AS COST FROM @[email protected] WHERE OTIT01 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT02 AS ITEMS, ROUND(OQCQ02 * OVRC02,2) AS COST FROM @[email protected] WHERE OTIT02 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT03 AS ITEMS, ROUND(OQCQ03 * OVRC03,2) AS COST FROM @[email protected] WHERE OTIT03 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT04 AS ITEMS, ROUND(OQCQ04 * OVRC04,2) AS COST FROM @[email protected] WHERE OTIT04 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT05 AS ITEMS, ROUND(OQCQ05 * OVRC05,2) AS COST FROM @[email protected] WHERE OTIT05 <> ''
            ORDER BY OTQUOT ASC
        )
        SELECT OTQUOT, ITEMS, MAX(COST)
        FROM CALC1
        WHERE OTQUOT = '04886471'
        GROUP BY OTQUOT, ITEMS

    Result:

        04886471  FEPO5050WCGA24   13.21
        04886471  GFRK1650SGL      36.21
        04886471  FRA7500GA        12.6
        04886471  CGIFESHAZ        11.02
        04886471  CGIFESHPDPR      11.79
        04886471  GFRK1350DBL      68.23
        04886471  RET1.63825GP     32.55
        04886471  FRSA              0.12
        04886471  GFRK1350SGL      55.94
        04886471  GFRK1650DBL      71.89
        04886471  FEPO6565WCGA24   16.6
        04886471  PCAP5050GA        0.28
        04886471  FEPO6565NCPAG24   0.000000

    How can I get only the row with the item code that has the highest value? In this case I need the result:

        04886471  GFRK1650DBL  71.89

    but I don't know how to change my code to get that - can anybody please help me?
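    A sketch of one common way to pull out just the top-cost row, assuming the DB2 release behind @[email protected] (the table reference is carried over from the question as-is) supports FETCH FIRST: keep the CTE, drop the GROUP BY, and order the final SELECT by cost descending, taking a single row.

        WITH CALC1 AS (
            SELECT OTQUOT, OTIT01 AS ITEMS, ROUND(OQCQ01 * OVRC01,2) AS COST FROM @[email protected] WHERE OTIT01 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT02 AS ITEMS, ROUND(OQCQ02 * OVRC02,2) AS COST FROM @[email protected] WHERE OTIT02 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT03 AS ITEMS, ROUND(OQCQ03 * OVRC03,2) AS COST FROM @[email protected] WHERE OTIT03 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT04 AS ITEMS, ROUND(OQCQ04 * OVRC04,2) AS COST FROM @[email protected] WHERE OTIT04 <> ''
            UNION ALL
            SELECT OTQUOT, OTIT05 AS ITEMS, ROUND(OQCQ05 * OVRC05,2) AS COST FROM @[email protected] WHERE OTIT05 <> ''
        )
        SELECT OTQUOT, ITEMS, COST
        FROM CALC1
        WHERE OTQUOT = '04886471'
        ORDER BY COST DESC
        FETCH FIRST 1 ROW ONLY

    If FETCH FIRST is not available, ROW_NUMBER() OVER (PARTITION BY OTQUOT ORDER BY COST DESC) filtered to row number 1 achieves the same thing, and also generalizes to "top row per quote" rather than a single hard-coded OTQUOT.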

    Read the article

  • iPhone OpenGL and NSTimer issues

    - by Kyle
    I have an NSTimer that runs at 60 Hz. With an OpenGL scene loaded and rendering, my game can get 60 fps, solid, all day long. Then if I go and recompile the app, or reload it, it will get 40 fps. Same resources loaded. I've been running into this problem for years, and I just want to know why. It's crazy, and I want to know if I should just abandon this stupid timer. Conditions are not different on my 3GS between loads. It will just get 40 fps sometimes. Obviously the clock rate is not different between loads, so the performance figures should be constant given a constant scene. Here is a log of my framerates. A good load :-)

        FrameRate: 61
        FrameRate: 61
        FrameRate: 61
        FrameRate: 60
        FrameRate: 60
        FrameRate: 61
        FrameRate: 60
        FrameRate: 60
        FrameRate: 61
        FrameRate: 60
        FrameRate: 61

    Now, I'll go ahead and do nothing, recompile, and run:

        FrameRate: 43
        FrameRate: 50
        FrameRate: 45
        FrameRate: 48
        FrameRate: 40
        FrameRate: 45
        FrameRate: 42
        FrameRate: 41
        FrameRate: 42
        FrameRate: 44
        FrameRate: 41
        FrameRate: 46

    ^- Massive difference visually. What the flying heck could cause this? SAME area of the scene, SAME camera setup. No variables are different.
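    If ditching the timer is on the table, the usual replacement for driving an OpenGL loop on the iPhone is CADisplayLink, which fires in step with the display's refresh instead of whenever the run loop gets around to an NSTimer. A minimal sketch, assuming a render: method on the same object and a deployment target of iPhone OS 3.1 or later:

        #import <QuartzCore/QuartzCore.h>

        // Create the link and tie it to the screen refresh (60 Hz on the 3GS).
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(render:)];
        link.frameInterval = 1;   // fire every vsync; use 2 for a steady 30 fps
        [link addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

        // Later, when tearing the scene down:
        // [link invalidate];

    Because the callback is synchronized to vsync, frame pacing tends to stay consistent between launches in a way a 1/60 s NSTimer cannot guarantee.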

    Read the article

  • As a newbie, where should I go if I want to create a small GUI program?

    - by jimbmk
    Hello, I'm a newbie with a little experience writing in BASIC, Python and, of all things, a smidgeon of assembler (as part of a videogame ROM hack). I wanted to create a small tool, with a GUI, for modifying the hex values at particular points in a particular file. What I'm looking for is the ability to create a small GUI program that I can distribute as an EXE (or, at least, a standalone directory). I'm not keen on the idea of the .NET languages, because I don't want to force people to download a massive .NET Framework package. I currently have Python with IDLE and Boa Constructor set up, and the application runs there. I've tried looking up information on compiling a Python app that relies on wxWidgets, but the search results and the information I've found have been confusing, or just completely incomprehensible. My questions are:
    - Is Python a good language to use for this sort of project?
    - If I use py2exe, will wxWidgets already be included? Or will my users have to somehow install wxWidgets on their machines?
    - Am I right in thinking that py2exe just produces a standalone directory, 'dist', that has the necessary files for the user to just double-click and run the application?
    - If the program just relies upon Tkinter for GUI stuff, will that be included in the EXE py2exe produces? If so, are there any 'visual' GUI builders / IDEs for Python with only Tkinter?
    Thank you for your time, JBMK
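    For reference, a typical py2exe build script is tiny; the sketch below assumes a single-script Tkinter or wxPython app called hexedit.py (the filename is made up for the example). py2exe copies the Python runtime and whatever modules it can trace into the dist directory, so users need neither Python nor the GUI toolkit installed, at the cost of a larger directory.

        # setup.py -- build with:  python setup.py py2exe
        from distutils.core import setup
        import py2exe

        setup(windows=['hexedit.py'])   # 'windows' = GUI app (no console); use console=[...] for CLI tools

    Running this produces the dist\ folder mentioned in the third question: the EXE, the Python DLL and a zip of the library, which you can ship as a plain directory or wrap in an installer.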

    Read the article

  • Database sharing/versioning

    - by DarkJaff
    Hi everyone, I have a question but I'm not sure of the word to use. My problem: I have an application using a database to store information. The database can be in Access (local) or on a server (SQL Server or Oracle). We support these 3 kinds of databases. We want to give the user the possibility to do what I think we can call versioning. Let me explain: we have a database 1. This is the master. We want to be able to create a database 2 that will be the same thing as database 1, but we can give it to someone else. They each work on their own side, adding, modifying and deleting records in this very complex database. After that, we want database 1 to include the changes from database 2, but with the possibility to dismiss some of the changes. For your information, our application is already multi-user, so why don't we just use that and forget about this versioning? It's because sometimes we need to give a copy of the database to another company on another site and they can't connect to our server. They work on their side and then we want to merge. Is there anyone here with experience with this type of requirement? We have a lot of ideas, but most of them require a LOT of work, massive modification to the database or to the existing queries. This is a 2-million-line (and growing) C++ app, so rewriting it is not possible! Thanks for any ideas that you may give us! J-F

    Read the article

  • Default template parameters with forward declaration

    - by Seth Johnson
    Is it possible to forward declare a class that uses default arguments without specifying or knowing those arguments? For example, I would like to declare a boost::ptr_list< TYPE > in a Traits class without dragging the entire Boost library into every file that includes the traits. I would like to declare:

        namespace boost { template<class T> class ptr_list< T >; }

    but that doesn't work, because it doesn't exactly match the true class declaration:

        template
        <
            class T,
            class CloneAllocator = heap_clone_allocator,
            class Allocator      = std::allocator<void*>
        >
        class ptr_list { ... };

    Are my options only to live with it, or to specify boost::ptr_list< TYPE, boost::heap_clone_allocator, std::allocator<void*> > in my traits class? (If I use the latter, I'll also have to forward declare boost::heap_clone_allocator and include <memory>, I suppose.) I've looked through Stroustrup's book, SO, and the rest of the internet and haven't found a solution. Usually people are concerned about not including the STL, and the solution is "just include the STL headers." However, Boost is a much more massive and compiler-intensive library, so I'd prefer to leave it out unless I absolutely have to.
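    For what it's worth, the underlying language rule can be sketched with a plain template: a forward declaration may itself carry the default arguments, but each default may be specified only once per translation unit, so you cannot safely re-declare Boost's defaults in a header that might also end up seeing the real Boost header. The usual compromise is exactly the one described above: forward declare with all parameters and no defaults, then spell the full argument list out in the traits. The sketch below uses a hypothetical Widget payload type, and assumes heap_clone_allocator is declared as a struct in Boost.

        #include <memory>   // std::allocator for the explicit argument list below

        namespace boost
        {
            // all three parameters, no defaults (the real header owns those)
            template<class T, class CloneAllocator, class Allocator> class ptr_list;
            struct heap_clone_allocator;
        }

        struct Widget;   // hypothetical payload type for the example

        struct WidgetTraits
        {
            // pointer/reference uses compile fine against the incomplete type
            typedef boost::ptr_list<Widget, boost::heap_clone_allocator,
                                    std::allocator<void*> >* ListPtr;
        };

    Any translation unit that actually instantiates or dereferences the list still needs the real Boost header, but headers that merely mention the type in signatures or typedefs stay Boost-free.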

    Read the article

  • Thread-local memory: using std::string's internal buffer for C-style scratch memory

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies - similar to Kerberos tokens. Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit the caching mechanism, for successive calls in the same thread, by placing it in thread-local memory; additionally, the OpenSSL HMAC and EVP CTXs are also placed in the same thread-local memory structure (see this question for some detail on why I use thread-local memory and the massive amount of speedup it enables even with a single thread). The generation and deserialization - "my algorithms" - of these cookie strings use intermediary void*s and std::strings, and since Protocol Buffers has an internal memory retention mechanism I want these characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf (streambuf - stringbuf??) of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts? My question I guess would be: "is the internal buffer of a string re-usable, and if so, how?" Edit: See comments to Vlad's answer please.
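    As a sketch of the usual pattern (my own illustration, not specific to Protocol Buffers or OpenSSL): a std::string kept in the thread-local structure can serve as reusable scratch memory if you resize() it up front and write through &s[0]; the capacity is retained across calls, so after the first few requests no further allocation happens. Contiguity of &s[0] is only formally guaranteed from C++11 on, though mainstream C++03 implementations store strings contiguously in practice.

        #include <string>
        #include <cstring>

        struct ScratchState
        {
            std::string scratch;   // lives in the thread-local structure, reused on every call
        };

        // Ensure the scratch buffer can hold 'needed' bytes and hand back a raw pointer.
        inline unsigned char* scratch_ptr(ScratchState& st, std::size_t needed)
        {
            if (st.scratch.size() < needed)
                st.scratch.resize(needed);   // grows once; capacity sticks around afterwards
            return reinterpret_cast<unsigned char*>(&st.scratch[0]);
        }

        // Usage sketch: write ciphertext into the scratch area, then serialize/copy out of it.
        // unsigned char* out = scratch_ptr(state, cipher_len);
        // std::memcpy(out, cipher, cipher_len);

    The same trick works with a std::vector<char>, which some people prefer because its contiguity guarantee predates C++11.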

    Read the article

  • Can a large transaction log cause CPU hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15 GB, with roughly 5 GB to the db and 10 GB to the transaction log. Just recently a web application that connects to that db has started timing out. I have traced the actions on the web page and examined the queries that execute whilst these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU hikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say multiple... read about 5). Under this load, timeouts start to occur. I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads-up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks,

    Read the article

  • Error handling in PHP

    - by Industrial
    Hi guys, we're building a PHP app based on good old MVC (the CodeIgniter framework) and have run into trouble with a massive chained action that consists of multiple model calls, which together are part of a big transaction against the database. We want to be able to perform a list of actions and get a status report for each one back from the function, whatever the outcome is. Our first idea was to utilize PHP 5 exceptions, but since we also want status messages that don't break the execution of the script, this is the solution we came up with. It goes a little something like this:

        $sku = $this->addSku( $name );
        if ($sku === false)
        {
            $status[] = 'Something gone terrible wrong';
            $this->db->trans_rollback();
            return $status;
        }

        $image = $this->addImage( $filename );
        if ($image === false)
        {
            $error[] = 'Image could not be uploaded, check filesize';
            $this->db->trans_rollback();
            return $status;
        }

    Our controller looks like this:

        $var = $this->products->addProductGroup($array);
        if (is_array($var))
        {
            foreach ($var as $error)
            {
                echo $error . '<br />';
            }
        }

    It appears to be a very fragile solution for what we need, and it's neither scalable nor effective when compared to pure PHP exceptions, for instance. Is this really the way this kind of thing is generally handled in MVC-based apps? Thanks!
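    One pattern worth sketching (my own illustration; the only CodeIgniter-specific parts are the trans_* calls already used above, and addSku()/addImage() are reused from the snippet): let fatal problems throw an exception that triggers a single rollback, while non-fatal notices are collected in an array that travels back with the result, so they never interrupt execution.

        // In the model (sketch, untested)
        public function addProductGroup($data)
        {
            $warnings = array();

            $this->db->trans_begin();
            try {
                $sku = $this->addSku($data['name']);
                if ($sku === false) {
                    throw new Exception('Something went terribly wrong adding the SKU');
                }

                $image = $this->addImage($data['filename']);
                if ($image === false) {
                    // non-fatal: remember it and keep going
                    $warnings[] = 'Image could not be uploaded, check filesize';
                }

                $this->db->trans_commit();
                return array('ok' => true, 'warnings' => $warnings);
            } catch (Exception $e) {
                $this->db->trans_rollback();
                return array('ok' => false, 'warnings' => $warnings, 'error' => $e->getMessage());
            }
        }

    The controller then only has to inspect one return shape: show warnings either way, and treat 'ok' => false as the signal that the whole chain was rolled back.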

    Read the article

  • Ideas for jumping in 2D with ActionScript 3 [included attempt]

    - by befall
    So, I'm working on the basics of ActionScript 3; making games and such. I designed a little space where everything is based on the location of boundaries, using pixel-by-pixel movement, etc. So far, my guy can push a box around, and stops when running into the border, or when trying to push the box when it's against the border. So, next, I wanted to make it so when I bumped into the other box, it shot forward; a small jump sideways. I attempted to use this (foolishly) at first:

        // When right and left borders collide.
        if( (box1.x + box1.width/2) == (box2.x - box2.width/2) )
        {
            // Nine times through
            for (var a:int = 1; a < 10; a++)
            {
                // Adds 1, 2, 3, 4, 5, 4, 3, 2, 1.
                if (a <= 5) { box2.x += a; }
                else { box2.x += a - (a - 5)*2; }
            }
        }

    Though, using this in the function I had for the movement (constantly checking for keys up, etc.) does this all at once. Where should I start going about frame-by-frame movement like that? Furthermore, it's not actually frames in the scene, just in the movement. This is a massive pile of garbage, I apologize, but any help would be appreciated.
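    The usual way to spread that motion over time, instead of doing it all inside one keyboard check, is to keep a little state and apply one step per ENTER_FRAME event. A rough sketch (the variable and handler names are mine; the 1,2,3,4,5,4,3,2,1 step sequence is kept from the loop above):

        // import flash.events.Event;  // needed if this lives in a class file

        // Set up once, at the moment the collision is detected:
        var jumpStep:int = 0;
        box2.addEventListener(Event.ENTER_FRAME, onJumpFrame);

        function onJumpFrame(e:Event):void
        {
            jumpStep++;
            // Same 1..5..1 ramp as the original loop, but one step per frame.
            if (jumpStep <= 5) { box2.x += jumpStep; }
            else               { box2.x += jumpStep - (jumpStep - 5) * 2; }

            if (jumpStep >= 9)
            {
                // Done: stop listening so the box settles where it landed.
                box2.removeEventListener(Event.ENTER_FRAME, onJumpFrame);
            }
        }

    The same idea (a listener that removes itself when a counter runs out) works for any short animation you want to play out over several frames without a Timer.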

    Read the article

  • Emulating Dynamic Dispatch in C++ based on Template Parameters

    - by Jon Purdy
    This is heavily simplified for the sake of the question. Say I have a hierarchy:

        struct Base
        {
            virtual int precision() const = 0;
        };

        template<int Precision>
        struct Derived : public Base
        {
            typedef typename Traits<Precision>::Type Type;
            Derived(Type data) : value(data) {}
            virtual int precision() const { return Precision; }
            Type value;
        };

    I want a function like:

        Base* function(const Base& a, const Base& b);

    where the specific type of the result is the same type as whichever of the two arguments has the greater Precision; something like the following pseudocode:

        template<class T>
        T* operation(const T& a, const T& b)
        {
            return new T(a.value + b.value);
        }

        Base* function(const Base& a, const Base& b)
        {
            if (a.precision() > b.precision())
                return operation((A&)a, A(b.value));
            else if (a.precision() < b.precision())
                return operation(B(a.value), (B&)b);
            else
                return operation((A&)a, (A&)b);
        }

    where A and B are the specific types of a and b, respectively. I want function to operate independently of how many instantiations of Derived there are. I'd like to avoid a massive table of typeid() comparisons, though RTTI is fine in answers. Any ideas?

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is assembly an absolute must?

    - by Earlz
    (If you're lazy, see the bottom for a TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds' difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading. So the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATtinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATtiny13A has 1K of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATtinys in my system will not do much. Basically, all they will do is wait for a signal from the master, read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints? TL;DR: In less than 1K of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8K, so I'm a bit concerned.
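    As a rough data point for the "would it fit" question: a crude, fixed-rate, one-wire transmit routine in plain avr-gcc C compiles to only a handful of instructions, nowhere near 1K. The sketch below is an illustration only - the pin choice, bit rate and framing (start bit, 8 data bits LSB-first, stop bit) are all assumptions, and it relies on F_CPU being defined for the _delay_us() busy-waits.

        #include <avr/io.h>
        #include <util/delay.h>

        #define DATA_BIT  PB0      /* hypothetical data pin on the ATtiny13A */
        #define BIT_US    100      /* ~10 kbit/s; pick whatever both ends can time reliably */

        static void send_byte(uint8_t b)
        {
            DDRB |= _BV(DATA_BIT);              /* drive the shared wire */

            PORTB &= ~_BV(DATA_BIT);            /* start bit: pull low */
            _delay_us(BIT_US);

            for (uint8_t i = 0; i < 8; i++) {   /* 8 data bits, LSB first */
                if (b & 1) PORTB |=  _BV(DATA_BIT);
                else       PORTB &= ~_BV(DATA_BIT);
                b >>= 1;
                _delay_us(BIT_US);
            }

            PORTB |= _BV(DATA_BIT);             /* stop bit: release high */
            _delay_us(BIT_US);
        }

    The receiving Arduino can sample the same wire with equally simple timing, so neither end needs assembly just to fit; the tighter constraint is usually keeping the ATtiny's internal oscillator accurate enough for the chosen bit rate.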

    Read the article

  • CSS: how to automatically resize the wrapper div

    - by Phrixus
    Hi, I've been struggling with this problem. There is a wrapper div which contains 3 vertical column divs full of text, and this wrapper div has a red background color so that it can be a background for the entire text.

        <div id="content_wrapper">
          <div id="cside_a">
            // massive texts goes here
          </div>
          ... // two more columns go here.
        </div>

    And here is the CSS code for them:

        #content_wrapper {
          background-color: #DB0A00;
          background-repeat: no-repeat;
          min-height: 400px;
        }

        #cside_a, #cside_b, #cside_c {
          float: left;
          width: 33%;
        }

    This code gives me a background that covers only a 400px-high box. My expectation was that the wrapper div would automatically resize depending on the size of the divs in it. Somehow, putting "overflow: hidden" in the wrapper's CSS makes everything work fine. I have no idea why "overflow: hidden" works... shouldn't it hide all the overflowed text? Could anyone explain to me why? Is it the correct way to do it anyway?

    Read the article

  • How can I create an executable to run on a certain processor architecture (instead of certain OS)?

    - by CrazyJugglerDrummer
    So I take my C++ program in Visual Studio, compile it, and it'll spit out a nice little EXE file. But EXEs will only run on Windows, and I hear a lot about how C/C++ compiles into assembly language, which runs directly on a processor. The EXE runs with the help of Windows, or I could have a program that makes an executable that runs on a Mac. But aren't I compiling C++ code into assembly language, which is processor-specific? My insights: I'm guessing I'm probably not. I know there's an Intel C++ compiler, so would it make processor-specific assembly code? EXEs run on Windows, so they take advantage of tons of things already set up, from graphics packages to the massive .NET Framework. A processor-specific executable would be literally starting from scratch, with just the instruction set of the processor. Would this executable be a file type? We could be running Windows and open it, but then would control switch to the processor only? I assume this executable would be something like an operating system, in that it would have to be run before anything else was booted up, and would have only the processor's instruction set to "use".

    Read the article
