Search Results

Search found 8589 results on 344 pages for 'pre production'.


  • Copy vector of values to vector of pairs in one line

    - by Kirill V. Lyadvinsky
    I have the following types:

    ```cpp
    struct X  { int x;  X( int val ) : x(val) {} };
    struct X2 { int x2; X2() : x2() {} };

    typedef std::pair<X, X2>    pair_t;
    typedef std::vector<pair_t> pairs_vec_t;
    typedef std::vector<X>      X_vec_t;
    ```

    I need to initialize an instance of pairs_vec_t with values from X_vec_t. I use the following code and it works as expected:

    ```cpp
    int main()
    {
        pairs_vec_t ps;
        X_vec_t xs; // this is not empty in the production code
        ps.reserve( xs.size() );
        { // I want to change this block to one line of code.
            struct get_pair {
                pair_t operator()( const X& value ) {
                    return std::make_pair( value, X2() );
                }
            };
            std::transform( xs.begin(), xs.end(), back_inserter(ps), get_pair() );
        }
        return 0;
    }
    ```

    What I'm trying to do is reduce the copying block to one line using boost::bind. This code is not working:

    ```cpp
    for_each( xs.begin(), xs.end(),
        boost::bind( &pairs_vec_t::push_back, ps,
            boost::bind( &std::make_pair, _1, X2() ) ) );
    ```

    I know why it is not working, but I want to know how to make it work without declaring extra functions and structs.
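
    A minimal sketch of one way to get this down to a single statement, assuming a C++11 compiler is available (the question targets pre-C++11 Boost, so treat this as an alternative rather than the boost::bind answer): a lambda simply takes the place of the local get_pair functor.

    ```cpp
    #include <algorithm>
    #include <iterator>
    #include <utility>
    #include <vector>

    struct X  { int x;  X( int val ) : x(val) {} };
    struct X2 { int x2; X2() : x2() {} };
    typedef std::pair<X, X2> pair_t;

    int main()
    {
        std::vector<X> xs( 3, X(42) );   // stand-in for the production data
        std::vector<pair_t> ps;
        ps.reserve( xs.size() );
        // One statement, no extra named functions or structs:
        std::transform( xs.begin(), xs.end(), std::back_inserter(ps),
                        []( const X& v ) { return std::make_pair( v, X2() ); } );
        return 0;
    }
    ```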


  • I like the way they designed/architected it, but how do I implement this?

    - by Rachel
    Summary: I have different components on the homepage, and each component shows some promotion to the user. I have the Cart as one component, and promotions are shown depending on the content of the cart. I have to track the user's online activity and send that information to Omniture for report generation. My components are loaded asynchronously (they load when an Ajax request fires), so there is no fixed pattern, or rather no advance information, about when components will appear on the page. To pass information to Omniture I need to call the track function on $(document).ready() and append information for each component (Omniture requires 7 parameters per component). So in the init:config function of each component I am calling Omniture and passing parameters, but now the number of Omniture calls is directly proportional to the number of components on the page, which is not acceptable, as each call to Omniture is very expensive. I am looking for a way to batch the 7 parameters from every component and then make one call to Omniture carrying all of that information. Note that I do not know when the components are loaded, so there is no predefined time or number of components. The trouble is that I call the track function when the document is ready, but components load after the call to Omniture has already been made. So my question is: Q: How can I collect the information for all the components and then make just one call to Omniture to send it? As mentioned, I do not know when the components are loaded, as they load on Ajax requests. I hope I have explained the challenge clearly, and I would appreciate a Design/Architecture solution for it.
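
    One possible shape for this, as a sketch: each component registers its parameters with a collector instead of calling Omniture directly, and a debounce timer fires one combined call once registrations go quiet. Here sendToOmniture is a hypothetical wrapper around the real tracking call, and the 500 ms quiet window is an assumption to tune.

    ```javascript
    var OmnitureBatcher = (function () {
        var pending = [];
        var timer = null;
        var QUIET_MS = 500; // flush after half a second with no new registrations

        function flush() {
            if (pending.length === 0) return;
            sendToOmniture(pending); // one call carrying every component's parameters
            pending = [];
        }

        return {
            // Each component calls this from its init/config instead of
            // tracking directly.
            register: function (componentParams) {
                pending.push(componentParams);
                clearTimeout(timer); // debounce, since the component count is unknown
                timer = setTimeout(flush, QUIET_MS);
            }
        };
    })();
    ```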


  • How do I fix a broken connection to DB2 from a web application?

    - by Eddie White
    I support some old web applications: VBScript-based ASP for the UI and VB6 COM modules for the business and data access layers. Last weekend, I installed DB2 Connect Enterprise Edition v8 fixpack 14 on several Windows 2000 servers, and one of the web apps now errors out on null data when it calls the built-in VBScript function FormatNumber. This numeric data is retrieved by a SQL Server query, but the only way the SQL Server column is populated is with the calculated results returned from a DB2 query earlier in a progression through several pages. When I installed DB2 Connect EE, one of the components loaded was MDAC 2.7. I followed corporate instructions and had the installation save an ODBC System Data Source, which reported a good connection when I tested it after the install. For what it's worth, the project references in the production VB6 modules pointed to MDAC 2.5. I have tried recompiling and deploying to COM on my test server new versions of the VB6 modules referencing MDAC 2.7. My development environment is Windows XP Pro, with MDAC 2.8 and DB2 Connect EE v9.5 installed. When I deployed the updated VB6 DLLs, CreateObject fails to instantiate the classes with the error message "The class does not support automation or the requested interface". I've rolled the DB2 Connect install back and reinstalled v8 of the DB2 runtime client, which was the previous environment. The problem, however, persists.


  • Cannot upload media via Wordpress uploader

    - by Justin Johnson
    This has to do with media uploading in WordPress. Every time WP creates a folder for new uploads (it organizes uploads by year and month: yyyy/mm), it creates it with the "apache:apache" user and group, with full access for all (777 or drwxrwxrwx). However, after that, WP cannot create a folder within that folder (e.g.: mkdir 2011 succeeds, but mkdir 2011/01 fails). Also, uploads cannot be moved into these newly created folders even though the permissions are 777 (rwxrwxrwx). Once a month, I have to chown the newly created folders to the same user:group as the rest of the files. Once I do that, uploading works fine (which doesn't make sense to me). The really frustrating part is that this problem doesn't exist in other WP installs on other domains on the same server. (I wasn't sure if this should be here or on Server Fault.)

    Edit: The containing directory /.../httpdocs/blog/wp-content/uploads has the correct ownership:

    ```
    drwxrwxrwx 5 myuser psaserv 4096 Jun  3 18:38 uploads
    ```

    This is a Plesk/CentOS environment hosted by Media Temple (dv). I've written the following test script to simulate the problem:

    ```php
    <?php
    $d = "d" . mt_rand(100, 500);
    var_dump(
        get_current_user(),
        $d,
        mkdir($d),
        chmod($d, 0777),
        mkdir("$d/$d"),
        chmod("$d/$d", 0777),
        fileowner($d),
        getmyuid()
    );
    ```

    The script always creates the first directory (mkdir($d)) successfully. On domain A, where the WP problem is, it cannot create the nested directory (mkdir("$d/$d")). However, on domain B, both directories are successfully created. I am running the script at /var/www/vhosts/domainA/httpdocs/tmp/t.php and /var/www/vhosts/domainB/httpdocs/tmp/t.php respectively. I checked the permissions on tmp, httpdocs, and domain[AB] and they are the same for each path. The only thing that differs is the user.
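
    As a stopgap only (it does not address the root cause, which the evidence suggests is PHP running as a different user on domain A), the monthly chown could be automated. A hypothetical cron entry using the user, group, and path from the question:

    ```
    # At 03:00 on the 1st of each month, re-own the upload folders
    0 3 1 * * chown -R myuser:psaserv /var/www/vhosts/domainA/httpdocs/blog/wp-content/uploads
    ```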


  • How to deploy SQL Reporting 2005 when Data Sources are locked?

    - by spoulson
    The DBAs here maintain all SQL Server and SQL Reporting servers. I have a custom-developed SQL Reporting 2005 project in Visual Studio that runs fine against my local SQL Database and Reporting instances. I need to deploy to a production server, so I had a folder created on a SQL Reporting 2005 server with permissions to upload files. Normally, a deploy from within Visual Studio is all that is needed to upload the report files. However, for security purposes, data sources are maintained explicitly by the DBAs and stored in a separate, locked-down common folder on the reporting server. I had them create the data source for me. When I attempt to deploy from VS, it gives me the error "The item '/Data Sources' already exists." I get this whether I'm deploying the whole project or just a single report file. I already set OverwriteDataSources=false in the project properties. The TargetServer URL and folder are verified correct. I suppose I could copy the files manually, but I'd like to be able to deploy from within VS. What could I be doing wrong?
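
    One thing worth double-checking is the deployment configuration itself, in particular that TargetDataSourceFolder points at the DBAs' existing folder. A sketch of the relevant .rptproj fragment (the property names are as shown in the Visual Studio property grid for SSRS 2005 projects; treat the exact XML shape and the URL/folder values as assumptions):

    ```xml
    <Configuration>
      <Name>Production</Name>
      <Options>
        <OutputPath>bin\Production</OutputPath>
        <TargetServerURL>http://reportserver/ReportServer</TargetServerURL>
        <TargetFolder>MyReports</TargetFolder>
        <TargetDataSourceFolder>Data Sources</TargetDataSourceFolder>
        <OverwriteDataSources>false</OverwriteDataSources>
      </Options>
    </Configuration>
    ```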


  • delivery mechanism, Rational ClearCase

    - by kadaba
    Hi All, We came up with a stream structure for the Rational ClearCase UCM model and recently migrated the code base into the new setup. We had three physical code bases. The migration was done this way: we moved the production code first and created a baseline, then the UAT code (with a baseline), and then the development code (with a baseline). As of now the integration stream's latest baseline is the development baseline. We have two other streams, for production (prd) and UAT, from which releases are made into the respective environments. I now have my dev stream. I create an activity and make some changes. Now I need to promote these changes into the UAT environment. If I deliver the changes to the integration stream, the merge is done, but on a development baseline. I do not want to rebase it to UAT, as many development apps would get rebased into UAT, which is not desired. How do I achieve promoting changes to the UAT environment (UAT stream)? Kindly advise.


  • Sinatra application running on Dreamhost suddenly not working

    - by jbrennan
    My Sinatra application was running fine on Dreamhost until a few days ago (I'm not sure precisely when it went bad). Now when I visit my app I get this error:

    ```
    can't activate rack (~> 1.1, runtime) for ["sinatra-1.1.2"], already activated rack-1.2.1 for []
    ```

    I have no idea how to fix this. I've tried updating all my gems, then touching the app/tmp/restart.txt file, but still no fix. I hadn't touched any files of my app, nor my Dreamhost account. It just busted on its own (my guess is DH changed something on their server which caused the bust). When I originally deployed my app, I had to go through some hoops to get it working, and I seem to think I was using gems in a custom location, but I can't remember exactly where or how. I don't know my way around Rack/Passenger very well. Here's my config.ru (mostly grafted from around the web; I don't fully understand it):

    ```ruby
    ENV['RACK_ENV'] = 'development' if ENV['RACK_ENV'].empty?

    #### Make sure my own gem path is included first
    ENV['GEM_HOME'] = "#{ENV['HOME']}/.gems"
    ENV['GEM_PATH'] = "#{ENV['HOME']}/.gems:"

    require 'rubygems'
    Gem.clear_paths ## NB! key part

    require 'sinatra'

    set :env, :production
    disable :run

    require 'MY_APP_NAME.rb'
    run Sinatra::Application
    ```
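
    A possible direction, as a sketch: the error means something activates rack 1.2.1 before Sinatra 1.1.2 gets to request rack ~> 1.1, so explicitly pinning the versions before the first require may help. The version numbers below are taken from the error message; adjust them to whatever gem list shows on the host.

    ```ruby
    require 'rubygems'
    gem 'rack', '~> 1.1.0'   # activate a rack that sinatra-1.1.2 accepts
    gem 'sinatra', '1.1.2'
    require 'sinatra'
    ```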


  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes, given a simple node.id / node.parentId association. It's very simple and works well enough... but I am wondering if there is a more idiomatic approach. Specifically, is there a way to track the left/right values without using some externally tracked value, while still keeping the tasty recursion?

    ```scala
    /*
     * A tree node
     */
    case class TreeNode(val id: String, val parentId: String) {
      var left: Int = 0
      var right: Int = 0
    }

    /*
     * a method to compute the left/right node values
     */
    def walktree(node: TreeNode) = {
      /*
       * increment state for the inner function
       */
      var c = 0

      /*
       * A method to set the increment state
       */
      def increment = { c += 1; c } // poo

      /*
       * the tasty inner method
       * treeNodes is a List[TreeNode]
       */
      def walk(node: TreeNode): Unit = {
        node.left = increment
        /*
         * recurse on all direct descendants
         */
        treeNodes filter (_.parentId == node.id) foreach (walk(_))
        node.right = increment
      }

      walk(node)
    }

    walktree(someRootNode)
    ```

    Edit - The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time, so I pull a flat list into memory, and all I have is an association via node ids between parents and children. Adding left/right node values allows me to get a snapshot of all children (and children's children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections, I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes. I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed, I have settled on this synthesised version:

    ```scala
    def walktree(node: TreeNode) = {
      def walk(node: TreeNode, counter: Int): Int = {
        node.left = counter
        node.right = treeNodes
          .filter(_.parentId == node.id)
          .foldLeft(counter + 1) { (counter, curnode) =>
            walk(curnode, counter) + 1
          }
        node.right
      }
      walk(node, 1)
    }
    ```


  • Changing save directory

    - by Nathan
    Disregard my previous pre-edit post; I rethought what I needed to do. This is what I'm currently doing (see https://developer.mozilla.org/en/XPCOM_Interface_Reference/nsIScriptableIO):

    ```javascript
    downloadFile: function(httpLoc) {
        try {
            // new obj_URI object
            var obj_URI = Cc["@mozilla.org/network/io-service;1"]
                              .getService(Ci.nsIIOService)
                              .newURI(httpLoc, null, null);

            // new file object (assumed instantiation: the creation line was
            // dropped from the post; nsILocalFile is the standard choice here)
            var obj_TargetFile = Cc["@mozilla.org/file/local;1"]
                                     .createInstance(Ci.nsILocalFile);

            // set file with path
            obj_TargetFile.initWithPath("C:\\javascript\\cache\\test.pdf");

            // if file doesn't exist, create
            if (!obj_TargetFile.exists()) {
                obj_TargetFile.create(0x00, 0644);
            }

            // new persistence object
            var obj_Persist = Cc["@mozilla.org/embedding/browser/nsWebBrowserPersist;1"]
                                  .createInstance(Ci.nsIWebBrowserPersist);

            // with persist flags if desired
            const nsIWBP = Ci.nsIWebBrowserPersist;
            const flags = nsIWBP.PERSIST_FLAGS_REPLACE_EXISTING_FILES;
            obj_Persist.persistFlags = flags | nsIWBP.PERSIST_FLAGS_FROM_CACHE;

            // save file to target
            obj_Persist.saveURI(obj_URI, null, null, null, null, obj_TargetFile);
            return true;
        } catch (e) {
            alert(e);
        }
    }, // end downloadFile
    ```

    As you can see, the directory is hardcoded in there. I want to save the PDF file open in the active tab to a relative temporary directory, or anywhere out of the way enough that the user won't stumble along and delete it. I'm going to try that using File I/O; I was under the impression that what I was looking for was in scriptable file I/O and thus disabled.
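
    For the "out of the way" location, a sketch using the XPCOM directory service instead of a hardcoded path ("TmpD" is the standard key for the system temporary directory; the file name is illustrative):

    ```javascript
    // Resolve the system temp directory, then point the persist target at it.
    var obj_TargetFile = Cc["@mozilla.org/file/directory_service;1"]
                             .getService(Ci.nsIProperties)
                             .get("TmpD", Ci.nsIFile);
    obj_TargetFile.append("test.pdf");  // e.g. /tmp/test.pdf or %TEMP%\test.pdf
    ```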


  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
    In our web applications, we separate our Data Access Layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings. To solve this, on some of our recent projects we introduced a third project just for settings. We put the settings in a system of .settings files... With a simple wrapper, the ability to have different settings for various environments (Dev, QA, Staging, Production, etc.) was easy to achieve. The only problem is that the settings project (including the .Settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio. So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Some solutions, such as storing settings in a fixed directory on the server in, say, our own XML format, occurred to us. But again, I would rather avoid having to re-create encryption for sensitive values and so on. And I would rather keep the solution self-contained if possible. EDIT: The original question did not contain the really penetrating reason that we can't (I think) use web.config... That puts a few (very good) answers out of context, my bad.
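
    One standard mechanism that fits these constraints, sketched under assumptions (the file name is illustrative): the .NET configSource attribute lets a configuration section live in its own file, which can differ per environment and per consuming application, and aspnet_regiis.exe -pef can encrypt the section in place, so sensitive values need no home-grown crypto.

    ```xml
    <!-- In the consuming app's web.config / app.config: the section body
         lives in an external file that deployment can swap per environment -->
    <connectionStrings configSource="ConnectionStrings.config" />
    ```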


  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of salespersons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and salespersons are free to choose from those combinations to estimate the sales price. The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes), and I shall need various types of SQL queries to show daily, weekly, monthly, quarterly, and yearly sales reports. This is my initial design:

    ```
    Product                      {ID, Code, Name, IsActive}
    ProductXYZPriceHistory       {ID, ProductID, Date, EffectDate, Price, IsCurrent}
    SalesPerson                  {ID, Code, Name, JoinDate, and so on..., IsActive}
    SalesPersonSalesAreaHistory  {ID, SalesPersonID, SalesAreaID, IsCurrent}
    Depot                        {ID, Code, Name, IsActive}
    Outlet                       {ID, Code, Name, AreaID, IsActive}
    AreaHierarchy                {ID, Code, Name, ParentID, AreaLevel, IsActive}
    DailySales                   {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}
    ```

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice; I just need advice regarding the DailySales table. Is there any way to break this particular table up to achieve granularity?
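
    For concreteness, a sketch of the DailySales table exactly as listed above, in generic SQL (the column types, the SalesDate name, and the index are assumptions, not design advice):

    ```sql
    CREATE TABLE DailySales (
        ID            BIGINT        NOT NULL PRIMARY KEY,
        ProductID     INT           NOT NULL REFERENCES Product (ID),
        SalesPersonID INT           NOT NULL REFERENCES SalesPerson (ID),
        OutletID      INT           NOT NULL REFERENCES Outlet (ID),
        SalesDate     DATE          NOT NULL,
        PriceID       INT           NOT NULL REFERENCES ProductXYZPriceHistory (ID),
        SalesPrice    DECIMAL(18,4) NOT NULL,
        Discount      DECIMAL(18,4) NOT NULL DEFAULT 0
    );

    -- Reports are period-based, so an index leading on the date helps:
    CREATE INDEX IX_DailySales_Date ON DailySales (SalesDate, ProductID, SalesPersonID);
    ```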


  • php file force download

    - by droidus
    When I use this code to download this image (only used for testing purposes), I open the downloaded image and all it gives me is an error. I tried it in Chrome; opening it with Windows Photo Viewer, it says that it can't display the picture because it is empty. Here is the code:

    ```php
    <?php
    // Define the path to file
    $file = 'http://www.media.lonelyplanet.com/lpi/12553/12553-11/469x264.jpg';

    if (!$file) {
        // File doesn't exist, output error
        die('file not found');
    } else {
        header('Content-Description: File Transfer');
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename=' . basename($file));
        header('Content-Transfer-Encoding: binary');
        header('Expires: 0');
        header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
        header('Pragma: public');
        header('Content-Length: ' . filesize($file));
        ob_clean();
        flush();
        readfile($file);
        exit;
    }
    ?>
    ```
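
    A likely culprit, sketched: file_exists() and filesize() don't work on http:// URLs, so the Content-Length header goes out empty and the saved file ends up with zero bytes. Buffering the remote image first sidesteps both problems (this assumes allow_url_fopen is enabled on the server):

    ```php
    <?php
    $url  = 'http://www.media.lonelyplanet.com/lpi/12553/12553-11/469x264.jpg';
    $data = file_get_contents($url);   // fetch the remote bytes up front
    if ($data === false) {
        die('file not found');
    }
    header('Content-Description: File Transfer');
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename=' . basename($url));
    header('Content-Length: ' . strlen($data)); // length of what is actually sent
    echo $data;
    exit;
    ```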


  • How to organize live data integrity tests and code unit tests?

    - by karlthorwald
    I have several files of code that tests code (using a "unittest" class). Later I found it would be nice to test database integrity also, and I put that into a separate directory tree. (Things like: keys have the correct format, parent and child nodes point to each other correctly, and such.) I use the same unittest class for the integrity tests. Now I wonder if it really makes sense to keep these separate. To test the integrity of data, I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. I want to call the integrity tests from cron and send an alarm if something happens in the live database. How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by cron on the production environment.
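
    One way to keep a single tree yet keep the runs separate, as a sketch: let cron discover only the integrity tests by filename pattern and turn the result into an exit code it can alert on. The tests/ layout and the integrity_*.py naming convention are assumptions here.

    ```python
    import unittest

    def integrity_suite():
        loader = unittest.TestLoader()
        # Pick up only the live-data checks; code tests stay on test_*.py
        return loader.discover('tests', pattern='integrity_*.py')

    if __name__ == '__main__':
        result = unittest.TextTestRunner().run(integrity_suite())
        # Nonzero exit lets cron (or a wrapper) raise the alarm
        raise SystemExit(0 if result.wasSuccessful() else 1)
    ```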


  • .NET WebBrowser control with Adobe SVG Viewer

    I'm having problems with SVG files loading in a .NET 2.0 WebBrowser control. If I create a new WinForms application project, drag a WebBrowser control and a button onto the Form1.cs design surface, and add a line to the button's on-click handler to set the Url property of the WebBrowser control to an SVG file, I get two IE script errors at runtime (as in, the dialog you get when there is a JavaScript problem). The only line of code I'm writing is therefore:

    ```csharp
    webBrowser1.Url = new Uri(@"http://wiki.allegro.cc/pub/f/fb/Grozilla.svg");
    ```

    This SVG file loads fine if I browse directly to the link with IE; the errors I get via my test app are: line 2, char 1: "Invalid character", followed by line 1, char 1: "Object expected" (assuming I answer yes to the prompt to 'continue running scripts on this page'). I'm using IE 7.0.5730.13, Adobe SVG Viewer 3.03 build 94, Visual Studio 2008. Can anyone replicate this? Has anyone seen it / got any idea how to fix it? (Edit: I was previously setting the URL in the constructor for the purposes of the example, but this raised the question of whether or not the control had finished initialising, so I changed the example to use a button; the problem still occurs. This bug originally appeared in production code, so the example program was an attempt to isolate the issue and reproduce it as simply as possible.) (Edit 2: having tested on several different machines, this problem seems to be related to IE7 - run IE6, and everything works fine.)


  • Ruby and duck typing: design by contract impossible?

    - by davetron5000
    Method signature in Java:

    ```java
    public List<String> getFilesIn(List<File> directories)
    ```

    Similar one in Ruby:

    ```ruby
    def get_files_in(directories)
    ```

    In the Java case, the type system gives me information about what the method expects and delivers. In Ruby's case, I have no clue what I'm supposed to pass in, or what I'll expect to receive. In Java, the object must formally implement the interface. In Ruby, the object being passed in must respond to whatever methods are called in the method defined here. This seems highly problematic: Even with 100% accurate, up-to-date documentation, the Ruby code has to essentially expose its implementation, breaking encapsulation. "OO purity" aside, this would seem to be a maintenance nightmare. The Ruby code gives me no clue what's being returned; I would have to essentially experiment, or read the code to find out what methods the returned object would respond to. Not looking to debate static typing vs duck typing, but looking to understand how you maintain a production system where you have almost no ability to design by contract. Update: No one has really addressed the exposure of a method's internal implementation via documentation that this approach requires. Since there are no interfaces, if I'm not expecting a particular type, don't I have to itemize every method I might call so that the caller knows what can be passed in? Or is this just an edge case that doesn't really come up?
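
    As a sketch of the lightweight idiom often used in Ruby in place of a formal contract (everything here is illustrative, not a prescribed API): assert the duck's interface at the boundary, so the expectation is explicit in code and fails fast rather than deep inside the call.

    ```ruby
    def get_files_in(directories)
      directories.each do |dir|
        # The methods we actually call, made explicit up front:
        unless dir.respond_to?(:path)
          raise ArgumentError, "expected directory-like objects, got #{dir.class}"
        end
      end
      directories.flat_map { |dir| Dir.entries(dir.path) }
    end

    puts get_files_in([Dir.new(".")])  # any object responding to #path works
    ```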


  • Is the RESTORE process dependent on schema?

    - by Martin Aatmaa
    Let's say I have two database instances:

    InstanceA - Production server
    InstanceB - Test server

    My workflow is to deploy new schema changes to InstanceB first, test them, and then deploy them to InstanceA. So, at any one time, the instance-schema relationship looks like this:

    InstanceA - Schema Version 1.5
    InstanceB - Schema Version 1.6 (new version being tested)

    An additional part of my workflow is to keep the data in InstanceB as fresh as possible. To fulfill this, I take database backups of InstanceA and apply (restore) them to InstanceB. My question is: how does the schema version affect the restore process? I know I can do this:

    Backup InstanceA - Schema Version 1.5
    Restore to InstanceB - Schema Version 1.5

    But can I do this?

    Backup InstanceA - Schema Version 1.5
    Restore to InstanceB - Schema Version 1.6 (new version being tested)

    If no, what would the failure look like? If yes, would the type of schema change matter? For example, if Schema Version 1.6 differed from Schema Version 1.5 by just having an altered stored proc, I imagine that this type of schema change shouldn't affect the restore process. On the other hand, if Schema Version 1.6 differed from Schema Version 1.5 by having a different table definition (say, an additional column), I imagine this would affect the restore process. I hope I've made this clear enough. Thanks in advance for any input!
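
    For context (assuming SQL Server, which the terminology suggests): a full restore replaces the entire database, schema included, so the schema version sitting on the target beforehand doesn't enter into it; InstanceB would simply come back at Schema Version 1.5. Sketched in T-SQL with illustrative names and paths:

    ```sql
    -- On InstanceA: schema 1.5 travels inside the backup itself
    BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak';

    -- On InstanceB: REPLACE overwrites whatever schema (1.6) was there
    RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb.bak'
    WITH REPLACE, RECOVERY;
    ```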


  • Firefox leading dot in cookie issue

    - by Jon
    Hi all, We are having an annoying issue with Firefox and cookies. We have the following domains: sub1.mydomain.com, sub2.mydomain.com, sub3.mydomain.com, and otherdomain.com. We are converting our framework to be multilingual, providing a drop-down to change the language at any point on the site. The code base is shared across all the domains above. We cannot set a cookie across all "mydomain.com" sites; each cookie has to be set on its own subdomain. To get this to work, we set a JavaScript cookie when the user chooses a new language. When the page posts back to the server, the code picks this up and sets the user's preference to the new language code (this is all C# and ASP.NET). We have to set the host to "subX.mydomain.com" and the path to "/" in the cookie so that it applies to just that subdomain and all parts of that domain. This works great in all browsers apart from Firefox. Firefox seems to prepend a dot to the domain, giving ".subX.mydomain.com". When the code posts back with Firefox, the cookie is always null. Has anyone had this situation? (I imagine it is not all that uncommon.) I have read a lot of people saying "remove the domain from the cookie", but that cannot work for us, as we have multiple subdomains that need their own cookie values. Thanks
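
    One hedged aside that may explain the dot: historically, an explicit Domain attribute is what makes browsers store the cookie in its leading-dot form, while omitting Domain yields a host-only cookie scoped to exactly the setting host, so each subdomain still keeps its own value. An illustrative JavaScript write:

    ```javascript
    // Host-only cookie: no "domain=" attribute, so it belongs to exactly
    // the host that set it (e.g. sub1.mydomain.com), with no leading dot.
    document.cookie = "lang=en-GB; path=/";
    ```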


  • How to suppress error message details to general DNN Users

    - by thames
    I have a DNN site (05.02.03) in test, nearing release into production, and I would like to suppress the details of error messages (e.g. Null Reference Exception and others) to general users (admins can still see the details). Debug is off in the web.config. By suppressing, I mean the only error message I want to display to general users (all users) is something like "An exception has occurred". I don't want the details of that exception to be displayed to the general user. I still want it logged in greater detail in the Event Viewer. How would I go about doing this? Update: I have "Use Custom Error Messages" checked, which shows an error message like:

    ```
    A critical error has occurred.
    Object reference not set to an instance of an object.
    ```

    I want just the "A critical error has occurred." message to be displayed to general users, not the "Object reference not set to an instance of an object." part.


  • Directly call distutils' or setuptools' setup() function with command name/options, without parsing the command line

    - by Ryan B. Lynch
    I'd like to call Python's distutils' or setuptools' setup() function in a slightly unconventional way, but I'm not sure whether distutils is meant for this kind of usage. As an example, let's say I currently have a 'setup.py' file which looks like this (lifted verbatim from the distutils docs; the setuptools usage is almost identical):

    ```python
    from distutils.core import setup

    setup(name='Distutils',
          version='1.0',
          description='Python Distribution Utilities',
          author='Greg Ward',
          author_email='[email protected]',
          url='http://www.python.org/sigs/distutils-sig/',
          packages=['distutils', 'distutils.command'],
         )
    ```

    Normally, to build just the .spec file for an RPM of this module, I could run python setup.py bdist_rpm --spec-only, which parses the command line and calls the 'bdist_rpm' code to handle the RPM-specific stuff. The .spec file ends up in './dist'. How can I change my setup() invocation so that it runs the 'bdist_rpm' command with the '--spec-only' option, WITHOUT parsing command-line parameters? Can I pass the command name and options as parameters to setup()? Or can I manually construct a command line and pass that as a parameter instead? NOTE: I already know that I could call the script in a separate process, with an actual command line, using os.system() or the subprocess module or something similar. I'm trying to avoid any kind of external command invocation; I'm looking specifically for a solution that runs setup() in the current interpreter. For background, I'm converting some release-management shell scripts into a single Python program. One of the tasks is running 'setup.py' to generate a .spec file for further pre-release testing. Running 'setup.py' as an external command, with its own command-line options, seems like an awkward method, and it complicates the rest of the program. I feel like there may be a more Pythonic way.
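
    A sketch of two distutils.core entry points that fit this (both exist in the standard library; the metadata below is just the example's):

    ```python
    from distutils.core import run_setup

    # Option 1: drive an existing setup.py in-process with a synthetic argv.
    dist = run_setup('setup.py', script_args=['bdist_rpm', '--spec-only'])
    ```

    ```python
    from distutils.core import setup

    # Option 2: pass the command and options straight to setup() via
    # script_args, bypassing sys.argv entirely.
    setup(name='Distutils',
          version='1.0',
          description='Python Distribution Utilities',
          author='Greg Ward',
          author_email='[email protected]',
          url='http://www.python.org/sigs/distutils-sig/',
          packages=['distutils', 'distutils.command'],
          script_args=['bdist_rpm', '--spec-only'])
    ```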


  • VS10 not deploying all required files, then not over-writing updated files

    - by justSteve
    I'm in the habit of deploying to alternating folders (/inetpub/wwwroot/mySite and /inetpub/wwwroot/mySite2), so if something unexpected happens with the deploy I can quickly swap back to a previous version just by changing the path in IIS. So I was deploying an MVC2 web app to an empty folder, figuring that VS would send up all the files it needs. Not even close. Initially, it didn't even upload a couple of required NHibernate DLLs. Later, after manually copying the files referenced in the thrown exceptions, I just copied all the files from the previous compile and then re-published over the top, expecting VS to over-write the changed files. It failed at that too. VS reported no errors... it just failed to over-write a number of pre-existing (but changed/updated) files. Hard to believe these kinds of errors (and the lack of feedback that errors were encountered) in a state-of-the-art tool like VS. Clearly, I'm doing something wrong. I'm using VisualSVN for source control and connect to my colocated server via a VPN-based mapped network drive (so I can use the file system to publish), both of which can complicate file properties. VS08 had more choices for which files it would send up: I found I needed to use 'All files in source' on an initial deployment, then 'Replace Matching' afterwards. If I chose 'delete all existing...' I'd be back to square one and have to deploy with 'All files in source project folder'. But VS10 doesn't have 'All files in source project folder'. I ended up manually copying the files, which seems not right in the extreme. Are these known issues others have to deal with? What's best practice for deploying a web app? Thanks.


  • A really smart rails helper needed

    - by Stefan Liebenberg
    In my Rails application I have a helper function:

    ```ruby
    def render_page( permalink )
      page = Page.find_by_permalink( permalink )
      content_tag( :h3, page.title ) + inline_render( page.body )
    end
    ```

    If I called the page "home" with:

    ```erb
    <%= render_page :home %>
    ```

    and the "home" page's body was:

    ```erb
    <h1>Home</h1>
    bla bla
    <%= render_page :about %>
    <%= render_page :contact %>
    ```

    I would get the "home" page, with "about" and "contact"; it's nice and simple... right up to where someone goes and changes the "home" page's content to:

    ```erb
    <h1>Home</h1>
    bla bla
    <%= render_page :home %>
    <%= render_page :about %>
    <%= render_page :contact %>
    ```

    which results in an infinite loop (a segmentation fault on WEBrick)... How would I change the helper function to something that won't fall into this trap? My first attempt was along the lines of:

    ```ruby
    @@list = []

    def render_page( permalink )
      unless @@list.include?( permalink )
        @@list += [ permalink ]
        page = Page.find_by_permalink( permalink )
        content_tag( :h3, page.title ) + inline_render( page.body )
        @@list -= [ permalink ]
      else
        content_tag :b, "this page is already being rendered"
      end
    end
    ```

    which worked in my development environment but bombed out in production... any suggestions? Thank You, Stefan
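
    One hedged guess plus a sketch: class variables are shared state, and in a multi-process or multi-threaded production setup that guard can behave differently than in single-process development. Passing the visited list along instead keeps the guard local to one render; this assumes inline_render can thread the extra argument back into render_page.

    ```ruby
    def render_page( permalink, visited = [] )
      if visited.include?( permalink )
        content_tag :b, "this page is already being rendered"
      else
        page = Page.find_by_permalink( permalink )
        content_tag( :h3, page.title ) +
          inline_render( page.body, visited + [ permalink ] )
      end
    end
    ```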


  • Build number increment not reflected in AssemblyVersion

    - by awshepard
    I've browsed through some of the discussion on auto-incrementing build numbers, but in the impatience of youth decided to roll my own and re-invent the wheel. I know there are probably better ways to go about this (which I'm definitely going to investigate), but my question centers more around the Assembly and/or Version classes. My approach was to write a separate exe (BuildIncrementer) that takes a command-line parameter for the file name, does a regex match on the contents to grab the [assembly: AssemblyVersion...] string, makes the modifications that I want (increments the build number, etc.), then writes the contents back to the file. This approach works as-is. The next thing I did was, in the project I wanted to use this on, set up a pre-build command line that simply executes BuildIncrementer.exe on the project's AssemblyInfo.cs file. This too works, updating the assembly info as desired. The problem comes when I run the project: it sends an email containing the current version, obtained with Assembly.GetExecutingAssembly().GetName().Version.ToString(). BUT, the version showing up is the previous version. When my AssemblyInfo.cs says [assembly: AssemblyVersion("1.0.2.49667")], I get sent 1.0.1.45660, which was the previous build. Anyone have any ideas why that might be?


  • Making alpha PNGs with PHP GD

    - by WiseDonkey
    Hello, I've got a problem making alpha PNGs with PHP GD (I don't have ImageMagick etc.). Though the images load perfectly well in-browser and in graphics programs, I'm getting problems with Flash AS3 (ActionScript) understanding the files: it complains that the file is an unknown type. But exporting the same files from Fireworks to the same spec works fine, so I suspect something is wrong with how PHP GD writes the file. There seem to be a number of ways of doing this, with several similar functions, so maybe this isn't right?

    ```php
    $image_p = imagecreatetruecolor($width_orig, $height_orig);
    $image   = imagecreatefrompng($filename);

    imagealphablending($image_p, false);
    imagesavealpha($image_p, true);
    imagefill($image_p, 0, 0, IMG_COLOR_TRANSPARENT);
    imagealphablending($image_p, true);

    imagecopyresampled($image_p, $image, 0, 0, 0, 0,
                       $width_orig, $height_orig, $width_orig, $height_orig);
    imagepng($image_p, "new2/" . $filename, 0);
    imagedestroy($image_p);
    ```

    This just takes the files it's given and puts them into new files with a specified width/height. For this example the size is the same as the original, but in production it resizes, which is why I'm resampling.
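
    One hedged thing to try: for truecolor images the usual GD idiom is to fill with an explicitly allocated fully transparent color rather than the IMG_COLOR_TRANSPARENT constant, for example:

    ```php
    $image_p = imagecreatetruecolor($width_orig, $height_orig);
    imagealphablending($image_p, false);
    imagesavealpha($image_p, true);

    // 127 is GD's fully-transparent alpha value
    $transparent = imagecolorallocatealpha($image_p, 0, 0, 0, 127);
    imagefill($image_p, 0, 0, $transparent);
    imagealphablending($image_p, true);
    ```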


  • Reporting Services keeps erasing my dataset parameters

    - by Dustin Brooks
    I'm using a web service, and every time I change something on the dataset, it erases all my parameters. The weird thing is, I can execute the web service call from the data tab and it prompts for all my parameters; but if I click to edit the data, the list is empty, and if I try to preview the report, it blows up because parameters are missing. I'm just wondering if anyone else has experienced this and if there is a way to prevent this behavior. Here is a copy of the dataset, not that I think it matters. This has to be the most annoying bug (if it's a bug) ever. I can't even execute the dataset from the designer without it erasing my parameter list. When you have about 10 parameters and you are making all kinds of changes to a new report, it becomes very tedious to be constantly re-typing the same list over and over. If anything, Studio should at least be able to pre-populate with the parameters the service is asking for. *sigh* Where's my stress ball...

    ```xml
    <Query>
      <Method Namespace="http://www.abc.com/" Name="TWRPerformanceSummary"/>
      <SoapAction>http://www.abc.com/TWRPerformanceSummary</SoapAction>
      <ElementPath IgnoreNamespaces="true">
        TWRPerformanceSummaryResponse/TWRPerformanceSummaryResult/diffgram/NewDataSet/table{StockPerc,RiskBudget,Custodian,ProductName,StartValue(decimal),EndValue(decimal),CostBasis(decimal)}
      </ElementPath>
    </Query>
    ```


  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology
    Reports To: Chief Information Officer

    Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks.

    Essential Duties & Responsibilities: This person is involved in all aspects of the software development process, such as:

    - Participation in software product definitions, including requirements analysis and specification
    - Development and refinement of simulations or prototypes to confirm requirements
    - Feasibility and cost-benefit analysis, including the choice of architecture and framework
    - Application and database design
    - Implementation (e.g. installation, configuration, customization, integration, data migration)
    - Authoring of documentation needed by users and partners
    - Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers
    - Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles
    - Maintenance

    Qualifications:

    - Bachelor's degree in computer science or software engineering
    - Several years of professional programming experience
    - Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS)
    - Proficiency in the following principles, practices, and techniques: accessibility; interoperability; usability; security (especially prevention of SQL injection and cross-site scripting (XSS) attacks); object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism); relational database design (e.g. normalization, orthogonality); search engine optimization (SEO); Asynchronous JavaScript and XML (AJAX)
    - Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET; ADO.NET (including ADO.NET Entity Framework); ASP.NET (including ASP.NET MVC Framework); Windows Presentation Foundation (WPF); Language Integrated Query (LINQ); Extensible Application Markup Language (XAML); jQuery; Transact-SQL (T-SQL); Microsoft Visual Studio; Microsoft Internet Information Services (IIS); Microsoft SQL Server; Adobe Photoshop

