Search Results

Search found 8589 results on 344 pages for 'pre production'.

Page 287 of 344

  • XCode 4.4 bundle version updates not picked up until subsequent build

    - by Mark Struzinski
    I'm probably missing something simple here. I am trying to auto-increment my build number in XCode 4.4 only when archiving my application (in preparation for a TestFlight deployment). I have a working shell script that runs on the target and successfully updates the info.plist file for each build. My build configuration for archiving is named 'Ad-Hoc'. Here is the script:

        if [ $CONFIGURATION == Ad-Hoc ]; then
            echo "Ad-Hoc build. Bumping build#..."
            plist=${PROJECT_DIR}/${INFOPLIST_FILE}
            buildnum=$(/usr/libexec/PlistBuddy -c "Print CFBundleVersion" "${plist}")
            if [[ "${buildnum}" == "" ]]; then
                echo "No build number in $plist"
                exit 2
            fi
            buildnum=$(expr $buildnum + 1)
            /usr/libexec/PlistBuddy -c "Set CFBundleVersion $buildnum" "${plist}"
            echo "Bumped build number to $buildnum"
        else
            echo $CONFIGURATION " build - Not bumping build number."
        fi

    This script updates the plist file appropriately, and the change is reflected in XCode each time I archive. The problem is that the .ipa file that comes out of the archive process still shows the previous build number. I have tried the following solutions with no success:

    - Clean before build
    - Clean build folder before build
    - Move the Run Script phase to directly after the Target Dependencies step in Build Phases
    - Add the script as a Run Script pre-action in my scheme

    No matter what I do, when I look at the build log, I see that the info.plist file is being processed as one of the very first steps. It always happens before my script runs and updates the build number, which is, I assume, why the build number is never current in the .ipa file. Is there a way to force the Run Script phase to run before the info.plist file is processed?
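    For reference only: the same bump logic can be sketched outside of PlistBuddy with Python's standard-library plistlib. This is a minimal sketch, not the poster's setup; the plist path is an assumption and would be whatever $PROJECT_DIR/$INFOPLIST_FILE expands to in a real build.

    ```python
    import plistlib
    import sys

    def bump_bundle_version(plist_path):
        """Increment CFBundleVersion in place and return the new value."""
        with open(plist_path, "rb") as fh:
            info = plistlib.load(fh)          # handles XML and binary plists
        current = info.get("CFBundleVersion")
        if current is None:
            raise SystemExit("No build number in %s" % plist_path)
        info["CFBundleVersion"] = str(int(current) + 1)
        with open(plist_path, "wb") as fh:
            plistlib.dump(info, fh)
        return info["CFBundleVersion"]

    if __name__ == "__main__":
        # e.g. python bump_build.py path/to/Info.plist
        print("Bumped build number to", bump_bundle_version(sys.argv[1]))
    ```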

  • Which MySQL Fork/Version to Pick??

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB, and Drizzle, to name a few. Which brings us to the source of the problem. We are in the process of upgrading our databases (hardware/software) and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL/Linux on an 8-core machine. Our new hardware is a monster with 32 cores and 32GB of memory connecting to a fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but that's still not production ready. In addition, I have heard a lot of good things about Percona builds. This is what I know so far:

    - MySQL 5.1 from MySQL: Reliable choice, but doesn't scale well on a big machine
    - Percona: Scales well, good backing company. I don't have much experience with it
    - MariaDB: Don't know much about it besides that it was founded by original MySQL developers (including Monty)
    - OurDelta: Don't know much
    - Drizzle: Mostly optimized for cloud computing

    I would like to know what the general notion is about this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

  • How can I use a command-line tool (i.e. sox) via subprocess.Popen with mod_wsgi?

    - by marue
    I have a custom Django FileField that makes use of sox, a command-line audio tool. This works pretty well as long as I use the Django development server. But as soon as I switch to the production server, using apache2 and mod_wsgi, mod_wsgi catches every output to stdout. This makes it impossible to use the command-line tool to evaluate the file, for example to check whether the uploaded file really is an audio file, like this:

        filetype=subprocess.Popen([sox,'--i','-t','%s'%self.path], shell=False,\
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        (filetype,error)=filetype.communicate()
        if error:
            raise EnvironmentError((1,'AudioFile error while determining audioformat: %s'%error))

    Is there a workaround for this?

    Edit: The error I get is "missing filename". I am using mod_wsgi 2.5, standard with Ubuntu 8.04.

    Edit 2: What exactly happens when I call subprocess.Popen from within Django under mod_wsgi? Shouldn't the subprocess's stdin/stdout be independent from Django's stdin/stdout? In that case mod_wsgi should not affect programs called via subprocess... I'm really confused right now, because the file I am trying to access is a temporary file, created via a filename variable that I pass to the file creation and to the subprocess command. That file is being written to /tmp, where the rights are 777, so it can't be a rights issue. And the error message is not "file does not exist", but "missing filename", which suggests I am not passing a filename as a parameter to the command-line tool.
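    Not a diagnosis of the mod_wsgi behaviour, but a self-contained sketch of the same call that is sometimes easier to debug: it uses an absolute path to the binary, gives the child its own pipes, and reports the exact argv on failure. The /usr/bin/sox path and the sample file name are assumptions for the example.

    ```python
    import subprocess

    SOX = "/usr/bin/sox"  # assumed location; check with `which sox` on the server

    def probe_audio_type(path):
        argv = [SOX, "--i", "-t", path]
        proc = subprocess.Popen(argv, shell=False,
                                stdin=subprocess.PIPE,    # child gets its own stdin
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode != 0 or err:
            raise EnvironmentError(1, "sox failed for argv=%r: %s" % (argv, err.strip()))
        return out.strip()

    # e.g. probe_audio_type("/tmp/upload-1234.wav") should return something like "wav"
    ```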

  • iphone table view check mark accessory problem

    - by Pankaj Kainthla
    I have a table with 50 rows. I want to select particular rows with the checkmark accessory, but when I select some rows and scroll down the table I see rows that are already checked. I know that table cells are reused, but I want to avoid this problem. What can I do about it?

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return 1;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            // return [array count];
            return 50;
        }

        // Customize the appearance of table view cells.
        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            TableViewCell *cell = (TableViewCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[TableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
            }
            cell.textLabel.text=[NSString stringWithFormat:@"%i",[indexPath row]];
            return cell;
        }

        // Override to support row selection in the table view.
        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
            if (cell.accessoryType == UITableViewCellAccessoryNone) {
                cell.accessoryType = UITableViewCellAccessoryCheckmark;
            } else {
                cell.accessoryType = UITableViewCellAccessoryNone;
            }
        }

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint where I want to make it easier to bring new hires up to speed quickly by having all this on a checklist we follow when setting them up, and then, while I'm at it, making it easier for new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed, I'll go ahead and list compiler/IDE as one of the items. Here are some ideas for part 1 ([edit]: I'm keeping this updated based on the better suggestions):

    - Source Control access
    - Issue/Bug/Project tracker
    - System documentation, or references to find the system documentation in source control or in a wiki, including:
      - build document/environment (covered by this question)
      - design documents / technical notes
      - coding style guidelines
    - Deploy procedures for review/testing/QA/staging/production
    - Licensing details for your tools and your product
    - Team calendar, including the project schedule(s), deadlines, vacation time, and support/on-call schedule (if required)
    - compiler/IDE
    - compiler/IDE extensions (things like source control plugins or Visual Studio add-ins)
    - 3rd party SDKs/toolkits
    - Database connection and tools
    - Testing frameworks
    - Internal libraries
    - communication tools (chat, wiki, etc.)
    - Static analysis tools (FxCop, FlawFinder, etc.)
    - Virtual machines (holding the dev environment or for testing)
    - Specialized editors (modeling, XML, etc.)
    - Other tools

    What else goes in this list, and how do you document it and vet changes?

  • How to find an embedded platform?

    - by gmagana
    I am new to the hardware-locating side of embedded programming, so after being completely overwhelmed by all the choices out there (pc104, custom boards, a zillion options for each board, volume discounts, devel kits, ahhh!!) I am asking here for some direction. Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me, so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need to do.

    Required:

    - 2 RS232 serial ports (one used all the time for the primary UI, the second one not continuous)
    - 1 modem (9600+ baud OK) [the modem will be in simultaneous use with only one of the serial port devices, so interrupt sharing with one serial port is OK, but not both]
    - Minimum permanent/long-term storage: whatever the O/S requires + 1 MB (executable) + 512 KB (data files)
    - RAM: minimal, whatever the O/S requires plus maybe 1 MB for the executable

    Nice to have:

    - USB port(s)
    - Ethernet network port
    - Wireless network

    Implementation languages (any O/S, I will adapt to it):

    - First choice is Java/C# (Mono OK)
    - Second choice is C/Pascal
    - Third is BASIC

    OK, given all this, I am having a lot of trouble finding hardware that will support this and is low in cost. Every manufacturer site I visit has a lot of options, and it's difficult to see whether their offering will even satisfy my must-have requirements (for example, they sometimes list 3 "serial ports", but it appears that only one of the three is RS232, and they don't mention what the other two are). The #1 constraint is cost, #2 is size. Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-).

    EDIT: A bit of background: this is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS system, so I need to reimplement this on a modern platform. I can only change the programming and the motherboard hardware.

  • Best way to dynamically get column names from oracle tables

    - by MNC
    Hi,

    We are using an extractor application that exports data from the database to CSV files. Based on some condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL as the data has to be extracted from more than one table. So, to satisfy the UNION ALL condition, we are using NULLs to match the number of columns. Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever there is a change in the table projection (i.e. a new column added, an existing column modified, a column dropped) we have to manually change the code in the application. Can you please give some suggestions on how to extract the column names dynamically so that any changes in the table structure do not require a change in the code?

    My concern is the condition that decides which table to query. The variable condition is like: if the condition is A, then load from TableX; if the condition is B, then load from TableA and TableY. We must know from which table we need to get data. Once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more condition: some columns need to be excluded, and these columns are different for each table.

    I am trying to solve the problem only for dynamically generating the list of columns, but my manager told me to find a solution at the conceptual level rather than just a fix. This is a very big system with providers and consumers constantly loading and consuming data, so he wanted a solution that can be general. So what is the best way of storing the condition, table name, and excluded columns? One way is storing them in the database. Are there any other ways? If yes, which is best? I have to give at least a couple of ideas before finalizing.

    Thanks,
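    As one rough illustration of the "store it in the database" idea (the EXTRACT_CONFIG table and its columns are invented for this sketch, and a DB-API connection such as cx_Oracle is assumed), the live column list can be read from ALL_TAB_COLUMNS and filtered against a stored exclusion list:

    ```python
    def build_select(conn, condition):
        """Build the UNION ALL extract query for one condition code (illustrative only)."""
        cur = conn.cursor()
        # Hypothetical config table: condition_code, table_name, excluded_cols (comma-separated)
        cur.execute(
            "SELECT table_name, excluded_cols FROM extract_config WHERE condition_code = :c",
            {"c": condition})
        branches = []
        for table_name, excluded in cur.fetchall():
            skip = {c.strip().upper() for c in (excluded or "").split(",") if c.strip()}
            cur.execute(
                "SELECT column_name FROM all_tab_columns "
                "WHERE table_name = :t ORDER BY column_id",
                {"t": table_name.upper()})
            cols = [row[0] for row in cur.fetchall() if row[0] not in skip]
            branches.append("SELECT %s FROM %s" % (", ".join(cols), table_name))
        # Padding the shorter branches with NULLs (as the current extractor does)
        # is still up to the caller; this only derives the current column lists.
        return "\nUNION ALL\n".join(branches)
    ```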

  • [CODE GENERATION] How to generate DELETE statements in PL/SQL, based on the tables FK relations?

    - by The chicken in the kitchen
    Is it possible, via script/tool, to automatically generate many DELETE statements based on the tables' FK relations, using Oracle PL/SQL? For example: I have the table CHICKEN (CHICKEN_CODE NUMBER), and there are 30 tables with FK references to its CHICKEN_CODE that I need to delete from; there are also another 150 tables foreign-key-linked to those 30 tables that I need to delete from first. Is there some PL/SQL tool/script that I can run in order to generate all the necessary DELETE statements based on the FK relations for me? (By the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle Database 10g R2.

    This is a view I have previously written, but of course it is not recursive:

        CREATE OR REPLACE FORCE VIEW RUN
        (
            OWNER_1,
            CONSTRAINT_NAME_1,
            TABLE_NAME_1,
            TABLE_NAME,
            VINCOLO
        )
        AS
            SELECT OWNER_1,
                   CONSTRAINT_NAME_1,
                   TABLE_NAME_1,
                   TABLE_NAME,
                   '(' || LTRIM (EXTRACT (XMLAGG (XMLELEMENT ("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO
              FROM (SELECT CON1.OWNER OWNER_1,
                           CON1.TABLE_NAME TABLE_NAME_1,
                           CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1,
                           CON1.DELETE_RULE,
                           CON1.STATUS,
                           CON.TABLE_NAME,
                           CON.CONSTRAINT_NAME,
                           COL.POSITION,
                           COL.COLUMN_NAME
                      FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1
                     WHERE CON.OWNER = 'TABLE_OWNER'
                       AND CON.TABLE_NAME = 'TABLE_OWNED'
                       AND ((CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U'))
                       AND COL.TABLE_NAME = CON1.TABLE_NAME
                       AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME
                       --AND CON1.OWNER = CON.OWNER
                       AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME
                       AND CON1.CONSTRAINT_TYPE = 'R'
                  GROUP BY CON1.OWNER,
                           CON1.TABLE_NAME,
                           CON1.CONSTRAINT_NAME,
                           CON1.DELETE_RULE,
                           CON1.STATUS,
                           CON.TABLE_NAME,
                           CON.CONSTRAINT_NAME,
                           COL.POSITION,
                           COL.COLUMN_NAME)
          GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME;

    ... and it contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS...
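    Outside PL/SQL, the recursion over the FK graph can be sketched in a few lines of Python against the data dictionary (a DB-API cursor such as cx_Oracle is assumed; this only produces a child-first table order and assumes there are no FK cycles, and the WHERE clauses for the generated DELETEs would still have to be built from the FK columns of each hop):

    ```python
    def delete_order(cur, owner, root_table):
        """Return tables in child-first order, so DELETEs can run without FK violations."""
        ordered, seen = [], set()

        def visit(table):
            if table in seen:
                return
            seen.add(table)
            cur.execute("""
                SELECT child.table_name
                  FROM all_constraints parent
                  JOIN all_constraints child
                    ON child.r_owner = parent.owner
                   AND child.r_constraint_name = parent.constraint_name
                 WHERE parent.owner = :o
                   AND parent.table_name = :t
                   AND parent.constraint_type IN ('P', 'U')
                   AND child.constraint_type = 'R'""",
                {"o": owner, "t": table})
            for (child_table,) in cur.fetchall():
                visit(child_table)          # children first...
            ordered.append(table)           # ...then the table itself

        visit(root_table)
        return ordered

    # delete_order(cur, 'MYSCHEMA', 'CHICKEN') -> ['GRANDCHILD_TABLE', ..., 'CHICKEN']
    ```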

  • function taking in an input image and different kernel size

    - by drifterOcean19
    I have this filtering function that takes an input image, performs convolution using a given kernel, and returns the resulting image. However, I can't work out how to make it take different kernel sizes. For example, instead of the pre-defined 3x3 kernel below, it could take 5x5 or 7x7, and the user could input the type of kernel/filter they want (depending on the intended effect). I can't seem to get my head around it; I'm quite new to MATLAB.

        function [newImg] = kernelFunc(imgB)
            img=imread(imgB);
            figure,imshow(img);
            img2=zeros(size(img)+2);
            newImg=zeros(size(img));
            for rgb=1:3
                for x=1:size(img,1)
                    for y=1:size(img,2)
                        img2(x+1,y+1,rgb)=img(x,y,rgb);
                    end
                end
            end
            for rgb=1:3
                for i= 1:size(img2,1)-2
                    for j=1:size(img2,2)-2
                        window=zeros(9,1);
                        inc=1;
                        for x=1:3
                            for y=1:3
                                window(inc)=img2(i+x-1,j+y-1,rgb);
                                inc=inc+1;
                            end
                        end
                        kernel=[1;2;1;2;4;2;1;2;1]/16;
                        med=window.*kernel;
                        disp(med);
                        med=sum(med);
                        med=floor(med);
                        newImg(i,j,rgb)=med;
                    end
                end
            end
            newImg=uint8(newImg);
            figure,imshow(newImg);
        end
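    For comparison only (a translation of the idea, not a fix to the MATLAB above): once the kernel is a parameter rather than a hard-coded 3x3, any odd size works the same way. A Python/NumPy/SciPy sketch:

    ```python
    import numpy as np
    from scipy import ndimage

    def filter_image(img, kernel):
        """Convolve each channel of an HxWx3 uint8 image with an arbitrary 2-D kernel."""
        kernel = np.asarray(kernel, dtype=float)
        out = np.empty(img.shape, dtype=float)
        for c in range(img.shape[2]):
            out[..., c] = ndimage.convolve(img[..., c].astype(float), kernel, mode="constant")
        return np.clip(out, 0, 255).astype(np.uint8)

    # Same weights as the 3x3 kernel in the question:
    k3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    # A 5x5 box blur drops straight in, because the size is no longer baked in:
    k5 = np.ones((5, 5)) / 25.0
    ```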

  • Git: changes not reflecting on other checkouts - huh?

    - by Chad Johnson
    Okay, so, I have my branches (git branch -a):

        * chat
          master
          remotes/origin/HEAD -> origin/master
          remotes/origin/chat

    I make changes (still with the 'chat' branch checked out), commit, and push. I go to my server, on which I have a clone of the repository, and I do a fetch:

        git fetch

    then I switch to the chat branch:

        git checkout --track -b chat origin/chat

    and I even do a pull, just to make sure everything is up to date:

        git pull

    and my changes from my other computer are NOT. THERE. What the heck am I doing wrong? If I had hair, I would have pulled it out. Thankfully I am bald. When I try a 'git commit' again, I get this:

        # On branch chat
        # Changed but not updated:
        #   (use "git add/rm <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #   modified:   app/controllers/chat_controller.rb
        #   modified:   app/views/dashboard/index.html.erb
        #   modified:   app/views/dashboard/layout.js.erb
        #   modified:   app/views/layouts/dashboard.html.erb
        #   deleted:    app/views/project/.tmp_edit.html.erb.55742~
        #   deleted:    app/views/project/.tmp_edit.html.erb.83482~
        #   modified:   public/stylesheets/dashboard/layout.css
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #   .loadpath
        #   .project
        #   config/database.yml
        #   config/environments/development.yml
        #   config/environments/production.yml
        #   config/environments/test.yml
        #   log/
        no changes added to commit (use "git add" and/or "git commit -a")

  • Flex 3 - Image cache

    - by BS_C3
    Hello Community. I'm building an image cache following this method: http://www.brandondement.com/blog/2009/08/18/creating-an-image-cache-with-actionscript-3/

    I copied the two AS classes, renaming them CachedImage and CachedImageMap. The thing is that I don't want to store the image after it has been loaded a first time, but while the application is being loaded. For that, I've created a function that is called by the application's pre-initialize event. This is how it looks:

        private function loadImages():void {
            var im:CachedImage = new CachedImage;
            var sources:ArrayCollection = new ArrayCollection;
            for each(var cs in divisionData.division.collections.collection.collectionSelection) {
                sources.addItem(cs.toString());
            }
            for each(var se in divisionData.division.collections.collection.searchEngine) {
                sources.addItem(se.toString());
            }
            for each( var source:String in sources) {
                im.source = source;
                im.load(source);
            }
        }

    The sources are properly retrieved. However, even though I use the load method, I never get the "complete" event... as if the image is not being loaded. Why is that? Any help would be appreciated. Thanks in advance.

    Regards,
    BS_C3

  • Using `<List>` when dealing with pointers in C#.

    - by Gorchestopher H
    How can I add an item to a list if that item is essentially a pointer, and avoid changing every item in my list to the newest instance of that item?

    Here's what I mean: I am doing image processing, and there is a chance that I will need to deal with images that come in faster than I can process them (for a short period of time). After this "burst" of images I will rely on the fact that I can process faster than the average image rate, and will "catch up" eventually. So, what I want to do is put my images into a <List> when I acquire them; then, if my processing thread isn't busy, I can take an image from that list and hand it over. My issue is that I am worried that since I am adding the image "Image1" to the list, then filling "Image1" with a new image (during the next image acquisition), I will be replacing the image stored in the list with the new image as well (as the image variable is actually just a pointer). So, my code looks a little like this:

        while (!exitcondition)
        {
            if(ImageAvailabe())
            {
                Image1 = AcquireImage();
                ImgList.Add(Image1);
            }
            if(ImgList.Count > 0)
            {
                ProcessEngine.NewImage(ImgList[0]);
                ImgList.RemoveAt(0);
            }
        }

    Given the above, how can I ensure that:

    - I don't replace all items in the list every time Image1 is modified.
    - I don't need to pre-declare a number of images in order to do this kind of processing.
    - I don't create a memory-devouring monster.

    Any advice is greatly appreciated.
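    The question is really about reference semantics, which behave the same way in most managed languages. A small illustration in Python (used only as an analogy here, not C#): re-assigning the variable to a brand-new object leaves the list untouched, whereas mutating the object in place changes what the list sees too.

    ```python
    # Case 1: rebinding the name -- the list keeps the original object.
    frames = []
    frame = bytearray(b"image-1")
    frames.append(frame)
    frame = bytearray(b"image-2")     # 'frame' now points at a new object
    print(frames[0])                  # bytearray(b'image-1')  -- unchanged

    # Case 2: mutating in place -- the list item IS the same object.
    frames = []
    frame = bytearray(b"image-1")
    frames.append(frame)
    frame[:] = b"image-2"             # reuse the same buffer
    print(frames[0])                  # bytearray(b'image-2')  -- changed
    ```

    In other words, whether the list entries get clobbered depends on whether the acquisition step creates a new image object each time or rewrites the same buffer.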

  • Visual C++ doesn't find operator<< overload

    - by PierreBdR
    I have a vector class that I want to be able to input/output from a QTextStream object. The forward declaration of my vector class is:

        namespace util {
            template <size_t dim, typename T> class Vector;
        }

    I define the operator<< as:

        namespace util {
            template <size_t dim, typename T>
            QTextStream& operator<<(QTextStream& out, const util::Vector<dim,T>& vec) {
                ...
            }

            template <size_t dim, typename T>
            QTextStream& operator>>(QTextStream& in, util::Vector<dim,T>& vec) {
                ..
            }
        }

    However, if I try to use these operators, Visual C++ returns this error:

        error C2678: binary '<<' : no operator found which takes a left-hand operand of type 'QTextStream' (or there is no acceptable conversion)

    A few things I tried:

    - Originally, the methods were defined as friends of the template, and it is working fine this way with g++.
    - The methods have been moved outside the namespace util.
    - I changed the definition of the templates to fit what I found on various Visual C++ websites.

    The original friend declaration is:

        friend QTextStream& operator>>(QTextStream& ss, Vector& in) {
            ...
        }

    The "Visual C++ adapted" version is:

        friend QTextStream& operator>> <dim,T>(QTextStream& ss, Vector<dim,T>& in);

    with the function pre-declared before the class and implemented after. I checked that the file is correctly included using:

        #pragma message ("Including vector header")

    And everything seems fine. Does anyone have any idea what might be wrong?

  • How to store session values with Node.js and mongodb?

    - by Tirithen
    How do I get sessions working with Node.js, [email protected], and MongoDB? I'm now trying to use connect-mongo like this:

        var config = require('../config'),
            express = require('express'),
            MongoStore = require('connect-mongo'),
            server = express.createServer();

        server.configure(function() {
            server.use(express.logger());
            server.use(express.methodOverride());
            server.use(express.static(config.staticPath));
            server.use(express.bodyParser());
            server.use(express.cookieParser());
            server.use(express.session({
                store: new MongoStore({ db: config.db }),
                secret: config.salt
            }));
        });

        server.configure('development', function() {
            server.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
        });

        server.configure('production', function() {
            server.use(express.errorHandler());
        });

        server.set('views', __dirname + '/../views');
        server.set('view engine', 'jade');

        server.listen(config.port);

    I'm then, in a server.get callback, trying to use req.session.test = 'hello'; to store that value in the session, but it's not stored between requests. It probably takes something more than this to store session values, but what? Is there a better documented module than connect-mongo?

  • Heroku app throws "Internal Server Error"

    - by picardo
    This app works just fine on my local computer. After pushing it to Heroku, static pages appear to be working, but the blog section throws an Internal Server Error. I pulled the logs by running "heroku logs" and this is what I get:

        ==> production.log <==
        /usr/ruby1.8.7/lib/ruby/gems/1.8/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run'
        /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in `start'
        /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/server.rb:156:in `start'
        /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/controllers/controller.rb:80:in `start'
        /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/runner.rb:177:in `send'
        /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/runner.rb:177:in `run_command'
        /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/runner.rb:143:in `run!'
        /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/bin/thin:6

    Something wrong with the eventmachine gem, I suppose... but it works fine on my machine. So I'm not sure what's going on or how to debug it.

  • Reorganizing development environment for single developer/small shop

    - by Matthew
    I have been developing for my company for approximately three years. We serve up a web portal using Microsoft .NET and MS SQL Server on DotNetNuke. I am going to leave my job full time at the end of April. I am leaving on good terms, and I really care about this company and the state of the web project. Because I haven't worked in a team environment in a long time, I have probably lost touch with what 'real' setups look like. When I leave, I predict the company will either find another developer to take over, or at least have developers work on a contractual basis. Because I have not worked with other developers, I am very concerned about leaving the company (and the developer they hire) with a jumbled mess. I'd like to believe I am a good developer and everything makes sense, but I have no way to tell.

    My question is: how do I set up the development environment so that the company and the next developer will have little trouble getting started? What would you, as a developer, like to have in place before working on a project you've never worked on? Here's some relevant information:

    - There is a development server onsite and a production server offsite in a data center.
    - There is a server where backups and source code (SourceGear Vault) are stored.
    - There is no formal documentation, but there are comments in the code.
    - The company budget is tight, so free suggestions will help the most.
    - I will be around after the end of April on a consulting basis, so I can ask simple questions, but I will not be available full time to train someone.

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all,

    I have to develop a fairly large ASP.NET MVC project very quickly, and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend, so the built-in LINQ to SQL is out; I also need to use production-level libraries, so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay), so NHibernate/Castle Project are out. I would prefer, if at all possible, to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements:

    - Oracle backend
    - Rapid development
    - (L)GPL-free
    - Free

    I'm reasonably happy with DataSets, but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions:

    1. Are there any problems I overlooked with this solution?
    2. Are there any other approaches you would recommend to convert DataSets to POCOs?

    Thanks in advance.

  • Call ASP.NET 2.0 Server side code from Javascript

    - by Kannabiran
    I've been struggling with this for the past 3 days. I need to call ASP.NET server-side code from JavaScript when the user closes the browser. I'm using the following code to accomplish this. In my ASP.NET form I have various validation controls. Even if there are some validation errors, when I close the form the server-side code works perfectly on my development box (Windows 7), but the same code doesn't work in my production environment (Windows Server). Does it have something to do with the ValidationSummary or the validation controls? The button control has CausesValidation set to false, so even if there is a validation error my form should still post back. Am I correct? I suspect the form is not getting posted back to the server when there is a validation error, but I'm disabling all the validation controls in the JavaScript before calling the button click event. Can someone shed some light on this issue? There are a few blogs which suggest using jQuery or AJAX (page methods and ScriptManager).

        function ConfirmClose(e) {
            var evtobj = window.event ? event : e;
            if (evtobj == e) { //firefox
                if (!evtobj.clientY) {
                    evtobj.returnValue = message;
                }
            }
            else { //IE
                if (evtobj.clientY < 0) {
                    DisablePageValidators();
                    document.getElementById('<%# buttonBrowserCloseClick.ClientID %>').click();
                }
            }
        }

        function DisablePageValidators() {
            if ((typeof (Page_Validators) != "undefined") && (Page_Validators != null)) {
                var i;
                for (i = 0; i < Page_Validators.length; i++) {
                    ValidatorEnable(Page_Validators[i], false);
                }
            }
        }

        //HTML
        <div style="display:none" >
            <asp:Button ID="buttonBrowserCloseClick" runat="server" onclick="buttonBrowserCloseClick_Click"
                Text="Button" Width="141px" CausesValidation="False" />

        //Server Code
        protected void buttonBrowserCloseClick_Click(object sender, EventArgs e)
        {
            //Some C# code goes here
        }

  • passenger won't spawn more than 6 instances despite passenger_max_pool_size = 30

    - by mrD
    I have some problems with Passenger + nginx and hope someone might be able to help me and point me in the right direction. I've set passenger_max_pool_size to 30, but Passenger never spawns more than 6 instances. I'm loading a web page that uses Ajax to load 30 sub-pages from the server, but because Passenger only spawns 6 instances they are queued. What makes me confused is that "Waiting on global queue" is 0, but I can see in my browser that everything gets queued. When the first 6 Ajax requests are done, the next 6 start loading. What am I missing? :)

    This is the output from passenger-status (I had about 24 requests in the browser waiting for a response from the server when I checked this status):

        ----------- General information -----------
        max = 30
        count = 6
        active = 6
        inactive = 0
        Waiting on global queue: 0

        ----------- Domains -----------
        /srv/rails/production/current:
          PID: 28428   Sessions: 1   Processed: 42   Uptime: 5m 43s
          PID: 28424   Sessions: 1   Processed: 23   Uptime: 5m 43s
          PID: 28422   Sessions: 1   Processed: 7    Uptime: 5m 43s
          PID: 28420   Sessions: 1   Processed: 22   Uptime: 6m 0s
          PID: 28426   Sessions: 1   Processed: 39   Uptime: 5m 43s
          PID: 28430   Sessions: 1   Processed: 7    Uptime: 5m 43s

    These are my Passenger-related settings in nginx.conf:

        http {
            passenger_root /opt/ruby/lib/ruby/gems/1.8/gems/passenger-2.2.11;
            passenger_ruby /opt/ruby/bin/ruby;
            passenger_max_pool_size 30;

  • Building a custom Linux Live CD

    - by Mike Heinz
    Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those all are designed for showing off Linux on the desktop - which means they take forever to boot. Can anyone fix me up with a current How-To? Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOs, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file!

  • Weird MySQL behavior, seems like a SQL bug

    - by Daniel Magliola
    I'm getting very strange behavior in MySQL, which looks like some kind of weird bug. I know it's common to blame the tried-and-tested tool for one's own mistakes, but I've been going around in circles on this for a while. I have 2 tables: I, with 2797 records, and C, with 1429. C references I. I want to delete all records in I that are not used by C, so I'm doing:

        select * from i where id not in (select id_i from c);

    That returns 0 records, which, given the record counts in each table, is physically impossible. I'm also pretty sure that the query is right, since it's the same type of query I've been using for the last 2 hours to clean up other tables with orphaned records. To make things even weirder...

        select * from i where id in (select id_i from c);

    DOES work, and brings me the 1297 records that I do NOT want to delete. So, IN works, but NOT IN doesn't. Even worse:

        select * from i where id not in (
            select i.id from i inner join c ON i.id = c.id_i
        );

    That DOES work, although it should be equivalent to the first query (I'm just trying mad stuff at this point). Alas, I can't use this query to delete, because I'm using the same table I'm deleting from in the subquery. I'm assuming something in my database is corrupt at this point. In case it matters, these are all MyISAM tables without any foreign keys whatsoever, and I've run the same queries on my dev machine and on the production server with the same result, so whatever corruption there might be survived a mysqldump / source cycle, which sounds awfully strange. Any ideas on what could be going wrong, or, even more importantly, how I can fix/work around this?

    Thanks!
    Daniel
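    One well-known cause of exactly this symptom (offered as a possibility, not a diagnosis of this particular database): if c.id_i contains even a single NULL, then NOT IN compares every row against that NULL, the predicate evaluates to UNKNOWN, and the query returns nothing, while IN still returns matches. A self-contained repro using Python's bundled sqlite3, since the semantics are standard SQL and not MySQL-specific:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE i (id INTEGER);
        CREATE TABLE c (id_i INTEGER);
        INSERT INTO i VALUES (1), (2), (3);
        INSERT INTO c VALUES (1), (NULL);   -- one NULL is enough
    """)

    print(conn.execute("SELECT * FROM i WHERE id IN (SELECT id_i FROM c)").fetchall())
    # [(1,)]
    print(conn.execute("SELECT * FROM i WHERE id NOT IN (SELECT id_i FROM c)").fetchall())
    # []  -- rows 2 and 3 disappear because of the NULL
    print(conn.execute(
        "SELECT * FROM i WHERE id NOT IN (SELECT id_i FROM c WHERE id_i IS NOT NULL)"
    ).fetchall())
    # [(2,), (3,)]
    ```

    If that is what is happening here, adding WHERE id_i IS NOT NULL to the subquery (or switching to a NOT EXISTS form) would be the usual workaround.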

  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi all,

    I've got a database using Change Data Capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that analyzes the CDC information and then returns data. How is that a problem, you ask? Well, the order of operations is as follows:

    1. Pre-deployment script
    2. Tables
    3. Indexes, keys, etc.
    4. Procedures
    5. Post-deployment script

    Inside the post-deployment script is where I enable CDC. Herein lies the problem: the procedure that is acting on the CDC tables is bombing because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context.
        C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!?!

    Now, I have a workaround:

    1. Comment out the sproc body
    2. Deploy (CDC is created)
    3. Uncomment the sproc
    4. Deploy

    Everything is great until the next time I update a CDC-tracked table. Then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!

  • Chrome targeted CSS

    - by Chris
    I have some CSS code that hides the cursor on a web page (it is a client-facing static screen with no interaction). The code I use to do this is below:

        *, html {
            cursor: url('/web/resources/graphics/blank.cur'), pointer;
        }

    Blank.cur is a totally blank cursor file. This code works perfectly well in all browsers when I host the web files on my local server, but when I upload to a Windows CE web server (our production unit) the cursor shows up as a black box. Odd. After some testing it seems that Chrome only has a problem with totally blank cursor files when they are served from the Windows CE web server, so I created a blank cursor with one white pixel specifically for Chrome. How do I then target this CSS rule at Chrome specifically? i.e.

        *, html {
            cursor: url('/web/resources/graphics/blank.cur'), pointer;
        }
        <!--[if CHROME]>
        *, html {
            cursor: url('/web/resources/graphics/blankChrome.cur'), pointer;
        }
        <![endif]-->

  • Why do I get a Illegal Access Error when running my Android tests?

    - by Janusz
    I get the following stack trace when running my Android tests on the emulator:

        java.lang.NoClassDefFoundError: client.HttpHelper
            at client.Helper.<init>(Helper.java:14)
            at test.Tests.setUp(Tests.java:15)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:164)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:151)
            at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:425)
            at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1520)
        Caused by: java.lang.IllegalAccessError: cross-loader access from pre-verified class
            at dalvik.system.DexFile.defineClass(Native Method)
            at dalvik.system.DexFile.loadClass(DexFile.java:193)
            at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:203)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:573)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:532)
            ... 11 more

    I run my tests from a separate project, and it seems there are some problems with loading the classes from the other project. I have run the tests before, but now they are failing; the project under test runs without problems. Line 14 of the Helper class is:

        this.httpHelper = new HttpHelper(userProfile);

    This creates an HttpHelper object that is responsible for executing HTTP queries. I think somehow this helper class is not available anymore, but I have no clue why.

  • Resque: Slow worker startup and Forking

    - by David John
    I'm currently moving my application from a Linode setup to EC2. Redis is currently installed on a remote instance, with various worker instances interacting with the queue. That's all going fantastically. My problem is the amount of time it takes for a worker to be 'instantiated', and slow forking. Starting a worker will usually take between 30 seconds and a minute (from god.rb starting the worker rake task to the worker actively starting work on the queue). I could live with that, but I've not experienced such a wait time on my current Linode production box, so I believe it's a symptom of a bigger problem. The next issue is that jobs that took a second or less in my previous environment now seem to take about 5 to 10 times longer. I'm assuming this must be some sort of issue with my Ubuntu install on EC2? One notable difference is that I'm running REE 1.8.7-2010.01 in my new setup, and REE 1.8.6 on the old Linode boxes. Has anyone else experienced these issues?
