Search Results


  • Is it possible to "trick" PrintScreen, swap out the contents of my form with something else before capture?

    - by Lasse V. Karlsen
    I have a bit of a challenge. In an earlier version of our product, we had an error-message window (last resort, unhandled exception) that showed the exception message, type, stack trace, plus various bits and pieces of information. This window was printscreen-friendly: if the user simply did a PrintScreen capture and emailed us the screenshot, we had almost everything we needed to start diagnosing the problem.

    However, the form was deemed too technical and "scary" for normal users, so it was toned down to a friendlier one, still showing the error message, but not the stack trace and some of the gorier details that I'd still like to get. In addition, the form gained the ability to email us a text file containing everything we had before, plus lots of other technical details as well; basically everything we need. However, users still use PrintScreen to capture the contents of the form and email that back to us, which means I now have a less than optimal amount of information to go on.

    So I was wondering: would it be possible for me to pre-render a bitmap the same size as my form with everything I need on it, detect that PrintScreen was hit, quickly swap out the form contents with my bitmap before capture, and then swap back again afterwards?

    And before you say "just educate the users": no, that's not going to work. These are not our users, they're users at our customers' sites, so we really cannot tell them to wise up all that much.

    Or, barring this, is there a way for me to detect PrintScreen, tell Windows to ignore it, and instead react to it myself by dumping the aforementioned pre-rendered bitmap onto the clipboard, ready for pasting into an email?

    The code is C# 3.0 on .NET 3.5, if it matters, but pointers to something to look at/for are good enough.
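
    For the second idea, here is a minimal sketch of how claiming PrintScreen could look, assuming RegisterHotKey is allowed to take over VK_SNAPSHOT (0x2C) while the error form is open; preRenderedBitmap is a placeholder for the bitmap described above, and this is untested:

        using System;
        using System.Drawing;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        public partial class ErrorForm : Form
        {
            [DllImport("user32.dll")]
            private static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);
            [DllImport("user32.dll")]
            private static extern bool UnregisterHotKey(IntPtr hWnd, int id);

            private const int WM_HOTKEY = 0x0312;
            private const uint VK_SNAPSHOT = 0x2C;
            private Bitmap preRenderedBitmap;  // hypothetical: rendered elsewhere with the full details

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                RegisterHotKey(Handle, 1, 0, VK_SNAPSHOT);  // claim PrintScreen while the form is up
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_HOTKEY && (int)m.WParam == 1)
                {
                    Clipboard.SetImage(preRenderedBitmap);  // our bitmap instead of the screenshot
                    return;
                }
                base.WndProc(ref m);
            }

            protected override void OnFormClosed(FormClosedEventArgs e)
            {
                UnregisterHotKey(Handle, 1);
                base.OnFormClosed(e);
            }
        }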

    Read the article

  • Regular input in ASP.NET

    - by coffeeaddict
    Here's an example of a regular standard HTML input for my radiobuttonlist:

        <label><input type="radio" name="rbRSelectionGroup" checked value="0" />None</label>
        <asp:Repeater ID="rptRsOptions" runat="server">
          <ItemTemplate>
            <div>
              <label><input type="radio" name="rbRSelectionGroup" value='<%# ((RItem)Container.DataItem).Id %>' /><%# ((RItem)Container.DataItem).Name %></label>
            </div>
          </ItemTemplate>
        </asp:Repeater>

    I removed some stuff for this thread; for one, I substituted an "R" for a name I don't want to expose here, just as an FYI.

    Now, I would assume that this would or should happen:

    1. The page loads the first time; the None radio button is checked / defaulted.
    2. I select a different radio button in the list.
    3. I do an F5 refresh in my browser.
    4. The None radio button is pre-selected again after the refresh.

    But #4 is not happening. It's retaining the radio button that I selected in #2, and I don't know why. Regular HTML is stateless, so what could be holding this value? I want this to act like a normal input element.

    I know the question "why not use an ASP.NET control?" will come up. There are a few reasons:

    1. The radiobuttonlist bug that everyone knows about.
    2. I just want to brush up more on standard input tags.
    3. We are not moving to MVC, so this is as close as I'll get, and that's OK, because the rest of the team is on board with mixing ASP.NET controls and standard HTML controls in our pages.

    Anyway, my main question here is that I'm surprised it retains the change in selection after a refresh.
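
    One avenue worth ruling out (an assumption about the browser, not something visible in the markup above): Firefox, among others, deliberately restores form-control state across a plain F5 reload as a convenience feature, independent of any server-side state. Marking the form with autocomplete="off" is the conventional opt-out; a sketch:

        <form id="form1" runat="server" autocomplete="off">
            <!-- radio buttons rendered as above; the browser should now
                 reset them to the markup's defaults on refresh -->
        </form>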

    Read the article

  • Qt QSslError being signaled with the error code set to NoError

    - by Nantucket
    My Problem

    I compiled OpenSSL into Qt to enable OpenSSL support. Everything appeared to go correctly in the compile. However, when I try to use the official HTTP example application that can be found here, every time I download an https page it signals two QSslErrors, each with contents NoError. The types of QSslError, including NoError, are documented here, poorly: there is no explanation of why they even included an error type called NoError, or what it means. Bizarrely, the NoError error code seems to be benign, as the remote https document downloads perfectly even while the error is being signaled. Does anyone have any idea what this means and what could possibly be causing it?

    Optional Background Reading

    Here is the relevant part of the code from the example app (this is connected to the network connection's sslErrors signal by the constructor):

        void HttpWindow::sslErrors(QNetworkReply*, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }

            if (QMessageBox::warning(this, tr("HTTP"),
                                     tr("One or more SSL errors has occurred: %1").arg(errorString),
                                     QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore) {
                reply->ignoreSslErrors();
            }
        }

    I have tried the old version of this example, and it produced the same result. I have tried OpenSSL 1.0.0a and 0.9.8o. I have tried compiling OpenSSL myself, and I have tried pre-compiled versions of OpenSSL from the net. All produce the same result.

    If this were my first time using Qt with SSL, I would almost think this is the intended result (even though their example application pops up error-warning message windows), if not for the fact that the last time I played with Qt, using what would now be an old version of Qt with an old version of SSL, I distinctly remember everything working fine with no error windows. My system is running Windows 7 x64.
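
    If the cause can't be traced, one defensive option is to skip the NoError entries when building the message, so spurious signals with nothing real in them stop raising the dialog. A sketch against the handler above, untested:

        void HttpWindow::sslErrors(QNetworkReply *reply, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (error.error() == QSslError::NoError)
                    continue;  // ignore placeholder entries that carry no actual error
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }
            if (errorString.isEmpty())
                return;  // nothing real to report; don't prompt the user
            // ... show the QMessageBox as before ...
        }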

    Read the article

  • Downloading a file from a PHP page in C#

    - by FoxyShadoww
    Okay, we have a PHP script that creates a download link for a file, and we want to download that file via C#. This works fine, with progress etc., but when the PHP page produces an error, the program downloads the error page and saves it as the requested file. Here is the code we have at the moment:

    PHP code:

        <?php
        $path = 'upload/test.rar';
        if (file_exists($path)) {
            $mm_type = "application/octet-stream";
            header("Pragma: public");
            header("Expires: 0");
            header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
            header("Cache-Control: public");
            header("Content-Description: File Transfer");
            header("Content-Type: " . $mm_type);
            header("Content-Length: " . (string)(filesize($path)));
            header('Content-Disposition: attachment; filename="' . basename($path) . '"');
            header("Content-Transfer-Encoding: binary\n");
            readfile($path);
            exit();
        } else {
            print 'Sorry, we could not find requested download file.';
        }
        ?>

    C# code:

        private void btnDownload_Click(object sender, EventArgs e)
        {
            string url = "http://***.com/download.php";
            WebClient client = new WebClient();
            client.DownloadFileCompleted += new AsyncCompletedEventHandler(client_DownloadFileCompleted);
            client.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);
            client.DownloadFileAsync(new Uri(url), @"c:\temp\test.rar");
        }

        private void ProgressChanged(object sender, DownloadProgressChangedEventArgs e)
        {
            progressBar.Value = e.ProgressPercentage;
        }

        void client_DownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
        {
            MessageBox.Show(print);
        }
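
    The core issue is that the PHP error path still replies 200 OK, so WebClient has no way to tell the error page from the file. Two adjustments, sketched from the code above and untested: have PHP send a real error status, and have the completion handler inspect e.Error (which surfaces the resulting WebException) before trusting the download:

        // PHP: replace the plain print with a genuine HTTP error status
        } else {
            header('HTTP/1.1 404 Not Found');
            print 'Sorry, we could not find requested download file.';
        }

        // C#: check for failure before using the file
        void client_DownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
        {
            if (e.Error != null)
            {
                MessageBox.Show("Download failed: " + e.Error.Message);
                return;
            }
            MessageBox.Show("Download complete.");
        }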

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database from an on-premise SQL Server 2012 instance to SQL Azure. Easy, right?

    Methods

    After a fair bit of research I've come across the following methods:

    1. Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.

    2. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from CodePlex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes

    Using either method, the process could be:

    1. Export from on-premise SQL Server 2012 to a local BACPAC.
    2. Upload the BACPAC to blob storage.
    3. Import the BACPAC into SQL Azure via the Hosted DAC.

    Or:

    1. Export from on-premise SQL Server 2012 to a local BACPAC.
    2. Import the BACPAC into SQL Azure via the Client DAC.

    Question

    All of the above seems like a lot of effort for something that appears to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see: is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on a local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere?
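
    For what it's worth, the SSMS wizard sits on top of the DAC framework, and SQL Server 2012 ships a command-line front end for it: SqlPackage.exe. Something along these lines (paths and parameters recalled from the DAC documentation, so treat it as a sketch to verify) covers both halves without custom code, and drops straight into an <Exec /> task:

        rem Export the on-premise database to a local BACPAC
        "%ProgramFiles(x86)%\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" ^
            /Action:Export /SourceServerName:localhost /SourceDatabaseName:MyDb ^
            /TargetFile:C:\build\MyDb.bacpac

        rem Import the BACPAC into SQL Azure (Client DAC; no blob storage needed)
        "%ProgramFiles(x86)%\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" ^
            /Action:Import /TargetServerName:myserver.database.windows.net ^
            /TargetDatabaseName:MyDb /TargetUser:user@myserver /TargetPassword:*** ^
            /SourceFile:C:\build\MyDb.bacpac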

    Read the article

  • How can I handle dynamic calculated attributes in a model in Django?

    - by bullfish
    In Django I calculate the breadcrumb (a list of fathers) for a geographical object. Since it is not going to change very often, I am thinking of pre-calculating it when the object is saved or initialized.

    1. What would be better, and which solution would have better performance: calculating it in __init__, or calculating it when the object is saved? (The object takes about 500-2000 characters in the DB.)

    2. I tried to override __init__ and save(), but I don't know how to use the attributes of the object being saved. Accessing *args and **kwargs did not work. How can I access them? Do I have to save, access the father, and then save again?

    3. If I decide to save the breadcrumb, what's the best way to do it? I used http://www.djangosnippets.org/snippets/1694/ and have crumb = PickledObjectField().

    This is the method that calculates the crumb attribute:

        def _breadcrumb(self):
            breadcrumb = []
            x = self
            while True:
                x = x.father
                try:
                    if hasattr(x, 'country'):
                        breadcrumb.append(x.country)
                    elif hasattr(x, 'region'):
                        breadcrumb.append(x.region)
                    elif hasattr(x, 'city'):
                        breadcrumb.append(x.city)
                    else:
                        break
                except:
                    break
            breadcrumb.reverse()
            return breadcrumb

    This is my save method:

        def save(self, *args, **kwargs):
            # how can I access the father of the object?
            father = self.father       # does obviously not work
            father = kwargs['father']  # does not work either
            # the breadcrumb gets calculated here
            self.crumb = self._breadcrumb(father)
            super(GeoObject, self).save(*args, **kwargs)

    Please help me out. I have been working on this for days now. Thank you.
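
    On question 2: inside save(), the instance is already fully populated, so self.father is the normal way in; *args and **kwargs only carry Django's own bookkeeping arguments (force_insert and friends), never model fields. A sketch, assuming father is assigned before save() is called and _breadcrumb() stays the no-argument method defined above:

        def save(self, *args, **kwargs):
            # self is the (possibly unsaved) instance; self.father works here
            self.crumb = self._breadcrumb()
            super(GeoObject, self).save(*args, **kwargs)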

    Read the article

  • Most cost effective way to target multiple mobile platforms

    - by niidto
    Hi,

    I have been given the task of speccing a mobile application which will need to run on approx. 1000 devices. These devices already exist, and consist of iPhones, BlackBerrys, Androids, Windows Mobile devices and netbooks. The application will have simple reporting capability and a collection of forms.

    The obvious solution would be to develop a browser-based solution, although given the occasionally-connected nature of the devices, there's a potential for data to get lost / not saved.

    So instead of creating a complex application for each platform, I was thinking we could build what is effectively a form generator with basic offline storage capability (text files), designed to run on each device. The device would generate a form based on, for example, an XML file that it could request from a server somewhere (see the sketch below). This would mean minimal specialist development costs and the ability to run most of the logic on the server end, with the devices being dumb clients that render forms and upload the data when there is an available connection.

    Anyway, my question, summarised: how have you made the decision on supporting multiple devices for your application? Is this always an unavoidable problem, and you just have to make the call to support 1 or 2, or pay for developers to write code for each platform, or alternatively supply pre-installed devices to the company?

    Many thanks
    James
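
    To make the dumb-client idea concrete, a server-delivered form definition might look like the sketch below; the schema is invented purely for illustration:

        <form id="site-inspection" version="3">
          <field name="site_id" type="text" required="true" label="Site ID" />
          <field name="status" type="select" label="Status">
            <option value="ok">OK</option>
            <option value="fault">Fault</option>
          </field>
          <field name="notes" type="multiline" label="Notes" />
          <!-- client queues submissions locally until a connection is available -->
          <submit url="https://example.com/api/submissions" retry="offline-queue" />
        </form>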

    Read the article

  • How to deal with clients and iterations in an Agile team?

    - by Ondrej Slinták
    This thread is a follow-up to my previous one. It's in fact two questions, so I hope no one minds, as they are dependent on each other.

    We are starting a new project at work, and we consider it a great opportunity to try Agile techniques in action. We had a brainstorming session about ideas we read in several books and articles, and came up with the concept that would suit us best: two-week iterations, each followed by a call with the clients, who would choose the stuff they want in the next iteration. I just have a few more questions, which we couldn't figure out ourselves.

    1. What to do in the first iteration? What, generally, should we do in the first few iterations if we start from scratch? Just give it a month of development to code the core of the application, or start with simple wire-frames with limited pre-coded functionality? What do clients usually want to see: shiny stuff that doesn't work, or ugly stuff that does work?

    2. How should we communicate with clients? Our initial thought is to set the process up something like this: is it a good idea to have a focal point on the client side, or is it better to communicate directly with all the clients to prevent miscommunication?

    Any thoughts are welcome! Thanks in advance.

    Read the article

  • How to eliminate tearing from animation?

    - by MusiGenesis
    I'm running an animation in a WinForms app at 18.66666... frames per second (it's synced with music at 140 BPM, which is why the frame rate is odd). Each cel of the animation is pre-calculated, and the animation is driven by a high-resolution multimedia timer.

    The animation itself is smooth, but I am seeing a significant amount of "tearing": artifacts that result from cels being caught partway through a screen refresh.

    When I take the set of cels rendered by my program, write them out to an AVI file, and then play the AVI file in Windows Media Player, I do not see any tearing at all. I assume WMP plays the file smoothly because it uses DirectX (or something else) and is able to synchronize the rendering with the screen's refresh activity. It's not changing the frame rate, as the animation stays in sync with the audio. Is this why WMP is able to render the animation without tearing, or am I missing something?

    Is there any way I can use DirectX (or something else) to make my program aware of where the current scan line is, and if so, can I use that information to eliminate tearing without actually using DirectX to display the cels? Or do I have to use DirectX fully for rendering to deal with this problem?

    Update: I forgot a detail. My app renders each cel onto a PictureBox using Graphics.DrawImage. Is this significantly slower than using BitBlt, such that I might eliminate at least some of the tearing by using BitBlt?
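
    On the WinForms side, one thing worth ruling out first (a sketch, and an assumption that the control isn't already double-buffered): GDI+ painting straight onto a PictureBox can flicker from partial paints, which looks similar to tearing. Double buffering cures that, though true tearing can only be removed by syncing to the vertical refresh, which GDI+ cannot do; that part really does need DirectX or similar.

        // Custom control with optimized double buffering enabled
        class CelCanvas : Control
        {
            public Image Cel { get; set; }  // the pre-rendered frame to show

            public CelCanvas()
            {
                SetStyle(ControlStyles.AllPaintingInWmPaint
                       | ControlStyles.UserPaint
                       | ControlStyles.OptimizedDoubleBuffer, true);
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                if (Cel != null)
                    e.Graphics.DrawImageUnscaled(Cel, 0, 0);  // avoids DrawImage's scaling path
            }
        }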

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really what's going on? I'm trying to improve build times, and by using pre-compiled headers I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on.

    If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).

    EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.

    EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools | Options | Projects and Solutions | Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".
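
    What's described matches the compiler batching several translation units into one cl.exe invocation: the front end parses each file (the "Compiling..." list), then the back end emits all the .obj files in a burst ("Generating code..."). The command line VS generates is roughly this shape; illustrative only, not copied from a real log:

        cl /c /Yu"stdafx.h" /Fo"Debug\" a.cpp b.cpp c.cpp d.cpp e.cpp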

    Read the article

  • MySQL default value based on view

    - by Jake
    Basically I have a bunch of views based on a simple discriminator column, e.g.:

        CREATE VIEW tablename AS SELECT * FROM tablename WHERE discrcolumn = "discriminator value"

    Upon inserting a new row into such a view, it should insert "discriminator value" into discrcolumn. I tried this, but apparently MySQL doesn't figure it out itself, as it throws the error "Field of view 'viewname' underlying table does not have a default value". The discriminator column is set to NOT NULL, of course.

    How do I mend this? Perhaps a pre-insert trigger?

    UPDATE: Triggers won't work on views, see below comment. Would it work to create a trigger on the table which uses a variable, and set that variable when establishing the connection? For each connection the value of that variable would be the same, but it could differ from other connections.

    EDIT: This appears to work. Setup:

        CREATE TRIGGER insert_[tablename] BEFORE INSERT ON [tablename]
        FOR EACH ROW SET NEW.[discrcolumn] = @variable

    Runtime:

        SET @variable = [discrvalue];
        INSERT INTO [viewname] ([columnlist]) VALUES ([values]);
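
    A complementary guard worth knowing (standard MySQL syntax, though untested against this exact schema): WITH CHECK OPTION makes the view reject writes that don't match its discriminator, so a forgotten SET @variable fails loudly instead of silently mis-filing the row. The trigger still supplies the value; this just polices it:

        CREATE VIEW viewname AS
            SELECT * FROM tablename WHERE discrcolumn = 'discriminator value'
            WITH CHECK OPTION;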

    Read the article

  • Cannot access response.body inside an after filter block in Sinatra 1.0

    - by Petr Vostrel
    I'm struggling with a strange issue. According to http://github.com/sinatra/sinatra (section Filters), a response object is available in after-filter blocks in Sinatra 1.0. While response.status is accessible correctly, I cannot see a non-empty response.body from my routes inside the after filter.

    I have this rackup file:

    config.ru:

        require 'app'
        run TestApp

    Then the Sinatra 1.0.b gem, installed using:

        gem install --pre sinatra

    And this is my tiny app with a single route:

    app.rb:

        require 'rubygems'
        require 'sinatra/base'

        class TestApp < Sinatra::Base
          set :root, File.dirname(__FILE__)

          get '/test' do
            'Some response'
          end

          after do
            halt 500 if response.empty?  # used 500 just for illustration
          end
        end

    Now, I would like to access the response inside the after filter. When I run this app and access the /test URL, I get a 500 response, as if the response were empty; but the response clearly is 'Some response'. Along with my request to /test, a separate request to /favicon.ico is issued by the browser, which returns 404, as there is neither a route nor a static file; but there I would expect the 500 status to be returned, since that response should be empty. In the console I can see that within the after filter, the response to /favicon.ico is something like 'Not found', and the response to /test really is empty, even though the route returns a response. What am I missing?

    Read the article

  • Python Error-Checking Standard Practice

    - by chaindriver
    Hi, I have a question regarding error checking in Python. Let's say I have a function that takes a file path as an input:

        def myFunction(filepath):
            infile = open(filepath)
            # etc etc...

    One possible precondition is that the file should exist. There are a few possible ways to check for this precondition, and I'm just wondering what's the best way to do it.

    i) Check with an if-statement:

        if not os.path.exists(filepath):
            raise IOError('File does not exist: %s' % filepath)

    This is the way I would usually do it, though the same IOError would be raised by Python anyway if the file does not exist, even if I don't raise it myself.

    ii) Use assert to check for the precondition:

        assert os.path.exists(filepath), 'File does not exist: %s' % filepath

    Using asserts seems to be the "standard" way of checking pre/postconditions, so I am tempted to use them. However, asserts are turned off when the -O flag is used during execution, which means this check might potentially be disabled, and that seems risky.

    iii) Don't handle the precondition at all, because if filepath does not exist, an exception is generated anyway, and the exception message is detailed enough for the user to know that the file does not exist.

    I'm just wondering which of the above is the standard practice I should use for my code.
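
    For completeness, there is a fourth idiom, arguably the most common in Python (EAFP: easier to ask forgiveness than permission). It avoids the race between the existence check and the open(), and attaches context in one place; a sketch:

        def myFunction(filepath):
            try:
                infile = open(filepath)
            except IOError as e:
                # add context, then propagate; callers still see an IOError
                raise IOError('Could not open %s: %s' % (filepath, e))
            try:
                data = infile.read()  # etc etc...
            finally:
                infile.close()
            return data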

    Read the article

  • Debugged Program Window Won't Close

    - by Marc Bernier
    Hi, I'm using VS 2008 on a 64-bit XP machine. I'm debugging a 32-bit C++ DLL via a console program. The DLL and EXE projects are contained in the same SLN so that I can modify the DLL as I test.

    Every once in a while I kill the program with Debug | Stop Debugging (Shift-F5). VS stops the program, but the console window stays open! If I'm sitting at a breakpoint and hit Shift-F5, it will terminate properly, but if the program is running full-tilt when I stop it, I often see this instead.

    The big problem is that I can't close these zombie windows. Using End Task in Task Manager does nothing (no message, no nothing). When I shut down the machine, it is unable to because of the orphans, and I have to resort to actually turning off the power.

    I think this is connected to having the DLL and EXE projects in the same SLN, as for months I worked on this project in two VS instances, one for the DLL and the other for the EXE, continually jumping back and forth between the windows as I worked. This problem never happened until I put the two projects into a single SLN. The single SLN works a lot better, but this anomaly is very irritating. Any ideas, anyone?

    UPDATE: After a bit of searching (here), I found that it appears to be related to one of the updates from last Tuesday (KB977165 or KB978037). Thank you, Microsoft, for your excellent pre-release testing.

    Read the article

  • How to check if two records have a self-referencing relation?

    - by Machine
    Consider the following schema with users and their collegues (friends):

    Users:

        User:
          columns:
            user_id:
              name: user_id as userId
              type: integer(8)
              unsigned: 1
              primary: true
              autoincrement: true
            first_name:
              name: first_name as firstName
              type: string(45)
              notnull: true
            last_name:
              name: last_name as lastName
              type: string(45)
              notnull: true
            email:
              type: string(45)
              notnull: true
              unique: true
          relations:
            Collegues:
              class: User
              local: invitor
              foreign: invitee
              refClass: CollegueStatus
              equal: true
              onDelete: CASCADE
              onUpdate: CASCADE

    Join table:

        CollegueStatus:
          columns:
            invitor:
              type: integer(8)
              unsigned: 1
              primary: true
            invitee:
              type: integer(8)
              unsigned: 1
              primary: true
            status:
              type: enum(8)
              values: [pending, accepted, denied]
              default: pending
              notnull: true

    Now, let's say I have two records: one for the user making the HTTP request (the logged-in user), and one for a user he wants to send a message to. I want to check if these users are collegues.

    Questions:

    1. Does Doctrine have any pre-built functionality to check if two records with self-relations are related?
    2. If not, how would you write a method to check this?
    3. Where would you put said method? (In the User class, the UserTable class, etc.)

    I could probably do something like this:

        public function areCollegues(User $user1, User $user2)
        {
            // Ensure we load collegues if $user1 was fetched with DQL that
            // doesn't load this relation
            $collegues = $user1->get('Collegues');

            $areCollegues = false;
            foreach ($collegues as $collegue) {
                if ($collegue['userId'] === $user2['userId']) {
                    $areCollegues = true;
                    break;
                }
            }
            return $areCollegues;
        }

    But this looks neither efficient nor pretty. I just feel that this should already be solved for self-referencing relations to be nice to use.
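
    A set-based alternative to the loop, sketched against Doctrine 1.x and untested (the equal: true flag means the pair may be stored in either direction, hence the two-sided WHERE):

        public function areColleguesOf(User $user1, User $user2)
        {
            $count = Doctrine_Query::create()
                ->from('CollegueStatus cs')
                ->where('(cs.invitor = ? AND cs.invitee = ?) OR (cs.invitor = ? AND cs.invitee = ?)',
                        array($user1['userId'], $user2['userId'],
                              $user2['userId'], $user1['userId']))
                ->andWhere("cs.status = 'accepted'")
                ->count();

            return $count > 0;
        }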

    Read the article

  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
    Being very new to Perl but not to dynamic languages, I'm a bit surprised at how awkward module management is. Sure, "cpan X" theoretically works, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment):

    1. At work (Windows 7) I have problems using cpan because our firewall makes FTP unusable.
    2. At home (Mac OS X) it does work.
    3. In the external environment (Linux CentOS) it worked after hours, because I don't have root access and had to configure cpan to operate as a non-root user.
    4. I've tried another server where I have access. Where the previous external environment is a VPS, so I have shell access, this one is a cheap shared hosting plan where I have no way to install modules beyond the pre-installed ones.

    At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there.

    Now, my perplexity is about Perl being a dynamic language. I've had all these kinds of problems while working, for example, with C, where I had to compile all the libraries for each platform; but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works.

    So, my questions: if Perl's modules are written in Perl, why does the copy/paste approach not work? If some modules (or parts of them) have to be compiled, how can I see on CPAN whether a module is Perl-only or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? That would solve my problem under Windows.
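
    On the Perl-only question: a distribution that needs a compiler ships .xs (or .c) sources, so peeking inside the tarball is a quick heuristic. Template Toolkit, for instance, carries an optional XS stash, which is one reason it resists a plain copy on Windows. A sketch:

        # any .xs or .c files mean a C compiler is involved;
        # none (pure .pm) usually means copying the module can work
        tar tzf Template-Toolkit-2.22.tar.gz | grep -E '\.(xs|c)$'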

    Read the article

  • Refreshing a JPA result list which is bound with a jTable

    - by exhuma
    First off, I have to write this on my mobile, but I'll try to format it properly.

    In NetBeans I created a jTable and bound its values to a JPA result list. This works great. The query contains a parameter which I set in the pre-create box of the "query result" component. So just before NetBeans creates the query result, I write this:

        myQuery.setParameter("year", "1997");

    This works fine. Now I have an event handler which is supposed to change the parameter and display the new values in the table. So I do this:

        myQuery.setParameter("year", "2005");
        myResultList.clear();
        myResultList.addAll(myQuery.getResultList());
        jTable1.updateUI();

    This works, but it feels wrong to me. Note: the result list is bound to the table. So I was hoping there was something like this:

        myQuery.setParameter("year", "2005");
        myResultList.refresh();

    Is there something like this?
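
    There is no refresh() as such, but if the binding was set up with Beans Binding (the NetBeans default), wrapping the list via ObservableCollections means mutating it notifies the table by itself, making the updateUI() call unnecessary. A sketch, with the entity class name invented:

        import java.util.List;
        import org.jdesktop.observablecollections.ObservableCollections;

        // once, where the list is created (NetBeans often generates this):
        List<Record> myResultList =
                ObservableCollections.observableList(myQuery.getResultList());

        // later, in the event handler; the observable wrapper fires the
        // change events, so the bound jTable repaints on its own:
        myQuery.setParameter("year", "2005");
        myResultList.clear();
        myResultList.addAll(myQuery.getResultList());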

    Read the article

  • Build a Visual Studio Project without access to referenced dlls

    - by David Reis
    I have a project which has a set of binary dependencies (assembly DLLs for which I do not have the source code). At runtime those dependencies are required pre-installed on the machine, and at compile time they are required in the source tree, e.g. in a lib folder.

    As I'm also making the source code available for this program, I would like to enable a simple download-and-build experience for it. Unfortunately I cannot redistribute the DLLs, and that complicates things, since VS won't link the project without access to the referenced DLLs.

    Is there any way to enable this project to be built and linked in the absence of the real referenced DLLs? Maybe there's a way to tell VS to link against an auto-generated stub of the DLL, so that it can rebuild without the original? Maybe there's a third-party tool that will do this? Any clues or best practices at all in this area?

    I realize a person must have access to the DLLs to run the code, so it makes sense that they could add them to the build process; I'm just trying to save them the pain of collecting all the DLLs and placing them in the lib folder manually.
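
    One low-tech piece of this (hand-edited MSBuild, so a sketch rather than a recipe) is to make each reference conditional on the binary being present; the project then loads and builds for people who have obtained the DLLs, and fails with an ordinary missing-reference error instead of a broken project for those who haven't:

        <ItemGroup>
          <Reference Include="ThirdParty.Licensed"
                     Condition="Exists('$(MSBuildProjectDirectory)\lib\ThirdParty.Licensed.dll')">
            <HintPath>lib\ThirdParty.Licensed.dll</HintPath>
          </Reference>
        </ItemGroup>

    For the stub idea, a reference assembly containing only the public surface would have to be generated and shipped in place of the real DLL; whether a third-party tool automates that well is exactly the open question above.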

    Read the article

  • Passing arguments and values from HTML to jQuery (events)

    - by Jaroslav Moravec
    What is the usual practice for passing arguments from HTML to a jQuery event function? For example, getting the id of a row from the db:

        <tr class="jq_killMe" id="thisItemId-id"> ... </tr>

    and jQuery:

        $(".jq_killMe").click(function () {
            var tmp = $(this).attr('id').split("-");
            var id = tmp[0];
            // ...
        });

    What's the best practice if I want to pass more than one argument? Is it better not to use jQuery? For example:

        <tr onclick="killMe('id')"> ... </tr>

    I didn't find the answer to my question; I'd be glad even for links. Thanks.

    Edit (pre-solution)

    So you suggested several methods to do that:

    1. Add custom attributes to the element (XHTML)
    2. Use the id attribute and parse it with a regex
    3. data-* attributes in HTML5
    4. Use hidden children elements

    I like the first solution, but... I would like to (and my employer requires me to) produce valid code. Here is a nice question with answers: http://stackoverflow.com/questions/994856/so-what-if-custom-html-attributes-arent-valid-xhtml

    The second is not as pretty as the first, but it is valid. So the compromise is... The third is the solution for the future, but there are a lot of CMSes where we have to use XHTML or HTML4 (and HTML5 adoption is a long process).
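
    For the record, option 3 degrades gracefully even outside an HTML5 doctype: browsers have always exposed unknown attributes through the DOM, so jQuery can read data-* attributes today, and only the validator complains. A sketch of passing two arguments that way:

        <tr class="jq_killMe" data-item-id="42" data-item-kind="invoice"> ... </tr>

        $(".jq_killMe").click(function () {
            // attr() works in any jQuery version; newer versions
            // also expose these via .data()
            var id   = $(this).attr("data-item-id");
            var kind = $(this).attr("data-item-kind");
            // ...
        });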

    Read the article

  • Using Ant how can I dex a directory of jars?

    - by cbeaudin
    I have a directory full of jars (Felix bundles). I want to iterate through all of these jars and create dex'd versions. My intent is to deploy each of these dex'd jars as a standalone apk, since they are bundles. Feel free to straighten me out if I am approaching this from the wrong direction.

    This first part is just to try to create a corresponding .dex file for each jar. However, when I run it I get a "no resources specified" error coming out of Ant. Is this the right approach, or is there a simpler approach to just take a jar as input and output a dex'd version of that jar? The ${file} is valid, as it is spitting out the name of the file in the echo command.

        <target name="dexBundles" description="Run dex on all the bundles">
            <taskdef resource="net/sf/antcontrib/antlib.xml"
                     classpath="${basedir}/libs/ant-contrib.jar" />
            <echo>Starting</echo>
            <for param="file">
                <path>
                    <fileset dir="${pre.dex.dir}">
                        <include name="**/*.jar" />
                    </fileset>
                </path>
                <sequential>
                    <echo message="@{file}" />
                    <echo>Converting jar file @{file} into ${post.dex.dir}/@{file}.class...</echo>
                    <apply executable="${dx}" failonerror="true" parallel="true" verbose="true">
                        <arg value="--dex" />
                        <arg value="--output=${post.dex.dir}/${file}.dex" />
                        <arg path="@{file}" />
                    </apply>
                </sequential>
            </for>
            <echo>Finished</echo>
        </target>
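
    "No resources specified" is <apply> complaining: unlike <exec>, it requires a nested fileset or resource collection of its own to map over, and inside <for> the per-iteration value is the @{file} attribute (the ${file} property form is never substituted). A sketch of the loop body using plain <exec> instead, untested:

        <sequential>
          <echo message="@{file}" />
          <exec executable="${dx}" failonerror="true">
            <arg value="--dex" />
            <!-- writes jarname.jar.dex alongside the source jar; adjust to taste -->
            <arg value="--output=@{file}.dex" />
            <arg path="@{file}" />
          </exec>
        </sequential>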

    Read the article

  • What are the common compliance standards for software products?

    - by Jay
    This is a very generic question about software products. I would like to know what compliance standards are applicable to any software product. I know that question gives away nothing, so here is an example of what I am referring to.

    CISecurity lists products certified by them as compliant with the standards published at their website, cisecurity.org. Compliance could be as simple as answering a questionnaire for your product and being approved by a third party like CISecurity, or it could apply to your whole organization, for instance PCI-DSS compliance.

    I would be very interested in knowing which standards the products you know/designed/created comply with.

    To give you the context behind this question: I am the developer of a data-masking tool. The tool helps mask on-screen HTML text in a banking web application using filters. So, for instance, if the bank application lists user information with an SSN, my product, when integrated with the banking product, automatically identifies the SSN pattern and masks it into a pre-defined format. My product marketing team wants more buzzwords like "compliance" to be able to sell it to more banking clients. Hence, understanding which compliance standards (by which I mean security compliance) apply to products is a key research item for me at this point.

    I appreciate all your help and suggestions.

    Read the article

  • Xcode 4.4 bundle version updates not picked up until subsequent build

    - by Mark Struzinski
    I'm probably missing something simple here. I am trying to auto-increment my build number in Xcode 4.4 only when archiving my application (in preparation for a TestFlight deployment). I have a working shell script that runs on the target and successfully updates the Info.plist file for each build. My build configuration for archiving is named 'Ad-Hoc'. Here is the script:

        if [ $CONFIGURATION == Ad-Hoc ]; then
            echo "Ad-Hoc build. Bumping build#..."
            plist=${PROJECT_DIR}/${INFOPLIST_FILE}
            buildnum=$(/usr/libexec/PlistBuddy -c "Print CFBundleVersion" "${plist}")
            if [[ "${buildnum}" == "" ]]; then
                echo "No build number in $plist"
                exit 2
            fi
            buildnum=$(expr $buildnum + 1)
            /usr/libexec/Plistbuddy -c "Set CFBundleVersion $buildnum" "${plist}"
            echo "Bumped build number to $buildnum"
        else
            echo $CONFIGURATION " build - Not bumping build number."
        fi

    This script updates the plist file appropriately, and the change is reflected in Xcode each time I archive. The problem is that the .ipa file that comes out of the archive process still shows the previous build number. I have tried the following solutions with no success:

    1. Clean before build
    2. Clean build folder before build
    3. Move the Run Script phase to directly after the Target Dependencies step in Build Phases
    4. Add the script as a Run Script pre-action in my scheme

    No matter what I do, when I look at the build log I see that the Info.plist file is being processed as one of the very first steps. It is always prior to my script running and updating the build number, which is, I assume, why the build number is never current in the .ipa file. Is there a way to force the Run Script phase to run before the Info.plist file is processed?
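
    The usual workaround for exactly this ordering is to patch the processed copy of Info.plist inside the build products as well; it exists by the time a late run-script phase executes, and it is the copy that ends up in the .ipa. A sketch using the standard Xcode variables, to be appended to the script above:

        # also patch the already-processed plist in the build products
        processed="${TARGET_BUILD_DIR}/${INFOPLIST_PATH}"
        if [ -f "$processed" ]; then
            /usr/libexec/PlistBuddy -c "Set :CFBundleVersion $buildnum" "$processed"
        fi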

    Read the article

  • How do I use price data in one table for a calculation that is stored in another table?

    - by shane
    I'm still learning PHP/MySQL, but have learned quite a bit thanks to the coders on Stack Overflow. I'm trying to set up a sort of room reservation system using two tables.

    SETUP:

    Room price table: has prices for a type of room a client may want to rent, as well as the dates (day of week) they wish to use it. Pricing varies per room and per day of the week. I've set up a different table for each room type, as each room type carries different pricing for each day of the week: there is an Alpha room table, a Bravo room table, etc. Within the Alpha table are headers for the days of the week, with pricing pre-entered into the rows.

    Client info table: has the name, address, date of room use, etc. for the specific client.

    EXAMPLE:

        Alpha room price table: Sun = $100; Mon = $200; Tue = $300; and so on.
        Bravo room price table: Sun = $100; Mon = $200; Tue = $300; and so on.
        Client data table: ClientName; date-of-room-use; address; day_subtotal; grand_total.

    QUESTION:

    I'm trying to find PHP code that will: look at the date of room use in the client data table; look up the associated cost for that date in the specific room pricing table; record that unit cost in the day_subtotal of the client data table; and sum a grand total in the grand_total row of the client data table (assuming the room may be used more than one day by the customer).

    I know there's something to do with a join, but I'm finding the concept difficult to grasp, and if someone can demonstrate using this example (see the sketch below), I think I will have a better understanding of how this sort of query works.

    Thank you all in advance for your suggestions or alternative approaches.
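
    A sketch of the join idea. The one-column-per-day layout is hard to join against, so this assumes the prices are reshaped into one row per room type per weekday (table and column names invented for illustration); the day's price then becomes a plain lookup that can be summed per client:

        -- hypothetical reshaped price table
        CREATE TABLE room_prices (
            room_type   VARCHAR(20)  NOT NULL,  -- 'alpha', 'bravo', ...
            day_of_week TINYINT      NOT NULL,  -- 1 = Sunday ... 7 = Saturday, as MySQL's DAYOFWEEK() returns
            price       DECIMAL(8,2) NOT NULL,
            PRIMARY KEY (room_type, day_of_week)
        );

        -- assumes one client row per day of use, with a room_type column;
        -- each joined row is the day subtotal, SUM gives the grand total
        SELECT c.client_name,
               SUM(p.price) AS grand_total
        FROM   client_data c
        JOIN   room_prices p
               ON  p.room_type   = c.room_type
               AND p.day_of_week = DAYOFWEEK(c.date_of_room_use)
        GROUP  BY c.client_name;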

    Read the article

  • iPhone table view checkmark accessory problem

    - by Pankaj Kainthla
    I have a table with 50 rows. I want to select particular rows with the checkmark accessory, but when I select some rows and scroll down the table, I see rows that are already checked. I know that table cells are reused, but I want to eliminate this problem. What can I do about it?

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return 1;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            // return [array count];
            return 50;
        }

        // Customize the appearance of table view cells.
        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            TableViewCell *cell = (TableViewCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[TableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
            }
            cell.textLabel.text = [NSString stringWithFormat:@"%i", [indexPath row]];
            return cell;
        }

        // Override to support row selection in the table view.
        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
            if (cell.accessoryType == UITableViewCellAccessoryNone) {
                cell.accessoryType = UITableViewCellAccessoryCheckmark;
            } else {
                cell.accessoryType = UITableViewCellAccessoryNone;
            }
        }
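
    The standard cure is to keep the selection state in the controller rather than in the cells, and re-apply it on every dequeue. A sketch, assuming an NSMutableSet property named selectedRows on the controller:

        // in cellForRowAtIndexPath:, after configuring the cell:
        cell.accessoryType = [self.selectedRows containsObject:indexPath]
                             ? UITableViewCellAccessoryCheckmark
                             : UITableViewCellAccessoryNone;

        // didSelectRowAtIndexPath: toggles the model, then redraws the row
        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            if ([self.selectedRows containsObject:indexPath]) {
                [self.selectedRows removeObject:indexPath];
            } else {
                [self.selectedRows addObject:indexPath];
            }
            [tableView reloadRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                             withRowAnimation:UITableViewRowAnimationNone];
        }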

    Read the article

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table with the following columns:

        * [Key]
        * [Value1]
        * ...
        * [ValueN]
        * [StartDate]
        * [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date on which the values for a given [Key] become known to the system, so our primary key is composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] equal to the [StartDate] of the new value; a simple update based on a join.

    So if we always want the most recent records for a given [Key], we know we can create a clustered index that is:

        * [ExpiryDate] ASC
        * [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by ordering first on [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage.

    However... what if we want a point-in-time snapshot of all [Key]s at a given time? The entirety of the keyspace isn't all updated at the same time, so for a given point in time the window between [StartDate] and [ExpiryDate] is variable, and ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which [StartDate] is greater than your chosen point in time.

    In essence: in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads needed to retrieve the values for all keys as of a given point in time? I realize I can at least maximize IO by partitioning the table by [Key], but this certainly isn't ideal. Alternatively, is there a different type of slowly-changing dimension that solves this problem in a more performant manner?
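
    To make the trade-off concrete, here is the clustered index being described and the point-in-time predicate it has to serve (T-SQL, illustrative names):

        -- current-rows access: live rows cluster together at the sentinel date
        CREATE CLUSTERED INDEX IX_Dim_ExpiryKey
            ON DimTable ([ExpiryDate] ASC, [Key] ASC);

        -- point-in-time snapshot: the [StartDate]..[ExpiryDate] window varies
        -- per key, so no single sort order makes all matches contiguous
        SELECT [Key], [Value1] -- , ..., [ValueN]
        FROM   DimTable
        WHERE  [StartDate]  <= @PointInTime
          AND  [ExpiryDate] >  @PointInTime;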

    Read the article
