Search Results

Search found 8589 results on 344 pages for 'pre production'.


  • Unity Framework constructor parameters in MVC

    - by ubersteve
    I have an ASP.NET MVC3 site that I want to be able to use different types of email service, depending on how busy the site is. Consider the following:

        public interface IEmailService
        {
            void SendEmail(MailMessage mailMessage);
        }

        public class LocalEmailService : IEmailService
        {
            public LocalEmailService()
            {
                // no setup required
            }

            public void SendEmail(MailMessage mailMessage)
            {
                // send email via local smtp server, write it to a text file, whatever
            }
        }

        public class BetterEmailService : IEmailService
        {
            public BetterEmailService(string smtpServer, string portNumber, string username, string password)
            {
                // initialize the object with the parameters
            }

            public void SendEmail(MailMessage mailMessage)
            {
                // actually send the email
            }
        }

    Whilst the site is in development, all of my controllers will send emails via the LocalEmailService; when the site is in production, they will use the BetterEmailService. My question is twofold:

    1) How exactly do I pass the BetterEmailService constructor parameters? Is it something like this (from ~/Bootstrapper.cs)?

        private static IUnityContainer BuildUnityContainer()
        {
            var container = new UnityContainer();
            container.RegisterType<IEmailService, BetterEmailService>("server name", "port", "username", "password");
            return container;
        }

    2) Is there a better way of doing that - i.e. putting those keys in the web.config or another configuration file, so that the site would not need to be recompiled to switch which email service it was using?

    Many thanks!
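
    One way to cover both parts of the question is a hedged sketch along these lines (not from the original post): Unity supplies constructor arguments through an InjectionConstructor, and the values can come from appSettings in web.config so that switching services needs no recompile. The appSettings key names below are made up for illustration.

        // Sketch only: InjectionConstructor supplies the constructor arguments,
        // and ConfigurationManager pulls them from web.config so they can be
        // changed without recompiling. Key names are illustrative assumptions.
        using System.Configuration;
        using Microsoft.Practices.Unity;

        public static class Bootstrapper
        {
            public static IUnityContainer BuildUnityContainer()
            {
                var container = new UnityContainer();
                container.RegisterType<IEmailService, BetterEmailService>(
                    new InjectionConstructor(
                        ConfigurationManager.AppSettings["SmtpServer"],
                        ConfigurationManager.AppSettings["SmtpPort"],
                        ConfigurationManager.AppSettings["SmtpUsername"],
                        ConfigurationManager.AppSettings["SmtpPassword"]));
                return container;
            }
        }

    With the values in appSettings, switching between LocalEmailService and BetterEmailService per environment becomes a config transform rather than a code change.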


  • Unable to turn off notice errors in PHP 5.3.2

    - by knkk
    Hi everyone, I recently migrated to PHP 5.3.2, and realized that I am now unable to turn off notice errors on my site. I went to php.ini, to these lines:

        ; Common Values:
        ;   E_ALL & ~E_NOTICE  (Show all errors, except for notices and coding standards warnings.)
        ;   E_ALL & ~E_NOTICE | E_STRICT  (Show all errors, except for notices)
        ;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)
        ;   E_ALL | E_STRICT  (Show all errors, warnings and notices including coding standards.)
        ; Default Value: E_ALL & ~E_NOTICE
        ; Development Value: E_ALL | E_STRICT
        ; Production Value: E_ALL & ~E_DEPRECATED
        ; http://php.net/error-reporting
        error_reporting = E_ALL & ~E_NOTICE

    I've tried setting everything (and I restart Apache each time), but I am unable to get rid of notices. The only way I'm able to get rid of notice errors is by setting:

        display_errors = Off

    That is, of course, not something I can do, since I need to see errors to fix them, and I would like to see errors on the webpage that I am coding rather than log them somewhere. Can someone help? Is this a bug in PHP 5.3.2 or something I am doing wrong? Thank you very much for your time!

    P.S. Also, would anyone know how I can get PHP 5.3.2 to support the .php3 extension?


  • Qt QSslError being signaled with the error code set to NoError

    - by Nantucket
    My Problem

    I compiled OpenSSL into Qt to enable OpenSSL support. Everything appeared to go correctly in the compile. However, when I try to use the official HTTP example application that can be found here, every time I try to download an https page it will signal two QSslErrors, each with contents NoError. The types of QSslError, including NoError, are documented here, poorly. There is no explanation of why they even included an error type called NoError, or what it means. Bizarrely, the NoError error code seems to be true, as it downloads the remote https document perfectly even while signaling the error. Does anyone have any idea what this means and what could possibly be causing it?

    Optional Background Reading

    Here is the relevant part of the code from the example app (this is connected to the network connection's sslErrors signal by the constructor):

        void HttpWindow::sslErrors(QNetworkReply*, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }

            if (QMessageBox::warning(this, tr("HTTP"),
                                     tr("One or more SSL errors has occurred: %1").arg(errorString),
                                     QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore) {
                reply->ignoreSslErrors();
            }
        }

    I have tried the old version of this example, and it produced the same result. I have tried OpenSSL 1.0.0a and 0.9.8o. I have tried compiling OpenSSL myself, and I have tried using pre-compiled versions of OpenSSL from the net. All produce the same result. If this were my first time using Qt with SSL, I would almost think this is the intended result (even though their example application is popping up error warning message windows), if not for the fact that last time I played with Qt, using what would now be an old version of Qt with an old version of SSL, I distinctly remember everything working fine with no error windows. My system is running Windows 7 x64.


  • Regular input in ASP.NET

    - by coffeeaddict
    Here's an example of a regular standard HTML input for my radio button list:

        <label><input type="radio" name="rbRSelectionGroup" checked value="0" />None</label>
        <asp:Repeater ID="rptRsOptions" runat="server">
            <ItemTemplate>
                <div>
                    <label><input type="radio" name="rbRSelectionGroup" value='<%# ((RItem)Container.DataItem).Id %>' /><%# ((RItem)Container.DataItem).Name %></label>
                </div>
            </ItemTemplate>
        </asp:Repeater>

    (I removed some stuff for this thread; one change is that I substituted an "R" for a name that I do not want to expose here, just as an FYI.) Now, I would assume that this would or should happen:

    1. The page loads the first time, and the None radio button is checked / defaulted
    2. I go and select a different radio button in the list
    3. I do an F5 refresh in my browser
    4. The None radio button is pre-selected again after the page comes back from the refresh

    ...but #4 is not happening. It's retaining the radio button that I selected in #2, and I don't know why. I mean, regular HTML is stateless, so what could be holding this value? I want this to act like a normal input button. I know the question of "why not use an ASP.NET control" will come up. Well, there are a few reasons:

    1. The stupid RadioButtonList bug that everyone knows about
    2. I just want to brush up more on standard input tags
    3. We are not moving to MVC, so this is as close as I'll get, and it's OK because the rest of the team is on par with having mixed ASP.NET controls with standard HTML controls in our pages

    Anyway, my main question here is that I'm surprised it's retaining the change in selection after postback.


  • Downloading a file from a PHP page in C#

    - by FoxyShadoww
    Okay, we have a PHP script that creates a download link for a file, and we want to download that file via C#. This works fine, with progress etc., but when the PHP page gives an error, the program downloads the error page and saves it as the requested file. Here is the code we have at the moment.

    PHP code:

        <?php
        $path = 'upload/test.rar';
        if (file_exists($path)) {
            $mm_type = "application/octet-stream";
            header("Pragma: public");
            header("Expires: 0");
            header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
            header("Cache-Control: public");
            header("Content-Description: File Transfer");
            header("Content-Type: " . $mm_type);
            header("Content-Length: " . (string)(filesize($path)));
            header('Content-Disposition: attachment; filename="' . basename($path) . '"');
            header("Content-Transfer-Encoding: binary\n");
            readfile($path);
            exit();
        } else {
            print 'Sorry, we could not find requested download file.';
        }
        ?>

    C# code:

        private void btnDownload_Click(object sender, EventArgs e)
        {
            string url = "http://***.com/download.php";
            WebClient client = new WebClient();
            client.DownloadFileCompleted += new AsyncCompletedEventHandler(client_DownloadFileCompleted);
            client.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);
            client.DownloadFileAsync(new Uri(url), @"c:\temp\test.rar");
        }

        private void ProgressChanged(object sender, DownloadProgressChangedEventArgs e)
        {
            progressBar.Value = e.ProgressPercentage;
        }

        void client_DownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
        {
            // (the original snippet had a garbled argument here)
            MessageBox.Show("Download complete");
        }
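
    One possible direction, sketched below on stated assumptions rather than taken from the post: inspect the response headers before saving, and only write the body to disk when the server declares a binary download. The error page comes back as text/html, while the real file is served as application/octet-stream by the PHP headers above.

        // Hedged sketch: probe the Content-Type before committing the body to disk.
        // The URL and target path are the poster's placeholders.
        using System;
        using System.IO;
        using System.Net;

        class DownloadChecker
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://***.com/download.php");
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    // Real file: application/octet-stream. Error page: text/html.
                    if (response.ContentType.StartsWith("application/octet-stream"))
                    {
                        using (var body = response.GetResponseStream())
                        using (var file = File.Create(@"c:\temp\test.rar"))
                        {
                            body.CopyTo(file);
                        }
                    }
                    else
                    {
                        Console.WriteLine("Server returned an error page, not the file.");
                    }
                }
            }
        }

    Having the PHP script also send a proper 4xx status code for missing files would let the client catch a WebException instead of parsing content types.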


  • How to test method call order with Moq

    - by Finglas
    At the moment I have:

        [Test]
        public void DrawDrawsAllScreensInTheReverseOrderOfTheStack()
        {
            // Arrange.
            var screenMockOne = new Mock<IScreen>();
            var screenMockTwo = new Mock<IScreen>();

            var screens = new List<IScreen>();
            screens.Add(screenMockOne.Object);
            screens.Add(screenMockTwo.Object);

            var stackOfScreensMock = new Mock<IScreenStack>();
            stackOfScreensMock.Setup(s => s.ToArray()).Returns(screens.ToArray());

            var screenManager = new ScreenManager(stackOfScreensMock.Object);

            // Act.
            screenManager.Draw(new Mock<GameTime>().Object);

            // Assert.
            screenMockOne.Verify(smo => smo.Draw(It.IsAny<GameTime>()), Times.Once(),
                "Draw was not called on screen mock one");
            screenMockTwo.Verify(smo => smo.Draw(It.IsAny<GameTime>()), Times.Once(),
                "Draw was not called on screen mock two");
        }

    But this test does not care about the order in which the production code draws my objects: I could draw screen one first or screen two first and it would still pass. However, it should matter, as the draw order is important. How do you (using Moq) ensure methods are called in a certain order?

    Edit: I got rid of that test. The Draw method has been removed from my unit tests; I'll just have to manually test that it works. The reversing of the order, though, was taken into a separate test class where it was tested, so it's not all bad. Thanks for the link about the feature they are looking into. I sure hope it gets added soon, very handy.
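
    For reference, one version-agnostic workaround (my own sketch, not from the post; newer Moq releases also offer a MockSequence worth checking): record the order of Draw calls with mock callbacks, then assert on the recorded sequence. IScreen, ScreenManager and GameTime are the poster's types.

        // Hedged sketch: capture call order via Callback, then assert on it.
        var callOrder = new List<string>();
        screenMockOne.Setup(s => s.Draw(It.IsAny<GameTime>()))
                     .Callback(() => callOrder.Add("one"));
        screenMockTwo.Setup(s => s.Draw(It.IsAny<GameTime>()))
                     .Callback(() => callOrder.Add("two"));

        screenManager.Draw(new Mock<GameTime>().Object);

        // Screens pushed one-then-two should draw in reverse order: two, then one.
        CollectionAssert.AreEqual(new[] { "two", "one" }, callOrder);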


  • Max Daily Budget exceeded and Billing Status "Changing Daily Budget"

    - by draftpik
    We've exceeded the Max Daily Budget for our app, but we can't increase the budget due to a serious flaw in Google's billing system. Google App Engine and Google Wallet do not have very capable support for multiple sign-in. As a result, when I went to change the budget, it used the wrong Google Wallet account (a different Google Account I was signed in as). I had to go back and try again, but now our GAE app shows the following status:

        Billing Status: Changing Daily Budget

        Your account has been locked while we process your budget changes. If you were
        redirected to Google Checkout but did not complete the process, your settings will
        remain unchanged. (You will be able to make changes to your budget settings again
        once the outstanding payment is processed.)

    Now I'm completely prevented from making any billing changes, our app is shut off (over quota), and there is NOTHING I can do to fix it. This is a seriously fundamental flaw in App Engine's billing system and Google Wallet integration. Has anyone run into this before? Is there a workaround anyone is aware of? Right now, our production app is completely down thanks to this issue. Any help you can offer would be greatly appreciated. If you're from Google and you might be able to help on the backend, our app id is "nhldraftpik". Thanks! Brian


  • How can I handle dynamic calculated attributes in a model in Django?

    - by bullfish
    In Django I calculate the breadcrumb (a list of fathers) for a geographical object. Since it is not going to change very often, I am thinking of pre-calculating it once the object is saved or initialized.

    1) What would be better? Which solution would have better performance: to calculate it in __init__ or to calculate it when the object is saved (the object takes about 500-2000 characters in the DB)?

    2) I tried to overwrite the __init__ and save() methods, but I don't know how to use attributes of the just-saved object. Accessing *args, **kwargs did not work. How can I access them? Do I have to save, access the father, and then save again?

    3) If I decide to save the breadcrumb, what's the best way to do it? I used http://www.djangosnippets.org/snippets/1694/ and have crumb = PickledObjectField().

    This is the method that calculates the attribute crumb():

        def _breadcrumb(self):
            breadcrumb = []
            x = self
            while True:
                x = x.father
                try:
                    if hasattr(x, 'country'):
                        breadcrumb.append(x.country)
                    elif hasattr(x, 'region'):
                        breadcrumb.append(x.region)
                    elif hasattr(x, 'city'):
                        breadcrumb.append(x.city)
                    else:
                        break
                except:
                    break
            breadcrumb.reverse()
            return breadcrumb

    This is my save method:

        def save(self, *args, **kwargs):
            # how can I access the father of the object?
            father = self.father          # does obviously not work
            father = kwargs['father']     # does not work either
            # the breadcrumb gets calculated here
            self.crumb = self._breadcrumb(father)
            super(GeoObject, self).save(*args, **kwargs)

    Please help me out. I have been working on this for days now. Thank you.


  • How are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts; the first is the most important and concerns now:

    • Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow?
    • Even where you're not using any new features, how have they affected your current choices?
    • What new features are you using now, either in production or otherwise?

    The second part is a follow-up, concerning the new standard once it is final:

    • Do you expect to use it immediately? What are you doing to prepare for C++0x, other than as listed for the previous questions?
    • Obviously, compiler support must be there, but there are still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption?

    Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating: even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.


  • What happens when value types are created?

    - by Bob
    I'm developing a game using XNA and C# and was attempting to avoid calling new struct() type code each frame, as I thought it would freak the GC out. "But wait," I said to myself, "struct is a value type. The GC shouldn't get called then, right?" Well, that's why I'm asking here. I only have a very vague idea of what happens to value types.

    If I create a new struct within a function call, is the struct being created on the stack? Will it simply get pushed and popped, with performance not taking a hit? Further, would there be some memory limit or performance implications if, say, I need to create many instances in a single call? Take, for instance, this code:

        spriteBatch.Draw(tex, new Rectangle(x, y, width, height), Color.White);

    Rectangle in this case is a struct. What happens when that new Rectangle is created? What are the implications of having to repeat that line many times (say, thousands of times)? Is this Rectangle created, a copy sent to the Draw method, and then discarded (meaning no memory getting eaten up the more Draw is called in that manner in the same function)?

    P.S. I know this may be premature optimization, but I'm mostly curious and wish to have a better understanding of what is happening.
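
    For what it's worth, a short illustration of the two patterns being weighed (my own sketch, not from the post): passing a fresh struct each call versus reusing a cached one. Both avoid heap allocation, because a struct passed by value is copied onto the stack (or inline into its container) and is never tracked by the GC. The Rectangle below is a stand-in for XNA's Microsoft.Xna.Framework.Rectangle.

        // Sketch: neither variant below creates garbage, since Rectangle is a value type.
        struct Rectangle
        {
            public int X, Y, Width, Height;
            public Rectangle(int x, int y, int width, int height)
            {
                X = x; Y = y; Width = width; Height = height;
            }
        }

        class Sprite
        {
            // Variant 2: cache the struct in a field and mutate it each frame.
            private Rectangle destination = new Rectangle(0, 0, 64, 64);

            public void Draw(int x, int y)
            {
                // Variant 1: a brand-new struct per call; it is copied into the
                // callee's parameter and disappears when the call returns.
                DrawInternal(new Rectangle(x, y, 64, 64));

                // Variant 2: update the cached copy; same cost, no GC either way.
                destination.X = x;
                destination.Y = y;
                DrawInternal(destination);
            }

            private void DrawInternal(Rectangle dest) { /* render using dest */ }
        }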


  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request.

    My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client.

    Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            // Write the message to a temporary stream so we can send it all-at-once
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            // Write the serialized message to the stream.
            // The BinaryWriter is a little redundant in this
            // simplified example, but is here because
            // the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
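
    One conservative answer, sketched below rather than taken from the post: Stream.Write makes no cross-thread atomicity guarantee, so serializing writers with a lock is the safe default. The sendLock field here is an assumed addition, not part of the original code.

        // Hedged sketch: serialize all writers onto the shared NetworkStream so a
        // PING can never interleave with a large response mid-write.
        private readonly object sendLock = new object();

        public void Send(Message message)
        {
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);
            byte[] data = tempStream.ToArray();

            lock (sendLock)
            {
                // Only one thread at a time may write a framed message.
                BinaryWriter bw = new BinaryWriter(stream);
                bw.Write(data, 0, data.Length);
                bw.Flush();
            }
        }

    The serialization into tempStream can stay outside the lock; only the write to the shared stream needs to be exclusive, which keeps the critical section short.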


  • Manually filling opcode cache for entire app using apc_compile_file, then switching to new release.

    - by Ben
    Does anyone have a great system, or any ideas, for doing as the title says? I want to switch the production version of a web app - written in PHP and served by Apache - from release 1234 to release 1235, but before that happens, have all files already in the opcode cache (APC). Then, after the switch, remove the old cache entries for files from release 1234.

    As far as I can think of, there are three easy ways of atomically switching from one version to the next:

    1. Have a symbolic link, for example /live, that is always the document root but is changed to point from one version to the next.
    2. Similarly, have a directory /live that is always the document root, but use mv live oldversion && mv newversion live to switch to the new version.
    3. Edit the Apache configuration to change the document root to newversion, then restart Apache.

    I think it is preferable not to have to do 3, but I can't think of any way to precompile all PHP files AND use 1 or 2 to switch release. So can someone either convince me it's okay to rely on option 3, or tell me how to work with 1 or 2, or reveal some other option I am not thinking of?


  • Strange build issue using Flex Mojo. Looking for troubleshooting suggestions.

    - by WeeJavaDude
    I have run into a strange issue and I was hoping for some suggestions on how to attack the problem. Here is the environment:

    1) We develop locally using Flex Builder.
    2) We use QuickBuild with FlexMojo 3.4.2 for test builds and production.
    3) In both cases we don't believe optimization is enabled.

    What we are seeing is some strange behavior relating to the Ctrl-Enter key when testing on IE, only in our test environment and not locally. By copying some files over locally, I have narrowed the issue down to differences in the swf files: we do see a difference in the size of the swf files built in our test environment vs. those built locally.

    A couple of things would help me in troubleshooting:

    1) Is there a way to know what exactly is in the SWF file? What SWCs are included?
    2) How does one compare compile settings between a Maven mojo configuration and the Flex IDE environment?

    Any thoughts or opinions would be very helpful.


  • "You have already activated" message even when using bundle exec

    - by juanpastas
    I am installing the gems in my Gemfile into the shared path, as Capistrano does by default, and when I run:

        bundle exec rake assets:precompile RAILS_ENV=production

    I get:

        You have already activated rake 0.9.2.2, but your Gemfile requires rake 10.0.4.
        Using bundle exec may solve this.

    Note that:

        cat Gemfile.lock | grep rake

    returns:

        rake (>= 0.8.7)
        rake (10.0.4)

    This is my gem environment output:

        - RUBYGEMS VERSION: 1.8.24
        - RUBY VERSION: 1.9.3 (2013-06-27 patchlevel 448) [x86_64-linux]
        - INSTALLATION DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/
        - RUBY EXECUTABLE: /opt/bitnami/ruby/bin/ruby
        - EXECUTABLE DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/bin
        - RUBYGEMS PLATFORMS:
          - ruby
          - x86_64-linux
        - GEM PATHS:
          - /home/bitnami/my_app/shared/bundle/ruby/1.9.1/
        - GEM CONFIGURATION:
          - :update_sources => true
          - :verbose => true
          - :benchmark => false
          - :backtrace => false
          - :bulk_threshold => 1000
          - "gemhome" => "/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"
          - "gempath" => ["/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"]
        - REMOTE SOURCES:
          - http://rubygems.org/

    Update: which -a rake returns:

        /opt/bitnami/rvm/bin/rake
        /opt/bitnami/ruby/bin/rake

    Update 2: I tried giving the full path to rake, but I get the same problem.


  • Is it possible to "trick" PrintScreen, swap out the contents of my form with something else before capture?

    - by Lasse V. Karlsen
    I have a bit of a challenge. In an earlier version of our product, we had an error message window (a last resort, for unhandled exceptions) that showed the exception message, type, stack trace + various bits and pieces of information. This window was printscreen-friendly: if the user simply did a printscreen capture and emailed us the screenshot, we had almost everything we needed to start diagnosing the problem.

    However, the form was deemed too technical and "scary" for normal users, so it was toned down to a more friendly one, still showing the error message, but not the stack trace and some of the more gory details that I'd still like to get. In addition, the form gained the capability of emailing us a text file containing everything we had before + lots of other technical details as well, basically everything we need. However, users still use PrintScreen to capture the contents of the form and email that back to us, which means I now have a less than optimal amount of information to go on.

    So I was wondering: would it be possible for me to pre-render a bitmap the same size as my form, with everything I need on it, detect that PrintScreen was hit, quickly swap out the form contents with my bitmap before capture, and then swap back again afterwards?

    And before you say "just educate the users": no, that's not going to work. These are not our users; they're users at our customers' sites, so we really cannot tell them to wisen up all that much.

    Or, barring this, is there a way for me to detect PrintScreen, tell Windows to ignore it, and instead react to it by dumping the aforementioned pre-rendered bitmap onto the clipboard, ready for placing into an email?

    The code is C# 3.0 in .NET 3.5, if it matters, but pointers for something to look at/for are good enough.
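
    On the second idea, here is a rough sketch of my own, with the usual caveats: in WinForms the PrintScreen key typically surfaces as a KeyUp rather than a KeyDown, and by the time the event fires Windows has already placed its screenshot on the clipboard, so the handler can simply overwrite the clipboard with the pre-rendered diagnostic bitmap. The errorBitmap field is an assumed input, built elsewhere with the full technical details.

        // Hedged sketch: replace the clipboard contents with a pre-rendered
        // diagnostics bitmap whenever PrintScreen is released over the form.
        using System.Drawing;
        using System.Windows.Forms;

        public class FriendlyErrorForm : Form
        {
            private readonly Bitmap errorBitmap;   // pre-rendered technical details (assumed)

            public FriendlyErrorForm(Bitmap preRendered)
            {
                errorBitmap = preRendered;
                KeyPreview = true;                 // see keys before child controls do
                KeyUp += OnKeyUp;                  // PrintScreen usually fires KeyUp only
            }

            private void OnKeyUp(object sender, KeyEventArgs e)
            {
                if (e.KeyCode == Keys.PrintScreen)
                {
                    // Windows has already captured the screen; overwrite that capture.
                    Clipboard.SetImage(errorBitmap);
                }
            }
        }

    This only covers captures taken while the form has focus; a global keyboard hook would be needed to catch PrintScreen regardless of focus.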


  • Set "Start With" value for Oracle sequence dynamically

    - by Allan
    I'm trying to create a release script that can be deployed on multiple databases, but where the data can be merged back together at a later date. The obvious way to handle this is to set the sequence numbers for production data sufficiently high in subsequent deployments to prevent collisions. The problem is in coming up with a release script that will accept the environment number and set the "Start With" value of the sequences appropriately.

    Ideally, I'd like to use something like this:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '
        --[more scripting]
        CREATE SEQUENCE seq1 START WITH &EnvironNum*100000;
        --[more scripting]

    This doesn't work, because you can't evaluate a numeric expression in DDL.

    Another option is to create the sequences using dynamic SQL via PL/SQL:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '
        --[more scripting]
        EXEC execute immediate 'CREATE SEQUENCE seq1 START WITH ' || &EnvironNum*100000;
        --[more scripting]

    However, I'd prefer to avoid this solution, as I generally try to avoid issuing DDL in PL/SQL.

    Finally, the third option I've come up with is simply to accept the "Start With" value as a substitution variable, instead of the environment number. Does anyone have a better thought on how to go about this?


  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous-integration MSBuild script which copies a database in an on-premise SQL Server 2012 instance to SQL Azure. Easy, right?

    Methods

    After a fair bit of research I've come across the following methods:

    1. Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.
    2. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes

    Using whichever method, the process could be either:

    1. Export from on-premise SQL Server 2012 to a local BACPAC
    2. Upload the BACPAC to blob storage
    3. Import the BACPAC to SQL Azure via the Hosted DAC

    Or:

    1. Export from on-premise SQL Server 2012 to a local BACPAC
    2. Import the BACPAC to SQL Azure via the Client DAC

    Question

    All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see: is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on the local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI, it must be a single command on the command line somewhere??


  • PropertyPlaceholderConfigurer vs Filters -- Spring Beans

    - by John
    Hi there. I've got a question regarding the difference between PropertyPlaceholderConfigurer (org.springframework.beans.factory.config.PropertyPlaceholderConfigurer) and normal filters defined in my pom.xml. I've been looking at examples, and it seems that even though filters are defined and marked to be active by default in the pom.xml, they still make use of PropertyPlaceholderConfigurer in Spring's applicationContext.xml. This means that the pom.xml has a reference to a filter-LOCAL.properties while applicationContext.xml has a reference to application.properties, and they both contain the same settings. Why is that? Is that how it is supposed to be done?

    I'm able to run the goal mvn jetty:run without the application.properties present, but if I add settings to the application.properties that differ from the filter-LOCAL.properties, they don't seem to override. Here's an example of what I mean.

    pom.xml:

        <profiles>
            <profile>
                <id>LOCAL</id>
                <activation>
                    <activeByDefault>true</activeByDefault>
                </activation>
                <properties>
                    <env>LOCAL</env>
                </properties>
            </profile>
        </profiles>

    applicationContext.xml:

        <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            <property name="locations">
                <list>
                    <value>classpath:application.properties</value>
                </list>
            </property>
            <property name="ignoreResourceNotFound" value="true"/>
        </bean>

        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName" value="${jdbc.driver}"/>
            <property name="url" value="${jdbc.url}"/>
            <property name="username" value="${jdbc.username}"/>
            <property name="password" value="${jdbc.password}"/>
        </bean>

    An example of the content of application.properties and filters-LOCAL.properties:

        jdbc.driver=org.postgresql.Driver
        jdbc.url=jdbc:postgresql://localhost/shoutbox_dev
        jdbc.username=tester
        jdbc.password=tester

    Can I remove the propertyConfigurer from the applicationContext, create a PROD filter, and disregard the application.properties file, or will that give me issues when deploying to the production server?


  • Most cost effective way to target multiple mobile platforms

    - by niidto
    Hi, I have been given the task of speccing a mobile application which will need to run on approx. 1000 devices. These devices already exist, and consist of iPhones, BlackBerrys, Androids, Windows Mobile devices and netbooks. The application will have simple reporting capability and a collection of forms.

    The obvious solution would be to develop some browser-based solution, although given the occasionally-connected nature of the devices, there's a potential for data to get lost / not saved. So instead of creating a complex application for each platform, I was thinking we could build what is effectively a form generator with basic offline storage capability (text files), designed to run on each device. The device would generate a form based on, for example, an XML file that it could request from a server somewhere. This would result in minimal specialist development costs and the ability to run most of the logic from the server end, with the devices being dumb clients that render forms and upload the data when there is an available connection.

    Anyway, my question, summarised, is: how have you made the decision on supporting multiple devices for your application? Is this always an unavoidable problem, and you just have to make the call to support 1 or 2, or pay for developers to write code for each platform, or alternatively supply pre-installed devices to the company? Many thanks, James


  • How to eliminate tearing from animation?

    - by MusiGenesis
    I'm running an animation in a WinForms app at 18.66666... frames per second (it's synced with music at 140 BPM, which is why the frame rate is weird). Each cel of the animation is pre-calculated, and the animation is driven by a high-resolution multimedia timer. The animation itself is smooth, but I am seeing a significant amount of "tearing", or artifacts that result from cels being caught partway through a screen refresh.

    When I take the set of cels rendered by my program and write them out to an AVI file, and then play the AVI file in Windows Media Player, I do not see any tearing at all. I assume that WMP plays the file smoothly because it uses DirectX (or something else) and is able to synchronize the rendering with the screen's refresh activity. It's not changing the frame rate, as the animation stays in sync with the audio.

    Is this why WMP is able to render the animation without tearing, or am I missing something? Is there any way I can use DirectX (or something else) in order to enable my program to be aware of where the current scan line is, and if so, is there any way I can use that information to eliminate tearing without actually using DirectX for displaying the cels? Or do I have to fully use DirectX for rendering in order to deal with this problem?

    Update: forgot a detail. My app renders each cel onto a PictureBox using Graphics.DrawImage. Is this significantly slower than using BitBlt, such that I might eliminate at least some of the tearing by using BitBlt?
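
    Not from the post, but a common first mitigation worth noting here: enable double buffering on the control hosting the animation, so each cel is composed off-screen and blitted in one operation. This does not synchronize with the monitor's vertical refresh (only a presentation API such as DirectX with vsync can), but it often reduces visible artifacts. A minimal sketch:

        // Hedged sketch: a double-buffered panel for drawing pre-rendered cels.
        // Double buffering removes flicker from multi-step painting; true vsync
        // still requires a presentation API such as DirectX.
        using System.Drawing;
        using System.Windows.Forms;

        class AnimationPanel : Panel
        {
            public Image CurrentCel { get; set; }

            public AnimationPanel()
            {
                SetStyle(ControlStyles.OptimizedDoubleBuffer
                       | ControlStyles.AllPaintingInWmPaint
                       | ControlStyles.UserPaint, true);
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                if (CurrentCel != null)
                    e.Graphics.DrawImage(CurrentCel, ClientRectangle);
            }
        }

    The timer callback would then assign the next cel to CurrentCel and call Invalidate(), rather than drawing directly onto a PictureBox.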


  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance.

    Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS. This will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger.

    While this is not a critical application, I would like to improve performance, and I have some resources available, including the application developer's time and the Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    1. Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    2. Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    3. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?


  • How to deal with clients and iterations in an Agile team?

    - by Ondrej Slinták
    This thread is a follow-up to my previous one. It's in fact two questions, so I hope no one minds, as they are dependent on each other.

    We are starting a new project at work and we consider it a great opportunity to try Agile techniques in action. We had a brainstorming session about ideas we read in several books and articles, and came up with a concept that would suit us best: two-week iterations, each followed by a call with the clients, who would choose what they want to have in the next iteration. I just have a few more questions, which we couldn't figure out ourselves.

    What to do in the first iteration? What, generally, should we do in the first few iterations if we start from scratch? Just give it a month of development to code the core of the application, or start with simple wire-frames with limited pre-coded functionality? What do clients usually want to see: shiny stuff that doesn't work, or ugly stuff that does work?

    How to communicate with clients? Our initial thought is to set the process to something like this: is it a good idea to have a Focal Point on the client side, or is it better to communicate directly with all the clients to prevent miscommunication?

    Any thoughts are welcome! Thanks in advance.


  • Python Error-Checking Standard Practice

    - by chaindriver
    Hi, I have a question regarding error checking in Python. Let's say I have a function that takes a file path as an input:

        def myFunction(filepath):
            infile = open(filepath)
            # etc etc...

    One possible precondition would be that the file should exist. There are a few possible ways to check for this precondition, and I'm just wondering what's the best way to do it.

    i) Check with an if-statement:

        if not os.path.exists(filepath):
            raise IOError('File does not exist: %s' % filepath)

    This is the way that I would usually do it, though the same IOError would be raised by Python if the file does not exist, even if I don't raise it.

    ii) Use assert to check for the precondition:

        assert os.path.exists(filepath), 'File does not exist: %s' % filepath

    Using asserts seems to be the "standard" way of checking for pre/postconditions, so I am tempted to use these. However, it is possible that these asserts are turned off when the -O flag is used during execution, which means that this check might potentially be turned off, and that seems risky.

    iii) Don't handle the precondition at all.

    This is because if filepath does not exist, an exception will be generated anyway, and the exception message is detailed enough for the user to know that the file does not exist.

    I'm just wondering which of the above is the standard practice that I should use for my code.


  • MySQL default value based on view

    - by Jake
    Basically I have a bunch of views based on a simple discriminator column, e.g.:

        CREATE VIEW tablename AS
            SELECT * FROM tablename WHERE discrcolumn = "discriminator value";

    Upon inserting a new row into this view, it should insert "discriminator value" into discrcolumn. I tried this, but apparently MySQL doesn't figure this out itself, as it throws the error "Field of view viewname underlying table does not have a default value". The discriminator column is set to NOT NULL, of course.

    How do I mend this? Perhaps a pre-insert trigger?

    UPDATE: Triggers won't work on views, see below comment. Would it work to create a trigger on the table which uses a variable, and set that variable when establishing the connection? For each connection the value of that variable would be the same, but it could differ from other connections.

    EDIT: This appears to work...

    Setup:

        CREATE TRIGGER insert_[tablename] BEFORE INSERT ON [tablename]
        FOR EACH ROW SET NEW.[discrcolumn] = @variable;

    Runtime:

        SET @variable = [discrvalue];
        INSERT INTO [viewname] ([columnlist]) VALUES ([values]);


  • How to check if two records have a self-referencing relation?

    - by Machine
    Consider the following schema with users and their collegues (friends).

    Users:

        User:
          columns:
            user_id:
              name: user_id as userId
              type: integer(8)
              unsigned: 1
              primary: true
              autoincrement: true
            first_name:
              name: first_name as firstName
              type: string(45)
              notnull: true
            last_name:
              name: last_name as lastName
              type: string(45)
              notnull: true
            email:
              type: string(45)
              notnull: true
              unique: true
          relations:
            Collegues:
              class: User
              local: invitor
              foreign: invitee
              refClass: CollegueStatus
              equal: true
              onDelete: CASCADE
              onUpdate: CASCADE

    Join table:

        CollegueStatus:
          columns:
            invitor:
              type: integer(8)
              unsigned: 1
              primary: true
            invitee:
              type: integer(8)
              unsigned: 1
              primary: true
            status:
              type: enum(8)
              values: [pending, accepted, denied]
              default: pending
              notnull: true

    Now, let's say I have two records: one for the user making the HTTP request (the logged-in user), and one for a user he wants to send a message to. I want to check if these users are collegues.

    Questions:

    1. Does Doctrine have any pre-built functionality to check if two records with self-referencing relations are related?
    2. If not, how would you write a method to check this?
    3. Where would you put said method? (In the User class, the UserTable class, etc.)

    I could probably do something like this (the method name was missing from the original and is filled in here):

        public function areCollegues(User $user1, User $user2)
        {
            // Ensure we load collegues if $user1 was fetched with DQL that
            // doesn't load this relation
            $collegues = $user1->get('Collegues');

            $areCollegues = false;
            foreach ($collegues as $collegue) {
                if ($collegue['userId'] === $user2['userId']) {
                    $areCollegues = true;
                    break;
                }
            }

            return $areCollegues;
        }

    But this looks neither efficient nor pretty. I just feel that it should be solved already for self-referencing relations to be nice to use.

