Search Results

Search found 10921 results on 437 pages for 'latex environment'.

Page 377/437 | < Previous Page | 373 374 375 376 377 378 379 380 381 382 383 384  | Next Page >

  • Why does Rails screw up timezones when I am editing a resource?

    - by DJTripleThreat
    Steps to reproduce this:

        prompt> rails test_app
        prompt> cd test_app
        prompt> script/generate scaffold date_test my_date:datetime
        prompt> rake db:migrate

    Now edit app/views/date_tests/edit.html.erb:

        <h1>Editing date_test</h1>
        <% form_for(@date_test) do |f| %>
          <%= f.error_messages %>
          <p>
            RIGHT!<br/>
            <%= text_field_tag @date_test, f.object.my_date %>
          </p>
          <p>
            WRONG!<br />
            <%= f.text_field :my_date %>
          </p>
          <p>
            <%= f.submit 'Update' %>
          </p>
        <% end %>
        <%= link_to 'Show', @date_test %> |
        <%= link_to 'Back', date_tests_path %>

    Then edit config/environment.rb and add:

        config.time_zone = 'Central Time (US & Canada)'

    This recreates the problem I am having in my actual app. The problem with my app is that I'm storing a date in a hidden field and rendering a "user friendly" version. Creating a resource works fine, but as soon as I try to edit it the time changes (it adds the difference between my current time zone configuration and UTC). Go to http://localhost:3000/date_tests/new and save the time, then go to re-edit it and you will have two different representations of the date/time: one will save incorrectly and the other will save correctly.

    Read the article

  • Run arbitrary subprocesses on Windows and still terminate cleanly?

    - by Weeble
    I have an application A from which I would like to be able to invoke arbitrary other processes, as specified by a user in a configuration file. Batch script B is one such process a user would like A to invoke. B sets up some environment variables, shows some messages and invokes a compiler C to do some work. Does Windows provide a standard way for arbitrary processes to be terminated cleanly? Suppose A is run in a console and receives a CTRL+C. Can it pass this on to B and C? Suppose A runs in a window and the user tries to close the window; can it cancel B and C? TerminateProcess is an option, but not a very good one. If A uses TerminateProcess on B, C keeps running. This could cause nasty problems if C is long-running, since we might start another instance of C to operate on the same files while the first instance of C is still secretly at work. In addition, TerminateProcess doesn't result in a clean exit. GenerateConsoleCtrlEvent sounds nice, and might work when everything's running in a console, but the documentation says that you can only send CTRL+C to your own console, and so it wouldn't help if A were running in a window. Is there any equivalent to SIGINT on Windows? I would love to find an article like this one, but for Windows: http://www.cons.org/cracauer/sigint.html

    Read the article

  • Cannot install windows service

    - by Matthew Dalton
    I have created a very simple Windows service using Visual Studio 2010 and .NET 4.0. This service has no functionality added beyond the default Windows service project, other than an installer that has been added. If I run installutil.exe appName.exe on my dev box or on other Windows 2008 R2 machines in our domain, the Windows service installs without issue. When I try to do the same thing at our customer's site, it fails to install with the following error:

        Microsoft (R) .NET Framework Installation utility Version 4.0.30319.1
        Copyright (c) Microsoft Corporation.  All rights reserved.

        Exception occurred while initializing the installation:
        System.IO.FileLoadException: Could not load file or assembly 'file:///C:\TestService\WindowsService1.exe' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515).

    This solution has only one project and no dependencies added. I have tried it on multiple machines in our environment and on two at our customer's. The machines are all Windows 2008 R2, both fresh installs. One machine has just .NET 2.0 and .NET 4.0; the other has .NET 2, 3, 3.5 and 4. I am a local admin on each of the machines. I have also tried the 64-bit installer but get a System.BadImageFormatException, so I think the 32-bit one is the one to use. Any guidance would be appreciated. Thanks.

    Read the article

  • How is external memory, internal memory, and cache organized?

    - by goldenmean
    Consider a system as follows: a hardware board with, say, an ARM Cortex-A8 and a Neon vector coprocessor, and embedded Linux running on the Cortex-A8. In this environment, if some application - say, a video decoder - is executing, then:

    1. How is it decided which buffers go in external memory and which ones are allocated in internal SRAM?
    2. When one calls calloc/malloc on such a system, is the returned pointer from internal or external memory?
    3. Can a user force buffers to be allocated in the memory of his choice (internal/external)?
    4. ARM architectures also have Tightly Coupled Memory (TCM). What is that, how can a user enable and use it, and can I declare buffers in this memory?
    5. Do I need to look at the memory map (if any) of the hardware board to understand all these different physical memories present on a typical board?
    6. How much of a role does the OS play in distinguishing these different memories?

    Sorry for the multiple questions, but I think they are all interlinked.

    Read the article

  • Protecting critical sections based on a condition in C#

    - by NAADEV
    Hello, I'm dealing with a curious scenario. I'm using Entity Framework to save (insert/update) into a SQL database in a multithreaded environment. The problem is that I need to check the database to see whether a record with a particular key has already been created, in order to set a field to one value (executing) if it exists, or to a different value (pending) if it is new. Those records are identified by a unique guid. I've solved this problem by setting a lock, since I know the entity will not be present in any other process - in other words, I will not have the same guid in different processes - and it seems to be working fine. It looks something like this:

        static readonly object LockableObject = new object();

        static void SaveElement(Entity e)
        {
            lock (LockableObject)
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 != null)
                {
                    Repository.Insert(e2);
                }
                else
                {
                    Repository.Update(e2);
                }
            }
        }

    But this implies that when I have a huge amount of requests to be saved, they will be queued. I wonder if there is something like this (please, take it just as an idea):

        static void SaveElement(Entity e)
        {
            using (ThisWouldBeAClassToProtectBasedOnACondition protector =
                       new ThisWouldBeAClassToProtectBasedOnACondition(e => e.UniqueId))
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 != null)
                {
                    Repository.Insert(e2);
                }
                else
                {
                    Repository.Update(e2);
                }
            }
        }

    The idea would be having a kind of protection that locks based on a condition, so each entity e would have its own lock based on its e.UniqueId property. Any idea?
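    The pattern being asked about is usually called per-key (or striped) locking. As a rough illustration only - sketched in Java rather than the poster's C#, with made-up names - handing every unique id its own monitor object looks like the code below; saves for different ids then run in parallel while saves for the same id stay serialized. Note that the lock map is never pruned, which may matter for long-running processes.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Minimal per-key locking sketch: each unique id maps to its own monitor object.
        public class KeyedLocks {
            private final Map<String, Object> locks = new ConcurrentHashMap<>();

            // Returns the same lock object for the same key across all threads.
            public Object lockFor(String key) {
                return locks.computeIfAbsent(key, k -> new Object());
            }

            public static void main(String[] args) throws InterruptedException {
                KeyedLocks keyed = new KeyedLocks();
                Runnable save = () -> {
                    synchronized (keyed.lockFor("entity-42")) {
                        // look up the entity by its key and insert or update it here;
                        // only one thread per unique id is inside this block at a time
                        System.out.println(Thread.currentThread().getName()
                                + " holds the lock for entity-42");
                    }
                };
                Thread a = new Thread(save), b = new Thread(save);
                a.start(); b.start();
                a.join(); b.join();
            }
        }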

    Read the article

  • (JBoss) Problem with .war project on production, while test works

    - by ikky
    Hello. I have a Java project (using Spring MVC) which I have built and deployed on my local computer. It runs on a JBoss application server and works fine on the local machine. The next step is to copy the deployed project.war from the local machine to a server which has the same development environment as the local machine. I stop the JBoss server, delete the cache, and run the JBoss server again. When I now try to run one of the pages (xx.xxx.xxx:8080/webservice/test.htm), I get this exception:

        exception
        org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.NullPointerException
            org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:583)

        root cause
        java.lang.NullPointerException
            com.project.db.DBCustomer.isCredentialsCorrect(DBCustomer.java:44)
            com.project.CreateHController.handleRequest(CreateHController.java:60)
            org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48)
            org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875)
            org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807)
            org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)
            org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:501)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:697)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
            org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:75)

    It seems like none of my classes are reachable. Does anyone have any idea of what is wrong? By the way, as I said, the project works fine on the local machine.

    Read the article

  • Silverlight Business Application template with WCF is throwing a warning.

    - by Manoj
    Hi, I am using the Silverlight Business Application template. I wrote a function which uses the Membership.getUserList function to return the user list, and I tried exposing it as a service using WCF. But when I try to compile the client-side code, it throws a warning saying "Client proxy generation for user_authentication.Web.Service1 failed". Why does this happen? The complete warning message is:

        Warning 4  Client proxy generation for service 'user_authentication.Web.Service1' failed:
        Generating metadata files...
        Warning: Unable to load a service with configName 'user_authentication.Web.Service1'. To export a service provide both the assembly containing the service type and an executable with configuration for this service.
        Details: Either none of the assemblies passed were executables with configuration files or none of the configuration files contained services with the config name 'user_authentication.Web.Service1'.
        Warning: No metadata files were generated. No service contracts were exported.
        To export a service, use the /serviceName option. To export data contracts, specify the /dataContractOnly option.
        This can sometimes occur in certain security contexts, such as when the assembly is loaded over a UNC network file share. If this is the case, try copying the assembly into a trusted environment and running it.

    Read the article

  • C# app running as either Windows Form or as Console Application

    - by Aeolien
    I am looking to have one of my Windows Forms applications be run programmatically - from the command line. In preparation, I have separated the logic into its own class, apart from the Form. Now I am stuck trying to get the application to switch back and forth based on the presence of command line arguments. Here is the code for the main class:

        static class Program
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [STAThread]
            static void Main()
            {
                string[] args = Environment.GetCommandLineArgs();
                if (args.Length > 1) // gets passed its path, by default
                {
                    CommandLineWork(args);
                    return;
                }
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form1());
            }

            private static void CommandLineWork(string[] args)
            {
                Console.WriteLine("It works!");
                Console.ReadLine();
            }
        }

    where Form1 is my form and the "It works!" string is just a placeholder for the actual logic. Right now, when running this from within Visual Studio (with command line arguments), the phrase "It works!" is printed to the Output window. However, when running the /bin/Debug/Program.exe file (or /Release for that matter) the application crashes. Am I going about this the right way? Would it make more sense (i.e. take less developer time) to have my logic class be a DLL that gets loaded by two separate applications? Or is there something entirely different that I'm not aware of? Thanks in advance!

    Read the article

  • Finding the width of a directed acyclic graph... with only the ability to find parents

    - by Platinum Azure
    Hi guys, I'm trying to find the width of a directed acyclic graph... as represented by an arbitrarily ordered list of nodes, without even an adjacency list. The graph/list is for a parallel GNU Make-like workflow manager that uses files as its criteria for execution order. Each node has a list of source files and target files. We have a hash table in place so that, given a file name, the node which produces it can be determined. In this way, we can figure out a node's parents by examining the nodes which generate each of its source files using this table. That is the ONLY ability I have at this point, without changing the code severely. The code has been in public use for a while, and the last thing we want to do is to change the structure significantly and have a bad release. And no, we don't have time to test rigorously (I am in an academic environment). Ideally we're hoping we can do this without doing anything more dangerous than adding fields to the node. I'll be posting a community-wiki answer outlining my current approach and its flaws. If anyone wants to edit that, or use it as a starting point, feel free. If there's anything I can do to clarify things, I can answer questions or post code if needed. Thanks! EDIT: For anyone who cares, this will be in C. Yes, I know my pseudocode is in some horribly botched Python look-alike. I'm sort of hoping the language doesn't really matter.
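    For what it's worth, the one primitive described above - recovering a node's parents from the file-name-to-producing-node hash table - can be sketched roughly as follows (in Java for brevity, even though the poster's project is in C; Node, sourceFiles and producedBy are illustrative names, not the project's real ones):

        import java.util.*;

        class Node {
            final String name;
            final List<String> sourceFiles = new ArrayList<>();
            final List<String> targetFiles = new ArrayList<>();
            Node(String name) { this.name = name; }
        }

        class Workflow {
            // maps a file name to the node whose target files include it
            final Map<String, Node> producedBy = new HashMap<>();

            // A node's parents are the nodes that produce its source files;
            // source files produced by no node are external inputs and are skipped.
            Set<Node> parentsOf(Node n) {
                Set<Node> parents = new HashSet<>();
                for (String src : n.sourceFiles) {
                    Node producer = producedBy.get(src);
                    if (producer != null && producer != n) {
                        parents.add(producer);
                    }
                }
                return parents;
            }
        }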

    Read the article

  • Call .NET Webservice with Android

    - by Lasse P
    Hi, I know this question has been asked here before, but I don't think those answers were adequate for my needs. We have a SOAP webservice that is used for an iPhone application, but it is possible that we will need an Android-specific version or a proxy of the service, so we have the option to go with either SOAP or JSON. I have a few concerns about both methods.

    SOAP solution: Is it possible to generate Java source code from a WSDL file, and if so, will it include some kind of proxy class to invoke the webservice, and will it work in the Android environment at all? Google has not provided any SOAP library in Android, so I need to use a 3rd party one - any suggestions? What about the performance/overhead of parsing and transmitting SOAP XML over the wire versus the JSON solution?

    JSON solution: There are a few classes in the Android SDK that will let me parse JSON, but do they support generic parsing, as in having the result parsed into a complex type? Or would I need to implement that myself? I have read about two libraries before here on Stack Overflow, GSON and Jackson. What is the difference between them, performance- and usability-wise (from a developer's perspective)? Do you guys have any experience with either of those libraries?

    So I guess the big question is: which method should I go with? I hope you can help me out. Thanks in advance :-)
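    On the JSON side of the question, the classes bundled with the Android SDK (org.json) give you a generic tree (JSONObject / JSONArray) that you walk by hand, rather than mapping straight onto your own complex types the way GSON or Jackson can. A minimal sketch, with a made-up payload:

        import org.json.JSONArray;
        import org.json.JSONException;
        import org.json.JSONObject;

        // Walks a nested JSON document with the org.json classes available in Android.
        public class JsonDemo {
            public static void main(String[] args) throws JSONException {
                String payload =
                    "{\"customer\":{\"id\":17,\"name\":\"Lasse\"},\"orders\":[{\"total\":99.5}]}";

                JSONObject root = new JSONObject(payload);
                JSONObject customer = root.getJSONObject("customer"); // nested object
                JSONArray orders = root.getJSONArray("orders");       // nested array

                System.out.println(customer.getInt("id") + " " + customer.getString("name"));
                System.out.println("first order total: "
                        + orders.getJSONObject(0).getDouble("total"));
            }
        }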

    Read the article

  • Fatal error with Custom Magento Module on one server but not the other

    - by Jack
    Hi, I am creating my own custom module in Magento, and during testing on a LiteSpeed server (PHP v5.2.14) I am getting a fatal error - "Call to a member function batch() on a non-object in ../../../BatchController.php on line 25" - that was not appearing during testing on another Linux server or on a WAMP server (PHP v5.2.11). This one has stumped me. I am guessing it has something to do with the server configuration rather than the code itself, but I am just guessing. I was hoping someone here could tell me. The only real major difference I could see, aside from the PHP versions and environment, is that the server producing the error is using the Suhosin patch. But would that be something that could cause this? The line in question is Mage::getModel('mymodule/mymodel')->batch(); which is enclosed in an IF statement. batch() is a public function located in my model file. If you need more code let me know. Thanks!

    Read the article

  • invalid scalar hex value 0x80000000 and over

    - by kioto
    Hi. I have found a problem getting hex values from a YAML file: yaml-cpp cannot read hex values of 0x80000000 and over. The following is a sample C++ program:

        // ymlparser.cpp
        #include <iostream>
        #include <fstream>
        #include "yaml-cpp/yaml.h"

        int main(void)
        {
            try {
                std::ifstream fin("hex.yaml");
                YAML::Parser parser(fin);
                YAML::Node doc;
                parser.GetNextDocument(doc);

                int num1;
                doc["hex1"] >> num1;
                printf("num1 = 0x%x\n", num1);

                int num2;
                doc["hex2"] >> num2;
                printf("num2 = 0x%x\n", num2);
                return 0;
            } catch(YAML::ParserException& e) {
                std::cout << e.what() << "\n";
            }
        }

    hex.yaml:

        hex1: 0x7FFFFFFF
        hex2: 0x80000000

    The error message is:

        $ ./ymlparser
        num1 = 0x7fffffff
        terminate called after throwing an instance of 'YAML::InvalidScalar'
          what():  yaml-cpp: error at line 2, column 7: invalid scalar
        Aborted

    Environment: yaml-cpp from svn (March 22, 2010) or v0.2.5; OS: Ubuntu 9.10 i386. I need to get the hex value via yaml-cpp now, but I have no idea how. Please tell me how to get it another way. Thanks,

    Read the article

  • Calling a WPF Application and modify exposed properties?

    - by Justin
    I have a WPF keyboard application. It is developed in such a way that another application could call it and modify its properties to adapt the keyboard to do what it needs to. Right now I have a file *.Keys.Set which tells the application (on open) to style itself according to that new style. I know this file could be passed as a command line argument into the application; that would not be a problem. My concern is: is there a way, via a managed environment, to change the properties of the executable as long as they are exposed properly? An example:

        'Creates a new instance of the Keyboard Application
        Dim e_key as new WpfApplication("C:\egt\components\keyboard.exe")

        'Sets the style path
        e_key.SetStylePath("c:\users\joe\apps\me\default.keys.set")
        e_key.Refresh()      'Applies the style
        e_key.HideMenu()     'Hides the menu
        e_key.ShowDeck("PIN") 'Shows the custom "deck" of keyboard keys the developer
                              'created in the style application.

        ''work with events and response

        'Clear the instance from memory
        e_key.close
        e_key.dispose
        e_key = nothing

    This would allow my application to become easily accessible to other touch screen application developers, allowing them to use my keyboard and keep the functionality they need. It seems like it might be possible because (name of executable).application shows all the exposed functions, properties, and values. I just have never done this before. Any help would be appreciated, thank you in advance.

    Read the article

  • SQL putting two single quotes around datetime fields and fails to insert record

    - by user82613
    I am trying to INSERT into a SQL database table, but it doesn't work. So I used the SQL Server Profiler to see how it was building the query; what it shows is the following:

        declare @p1 int
        set @p1=0
        declare @p2 int
        set @p2=0
        declare @p3 int
        set @p3=1
        exec InsertProcedureName @ConsumerMovingDetailID=@p1 output, @UniqueID=@p2 output, @ServiceID=@p3 output,
            @ProjectID=N'0', @IPAddress=N'66.229.112.168', @FirstName=N'Mike', @LastName=N'P',
            @Email=N'[email protected]', @PhoneNumber=N'(254)637-1256', @MobilePhone=NULL,
            @CurrentAddress=N'', @FromZip=N'10005', @MoveInAddress=N'', @ToZip=N'33067',
            @MovingSize=N'1',
            @MovingDate=''2009-04-30 00:00:00:000'',   /* Problem here */
            @IsMovingVehicle=0, @IsPackingRequired=0, @IncludeInSaveologyPlanner=1
        select @p1, @p2, @p3

    As you can see, it puts two pairs of single quotes around the datetime value, which produces a syntax error in SQL. I wonder if there is anything I must configure somewhere? Any help would be appreciated. Here are the environment details: Visual Studio 2008, .NET 3.5, MS SQL Server 2005. Here is the .NET code I'm using:

        //call procedure for results
        strStoredProcedureName = "usp_SMMoverSearchResult_SELECT";
        Database database = DatabaseFactory.CreateDatabase();
        DbCommand dbCommand = database.GetStoredProcCommand(strStoredProcedureName);
        dbCommand.CommandTimeout = DataHelper.CONNECTION_TIMEOUT;
        database.AddInParameter(dbCommand, "@MovingDetailID", DbType.String, objPropConsumer.ConsumerMovingDetailID);
        database.AddInParameter(dbCommand, "@FromZip", DbType.String, objPropConsumer.FromZipCode);
        database.AddInParameter(dbCommand, "@ToZip", DbType.String, objPropConsumer.ToZipCode);
        database.AddInParameter(dbCommand, "@MovingDate", DbType.DateTime, objPropConsumer.MoveDate);
        database.AddInParameter(dbCommand, "@PLServiceID", DbType.Int32, objPropConsumer.ServiceID);
        database.AddInParameter(dbCommand, "@FromAreaCode", DbType.String, pFromAreaCode);
        database.AddInParameter(dbCommand, "@FromState", DbType.String, pFromState);
        database.AddInParameter(dbCommand, "@ToAreaCode", DbType.String, pToAreaCode);
        database.AddInParameter(dbCommand, "@ToState", DbType.String, pToState);
        DataSet dstSearchResult = new DataSet("MoverSearchResult");
        database.LoadDataSet(dbCommand, dstSearchResult, new string[] { "MoverSearchResult" });

    Read the article

  • How to make use of Grails Dependencies in your IDE

    - by raoulsson
    Hi All, so I finally got my dependencies working with Grails. Now, how can my IDE, e.g. IntelliJ or Eclipse, take advantage of it? Or do I really have to manually manage what classes my IDE knows about at "development time"? If the BuildConfig.groovy script is set up right (see here), you will be able to code away with vi or your favorite editor without any trouble, then run grails compile, which will resolve and download the dependencies into the Ivy cache, and off you go... If, however, you are using an IDE like Eclipse or IntelliJ, you will need the dependencies at hand while coding. Obviously - as these animals will need them for the "real time" error detection/compilation process. Now, while it is certainly possible to code with all the classes that are unknown to your IDE shining up in bright red all over the place, it is certainly not much fun... The Maven support, or whatever it is officially called, lives happily with the pom file - no extra "jar directory" pointers needed, at least in IntelliJ. I would like to be able to do the same with Grails dependencies. Currently I define them in BuildConfig.groovy, and additionally I copy/paste the current jars around on my local disk and let the IDE point to them. Not very satisfactory, as I am working in a highly volatile project module environment with respect to code change. And this situation ports me directly into "jar hell", as my "develop- and build-dependencies" easily get out of sync and I have to manage them manually, that is, with my brain... and my brain should be busy with other stuff... Thanks! Raoul P.S.: I'm currently using Grails 1.2M4 and IntelliJ 92.105. But feel free to add answers on future versions of Grails and different, future IDEs, as they come in...

    Read the article

  • Will a Ph. D. in Computer Science help?

    - by Francisco Garcia
    I am close to my 30s and still learning about programming and software engineering. Like most people who like their profession, I truly believe that I should aim to improve and keep updated. One of the things I do is read technical papers from professional publications (IEEE and ACM), but I admit there are very good bloggers out there too. Lately I started to think (should I say realize?) that Ph.D. people are actually expected to expand their knowledge constantly, but little is expected from lower degrees once they know enough. This made me think that maybe having a Ph.D. will help to gain more... respect? But I also believe that I am already getting old for that. Furthermore, I see many master's and doctoral programs that do not seem to add any value over hard experience and self-learning. I believe that a degree in computer science, although not necessary, can lay a good foundation for programming work. However: what can a Ph.D. degree give you that you cannot learn on your own? (If you are not into something VERY specific and want to work in a non-academic environment.)

    Read the article

  • How do you use stl's functions like for_each?

    - by thomas-gies
    I started using STL containers because they came in very handy when I needed the functionality of a list, set and map and had nothing else available in my programming environment. I did not care much about the ideas behind them. STL documentation was only interesting up to the point where it came to functions, etc. Then I skipped reading and just used the containers. But yesterday, still being relaxed from my holidays, I just gave it a try and wanted to go a bit more the STL way. So I used the transform function (can I have a little applause for me, thank you). From an academic point of view it really looked interesting, and it worked. But the thing that bothers me is that if you intensify the use of those functions, you need thousands of helper classes for mostly everything you want to do in your code. The whole logic of the program is sliced into tiny pieces. This slicing is not the result of good coding habits; it's just a technical need - something that makes my life harder, not easier. And I learned the hard way that you should always choose the simplest approach that solves the problem at hand. And I can't see what, for example, the for_each function is doing for me that justifies the use of a helper class over several simple lines of code that sit inside a normal loop, so that everybody can see what is going on. I would like to know what you think about my concerns. Did you see it like I do when you started working this way, and did you change your mind when you got used to it? Are there benefits that I overlooked? Or do you just ignore this stuff as I did (and will probably go on doing)? Thanks. PS: I know that there is a real for_each loop in Boost, but I ignore it here since it is just a convenient way of writing my usual loops with iterators, I guess.

    Read the article

  • Google AppEngine + Local JUnit Tests + Jersey framework + Embedded Jetty

    - by xamde
    I use Google App Engine for Java (GAE/J). On top of it, I use the Jersey REST framework. Now I want to run local JUnit tests. A test sets up the local GAE development environment ( http://code.google.com/appengine/docs/java/tools/localunittesting.html ), launches an embedded Jetty server, and then fires requests to the server via HTTP and checks the responses. Unfortunately, the Jersey/Jetty combo spawns new threads, while GAE expects only one thread to run. In the end, I end up having either no datastore inside the Jersey resources, or multiple threads with different datastores. As a workaround I initialise the GAE local env only once, put it in a static variable, and inside the Jersey resource I add many checks (this thread has no dev env? re-use the static one). And these checks should of course only run inside JUnit tests - which I asked about before: "How can I find out if code is running inside a JUnit test or not?" (I'm not allowed to post the link directly here :-|)
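    A rough sketch of that workaround might look like the following. The class and method names here are made up; the LocalServiceTestHelper and ApiProxy calls are the ones documented for GAE local unit testing, but treat the whole thing as an illustration of the "capture the environment once, reuse it per thread" idea rather than a tested recipe:

        import com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig;
        import com.google.appengine.tools.development.testing.LocalServiceTestHelper;
        import com.google.apphosting.api.ApiProxy;

        // The test thread sets up the dev environment once and publishes it;
        // request threads spawned by Jetty/Jersey that have no environment of
        // their own reuse the published one.
        public final class SharedDevEnvironment {
            private static volatile ApiProxy.Environment shared;
            private static LocalServiceTestHelper helper;

            // Called once from the JUnit setup, on the test's main thread.
            public static synchronized void setUpOnce() {
                if (helper == null) {
                    helper = new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());
                    helper.setUp();
                    shared = ApiProxy.getCurrentEnvironment();
                }
            }

            // Called at the top of each Jersey resource method while under test.
            public static void ensureEnvironment() {
                if (ApiProxy.getCurrentEnvironment() == null && shared != null) {
                    ApiProxy.setEnvironmentForCurrentThread(shared);
                }
            }
        }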

    Read the article

  • Start developing a database application using Oracle + Net Beans

    - by Ranhiru
    I have thought of creating my first database application for one of my projects using Oracle and Java. I have chosen NetBeans as my development environment. I have a few questions about getting started; please bear with me as I'm a complete beginner with Oracle + NetBeans. This will be a data-intensive (yet still college-project-sized) database application. I do not need 1000-user concurrency or any other very advanced features, just basic stuff such as triggers, stored procedures, etc.

    1. Will the 11g "Express" (XE) edition suffice for my requirements?
    2. Do I need any Java-to-Oracle bridge (database connectivity driver, e.g. ODBC etc.) for NetBeans to connect to the Oracle database? If yes, which one? Does NetBeans support Oracle databases natively?
    3. Is there any easy-to-follow guide on how to connect to the database and insert/retrieve/display data in a J2SE application? (I know that I should Google this, but if there's any guide previously followed by anyone and considered easy, it would be greatly appreciated. See also the sketch below.)
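    On the last point, a bare-bones JDBC sketch might look like the following. It assumes Oracle XE on localhost with its default port and SID, a STUDENTS table that already exists, and placeholder credentials; the Oracle thin driver (ojdbc jar) has to be added to the NetBeans project's libraries first.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        // Connects to a local Oracle XE instance, inserts a row, then reads it back.
        public class OracleDemo {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:oracle:thin:@localhost:1521:XE";
                try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword")) {
                    // insert a row
                    try (PreparedStatement ins = conn.prepareStatement(
                            "INSERT INTO students (id, name) VALUES (?, ?)")) {
                        ins.setInt(1, 1);
                        ins.setString(2, "Ranhiru");
                        ins.executeUpdate();
                    }
                    // read it back and display it
                    try (PreparedStatement sel = conn.prepareStatement("SELECT id, name FROM students");
                         ResultSet rs = sel.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
                        }
                    }
                }
            }
        }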

    Read the article

  • Rails log shows unexpected data as to the time spent on a DB stuff

    - by Arhimed
    I'm running on WinXP + Ruby 1.8.6 + Rails 2.3.5 (frozen to the project) in the development environment. Looking at development.log, I observe inconsistent data as to the time spent on database stuff.

    Example #1 (good):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 12:15:54) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (563.0ms)  SHOW FIELDS FROM `cities`
          City Load (15.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 953ms (DB: 578) | 302 Found [http://xyz/]

    Example #2 (unexpected):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 12:15:36) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (0.0ms)  SHOW FIELDS FROM `cities`
          City Load (0.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 47ms (DB: 32) | 302 Found [http://xyz/]

    Example #2 shows 32ms spent on the DB while there were just two SQL queries, both reported as taking zero time.

    Example #3 (unexpected):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 11:21:24) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (63.0ms)  SHOW FIELDS FROM `cities`
          City Load (62.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 1187ms (DB: 297) | 302 Found [http://xyz/]

    Example #3 shows 297ms while the queries took 63ms and 62ms (125ms in total). I can't understand it. Could someone explain? Thanks in advance.

    Read the article

  • Can I architect a web app so it can be deployed to either the cloud or a dedicated server / VPS ? Ho

    - by CAD bloke
    Is there an architecture versatile enough that it may be deployed either to a cloud server or to a dedicated (or VPS) server with minimal change? Obviously there would be config changes, but I'd rather leave the rest of the app consistent, keeping one maintainable codebase. The app would be ASP.NET &/or ASP.MVC. My dev environment is VS 2010. The cloud may, or may not be, Azure. Dedicated or VPS would be Win Server 2008. Probably. It is not a public-facing web site. The web app I have in mind would be a separate deployment for each client. Some clients would be small-scale; some will prefer the app to run on a local intranet rather than on the web. Other clients may prefer the cloud approach for a black-box solution. The app may run for a few hours or it may run indefinitely; it depends on the client and the project. Other than deployment scenarios the apps would be more or less identical. As you may see from the tags, I'm assuming a message-based architecture is probably the most versatile, but I'm also used to being wrong about this stuff. All suggestions and pointers are welcome regarding general architectures and also specific solutions.

    Read the article

  • JavaScript window object element properties

    - by Timothy
    A coworker showed me the following code and asked me why it worked.

        <span id="myspan">Do you like my hat?</span>
        <script type="text/javascript">
            var spanElement = document.getElementById("myspan");
            alert("Here I am! " + spanElement.innerHTML + "\n" + myspan.innerHTML);
        </script>

    I explained that when the browser parses the document, a property with the name of the element's id is attached to the window object, and it contains a reference to the appropriate DOM node. It's sort of as if window.myspan = document.getElementById("myspan") were called behind the scenes as the page is being rendered. The ensuing discussion raised a few questions: The window object and most of the DOM are not part of the official JavaScript/ECMA standards, but is the above behavior documented in any other official literature, perhaps browser-related? The above works in a browser (at least the main contenders) because there is a window object, but fails in something like Rhino. Is writing code that relies on this considered bad practice because it makes too many assumptions about the execution environment? Are there any browsers in which the above would fail, or is this considered standard behavior across the board? Does anyone here know the answers to those questions and would be willing to enlighten me? I tried a quick internet search, but I admit I'm not sure how to even properly phrase the query. Pointers to references and documentation are welcome.

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons, and I am trying to avoid using no_plan - it's generally a bad idea not to catch your unit test dying prematurely. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used?

    Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = ( [ "Test #1", $param1, $param2, ... ]
                        ,[ "Test #2", $param1, $param2, ... ]
                        # ,...
                       );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();   # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;   # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {   # Same
            run_test($test);
        }

        sub run_test {}   # Same

    I feel this can be done more idiomatically, but I am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN{} and after it.

    Read the article

  • Using capistrano to deploy from different git branches

    - by Toms Mikoss
    I am using Capistrano to deploy a RoR application. The codebase is in a git repository, and branching is widely used in development. Capistrano uses the deploy.rb file for its settings, one of them being the branch to deploy from. My problem is this: let's say I create a new branch A from master. The deploy file will reference the master branch. I edit that, so A can be deployed to the test environment. I finish working on the feature and merge branch A into master. Since the deploy.rb file from A is fresher, it gets merged in, and now the deploy.rb in the master branch references A. Time to edit again. That's a lot of seemingly unnecessary manual editing - the parameter should always match the current branch name. On top of that, it is easy to forget to edit the settings each and every time. What would be the best way to automate this process? Edit: Turns out someone has already done exactly what I needed.

    Read the article

  • problem finding a header with a c++ makefile

    - by Max
    Hi. I've started working with my first makefile. I'm writing a roguelike in C++ using the libtcod library, and have the following hello world program to test if my environment is up and running:

        #include "libtcod.hpp"

        int main()
        {
            TCODConsole::initRoot(80, 50, "PartyHack");
            TCODConsole::root->printCenter(40, 25, TCOD_BKGND_NONE, "Hello World");
            TCODConsole::flush();
            TCODConsole::waitForKeypress(true);
        }

    My project directory structure looks like this:

        /CppPartyHack
        ----/libtcod-1.5.1       # this is the libtcod root folder
        --------/include
        ------------libtcod.hpp
        ----/PartyHack
        --------makefile
        --------partyhack.cpp    # the above code

    (While we're here, how do I do proper indentation? Using those dashes is silly.)

    And here's my makefile:

        SRCDIR = .
        INCDIR = ../libtcod-1.5.1/include
        CFLAGS = $(FLAGS) -I$(INCDIR) -I$(SRCDIR) -Wall
        CC = gcc
        CPP = g++

        .SUFFIXES: .o .h .c .hpp .cpp

        $(TEMP)/%.o : $(SRCDIR)/%.cpp
            $(CPP) $(CFLAGS) -o $@ -c $<

        $(TEMP)/%.o : $(SRCDIR)/%.c
            $(CC) $(CFLAGS) -o $@ -c $<

        CPP_OBJS = $(TEMP)partyhack.o

        all : partyhack

        partyhack : $(CPP_OBJS)
            $(CPP) $(CPP_OBJS) -o $@ -L../libtcod-1.5.1 -ltcod -ltcod++ -Wl,-rpath,.

        clean :
            \rm -f $(CPP_OBJS) partyhack

    I'm using Ubuntu, and my terminal gives me the following errors:

        max@max-desktop:~/Desktop/Development/CppPartyhack/PartyHack$ make
        g++    -c -o partyhack.o partyhack.cpp
        partyhack.cpp:1:23: error: libtcod.hpp: No such file or directory
        partyhack.cpp: In function 'int main()':
        partyhack.cpp:5: error: 'TCODConsole' has not been declared
        partyhack.cpp:6: error: 'TCODConsole' has not been declared
        partyhack.cpp:6: error: 'TCOD_BKGND_NONE' was not declared in this scope
        partyhack.cpp:7: error: 'TCODConsole' has not been declared
        partyhack.cpp:8: error: 'TCODConsole' has not been declared
        make: *** [partyhack.o] Error 1

    So obviously, the makefile can't find libtcod.hpp. I've double checked and I'm sure the relative path to libtcod.hpp in INCDIR is correct, but as I'm just starting out with makefiles, I'm uncertain what else could be wrong. My makefile is based off a template that the libtcod designers provided along with the library itself, and while I've looked at a few online makefile tutorials, the code in this makefile is a good bit more complicated than any of the examples the tutorials showed, so I'm assuming I screwed up something basic in the conversion. Thanks for any help.

    Read the article
