Search Results

Search found 3071 results on 123 pages for 'executing'.


  • Google AJAX Transliteration API: How do I translate many elements on a page to some language at once

    - by Nitesh Panchal
    Hello, I have many elements on the page, all of which I want to translate to some language. The language is not the same for all fields: for the 1st field it may be fr, for the 3rd it may be en, and for the 7th it may be pa. Basically I wrote the code below and it's working:

        <script type="text/javascript">
        //<![CDATA[
        google.load("language", "1");
        window.onload = function(){
            var elemPostTitles = document.getElementsByTagName("h4");
            var flag = true;
            for(var i = 0; i < elemPostTitles.length; i++){
                while(flag == false){
                }
                var postTitleElem = elemPostTitles[i];
                var postContentElem = document.getElementById("postContent_" + i);
                var postTitle = postTitleElem.innerHTML;
                var postContent = postContentElem.innerHTML;
                var languageCode = document.getElementById("languageCode_" + i).value;
                google.language.detect(postTitle, function(result) {
                    if (!result.error && result.language) {
                        google.language.translate(postTitle, result.language, languageCode, function(result) {
                            flag = true;
                            if (result.translation) {
                                postTitleElem.innerHTML = result.translation;
                            }
                        });
                    }
                });
                flag = false;
            }

    As you can see, what I am trying to do is stop the loop from advancing until the result of the previous AJAX call is received. If I don't do this, only the last field gets translated. My code works, but because of the infinite loop I keep getting "stop executing scripts" errors from Mozilla. How do I get rid of this? Also, is my approach correct, or is there an inbuilt function available which can ease my task? Thanks in advance :)

    Read the article

  • Python: can't start a new thread

    - by Giorgos Komnino
    I am building a multi-threading application. I have set up a thread pool [a Queue of size N and N Workers that get data from the queue]. When all tasks are done I use tasks.join(), where tasks is the queue. The application seems to run smoothly until suddenly, at some point (after 20 minutes, for example), it terminates with the error thread.error: can't start new thread. Any ideas? Edit: the threads are daemon threads and the code is like:

        while True:
            t0 = time.time()
            keyword_statuses = DBSession.query(KeywordStatus).filter(KeywordStatus.status==0).options(joinedload(KeywordStatus.keyword)).with_lockmode("update").limit(100)
            if keyword_statuses.count() == 0:
                DBSession.commit()
                break
            for kw_status in keyword_statuses:
                kw_status.status = 1
            DBSession.commit()
            t0 = time.time()
            w = SWorker(threads_no=32, network_server='http://192.168.1.242:8180/', keywords=keyword_statuses, cities=cities, saver=MySqlRawSave(DBSession), loglevel='debug')
            w.work()
        print 'finished'

    When are the daemon threads killed? When the application finishes or when work() finishes? Here are the thread pool and the worker (it's from a recipe):

        from Queue import Queue
        from threading import Thread, Event, current_thread
        import time

        event = Event()

        class Worker(Thread):
            """Thread executing tasks from a given tasks queue"""
            def __init__(self, tasks):
                Thread.__init__(self)
                self.tasks = tasks
                self.daemon = True
                self.start()

            def run(self):
                '''Start processing tasks from the queue'''
                while True:
                    event.wait()
                    #time.sleep(0.1)
                    try:
                        func, args, callback = self.tasks.get()
                    except Exception, e:
                        print str(e)
                        return
                    else:
                        if callback is None:
                            func(args)
                        else:
                            callback(func(args))
                        self.tasks.task_done()

        class ThreadPool:
            """Pool of threads consuming tasks from a queue"""
            def __init__(self, num_threads):
                self.tasks = Queue(num_threads)
                for _ in range(num_threads):
                    Worker(self.tasks)

            def add_task(self, func, args=None, callback=None):
                '''Add a task to the queue'''
                self.tasks.put((func, args, callback))

            def wait_completion(self):
                '''Wait for completion of all the tasks in the queue'''
                self.tasks.join()

            def broadcast_block_event(self):
                '''blocks running threads'''
                event.clear()

            def broadcast_unblock_event(self):
                '''unblocks running threads'''
                event.set()

            def get_event(self):
                '''returns the event object'''
                return event
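    Not part of the original post, but worth noting: every pass of that outer loop constructs a fresh 32-thread SWorker while the recipe's daemon Workers never exit, so threads can accumulate until the OS refuses to create more. A rough sketch of the alternative, building one bounded pool up front and reusing it for every batch (process_keyword and handle_batch are hypothetical stand-ins, not the poster's code):

        # Sketch: one thread pool created once and reused for every batch,
        # instead of a new multi-threaded worker per loop iteration.
        from multiprocessing.pool import ThreadPool  # stdlib thread-backed Pool

        def process_keyword(kw_status):
            # placeholder for the real network / scraping work
            return kw_status

        pool = ThreadPool(processes=32)   # total thread count stays fixed

        def handle_batch(keyword_statuses):
            # map() blocks until the whole batch is finished, so no manual
            # joins, flags or events are needed
            return pool.map(process_keyword, list(keyword_statuses))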

    Read the article

  • Unable to step into an interface implementation configured by the Unity Application Block

    - by Rahul
    I have configured a set of interface implementations with the EntLib Unity block. The constructors of the implementation classes work fine as soon as I run the application: the cctor runs, which shows that Unity resolution was successful. But when I try to call a method of such a class, the code just passes through without actually invoking the function of the implemented class. Edit (added on June 11, 2012): following is the Unity configuration I have (this is all the Unity configuration I am doing):

        public class UnityControllerFactory : DefaultControllerFactory
        {
            private static readonly IUnityContainer container;
            private static UnityControllerFactory factory = null;

            static UnityControllerFactory()
            {
                container = new UnityContainer();
                UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
                section.Configure(container);
                factory = new UnityControllerFactory();
            }

            public static UnityControllerFactory GetControllerFactory()
            {
                return factory;
            }

            protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
            {
                return container.Resolve(controllerType) as IController;
            }
        }

    I am unable to step into this code, and the implementation simply skips out without executing anything. What is wrong here?

    Read the article

  • Getting a weird issue with the TO_NUMBER function in Oracle

    - by Fazal
    I have been getting an intermittent issue when executing the TO_NUMBER function in the WHERE clause on a varchar2 column, once the number of records exceeds a certain number n. I say n because there is no exact record count at which it happens: on one DB it happens after n was 1 million, on another when it was 0.1 million. For example, I have a table with 10 million records, say table Country, which has a varchar2 column field1 containing numeric data, and an Id. If I run a query such as:

        select * from country where to_number(field1) = 23 and id > 1 and id < 100000

    it works. But if I run the query:

        select * from country where to_number(field1) = 23 and id > 1 and id < 100001

    it fails, saying "invalid number". Next I try the query:

        select * from country where to_number(field1) = 23 and id > 2 and id < 100001

    and it works again. As I only got "invalid number" it was confusing, but in the log file it said:

        Memory Notification: Library Cache Object loaded into SGA
        Heap size 3823K exceeds notification threshold (2048K)
        KGL object name :with sqlplan as ( select c006 object_owner, c007 object_type,c008 object_name
        from htmldb_collections where COLLECTION_NAME='HTMLDB_QUERY_PLAN'
        and c007 in ('TABLE','INDEX','MATERIALIZED VIEW','INDEX (UNIQUE)')),
        ws_schemas as( select schema from wwv_flow_company_schemas where security_group_id = :flow_security_group_id),
        t as( select s.object_owner table_owner,s.object_name table_name, d.OBJECT_ID from sqlplan s,sys.dba_objects d

    It seems to be related to SGA size, but Google did not give me much help on this. Does anyone have any idea about this issue with TO_NUMBER or Oracle functions on large data sets?

    Read the article

  • SQL Server query using a function and a view is slower

    - by Lieven Cardoen
    I have a table with an xml column named Data:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](max) NOT NULL,
            [LastName] [nvarchar](max) NOT NULL,
            [Email] [nvarchar](250) NOT NULL,
            [Password] [nvarchar](max) NULL,
            [UserName] [nvarchar](250) NOT NULL,
            [LanguageId] [int] NOT NULL,
            [Data] [xml] NULL,
            [IsDeleted] [bit] NOT NULL,
            ...

    In the Data column there's this XML:

        <data>
            <RRN>...</RRN>
            <DateOfBirth>...</DateOfBirth>
            <Gender>...</Gender>
        </data>

    Now, executing this query:

        SELECT UserId FROM Users
        WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN

    after clearing the cache takes (if I execute it a couple of times in a row) 910, 739, 630, 635, ... ms. A DB specialist told me that adding a function and a view and changing the query would make it much faster to search for a user with a given RRN. But instead, these are the timings when I execute the query with the changes from the DB specialist: 2584, 2342, 2322, 2383, ... ms.

    This is the added function:

        CREATE FUNCTION dbo.fn_Users_RRN(@data xml)
        RETURNS varchar(100) WITH SCHEMABINDING
        AS
        BEGIN
            RETURN @data.value('(/data/RRN)[1]', 'varchar(max)');
        END;

    The added view:

        CREATE VIEW vwi_Users WITH SCHEMABINDING AS
        SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN from dbo.Users

    Indexes:

        CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId)
        CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN)

    And then the changed query:

        SELECT UserId FROM Users
        WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11'

    Why is the solution with a function and a view slower?

    Read the article

  • How to return a value from Facebook JavaScript Connect API functions

    - by dezwald
    I am trying to create wrapper functions around Facebook JavaScript Connect API methods. My problem is that I cannot return a value from within the FB_RequireFeatures callback. I want my isFBConnected() function to return true or false based on whether I'm logged into Facebook or not. What is happening is that when I return true it returns from the inner (child) function, which makes sense; however, my outer "login" variable does not get set to true. I've tried setting a timeout to wait until Facebook Connect finishes executing, and still no luck. Any help or other solutions are welcome! My isFBConnected wrapper function is stated below:

        function isFBConnected(){
            var api_key = '<?=$this->apiKey?>';
            var channel_path = '<?=$this->xdReceiver?>';
            var host_url = '<?=$this->hostUrl?>';
            var servicePathShort = '<?=$this->servicePathShort?>';
            var login = false;

            FB_RequireFeatures(["Api"], function(){
                // Create an ApiClient object, passing app's API key and
                // a site relative URL to xd_receiver.htm
                FB.Facebook.init(api_key, channel_path);
                var api = FB.Facebook.apiClient;
                // If FB user session exists - load stats data
                if(api.get_session()!=null){
                    if(api.get_session().uid!='' && api.get_session().uid!=undefined){
                        login = true;
                        alert(api.get_session().uid);
                        return true;
                    }
                }
                login = false;
                return false;
            });
            return false;
        }

    Read the article

  • GUID primary/foreign key dilemma in SQL Server

    - by Xience
    Hi guys, I am faced with the dilemma of changing my primary keys from int identities to GUIDs. I'll put my problem straight up: it's a typical retail management app, with POS and back-office functionality, and about 100 tables. The database synchronizes with other databases and receives/sends new data. Most tables don't have frequent inserts, updates or select statements executing on them. However, some do have frequent inserts and selects on them, e.g. the products and orders tables. Some tables have up to 4 foreign keys in them. If I changed my primary keys from 'int' to 'Guid', would there be a performance issue when inserting or querying data from tables that have many foreign keys? I know people have said that indexes will be fragmented and that 16 bytes is an issue. Space wouldn't be an issue in my case, and apparently index fragmentation can also be taken care of using the NEWSEQUENTIALID() function. Can someone tell me, from their experience, if GUIDs will be problematic in tables with many foreign keys? I'll be much appreciative of your thoughts on it...

    Read the article

  • RSKeyMgmt -r unable to remove installation ID

    - by Eves
    I have backed up databases on one 2005 SQL Server and have restored those databases on a second 2005 SQL Server. I am currently trying to remove the new server's OLD key instance ID using the Reporting Services Key Manager (RSKeyMgmt -r). Prior to running the removal command, the list of current instances shows the new server's OLD instance ID as well as the NEW instance ID from the first SQL Server. Executing the RSKeyMgmt -r command results in "The command completed successfully". However, when I recheck the listing of current instance IDs I see both the OLD and NEW instance IDs. In addition, when I check the Application Event Viewer I see an error:

        Report Server Windows Service (MSSQLSERVER) has not been granted access to the catalog content

    Does anyone know why I would be getting the above application error? Or does anyone know what I would need to do to give the Report Server Windows Service access to the catalog? The first SQL Server where the databases were backed up is an Enterprise edition SQL Server, and the second SQL Server where the databases were restored is Standard edition. Could this be the cause of the problem? Is there a way to make this backup and restore migration work?

    Read the article

  • Easy way to lock a file on a remote machine (Windows)?

    - by roufamatic
    I've tracked down an error in my logs and am trying to reproduce it. My theory is that a file sometimes gets locked in a specific folder, and when the application (ASP.NET) tries to delete that folder it hangs. I don't have the application running on my own machine, so I'm debugging this on a remote server. But for the life of me, I can't seem to figure out a way to lock a file that prevents it from being deleted by the process. My first thought was to map the network path to a local drive and just leave a command prompt open to that folder. Locally that always fouls up my folder deletes, but apparently SMB is a bit more robust and doesn't grant me a lock. After that I created an infinite-loop VBScript in the folder and executed it remotely; the file was deleted out from underneath the executing code. Man! I then tried creating a file on the server in that folder and removing all permissions. That didn't do the trick. I don't have access to the IIS settings, so perhaps it's running under a privileged system account. So: what's a program that you know is free and that I can quickly use to create an exclusive lock on a file so I can test my delete theory? Like a really, really bad Notepad clone or something. :-)
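    If a tiny script counts as a free program, one option (not from the original post) is a few lines of Python run on the server itself. This is only a sketch of the idea: Windows-only, and the path is a placeholder for a file that already exists in the folder under test:

        # Sketch: hold the file open and take a byte-range lock until Enter is
        # pressed, so a folder delete attempted in the meantime should fail.
        import msvcrt
        import os
        import sys

        path = r"C:\inetpub\testfolder\lockme.txt"   # placeholder; file must exist
        f = open(path, "r+b")                        # an open CRT handle already blocks delete on Windows
        msvcrt.locking(f.fileno(), msvcrt.LK_NBLCK, os.path.getsize(path) or 1)

        print("File is locked - press Enter to release")
        sys.stdin.readline()
        f.close()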

    Read the article

  • Where in a Maven project's path should I put configuration files that are not considered resources

    - by Paralife
    I have a simple Java Maven project. One of my classes, when executing, needs to load an XML configuration file from the classpath. I don't want to package such an XML file when producing the jar, but I do want to include a default XML file in a zip assembly under a conf subfolder, and I also want this default XML to be available in the unit tests so I can test against it. As I see it, there are 2 possible places for this default XML:

    1. src/main/resources/conf/default.xml
    2. src/main/conf/default.xml

    Both solutions demand special pom actions. In solution 1, I get the automatic copy to the target folder during the build, which means it is available in testing, but I also get it in the produced jar, which I don't want. In solution 2, I get the jar as I want it (free of the XML), but I have to manually copy the XML to the target folder for it to be available in testing. (I don't want to add src's subfolders to the test classpath; I think that is bad practice.) The question is: which of the two is the best solution? If the correct one is 2, what is the best way to copy the file to the target folder? Is there any other solution better and more common than those two? (I also read http://stackoverflow.com/questions/465001/where-should-i-put-application-configuration-files-for-a-maven-project but I would like to know the most "correct" solution from the "convention over configuration" point of view, and that link provides some configuration-type solutions but no convention-oriented one. Maybe there isn't one, but I ask anyway. Also, the solutions provided involve the AntRun plugin and the appassembler plugin, and I wonder if I could do it without them.)

    Read the article

  • Tools to document/visualize call graph?

    - by Dave Griffiths
    Hi, having recently joined a project with a vast amount of code to get to grips with, I would like to start documenting and visualizing some of the flows through the call graph to give me a better understanding of how everything fits together. This is what I would like to see in my ideal tool:

    - every node is a function/method
    - nodes are connected if one function can call another
    - an optional square box in between detailing the conditions under which the call is made (or a label icon you can hover over, like a tooltip)
    - also an icon on the edge describing parameters
    - hover over a node and a description is displayed
    - optional icons on a node to display pseudo code
    - scenario/domain view: display a subset of the complete diagram for a particular use-case
    - slide view mode: for each frame, the currently executing function is highlighted
    - plenty of options over what to display, to reduce on-screen clutter

    The interactive use of such a tool is key; I'm not looking for a Graphviz-type solution because there would be too much clutter. The ability to form a view of a subset of the entire graph would be very handy (maybe with the unimportant clutter greyed out). I don't need automatic generation from source code; I'm happy to enter it manually. Almost like a mind map. Does that make sense? If you are not aware of such a tool, do you also think it would be useful? (Just in case I decide to go and scratch that itch one day!)

    Read the article

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. tests run against specific XML files) just throw System.IO.DirectoryNotFoundException. The reason for this is clear: it's looking for the supporting XML files loaded by various unit tests in the wrong folder. The way my unit tests are structured looks like this:

        +-- project folder
            +-- unit tests folder
                +-- test.xml
                +-- test.cs
            +-- project file.xaml
            +-- project file.xaml.cs

    All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies of the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to just put the testing XML data in code, rather than loading it from a file. I would rather not do this.

    Read the article

  • Connecting an overloaded PyQt signal using new-style syntax

    - by Claudio
    I am designing a custom widget which is basically a QGroupBox holding a configurable number of QCheckBox buttons, where each one of them should control a particular bit in a bitmask represented by a QBitArray. In order to do that, I added the QCheckBox instances to a QButtonGroup, with each button given an integer ID:

        def populate(self, num_bits, parent = None):
            """ Adds check boxes to the GroupBox according to the bitmask size """
            self.bitArray.resize(num_bits)
            layout = QHBoxLayout()
            for i in range(num_bits):
                cb = QCheckBox()
                cb.setText(QString.number(i))
                self.buttonGroup.addButton(cb, i)
                layout.addWidget(cb)
            self.setLayout(layout)

    Then, each time a user clicks on a checkbox contained in self.buttonGroup, I'd like self.bitArray to be notified so I can set/unset the corresponding bit in the array. For that I intended to connect QButtonGroup's buttonClicked(int) signal to QBitArray's toggleBit(int) method and, to be as Pythonic as possible, I wanted to use new-style signal syntax, so I tried this:

        self.buttonGroup.buttonClicked.connect(self.bitArray.toggleBit)

    The problem is that buttonClicked is an overloaded signal, so there is also the buttonClicked(QAbstractButton*) signature. In fact, when the program is executing I get this error when I click a check box:

        The debugged program raised the exception unhandled TypeError
        "QBitArray.toggleBit(int): argument 1 has unexpected type 'QCheckBox'"

    which clearly shows the toggleBit method received the buttonClicked(QAbstractButton*) signal instead of the buttonClicked(int) one. So, the question is: how can we specify, using new-style syntax, that self.buttonGroup should emit the buttonClicked(int) signal instead of the default overload, buttonClicked(QAbstractButton*)?
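    For what it's worth, new-style syntax lets you select an overload by indexing the signal with its argument types. Below is a minimal, self-contained sketch of that idea (PyQt4 module names are assumed, and the throwaway widget is a stand-in for the poster's class, not their code):

        # Sketch: pick the buttonClicked(int) overload explicitly by indexing
        # the overloaded signal with [int]; otherwise the default
        # buttonClicked(QAbstractButton*) overload is used.
        import sys
        from PyQt4.QtCore import QBitArray
        from PyQt4.QtGui import (QApplication, QButtonGroup, QCheckBox,
                                 QHBoxLayout, QWidget)

        app = QApplication(sys.argv)

        bits = QBitArray()
        bits.resize(4)

        panel = QWidget()
        layout = QHBoxLayout(panel)
        group = QButtonGroup(panel)
        for i in range(4):
            cb = QCheckBox(str(i))
            group.addButton(cb, i)          # each button gets an integer id
            layout.addWidget(cb)

        # The [int] index selects the int overload, so toggleBit receives the
        # button id instead of the QCheckBox instance.
        group.buttonClicked[int].connect(bits.toggleBit)

        panel.show()
        sys.exit(app.exec_())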

    Read the article

  • Magento config XML for adding a controller action to a core admin controller

    - by N. B.
    I'm trying to add a custom action to a core controller by extending it in a local module. Below is the class definition, which resides in magento1_3_2_2/app/code/local/MyCompany/MyModule/controllers/Catalog/ProductController.php:

        class MyCompany_MyModule_Catalog_ProductController extends Mage_Adminhtml_Catalog_ProductController
        {
            public function massAttributeSetAction(){
                ...
            }
        }

    Here is my config file at magento1_3_2_2/app/code/local/MyCompany/MyModule/etc/config.xml:

        ...
        <global>
            <rewrite>
                <mycompany_mymodule_catalog_product>
                    <from><![CDATA[#^/catalog_product/massAttributeSet/#]]></from>
                    <to>/mymodule/catalog_product/massAttributeSet/</to>
                </mycompany_mymodule_catalog_product>
            </rewrite>
            <admin>
                <routers>
                    <MyCompany_MyModule>
                        <use>admin</use>
                        <args>
                            <module>MyCompany_MyModule</module>
                            <frontName>MyModule</frontName>
                        </args>
                    </MyCompany_MyModule>
                </routers>
            </admin>
        </global>
        ...

    However, https://example.com/index.php/admin/catalog_product/massAttributeSet/ simply yields an admin 404 page. I know that the module is active - other code is executing fine. I feel it's simply a problem with my XML syntax. Am I going about this the right way? I'm hesitant because I'm not actually rewriting a controller method - I'm adding one entirely. However, it does make sense in that the original admin URL won't respond to that action name and it will need to be redirected. I'm using Magento 1.3.2.2. Thanks for any guidance.

    Read the article

  • Importing into an Exported object with MEF

    - by Nathan W
    I'm sorry if this question has already been asked 100 times, but I'm really struggling to get it to work. Say I have three projects:

    - Core.dll: has common interfaces
    - Shell.exe: loads all modules in the assembly folder; references Core.dll
    - ModuleA.dll: exports the name and version of the module; references Core.dll

    Shell.exe has an [Export] that contains a single instance of a third-party application that I need to inject into all loaded modules. So far the code that I have in Shell.exe is:

        static void Main(string[] args)
        {
            ThirdPartyApp map = new ThirdPartyApp();
            var ad = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            var dircatalog = new DirectoryCatalog(".");
            var a = new AggregateCatalog(dircatalog, ad);
            // Not too sure what to do here.
        }

        class Test
        {
            [Export(typeof(ThirdPartyApp))]
            public ThirdPartyApp Instance { get; set; }

            [Import(typeof(IModule))]
            public IModule Module { get; set; }
        }

    I need to create an instance of Test, load Instance with map from the Main method, then load the Module from ModuleA.dll that is in the executing directory, then [Import] Instance into the loaded module. In ModuleA I have a class like this:

        [Export(typeof(IModule))]
        class Module : IModule
        {
            [Import(typeof(ThirdPartyApp))]
            public ThirdPartyApp Instance { get; set; }
        }

    I know I'm halfway there; I just don't know how to put it all together, mainly loading up Test with an instance of map from Main. Could anyone help me with this?

    Read the article

  • curl_multi_exec stops if one URL is a 404; how can I change that?

    - by Rob
    Currently, my cURL multi exec stops if one URL it connects to doesn't work, so a few questions:

    1: Why does it stop? That doesn't make sense to me.
    2: How can I make it continue?

    EDIT: Here is my code:

        $SQL = mysql_query("SELECT url FROM shells");
        $mh = curl_multi_init();
        $handles = array();
        while($resultSet = mysql_fetch_array($SQL)){
            //load the urls and send GET data
            $ch = curl_init($resultSet['url'] . $fullcurl);
            //Only load it for two seconds (Long enough to send the data)
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }

        // Create a status variable so we know when exec is done.
        $running = null;
        //execute the handles
        do {
            // Call exec. This call is non-blocking, meaning it works in the background.
            curl_multi_exec($mh,$running);
            // Sleep while it's executing. You could do other work here, if you have any.
            sleep(2);
            // Keep going until it's done.
        } while ($running > 0);

        // For loop to remove (close) the regular handles.
        foreach($handles as $ch) {
            // Remove the current array handle.
            curl_multi_remove_handle($mh, $ch);
        }
        // Close the multi handle
        curl_multi_close($mh);

    Read the article

  • KODO: how to set up a fetch plan for bidirectional relationships?

    - by BestPractices
    Running KODO 4.2 and having an issue with inefficient queries being generated by KODO. This happens when fetching an object that contains a collection where that collection has a bidirectional relationship back to the first object:

        class Classroom {
            List<Student> _students;
        }

        class Student {
            Classroom _classroom;
        }

    If we create a fetch plan to get a list of Classrooms and their corresponding Students by setting up the following:

        fetchPlan.addField(Classroom.class, "_students");

    this will result in two queries (get the classrooms and then get all students that are in those classrooms), which is what we would expect. However, if we include the reference back to the classroom in our fetch plan in order for the _classroom field to get populated, by doing fetchPlan.addField(Student.class, "_classroom"), this will result in X additional queries, where X is the number of students in each classroom. Can anyone explain how to fix this? KODO already has the original Classroom objects at the point where it executes the queries that retrieve the Classroom objects and set them in each Student object's _classroom field, so I would expect KODO to simply set those objects in the _classroom field of each Student accordingly and not go back to the database. Once again, the documentation is sorely lacking with Kodo/JDO/OpenJPA, but from what I've read it should be able to do this more efficiently. Note: EAGER_FETCH.PARALLEL is turned on, and I have tried this with caching (query and data caches) turned on and off; there is no difference in the resultant queries.

    Read the article

  • document.write not working when loading an external JavaScript source

    - by jadent
    I'm trying to load an external JavaScript file dynamically into an HTML element to preview an ad tag. The script loads and executes, but the script contains document.write, which has an issue executing properly - yet there are no errors.

        <html>
        <head>
            <script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.3/jquery.min.js"></script>
            <script type="text/javascript">
                $(function() {
                    source = 'http://ib.adnxs.com/ttj?id=555281';

                    // DOM Insert Approach
                    // -----------------------------------
                    var script = document.createElement('script');
                    script.setAttribute('type', 'text/javascript');
                    script.setAttribute('src', source);
                    document.body.appendChild(script);
                });
            </script>
        </head>
        <body>
        </body>
        </html>

    I can get it to work if:

    - I move the source to the same domain for testing, or
    - the script is modified to use document.createElement and appendChild instead of document.write, like the code above.

    I don't have the ability to modify the script, since it is being generated and hosted by a third party. Does anyone know why the document.write will not work correctly? And is there a way to get around this? Thanks for the help!

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
    Code

    The following query calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data-gathering part of the query twice?

        SELECT
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D,
          (SELECT
             ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
               (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
             ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
             ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
           FROM (
             SELECT
               AVG(D.AMOUNT) as AMOUNT,
               Y.YEAR as YEAR,
               MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
             FROM
               CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
             WHERE
               $X{ IN, C.ID, CityCode } AND
               SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
               S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
               Y.YEAR BETWEEN 1900 AND 2009 AND
               M.YEAR_REF_ID = Y.ID AND
               M.CATEGORY_ID = $P{CategoryCode} AND
               M.ID = D.MONTH_REF_ID AND
               D.DAILY_FLAG_ID <> 'M'
             GROUP BY
               Y.YEAR
           ) t
          ) ymxb
        WHERE
          $X{ IN, C.ID, CityCode } AND
          SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
          S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
          Y.YEAR BETWEEN 1900 AND 2009 AND
          M.YEAR_REF_ID = Y.ID AND
          M.CATEGORY_ID = $P{CategoryCode} AND
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY
          Y.YEAR

    Question

    How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause:

        $X{ IN, C.ID, CityCode } AND
        SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
        S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
        Y.YEAR BETWEEN 1900 AND 2009 AND
        M.YEAR_REF_ID = Y.ID AND
        M.CATEGORY_ID = $P{CategoryCode} AND
        M.ID = D.MONTH_REF_ID AND
        D.DAILY_FLAG_ID <> 'M'

    Related

    http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql

    Thank you!
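    Not part of the original post, but for readers untangling the ymxb sub-select: it computes the ordinary least-squares fit y = mx + b over the grouped (x, y) = (YEAR, AVG(AMOUNT)) pairs. Written out as the standard identities, with n the number of grouped rows:

        m = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2}
        \qquad
        b = \frac{\sum y_i \sum x_i^2 - \sum x_i \sum x_i y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2}

    The SQL writes both with numerator and denominator negated, which is algebraically the same thing; any rewrite that computes these sums once (for example in a single derived table) preserves the result.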

    Read the article

  • Backbone.js Collection Iteration Using .each()

    - by the_archer
    I've been doing some Backbone.js coding and have come across a particular problem where I am having trouble iterating over the contents of a collection. The line Tasker_TodoList.each(this.addOne, this); in the addAll function in AppView is not executing properly for some reason, throwing the error:

        Uncaught TypeError: undefined is not a function

    The code in question is:

        $(function() {
            var Todo = Backbone.Model.extend({
                defaults: {
                    title: "Some Title...",
                    status: 0 //not completed
                }
            });

            var TodoList = Backbone.Collection.extend({
                model: Todo,
                localStorage: new Store('tasker')
            });

            var Tasker_TodoList = new TodoList();

            var TodoView = Backbone.View.extend({
                tagName: 'li',
                template: _.template($('#todoTemplate').html()),
                events: {
                    'click .delbtn': 'delTodo'
                },
                initialize: function(){
                    console.log("a new todo initialized");
                    //this.model.on('change', this.render, this);
                },
                render: function(){
                    this.$el.html(this.template(this.model.toJSON()));
                    return this;
                },
                delTodo: function(){
                    console.log("deleted todo");
                }
            });

            var AppView = Backbone.View.extend({
                el: 'body',
                events: {
                    'click #addBtn': 'createOnClick'
                },
                initialize: function(){
                    Tasker_TodoList.fetch();
                    Tasker_TodoList.on('add', this.addAll);
                    console.log(Tasker_TodoList);
                },
                addAll: function(){
                    $('#tasksList').html('');
                    console.log("boooooooma");
                    Tasker_TodoList.each(this.addOne, this);
                },
                addOne: function(todo){
                    console.log(todo);
                },
                createOnClick: function(){
                    Tasker_TodoList.create();
                }
            });

            var Tasker = new AppView();
        });

    Can somebody help me find out what I am doing wrong? Thank you all for your help :-)

    Read the article

  • Cannot exit in CodeIgniter constructor

    - by gzg
    I'm basically trying to do session control. If the user is logged in, it's OK to move on. But if he's not logged in, then it should show a log-in screen and then die. However, when I use die or exit in the constructor, it does not show the log-in screen; it immediately dies. The code is as follows:

        private $username = null;
        private $mongoid = null;
        private $neoid = null;

        public function __construct(){
            parent::__construct();

            // session to global
            $this->username = $this->session->userdata( 'username');
            $this->mongoid = $this->session->userdata( 'mongoid');
            $this->neoid = $this->session->userdata( 'neoid');

            // check if user is logged in
            if( $this->username == "" || empty( $this->username)){
                $this->load->view( 'access/login');
                die;
            }
        }

    It shows the log-in page if die is not written there, but with die it does not show. Why do I want to use die? Because if I don't use it, execution moves on to the index function, and I don't want the index function to execute if the user is not logged in. What is wrong here? What should I use to stop executing?

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding, and I create a closure. The closure will consist of the statements I want executed plus the so-called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise?

    I was recently reading about Objective-C's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If copying is indeed being done, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere?

    I think that misunderstanding some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference, and thus Objective-C is a somewhat unique situation in having to deal with copying what's on the stack.
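    As a point of comparison (in Python, chosen purely for brevity; it is not one of the languages discussed above), here is a minimal sketch of the more common capture-by-reference behaviour, where every closure sees the live binding in the referencing environment rather than a snapshot:

        # Sketch: closures here keep a reference to the enclosing binding,
        # so mutations made after creation are visible when the closure runs.
        def make_counter():
            count = 0
            def increment():
                nonlocal count          # refer to the enclosing binding; nothing is copied
                count += 1
                return count
            return increment

        counter = make_counter()
        print(counter(), counter())      # 1 2 - both calls share one live binding

        # The classic consequence of capture-by-reference:
        callbacks = [lambda: i for i in range(3)]
        print([f() for f in callbacks])  # [2, 2, 2] - every closure shares the same i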

    Read the article

  • Corrupt UTF-8 Characters with PHP 5.2.10 and MySQL 5.0.81

    - by jkndrkn
    We have an application hosted on both a local development server and a live site. We are experiencing UTF-8 corruption issues and are looking to figure out how to resolve them. The system runs on symfony 1.0 with Propel. On our development server we are running PHP 5.2.0 and MySQL 5.0.32, and we do not experience corrupted UTF-8 characters there. On our live site, PHP 5.2.10 and MySQL 5.0.81 are running. On that server, certain characters such as ô´ and S are corrupted once they are stored in the database. The corrupted characters show up as either question marks or approximations of the original character with adjacent question marks. Examples of corruption:

        Uncorrupted: ô´    Corrupted: ô?
        Uncorrupted: S     Corrupted: ?

    We are currently using the following techniques on both the development and live servers:

    - Executing the following queries prior to execution of any other queries:
      SET NAMES 'utf8' COLLATE 'utf8_unicode_ci'
      SET CHARSET 'utf8'
    - Setting the <meta> Content-Type value to:
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    - Adding the following to our .htaccess file:
      AddDefaultCharset utf-8
    - Using mb_* (multibyte) PHP functions where necessary.
    - Being sure to set database columns to use utf8_unicode_ci collation.

    These techniques are sufficient for our development site, but do not work on the live site. On the live site I've also tried adding mysql_set_encoding('ut8', $mysql_connection), but this does not help either. I have found some evidence that newer versions of PHP and MySQL mishandle UTF-8 character encodings.

    Read the article

  • ASP.NET MVC 2.0 + implementation of an IRouteHandler does not fire

    - by Peter
    Can anybody please help me with this? I have no idea why public IHttpHandler GetHttpHandler(RequestContext requestContext) is not executing. In my Global.asax.cs I have:

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                routes.MapRoute(
                    "Default",                                              // Route name
                    "{controller}/{action}/{id}",                           // URL with parameters
                    new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
                );

                routes.Add("ImageRoutes", new Route("Images/{filename}", new CustomRouteHandler()));
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }

        // CustomRouteHandler implementation is below
        public class CustomRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                // IF I SET A BREAK POINT HERE IT DOES NOT HIT FOR SOME REASON.
                string filename = requestContext.RouteData.Values["filename"] as string;

                if (string.IsNullOrEmpty(filename))
                {
                    // return a 404 HttpHandler here
                }
                else
                {
                    requestContext.HttpContext.Response.Clear();
                    requestContext.HttpContext.Response.ContentType = GetContentType(requestContext.HttpContext.Request.Url.ToString());

                    // find physical path to image here.
                    string filepath = requestContext.HttpContext.Server.MapPath("~/logo.jpg");

                    requestContext.HttpContext.Response.WriteFile(filepath);
                    requestContext.HttpContext.Response.End();
                }
                return null;
            }
        }

    Can anybody tell me what I'm missing here? GetHttpHandler simply does not fire. I haven't changed anything in the web.config either. What am I missing? Please help.

    Read the article

  • Ribbon not showing up with dynamically generated ribbon XML from the database

    - by ydobonmai
    I am trying to form ribbon XML with the data from the database and following is what I wrote:- XNamespace xNameSpace = "http://schemas.microsoft.com/office/2006/01/customui"; XDocument document = new XDocument(); document.Add( new XElement (xNameSpace+"customUI" , new XElement("ribbon" , new XElement("tabs")))); // more code to add the groups and the controls with-in the groups ....... // code below to add ribbon XML to the document and to add the relationship RibbonExtensibilityPart ribbonExtensibilityPart = myDoc.AddNewPart<RibbonExtensibilityPart>(); ribbonExtensibilityPart.CustomUI = new DocumentFormat.OpenXml.Office.CustomUI.CustomUI(ribbonXml.ToString()); myDoc.CreateRelationshipToPart(ribbonExtensibilityPart); I don't see any error executing the above. However, when I open the changed document, I dont see my ribbon added. I see following in the CustomUI/CustomUI.xml inside the word:- <?xml version="1.0" encoding="utf-8"?><customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui"> <ribbon xmlns=""> <tabs> ..... I am not sure how the "xmlns" attribute is getting added to the ribbon element. When I remove that attribute, the ribbon gets showed. Could anybody throw any idea on where am I going wrong?

    Read the article
