Search Results

Search found 10533 results on 422 pages for 'task organization'.


  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (at least I thought so) task all day now. I've tried SQL Server Management Studio's Import Data feature:

    1. Create an empty database
    2. Tasks - Import Data...
    3. .NET Framework Data Provider for Odbc
    4. Valid DSN (verified it connects)
    5. Copy data from one or more tables or views
    6. Check 1 VERY simple table
    7. Click Preview

    This produces the error:

    ```text
    The preview data could not be retrieved.
    ADDITIONAL INFORMATION:
    ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error in
    your SQL syntax; check the manual that corresponds to your MySQL server version for
    the right syntax to use near '"table_name"' at line 1 (myodbc5.dll)
    ```

    A similar error occurs if I go through the rest of the wizard and perform the operation. The failed step is "Setting Source Connection"; the error refers to retrieving column information and then lists the message above. It can retrieve column information just fine when I modify column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).
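
    A side note (my assumption, not stated in the question): the quoted identifier in the error suggests the wizard is sending ANSI double-quoted names, which MySQL rejects unless ANSI_QUOTES is enabled. A minimal way to confirm, assuming a table actually named table_name:

    ```sql
    -- By default MySQL treats "..." as a string literal, so this fails
    -- with the same 42000 syntax error the wizard reports:
    SELECT * FROM "table_name";

    -- With ANSI_QUOTES enabled for the session, the same statement is legal:
    SET SESSION sql_mode = 'ANSI_QUOTES';
    SELECT * FROM "table_name";
    ```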

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With:

    - Mac OS 10.8.4
    - Node 0.10.12
    - npm 1.3.1
    - grunt-cli 0.1.9
    - yo 1.0.0-rc.1
    - bower 0.9.2
    - [email protected]

    I encounter the following error with a clean yo angular project, followed by grunt server then grunt test:

    ```text
    Running "connect:test" (connect) task
    Fatal error: Port 9000 is already in use by another process.
    ```

    I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which seems odd for the test-runner to also be trying to use 9000.) But I get that same error on attempting to start the test runner. Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.
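
    The asker's own observation points at the likely culprit: in the generated Gruntfile the connect:test target can end up on the same port as the dev server, so grunt test collides with the still-running grunt server. A hedged sketch of the usual edit, assuming the stock generator-angular Gruntfile layout of that era:

    ```js
    // Gruntfile.js: give the test server its own port; 9001 is an arbitrary choice
    connect: {
      options: {
        port: 9000,
        hostname: 'localhost'
      },
      test: {
        options: {
          port: 9001  // previously 9000, clashing with the running dev server
        }
      }
    }
    ```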

  • Get active window title in X

    - by dutt
    I'm trying to get the title of the active window. The application is a background task, so if the user has Eclipse open the function returns "Eclipse - blabla"; it's not getting the window title of my own window. I'm developing this in Python 2.6 using PyQt4. My current solution, borrowed and slightly modified from an old answer here at SO, looks like this:

    ```python
    from subprocess import Popen, PIPE

    def get_active_window_title():
        title = ''
        root_check = ''
        root = Popen(['xprop', '-root'], stdout=PIPE)
        if root.stdout != root_check:
            root_check = root.stdout
            for i in root.stdout:
                if '_NET_ACTIVE_WINDOW(WINDOW):' in i:
                    id_ = i.split()[4]
                    id_w = Popen(['xprop', '-id', id_], stdout=PIPE)
                    for j in id_w.stdout:
                        if 'WM_ICON_NAME(STRING)' in j:
                            if title != j.split()[2]:
                                return j.split("= ")[1].strip(' \n\"')
    ```

    It works for most windows, but not all. For example it can't find my Kopete chat windows, or the name of the application I'm currently developing. My next try looks like this:

    ```python
    import wnck

    def get_active_window_title(self):
        screen = wnck.screen_get_default()
        if screen == None:
            return "Could not get screen"
        window = screen.get_active_window()
        if window == None:
            return "Could not get window"
        title = window.get_name()
        return title
    ```

    But for some reason window is always None. Does somebody have a better way of getting the current window title, or a way to modify one of mine so that it works for all windows?

    Edit: In case anybody is wondering, this is the way I found that seems to work for all windows:

    ```python
    import re
    from subprocess import Popen, PIPE

    def get_active_window_title(self):
        root_check = ''
        root = Popen(['xprop', '-root'], stdout=PIPE)
        if root.stdout != root_check:
            root_check = root.stdout
            for i in root.stdout:
                if '_NET_ACTIVE_WINDOW(WINDOW):' in i:
                    id_ = i.split()[4]
                    id_w = Popen(['xprop', '-id', id_], stdout=PIPE)
                    id_w.wait()
                    buff = []
                    for j in id_w.stdout:
                        buff.append(j)
                    for line in buff:
                        match = re.match("WM_NAME\((?P<type>.+)\) = (?P<name>.+)", line)
                        if match != None:
                            type = match.group("type")
                            if type == "STRING" or type == "COMPOUND_TEXT":
                                return match.group("name")
        return "Active window not found"
    ```

  • How do you clear your mind after 8-10 hours per day of coding?

    - by Bryan
    Related question: Ways to prepare your mind before coding? I'm having a hard time taking my mind off of work projects in my personal time. It's not that I have a stressful job or tight deadlines; I love my job. I find that after spending the whole day writing code and trying to solve problems, I have an extremely hard time getting it out of my mind. I'm constantly thinking about the current project/problem/task. It's keeping me from relaxing, and in the long run it just builds stress. Personal projects help to some extent, but mostly just to distract me. I still have source code bouncing around my head 16 hours a day. I'm still relatively new to the workforce. Have you struggled with this, perhaps as a young developer? How did you overcome it? Can anyone offer general advice on winding down after a long programming session?

  • MySQL query against pseudo-key-value pair data in WordPress custom query

    - by andrevr
    I'm writing a custom WordPress query to use some of the data which the Woothemes Diarise theme creates. Diarise is an event planner theme with calendar, blah, blah... and uses custom fields to store the event start and end dates in the wp_postmeta table, which implements a key-value store. So for each post in the "event" category there are two records in wp_postmeta that I'm interested in, named event_start_date and event_end_date. The task is to compare a tourist's arrival and departure dates with the start and end dates of events, yielding a what's-on list of events available. We thought we'd killed it with a grand flash of logic that goes like this: disregard any event that ends before the tourist arrives, and any that begins after the departure date. I wrote this query:

    ```sql
    SELECT wposts.*
    FROM wp_posts wposts
    LEFT JOIN wp_postmeta wpostmeta ON wposts.ID = wpostmeta.post_id
    LEFT JOIN wp_term_relationships ON (wposts.ID = wp_term_relationships.object_id)
    LEFT JOIN wp_term_taxonomy ON (wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id)
    WHERE wp_term_taxonomy.taxonomy = 'category'
      AND wp_term_taxonomy.term_id IN (3,4)
      AND ( wpostmeta.meta_key = 'event_start_date'
            AND NOT ( concat(subst(wpostmeta.meta_value,7,4),'-',subst(wpostmeta.meta_value,4,2),'-',subst(wpostmeta.meta_value,1,2)) > '2010-07-31' ) )
      AND ( wpostmeta.meta_key = 'event_end_date'
            AND NOT ( concat(subst(wpostmeta.meta_value,7,4),'-',subst(wpostmeta.meta_value,4,2),'-',subst(wpostmeta.meta_value,1,2)) < '2010-05-01' ) )
    ORDER BY wpostmeta.meta_value ASC
    ```

    And, of course, it returns no records. The problem I believe is in the dual reference to wpostmeta.meta_key, but how do I get around that?
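
    The instinct about the dual reference is right: a single wpostmeta row can't have both meta_key values at once, so the two ANDed conditions can never both hold. The usual shape is to join wp_postmeta once per key. A hedged sketch (the aliases are mine; note also that MySQL's function is SUBSTR/SUBSTRING, so subst as written would itself raise an error):

    ```sql
    SELECT wposts.*
    FROM wp_posts wposts
    INNER JOIN wp_postmeta startmeta
            ON startmeta.post_id = wposts.ID AND startmeta.meta_key = 'event_start_date'
    INNER JOIN wp_postmeta endmeta
            ON endmeta.post_id = wposts.ID AND endmeta.meta_key = 'event_end_date'
    -- taxonomy joins as in the original, then:
    WHERE CONCAT(SUBSTR(startmeta.meta_value,7,4),'-',SUBSTR(startmeta.meta_value,4,2),'-',SUBSTR(startmeta.meta_value,1,2)) <= '2010-07-31'  -- starts on/before departure
      AND CONCAT(SUBSTR(endmeta.meta_value,7,4),'-',SUBSTR(endmeta.meta_value,4,2),'-',SUBSTR(endmeta.meta_value,1,2)) >= '2010-05-01'        -- ends on/after arrival
    ORDER BY startmeta.meta_value ASC
    ```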

  • Dynamic table memory usage

    - by Dan
    I use a dynamic table:

    ```html
    <html>
    <body>
    <button id="button">Build table</button>
    <div id="container"></div>
    <script type="text/javascript">
    window.onload = function() {
        var table = null;
        var row = "<tr><td>111111111111111111111111111111111111111111111111111111</td>" +
                  "<td>222222222222222222222222222222222222222222222222222222</td>" +
                  "<td>333333333333333333333333333333333333333333333333333333</td></tr>";
        var data = "";  // note: the original started from null, which prepends "null" to the markup
        for (var i = 0; i < 2000; i++) {
            data += row;
        }
        var obj = document.getElementById("button");
        obj.onclick = function buildTable() {
            document.getElementById("container").innerHTML =
                "<div><table><tbody>" + data + "</tbody></table></div>";
        };
    };
    </script>
    </body>
    </html>
    ```

    Using Chrome's task manager, each time new data is loaded the memory usage increases considerably and doesn't go down, so after some time the app consumes a lot of memory and requires the browser to be closed. Is there any change in the code I can use to solve this, or is it a browser-side problem?
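
    One hedged way to investigate: build the rows as DOM nodes instead of one giant HTML string. The behavior should be equivalent, but the allocations become visible per node in a DevTools heap snapshot, which tells you whether the old table is really being retained or whether Chrome's GC is just reclaiming lazily (task-manager numbers often stay high until a collection runs). The cell text here is a placeholder matching the question's 54-character runs:

    ```js
    function buildTable() {
        var container = document.getElementById("container");
        container.innerHTML = "";                      // drop any previous table first
        var table = document.createElement("table");
        var tbody = document.createElement("tbody");
        for (var i = 0; i < 2000; i++) {
            var tr = document.createElement("tr");
            for (var c = 1; c <= 3; c++) {
                var td = document.createElement("td");
                td.appendChild(document.createTextNode(new Array(55).join(String(c))));
                tr.appendChild(td);
            }
            tbody.appendChild(tr);
        }
        table.appendChild(tbody);
        container.appendChild(table);
    }
    ```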

  • What is the best way to find a process's memory allocations in terms of C# objects

    - by Shantaram
    I have written various C# console-based applications, some of them long-running, some not, which can over time have a large memory footprint. When looking at the Windows performance monitor via the task manager, the same question keeps cropping up in my mind: how do I get a breakdown of the number of objects by type that are contributing to this footprint, and which of those are f-reachable versus those which aren't and hence can be collected? On numerous occasions I've performed a code inspection to ensure that I am not unnecessarily holding on to objects longer than required, and that I'm disposing of objects with the using construct. I have also recently looked at employing the GC.Collect method when I have released a large number of objects (for example, held in a collection which has just been cleared). However, I am not so sure that this made much difference, so I threw that code away. I am guessing that there are tools in the Sysinternals suite that can help to resolve these memory questions, but I am not sure which, or how to use them. The alternative would be to pay for a third-party profiling tool such as JetBrains dotTrace; but I need to make sure that I've explored the free options first before going cap in hand to my manager.
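
    For the record, the free route for exactly these questions ("what types, how many, and why are they still rooted") is usually WinDbg from the Debugging Tools for Windows with the SOS extension. A hedged sketch of a typical session; the type name and address are hypothetical:

    ```text
    .loadby sos mscorwks              // load SOS from the .NET 2.0/3.5 runtime (".loadby sos clr" on 4.x)
    !dumpheap -stat                   // all heap objects grouped by type, with counts and total sizes
    !dumpheap -type MyApp.Widget      // addresses of each instance of one suspect type
    !gcroot 0x0135ca30                // the reference chain keeping a given instance reachable
    ```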

  • Controller actions appear to be synchronous though on different requests?

    - by Oded
    I am under the impression that the below code should work asynchronously. However, when I am looking at Firebug, I see the requests fired asynchronously but the results coming back synchronously.

    Controller:

    ```csharp
    [HandleError]
    public class HomeController : Controller
    {
        public ActionResult Status()
        {
            return Content(Session["status"].ToString());
        }

        public ActionResult CreateSite()
        {
            Session["status"] += "Starting new site creation";
            Thread.Sleep(20000); // Simulate long running task
            Session["status"] += "<br />New site creation complete";
            return Content(string.Empty);
        }
    }
    ```

    Javascript/jQuery:

    ```javascript
    $(document).ready(function () {
        $.ajax({
            url: '/home/CreateSite',
            async: true,
            success: function () {
                mynamespace.done = true;
            }
        });
        setTimeout(mynamespace.getStatus, 2000);
    });

    var mynamespace = {
        counter: 0,
        done: false,
        getStatus: function () {
            $('#console').append('.');
            if (mynamespace.counter == 4) {
                mynamespace.counter = 0;
                $.ajax({
                    url: '/home/Status',
                    success: function (data) {
                        $('#console').html(data);
                    }
                });
            }
            if (!mynamespace.done) {
                mynamespace.counter++;
                setTimeout(mynamespace.getStatus, 500);
            }
        }
    };
    ```

    Additional information: IIS 7.0, Windows 2008 R2 Server, running in a VMware virtual machine.

    Can anyone explain this? Shouldn't the Status action be returning practically immediately instead of waiting for CreateSite to finish?

    Edit: How can I get the long running process to kick off and still get status updates?
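
    A likely explanation (an inference on my part, but consistent with the symptoms): both actions touch Session, and ASP.NET holds an exclusive per-session lock for the duration of any request with read-write session access, so requests from the same browser session are serialized; Status queues behind CreateSite's twenty-second sleep. On MVC 3 or later, a hedged sketch of the standard workaround is to give the polling endpoint read-only session access:

    ```csharp
    using System.Web.Mvc;
    using System.Web.SessionState;

    // Read-only session access: requests here no longer wait on the exclusive
    // session lock held by the long-running CreateSite request.
    [SessionState(SessionStateBehavior.ReadOnly)]
    public class StatusController : Controller
    {
        public ActionResult Status()
        {
            return Content((Session["status"] ?? string.Empty).ToString());
        }
    }
    ```

    For the edit's question, the same logic suggests moving the long-running work off the request thread entirely (a queue plus a background worker) and writing progress somewhere that isn't behind the session lock.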

  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and really want some confirmation on what I think the problem is.

    We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = Supported, i.e. it will "inherit" transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table, with pseudocode something like: "if a record doesn't exist that matches these parameters, then write it".

    Now, the issue is that there are three records being passed into this proc, all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match, and another record is created. My understanding is that the first record passed to the proc in the data flow is uncommitted and therefore can't be "read" by the second call. The upshot is that all three records create a row, when logically only the first should.

    In this scenario am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway... Hope that makes sense, and any advice gratefully received. Work-arounds confer god-like status on you.
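
    One hedged caveat on the theory: a connection can always read its own uncommitted writes, so uncommitted data only explains this if the three calls run on separate connections, or if the later existence checks are issued before the earlier insert completes. Either way, check-then-insert is racy whenever duplicate keys are in flight; a sketch of making the proc itself atomic on SQL Server (table, column, and parameter names are hypothetical):

    ```sql
    -- UPDLOCK + HOLDLOCK take key-range locks on the checked range, so two
    -- concurrent calls with the same parameters cannot both pass the check.
    BEGIN TRAN;
    IF NOT EXISTS (SELECT 1
                   FROM dbo.Target WITH (UPDLOCK, HOLDLOCK)
                   WHERE KeyCol = @KeyCol)
        INSERT INTO dbo.Target (KeyCol, Payload) VALUES (@KeyCol, @Payload);
    COMMIT;
    ```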

  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
    Is there a MySQL statement which provides full details of any other open connection or user? For this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose.

    For example: remote ODBC connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. TCP connection two, using PHP on the server's localhost, is running select queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check to make sure there are no pending inserts on any other connection on those specific tables, so it can instead wait until all data is available. If the table is currently being written to, I'd like to spit back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, by table, I'd like to get back via a query the timestamp when connection one began the write, total inserts left to be done, and total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.

    Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() it if the only way is a mysql command-line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.
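
    For what's built in, the closest fits I know of won't give insert counts or bytes remaining, but they do expose what every connection is currently running and for how long, which is often enough to implement the "wait until the writer is done" check. A hedged sketch of what the PHP side could poll (the account needs the PROCESS privilege; mydb is a placeholder):

    ```sql
    SHOW FULL PROCESSLIST;                        -- per connection: user, current statement, state, seconds elapsed
    SHOW OPEN TABLES FROM mydb WHERE In_use > 0;  -- tables with table locks held or pending (relevant for MyISAM writers)
    ```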

  • Hundreds of custom UserControls create thousands of USER Objects

    - by Andy Blackman
    I'm creating a dashboard application that shows hundreds of "items" on a FlowLayoutPanel. Each "item" is a UserControl that is made up of 12 or so labels. My app queries a database and then creates an "item" instance for each record, populating the labels and textboxes with data before adding it to the FlowLayoutPanel.

    After adding about 560 items to the panel, I noticed that the USER Objects count in my Task Manager had gone up to about 7300, which was much, much larger than any other app on my machine. I did a quick spot of mental arithmetic (OK, I might have used calc.exe) and figured that 560 * 13 (12 labels plus the UserControl itself) is 7280. So that suddenly gave away where all the objects were coming from. Knowing that there is a 10,000 USER object limit before Windows throws in the towel, I'm trying to figure out better ways of drawing these items onto the FlowLayoutPanel. My ideas so far are as follows:

    1) Owner-draw the "item", using Graphics.DrawString and DrawImage in place of many of the labels. I'm hoping this will mean 1 item = 1 USER object, not 13.

    2) Have one instance of the "item"; then, for each record, populate the instance and use the Control.DrawToBitmap() method to grab an image, and use that in the FlowLayoutPanel (or similar), as sketched below.

    So... does anyone have any other suggestions?

    P.S. It's a zoomable interface, so I have already ruled out paging, as there is a requirement to see all items at once :( Thanks everyone.
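
    A minimal sketch of idea 2, assuming a DashboardItem UserControl with a Populate method (both names hypothetical). Each record ends up as a bitmap inside a PictureBox, which is one USER object per item (roughly 560 total) instead of thirteen:

    ```csharp
    using (var template = new DashboardItem())
    {
        template.CreateControl();                       // make sure a handle exists for rendering
        foreach (var record in records)
        {
            template.Populate(record);                  // fill the 12 labels from the record
            var bmp = new Bitmap(template.Width, template.Height);
            template.DrawToBitmap(bmp, new Rectangle(Point.Empty, template.Size));
            flowLayoutPanel1.Controls.Add(new PictureBox
            {
                Image = bmp,
                Size = template.Size
            });
        }
    }
    ```

    The trade-off: the items become static images, so any per-item interactivity has to be reconstructed with hit-testing, which may matter for a zoomable interface.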

  • Compromising design & code quality to integrate with existing modules

    - by filip-fku
    Greetings! I inherited a C#.NET application I have been extending and improving for a while now. Overall it was obviously a rush job (or whoever wrote it was seemingly less competent than myself). The app pulls some data from an embedded device and displays and manipulates it. At the core is a communications thread in the main application form which executes a 600+ line method that calls functions all over the place, implementing a state machine: lots of if-this-state-then-do-that code. Interaction with the device is done by setting the state/mode globally and letting the thread do its thing. (This is just one example of the badness of the code; overall it is not very OO-like, and it reminds me of the style of the embedded C the device firmware is written in.)

    My problem is that this piece of code is central to the application, and the software, communications protocol, and device firmware are not documented at all. Obviously, to carry on with my work I have to interact with this code. What I would like some guidance on is whether it is worth scrapping this code and trying to piece together something more reasonable from the information I can reverse-engineer. I can't decide! The reason I don't want to refactor is that the code already works, and changing it will surely be a long, laborious, and unpleasant task. On the flip side, not refactoring means I sometimes have to compromise the design of other modules so that I can call my code from this state machine! I've heard "if it ain't broke, don't fix it!", so I am wondering whether that should apply when "it" is influencing the design of future code. Any advice would be appreciated! Thanks!

  • How to handle Win+Shift+Left/Right on Win7 with custom WM_GETMINMAXINFO logic?

    - by Steven Robbins
    I have a custom window implementation in a WPF app that hooks WM_GETMINMAXINFO as follows:

    ```csharp
    private void MaximiseWithTaskbar(System.IntPtr hwnd, System.IntPtr lParam)
    {
        MINMAXINFO mmi = (MINMAXINFO)Marshal.PtrToStructure(lParam, typeof(MINMAXINFO));
        System.IntPtr monitor = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);

        if (monitor != System.IntPtr.Zero)
        {
            MONITORINFO monitorInfo = new MONITORINFO();
            GetMonitorInfo(monitor, monitorInfo);
            RECT rcWorkArea = monitorInfo.rcWork;
            RECT rcMonitorArea = monitorInfo.rcMonitor;
            mmi.ptMaxPosition.x = Math.Abs(rcWorkArea.left - rcMonitorArea.left);
            mmi.ptMaxPosition.y = Math.Abs(rcWorkArea.top - rcMonitorArea.top);
            mmi.ptMaxSize.x = Math.Abs(rcWorkArea.right - rcWorkArea.left);
            mmi.ptMaxSize.y = Math.Abs(rcWorkArea.bottom - rcWorkArea.top);
            mmi.ptMinTrackSize.x = Convert.ToInt16(this.MinWidth * (desktopDpiX / 96));
            mmi.ptMinTrackSize.y = Convert.ToInt16(this.MinHeight * (desktopDpiY / 96));
        }

        Marshal.StructureToPtr(mmi, lParam, true);
    }
    ```

    It all works a treat, and it allows me to have a borderless window maximized without having it sit on top of the taskbar, which is great. But it really doesn't like being moved between monitors with the new Win7 keyboard shortcuts. Whenever the app is moved with Win+Shift+Left/Right, the WM_GETMINMAXINFO message is received, as I'd expect, but MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST) returns the monitor the application has just been moved FROM, rather than the monitor it is moving TO, so if the monitors are of differing resolutions the window ends up the wrong size. I'm not sure if there's something else I can call other than MonitorFromWindow, or whether there's a "moving monitors" message I can hook prior to WM_GETMINMAXINFO. I'm assuming there is a way to do it, because "normal" windows work just fine.

  • Database choices

    - by flobadob
    I have a prickly design issue regarding the choice of database technologies to use for a group of new applications. The final suite of applications has the following database requirements:

    - Central databases (more than one database) using MySQL (must be MySQL due to justhost.com).
    - An application to be written which accesses the multiple MySQL databases on the web host. This application will also write to a local serverless database (SQLite/Firebird/VistaDB/whatever).
    - Different flavors of this application will be created for Windows (.NET), Windows Mobile, Android if possible, iPhone if possible.

    So, the design task is to minimise the quantity of code needed to achieve this. This is going to be tricky, since the languages used are already C# / Java (Android) and Objective-C (iPhone). Not too worried about that, but can the work required to implement the various database access layers be minimised? The serverless database will hold similar data to the MySQL server, so some kind of inheritance in the DAL would be useful. Looking at Hibernate/NHibernate, and there is LINQ to whatever. So many choices!
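
    To make the "inheritance in the DAL" idea concrete, a hedged C# sketch of the usual shape: the application programs against one storage-agnostic interface, and each engine gets its own implementation chosen at construction time (all names hypothetical). This only collapses the duplication within the .NET flavors; the Java and Objective-C clients would have to mirror the same interface in their own languages.

    ```csharp
    public class Site
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // The only surface the application code sees; MySQL vs. the local
    // serverless store is an implementation detail behind it.
    public interface ISiteRepository
    {
        Site GetById(int id);
        void Save(Site site);
    }
    ```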

  • MACRO compilation PROBLEM

    - by wildfly
    I was given a primitive task: to find out (and to put in CL) how many numbers in an array are bigger than the following one, meaning if (arr[i] > arr[i+1]) count++;. But I've hit problems, as it has to be a macro, and I am getting errors from TASM. Can someone give me a pointer?

    ```asm
    SortA macro a, l
        LOCAL noes
        irp reg, <si,di,bx>
            push reg
        endm
        xor bx,bx
        xor si,si
        rept l-1        ;; also tried rept 3 : won't compile
            mov bl,a[si]
            inc si
            cmp bl,arr[si]
            jb noes
            inc di
    noes:   add di,0
        endm
        mov cx,di
        irp reg2, <bx,di,si>
            pop reg2
        endm
    endm

    dseg segment
        arr db 10,9,8,7
        len = 4
    dseg ends

    sseg segment stack
        dw 100 dup (?)
    sseg ends

    cseg segment
        assume ds:dseg, ss:sseg, cs:cseg
    start:
        mov ax, dseg
        mov ds,ax
        sortA arr,len
    cseg ends
    end start
    ```

    Errors:

    ```text
    Assembling file:   sorta.asm
    **Error** sorta.asm(51) REPT(4) Expecting pointer type
    **Error** sorta.asm(51) REPT(6) Symbol already different kind: NOES
    **Error** sorta.asm(51) REPT(10) Expecting pointer type
    **Error** sorta.asm(51) REPT(12) Symbol already different kind: NOES
    **Error** sorta.asm(51) REPT(16) Expecting pointer type
    **Error** sorta.asm(51) REPT(18) Symbol already different kind: NOES
    Error messages: 6
    ```
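
    Reading the errors, my take (hedged, not tested under TASM): LOCAL generates one fresh symbol per macro invocation, but REPT expands its body l-1 times within that single invocation, so noes is defined on every iteration, hence "Symbol already different kind: NOES". One way out is to drop the label entirely and count with the carry flag. Note this counts strictly-greater pairs, per the task statement (the original jb version also counts equal pairs), and it compares against the macro parameter a rather than the hard-coded arr:

    ```asm
    rept l-1
        mov bl,a[si]
        inc si
        cmp a[si],bl    ; CF=1 exactly when the next element is below the current one
        adc di,0        ; di += CF -- no label, so REPT has nothing to redefine
    endm
    ```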

  • How to exclude tags folder from triggering build in Teamcity?

    - by Jaya mareedu
    Hello, I recently installed TeamCity 5.0.3. I am trying to set up an automated build for a .NET 2.0 VS2005 project. I use NAnt and the MSBuild task to perform the build. The project structure is a typical SVN layout: svn://localhost/ITools is my repository, and the project is structured as:

    - VisualTrack
      - trunk
      - branches
      - tags

    I created a new project in TeamCity and then created a build configuration for that project. I asked it to kick off a build every time a change is detected in the VisualTrack VCS. I also configured it to create a label in VisualTrack/tags for every successful build.

    The problem I am running into is that a build is triggered every time TeamCity creates a new label under tags. I only want the build to be triggered if a developer commits his or her changes into trunk. The next step I took was to create a build trigger rule to exclude the tags path by specifying a trigger pattern of -:VisualTrack/tags/**, but it looks like it's not working. I believe the pattern I specified is not correct. Can someone please help me resolve this issue? Thanks, Jaya.
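
    A hedged guess at the pattern itself (worth verifying against the TeamCity 5.x docs): trigger rules are matched against paths as the VCS root reports them, so if the root is anchored at the repository root rather than at VisualTrack, scoping the include to trunk can be simpler than chasing the exact exclude form:

    ```text
    +:VisualTrack/trunk/**
    -:VisualTrack/tags/**
    ```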

  • Is there a recommended way to communicate scientific/engineering programming to C developers?

    - by ggkmath
    Hi, I have a lot of MATLAB code that needs to get ported to C (execution speed is critical for this work) as part of a back-end process for a web application. When I attempt to outsource this code to a C developer, I assume (correct me if I'm wrong) that few C developers also understand MATLAB code (things like indexing and memory management are different, etc.).

    I wonder if any C developers out there can recommend a procedure for me to follow to best communicate what the code does? For example, should I provide the MATLAB code and explain what it's doing line by line? Or should I just provide the math/algorithm, explain it in plain English, and let the C developer implement it with this understanding in his/her own way (e.g. can I assume the developer understands how to work with complex math (i.e. imaginary numbers), how to generate histograms, perform an FFT, etc.)? Or is there a better method? I expect I'm not the first to need to do this, so I wonder if any C developers have run into this situation and can share any conventional wisdom on how they'd like this task to be transferred? Thanks in advance for any comments.

  • How to Bind a Command in WPF

    - by MegaMind
    Sometimes we use complex approaches so often that we forget the simplest ways to do a task. I know how to do command binding, but I always use the same approach: create a class that implements the ICommand interface, create a new instance of that class from the view model, and the binding works like a charm. This is the code I use for command binding:

    ```csharp
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
            DataContext = this;
            testCommand = new MeCommand(processor);
        }

        ICommand testCommand;
        public ICommand test { get { return testCommand; } }

        public void processor()
        {
            MessageBox.Show("hello world");
        }
    }

    public class MeCommand : ICommand
    {
        public delegate void ExecuteMethod();
        private ExecuteMethod meth;

        public MeCommand(ExecuteMethod exec)
        {
            meth = exec;
        }

        public bool CanExecute(object parameter)
        {
            return true; // must return true, or the bound control stays disabled
        }

        public event EventHandler CanExecuteChanged;

        public void Execute(object parameter)
        {
            meth();
        }
    }
    ```

    But I want to know the basic way to do this: no third-party DLL, no new class creation. How do I do this simple command binding using a single class, where the actual class implements the ICommand interface and does the work?
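
    For the "no new class" route, WPF does ship one: RoutedCommand plus a CommandBinding on the window, so everything stays built-in. A hedged sketch (the trade-off is that this wires the handler into the window rather than a view model):

    ```csharp
    public partial class MainWindow : Window
    {
        // A built-in command object; no custom ICommand implementation needed.
        public static readonly RoutedCommand Test = new RoutedCommand();

        public MainWindow()
        {
            InitializeComponent();
            CommandBindings.Add(new CommandBinding(
                Test,
                (sender, e) => MessageBox.Show("hello world"),  // Executed handler
                (sender, e) => e.CanExecute = true));           // CanExecute handler
        }
    }
    ```

    In XAML the button would then reference it with something like Command="{x:Static local:MainWindow.Test}".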

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions available directly in .NET 4). Now what about parallelism across a network? I mean an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like this, in C#:

    ```csharp
    // Hypothetical API, as imagined here:
    NetworkParallel.ForEach(myEnumerable, () =>
    {
        // Computing and/or access to web resource or local network database here
    });
    ```

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be:

    - The fact that such a parallel task would be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would really be two distinct applications rather than two threads of the same application.
    - The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network.

    Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it enables developers to easily develop extremely powerful stuff with much less pain?

    Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data, and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with a table that contains tasks. Tasks have a lifecycle, and the status of a task's lifecycle can change. These state transitions are stored in a separate table, tasktransitions. Now I wrote a query to find all open/reopened tasks and recently changed tasks, but I already see with a rather small number of tasks (<1000) that execution time has become very long (0.5s).

    Tasks:

    ```text
    +-------------+---------+------+-----+---------+----------------+
    | Field       | Type    | Null | Key | Default | Extra          |
    +-------------+---------+------+-----+---------+----------------+
    | taskid      | int(11) | NO   | PRI | NULL    | auto_increment |
    | description | text    | NO   |     | NULL    |                |
    +-------------+---------+------+-----+---------+----------------+
    ```

    Tasktransitions:

    ```text
    +------------------+-----------+------+-----+-------------------+----------------+
    | Field            | Type      | Null | Key | Default           | Extra          |
    +------------------+-----------+------+-----+-------------------+----------------+
    | tasktransitionid | int(11)   | NO   | PRI | NULL              | auto_increment |
    | taskid           | int(11)   | NO   | MUL | NULL              |                |
    | status           | int(11)   | NO   | MUL | NULL              |                |
    | description      | text      | NO   |     | NULL              |                |
    | userid           | int(11)   | NO   |     | NULL              |                |
    | transitiondate   | timestamp | NO   |     | CURRENT_TIMESTAMP |                |
    +------------------+-----------+------+-----+-------------------+----------------+
    ```

    Query:

    ```sql
    SELECT tasks.taskid, tasks.description, tasklaststatus.status
    FROM tasks
    LEFT OUTER JOIN (
        SELECT tasktransitions.taskid, tasktransitions.transitiondate, tasktransitions.status
        FROM tasktransitions
        INNER JOIN (
            SELECT taskid, MAX(transitiondate) AS lasttransitiondate
            FROM tasktransitions
            GROUP BY taskid
        ) AS tasklasttransition
            ON tasklasttransition.lasttransitiondate = tasktransitions.transitiondate
           AND tasklasttransition.taskid = tasktransitions.taskid
    ) AS tasklaststatus ON tasklaststatus.taskid = tasks.taskid
    WHERE tasklaststatus.status IS NULL
       OR tasklaststatus.status = 0
       OR tasklaststatus.transitiondate > '2013-09-01';
    ```

    I'm wondering if the database structure is the best choice performance-wise. Could adding indexes help? I already tried to add some, but I don't see great improvements:

    ```text
    +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+
    | Table           | Non_unique | Key_name       | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
    +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+
    | tasktransitions |          0 | PRIMARY        |            1 | tasktransitionid | A         |         896 |     NULL | NULL   |      | BTREE      |
    | tasktransitions |          1 | taskid_date_ix |            1 | taskid           | A         |         896 |     NULL | NULL   |      | BTREE      |
    | tasktransitions |          1 | taskid_date_ix |            2 | transitiondate   | A         |         896 |     NULL | NULL   |      | BTREE      |
    | tasktransitions |          1 | status_ix      |            1 | status           | A         |           3 |     NULL | NULL   |      | BTREE      |
    +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+
    ```

    Any other suggestions?
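
    One hedged rewrite worth benchmarking: replace the derived-table sandwich with a correlated lookup of each task's last transition, which lets MySQL (especially the older versions that materialize derived tables without indexes) use taskid_date_ix directly:

    ```sql
    SELECT t.taskid, t.description, tt.status
    FROM tasks t
    LEFT JOIN tasktransitions tt
           ON tt.taskid = t.taskid
          AND tt.transitiondate = (SELECT MAX(transitiondate)
                                   FROM tasktransitions
                                   WHERE taskid = t.taskid)
    WHERE tt.status IS NULL
       OR tt.status = 0
       OR tt.transitiondate > '2013-09-01';
    ```

    Caveat: if two transitions for the same task can share the exact same timestamp, this (like the original) returns multiple rows for that task; a tie-break on tasktransitionid would then be needed.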

  • Performing measures within the execution of C++ code every t milliseconds

    - by user506901
    Given a while loop and the function ordering as follows:

    ```cpp
    int k = 0;
    int total = 100;
    while (k < total) {
        doSomething();
        if (/* approx. t milliseconds elapsed */) {
            measure();
        }
        ++k;
    }
    ```

    I want to perform measure() every t-th millisecond. However, since doSomething() can finish close to the t-th millisecond since the last execution, it is acceptable to perform the measure after approximately t milliseconds have elapsed since the last measure. My question is: how could this be achieved?

    One solution would be to set a timer to zero and check it after every doSomething(). When it is within the acceptable range, I perform the measure and reset. However, I'm not sure which C++ function I should use for such a task. As far as I can see there are several candidate functions, but the debate on which one is the most appropriate is outside of my understanding. Note that some of the functions actually take into account the time taken by other processes, but I want my timer to only measure the time of the execution of my C++ code (I hope that is clear). Another thing is the resolution of the measurements, as pointed out below. Suppose the medium option of those suggested.
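
    A hedged sketch of the reset-and-check idea using std::clock, which counts processor time consumed by the calling process rather than wall-clock time, matching the "only my code" requirement. Its resolution is implementation-defined (commonly around a millisecond or coarser), so "approximately t" is the realistic contract; doSomething()/measure() are stand-ins for the functions from the question:

    ```cpp
    #include <ctime>

    void doSomething() { /* the work from the question */ }
    void measure()     { /* take the measurement */ }

    int main() {
        const double t_ms = 50.0;  // hypothetical measurement period
        int k = 0;
        const int total = 100;
        std::clock_t last = std::clock();
        while (k < total) {
            doSomething();
            const double elapsed_ms =
                1000.0 * static_cast<double>(std::clock() - last) / CLOCKS_PER_SEC;
            if (elapsed_ms >= t_ms) {
                measure();
                last = std::clock();  // reset the timer after each measure
            }
            ++k;
        }
    }
    ```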

  • Oracle 10.1 and 11.2 produce different XML using the same statement

    - by MindFyer
    I am migrating a database from Oracle 10.1 to 11.2 and I have the following problem. The statement:

    ```sql
    SELECT '<?xml version="1.0" encoding="utf-8" ?>' || (Xml).getClobVal() AS XmlClob
    FROM (
        SELECT XmlElement(
                   "Element1",
                   (
                       SELECT XmlAgg(tpx.Xml)
                       FROM (
                           SELECT XmlElement("Element3", XmlForest('content' AS Element4)) AS Xml
                           FROM dual
                       ) tpx
                   ) AS "Element2"
               ) AS Xml
        FROM dual
    )
    ```

    on the original 10.1 database produces XML like this:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <Element1>
      <Element2>
        <Element3>
          <ELEMENT4>content</ELEMENT4>
        </Element3>
      </Element2>
    </Element1>
    ```

    On the new 11.2 system it looks like this:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <Element1>
      <Element3>
        <ELEMENT4>content</ELEMENT4>
      </Element3>
    </Element1>
    ```

    Is there some environment variable I am missing that tells Oracle how to format its XML? There are hundreds of thousands of lines of PL/SQL in the database; it would be a mammoth task to rewrite them if it turned out the way Oracle formats XML has changed between versions. Hopefully someone has come across this before. Thanks.
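
    If the goal is just to get the 10.1-shaped document back without hunting for a server setting, one hedged workaround is to stop relying on the AS "Element2" alias to generate the wrapper element (the behavior that appears to have changed) and emit it explicitly, which should render the same on both versions:

    ```sql
    SELECT XmlElement(
               "Element1",
               XmlElement(
                   "Element2",
                   (
                       SELECT XmlAgg(tpx.Xml)
                       FROM (
                           SELECT XmlElement("Element3", XmlForest('content' AS Element4)) AS Xml
                           FROM dual
                       ) tpx
                   )
               )
           ) AS Xml
    FROM dual
    ```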

  • Vim: change formatting of variables in a script

    - by sixtyfootersdude
    I am using Vim to edit a shell script that did not use the right coding standard. I need to change all of my variables from camel-hump notation (startTime) to caps-and-underscore notation (START_TIME). I do not want to change the way method names are represented.

    I was thinking one way to do this would be to write a function and map it to a key. The function could do something like generating this on the command line:

    ```text
    :s/<word under cursor>/<leave cursor here to type the replacement>
    ```

    I think this function would be applicable to other situations, which would be handy. Two questions:

    Question 1: How would I go about creating that function? I have created functions in Vim before; the biggest thing I am clueless about is how to capture movement. I.e., if you press dw in Vim it will delete the rest of a word; how do you capture that? Also, can you leave an uncompleted command on the Vim command line?

    Question 2: Got a better solution for me? How would you approach this task?
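
    For question 1, the piece that usually unlocks this: in command-line mode, <C-r><C-w> pastes the word under the cursor, so no motion capture is needed for this particular case, and yes, a mapping can leave the command line half-typed. A minimal sketch (hedged; adjust the pattern and flags to taste):

    ```vim
    " Pre-fill a whole-file substitution with the word under the cursor and
    " leave the cursor sitting in the empty replacement slot:
    nnoremap <Leader>s :%s/\<<C-r><C-w>\>//g<Left><Left>
    ```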

  • Update table using SSIS

    - by thursdaysgeek
    I am trying to update a field in a table with data from another table, based on a common key. If it were straight SQL, it would be something like:

    ```sql
    UPDATE EHSIT
    SET e.IDMSObjID = s.IDMSObjID
    FROM EHSIT e, EHSIDMS s
    WHERE e.SITENUM = s.SITE_CODE
    ```

    However, the two tables are not in the same database, so I'm trying to use SSIS to do the update. Oh, and the sitenum/site_code columns are varchar in one and nvarchar in the other, so I'll have to do a data conversion so they'll match. How do I do it?

    I have a data flow object with the source as EHSIDMS and the destination as EHSIT, and a Data Conversion to convert the Unicode to non-Unicode. But how do I update based on the match? I've tried the destination with a SQL Command as the data access mode, but it doesn't appear to have the source table. If I just map the field to be updated, how does it limit it based on the fields matching? I'm about to export my source table to Excel or something and then try inputting from there, although it seems that all that would get me is the removal of the data conversion step. Shouldn't there be an "update data" task or something? Is it one of those Data Flow transformation tasks, and I'm just not figuring out which it is?
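
    The transformation that does row-by-row updates is the OLE DB Command: dropped into the data flow after the Data Conversion step, it runs a parameterized statement once per row, with the ?-markers mapped positionally to data-flow columns on its Column Mappings tab. A hedged sketch of the statement it would hold:

    ```sql
    -- Param_0 <- IDMSObjID from the source, Param_1 <- the converted SITE_CODE
    UPDATE EHSIT
    SET IDMSObjID = ?
    WHERE SITENUM = ?
    ```

    For large row counts the usual alternative is to land the source rows in a staging table on the destination server and run one set-based UPDATE from an Execute SQL task instead.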

  • Do the Python libraries have a natural dependence on the global namespace?

    - by msw
    I first ran into this when trying to determine the relative performance of two generators:

    ```python
    t = timeit.repeat('g.get()', setup='g = my_generator()')
    ```

    So I dug into the timeit module and found that the setup and statement are evaluated with their own private, initially empty namespaces, so naturally the binding of g never becomes accessible to the g.get() statement. The obvious solution is to wrap them into a class, thus adding to the global namespace.

    I bumped into this again when attempting, in another project, to use the multiprocessing module to divide a task among workers. I even bundled everything nicely into a class, but unfortunately the call pool.apply_async(runmc, arg) fails with a PicklingError, because buried inside the work object that runmc instantiates is (effectively) an assignment:

    ```python
    self.predicate = lambda x, y: x > y
    ```

    so the whole object (understandably) can't be pickled. And whereas:

    ```python
    def foo(x, y):
        return x > y

    pickle.dumps(foo)
    ```

    is fine, the sequence bar = lambda x, y: x > y also yields something perfectly callable (callable(bar) is True), but: Can't pickle <function <lambda> at 0xb759b764>: it's not found as __main__.<lambda>.

    I've given only code fragments because I can easily fix these cases by merely pulling them out into module- or object-level defs. The bug here appears to be in my understanding of the semantics of namespace use in general. If the nature of the language requires that I create more def statements, I'll happily do so; I fear that I'm missing an essential concept though. Why is there such a strong reliance on the global namespace? Or, what am I failing to understand? Namespaces are one honking great idea -- let's do more of those!
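
    For the timeit half, the documented pattern is to import what you need from __main__ inside setup, since the timed code runs in its own namespace. A small sketch, assuming my_generator is defined at module level:

    ```python
    import timeit

    t = timeit.repeat(
        'g.get()',
        setup='from __main__ import my_generator; g = my_generator()',
        repeat=3,
        number=1000,
    )
    ```

    The pickling half has the same flavor: pickle stores functions by importable name, so multiprocessing can only ship module-level defs, not a lambda bound inside an instance; hence the pull-it-out-into-a-def fix.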
