Search Results

Search found 621 results on 25 pages for 'optimizer transformations'.

Page 21/25 | < Previous Page | 17 18 19 20 21 22 23 24 25  | Next Page >

  • Why is MySQL with InnoDB doing a table scan when key exists and choosing to examine 70 times more rows?

    - by andysk
    Hello, I'm troubleshooting a query performance problem. Here's an expected query plan from explain: mysql> explain select * from table1 where tdcol between '2010-04-13:00:00' and '2010-04-14 03:16'; +----+-------------+--------------------+-------+---------------+--------------+---------+------+---------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+--------------------+-------+---------------+--------------+---------+------+---------+-------------+ | 1 | SIMPLE | table1 | range | tdcol | tdcol | 8 | NULL | 5437848 | Using where | +----+-------------+--------------------+-------+---------------+--------------+---------+------+---------+-------------+ 1 row in set (0.00 sec) That makes sense, since the index named tdcol (KEY tdcol (tdcol)) is used, and about 5M rows should be selected from this query. However, if I query for just one more minute of data, we get this query plan: mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17'; +----+-------------+--------------------+------+---------------+------+---------+------+-----------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+--------------------+------+---------------+------+---------+------+-----------+-------------+ | 1 | SIMPLE | table1 | ALL | tdcol | NULL | NULL | NULL | 381601300 | Using where | +----+-------------+--------------------+------+---------------+------+---------+------+-----------+-------------+ 1 row in set (0.00 sec) The optimizer believes that the scan will be better, but it's over 70x more rows to examine, so I have a hard time believing that the table scan is better. Also, the 'USE KEY tdcol' syntax does not change the query plan. Thanks in advance for any help, and I'm more than happy to provide more info/answer questions.
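    Editor's note on the index hints: USE INDEX (and its synonym USE KEY) only restricts which indexes the optimizer considers, so it is still free to fall back to a table scan; FORCE INDEX additionally makes a table scan look prohibitively expensive, so the index is used unless it cannot be used at all. A minimal sketch of checking the plan with the hint applied, assuming a DB-API connection via mysql.connector (any MySQL driver would do); host, credentials and database are placeholders, the table and column names are taken from the question:

        import mysql.connector  # assumption: any DB-API driver works the same way

        # Placeholder credentials.
        conn = mysql.connector.connect(host="localhost", user="user",
                                       password="secret", database="mydb")

        def explain(sql):
            """Print the optimizer's plan for the given statement."""
            cur = conn.cursor()
            cur.execute("EXPLAIN " + sql)
            for row in cur.fetchall():
                print(row)
            cur.close()

        # FORCE INDEX: a table scan is used only if the index cannot be used at all.
        explain("SELECT * FROM table1 FORCE INDEX (tdcol) "
                "WHERE tdcol BETWEEN '2010-04-13 00:00' AND '2010-04-14 03:17'")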

    Read the article

  • return sql query in xml format in python

    - by Ramy
    When I first started working at the company that I work at now, I created a Java application that would run batches of Jasper reports. In order to determine which parameters to use for each report in the set of reports, I run a SQL query (on SQL Server). I wrote the application to take an XML file with a set of parameters for each report to be generated in the set. So, my process has become, effectively, three steps: (1) run the SQL query and return the results in XML format (using 'for XML auto'); (2) run the results of the SQL query through an XSLT transformation so the XML is formatted in such a way that it is friendly to the Java application I wrote; (3) run the Java application with that final XML file. As you can imagine, what I'd like to do is accomplish these steps in Python, but I'm not quite sure how to get started. I know how to run an SQL query in Python. I see plenty of documentation about how to write your own XML document with Python. I even see documentation for XSL transformations in Python. The big question is how to get the results of the SQL query in XML through Python. Any and all pointers would be very valuable. Thanks, _Ramy
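    Editor's note, a hedged sketch of the missing piece: build the XML from an ordinary result set with the standard library instead of relying on FOR XML AUTO, then run the existing XSLT in-process. pyodbc and lxml are assumptions here (any DB-API driver and any XSLT processor would do), and the DSN, query and stylesheet path are placeholders:

        import pyodbc                        # assumed SQL Server driver
        import xml.etree.ElementTree as ET
        from lxml import etree               # assumed XSLT processor

        conn = pyodbc.connect("DSN=reports;UID=user;PWD=secret")   # placeholder DSN
        cur = conn.cursor()
        cur.execute("SELECT report_name, param_name, param_value"
                    " FROM report_params")   # hypothetical query

        # Serialize the rows as XML: one <row> element per record, columns as attributes.
        root = ET.Element("rows")
        columns = [d[0] for d in cur.description]
        for record in cur.fetchall():
            row = ET.SubElement(root, "row")
            for col, val in zip(columns, record):
                row.set(col, "" if val is None else str(val))
        raw_xml = ET.tostring(root)

        # Apply the existing XSLT so the Java application sees the same shape as before.
        transform = etree.XSLT(etree.parse("format_for_java.xsl"))  # placeholder stylesheet
        final_doc = transform(etree.fromstring(raw_xml))
        with open("batch_params.xml", "wb") as out:
            out.write(etree.tostring(final_doc, pretty_print=True))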

    Read the article

  • .Net Architecture challenge: The Change-prone Frankenstein Model

    - by SDReyes
    Good morning SO! We've been scratching our heads over this interesting scenario at the office, and we're anxious to hear your ideas and approaches. We have a database whose schema is prone to changes - let's call it Prony. (It is used to store configuration parameters for embedded devices, so if the embedded-devices guy needs a new table, property or relationship in the model, he should be able to adapt the schema in an easy way - it happens quite often.) Prony needs a web interface to create/edit its data. We have another database containing data that also needs to be loaded to the devices, after making some transformations - let's call this one Oddy (this data is generated by an already existing administrative web application). Finally we have Tracy, a server that communicates between our DBs and our embedded devices. She should auto-adapt herself to our DBs' schema changes and serialize the data to the devices. Nice puzzle, don't you think? :) Our current candidates: Rady (the fast one): let's create some views in Prony that make the data transformation from Oddy, then use DynamicData (or some RAD tool) to create/update a simple web interface for Prony (so he can even consult the transformed data coming from Prony :) ). About Tracy, she will need to be recompiled to update her DB schema (Entity Framework should work) and use Reflection to explore the schema recursively and serialize the data. Cons: we would have to recompile Tracy and Prony's web interface. What do you think of the candidate(s)? What would you do?

    Read the article

  • Is there stl and utf8 friendly C++ Wrapper for ICU, or other powerful unicode library

    - by artyom
    Hello, I need a good Unicode library for C++. I need transformations done in a Unicode-sensitive way. For example: sort all strings in a case-insensitive way and get their first characters for an index; convert various Unicode strings to upper and lower case; split text at reasonable positions -- into words, in a way that works for Chinese and Japanese as well; format numbers and dates in a locale-sensitive way (should be thread safe); transparent support of UTF-8 (the primary internal representation). As far as I know the best library is ICU. However, I can't find normal, developer-friendly API documentation with examples. Also, as far as I can see, it is not too friendly with modern C++ design, working with the STL and so on. I'd like something like this: std::string msg; unistring umsg.from_utf8(msg); unistring::word_iterator wi; for(wi=umsg.words().begin(),n=0;wi!=usmg.words().wi_end(),n<10;++wi,++n) ; msg=umsg.substr(umsg.words().begin(),wi).to_utf8(); cout<<_("Five 10 words are ")<<msg; Does anybody know a good STL-friendly ICU wrapper released under an open source license, preferably a permissive one like MIT or Boost, though other LGPLv2-compatible licenses are OK as well? Is there another high-quality library similar to ICU? Platform: UNIX/POSIX, Windows support is not required. Thanks, Artyom Edit: Unfortunately I wasn't logged in, so I can't mark the answer as accepted... I have attached the answer myself.

    Read the article

  • C# vs C - Big performance difference

    - by John
    I'm finding massive performance differences between similar code in C and C#. The C code is: #include <stdio.h> #include <time.h> #include <math.h> main() { int i; double root; clock_t start = clock(); for (i = 0 ; i <= 100000000; i++){ root = sqrt(i); } printf("Time elapsed: %f\n", ((double)clock() - start) / CLOCKS_PER_SEC); } And the C# (console app) is: using System; using System.Collections.Generic; using System.Text; namespace ConsoleApplication2 { class Program { static void Main(string[] args) { DateTime startTime = DateTime.Now; double root; for (int i = 0; i <= 100000000; i++) { root = Math.Sqrt(i); } TimeSpan runTime = DateTime.Now - startTime; Console.WriteLine("Time elapsed: " + Convert.ToString(runTime.TotalMilliseconds/1000)); } } } With the above code, the C# completes in 0.328125 seconds (release version) and the C takes 11.14 seconds to run. The C is being compiled to a Windows executable using MinGW. I've always been under the assumption that C/C++ were faster than, or at least comparable to, C#/.NET. What exactly is causing the C to run over 30 times slower? EDIT: It does appear that the C# optimizer was removing the root as it wasn't being used. I changed the root assignment to root += and printed out the total at the end. I've also compiled the C using cl.exe with the /O2 flag set for max speed. The results are now: 3.75 seconds for the C and 2.61 seconds for the C#. The C is still taking longer, but this is acceptable.

    Read the article

  • Using JavaCC to infer semantics from a Composite tree

    - by Skice
    Hi all, I am programming (in Java) a very limited symbolic calculus library that manages polynomials, exponentials and expolinomials (sums of elements like "x^n * e^(c x)"). I want the library to be extensible in the sense of new analytic forms (trigonometric, etc.) or new kinds of operations (logarithm, domain transformations, etc.), so a Composite pattern that represents the syntactic structure of an expression, together with a bunch of Visitors for the operations, does the job quite well. My problem arises when I try to implement operations that depend on the semantics more than on the syntax of the expression (like integrals, for instance: there are a lot of resolution methods for specific classes of functions, but these same classes can be represented with more than a single syntax). So I thought I need something to "parse" the Composite tree to infer its semantics in order to invoke the right integration method (if any). Someone pointed me to JavaCC, but all the examples I've seen deal only with string parsing, so I don't know if I'm digging in the right direction. Any suggestions? (I hope to have been clear enough!)
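    Editor's note: a parser generator like JavaCC is aimed at text, and here the tree already exists, so one option is simply another Visitor that classifies a subtree into a semantic category and lets the caller dispatch to the matching integration routine. A toy sketch of the idea, in Python for brevity (all class and tag names are made up; the same shape works as a Visitor in Java):

        # Toy classification of an expression tree, independent of how it was written.
        class Const:
            def __init__(self, value): self.value = value

        class Var:                 # the integration variable x
            pass

        class Power:               # base^exponent, e.g. x^n
            def __init__(self, base, exponent): self.base, self.exponent = base, exponent

        class Exp:                 # e^(c*x)
            def __init__(self, coeff): self.coeff = coeff

        class Product:
            def __init__(self, *factors): self.factors = factors

        def classify(node):
            """Return a semantic tag such as 'polynomial', 'exponential' or 'expolinomial'."""
            if isinstance(node, (Const, Var, Power)):
                return "polynomial"
            if isinstance(node, Exp):
                return "exponential"
            if isinstance(node, Product):
                tags = {classify(f) for f in node.factors}
                if tags <= {"polynomial"}:
                    return "polynomial"
                if tags <= {"exponential"}:
                    return "exponential"
                if tags <= {"polynomial", "exponential"}:
                    return "expolinomial"      # the x^n * e^(c x) shape
            return "unknown"

        # x^2 * e^(3x) is recognised whichever way the product was assembled syntactically.
        print(classify(Product(Power(Var(), 2), Exp(3))))   # -> expolinomial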

    Read the article

  • Eclipse Java Content Assist not working

    - by Damo
    The Content Assist in Eclipse 3.4 and 3.5 has stopped working for me. When I type in the first few characters of a class and hit CTRL-space, then after a delay I get an error message. It does not matter what proposals I enable/disable, I will get this (or a similar) message. I have tried: changing the Xms/Xmx values, starting Eclipse with -clean, and creating a new workspace and importing my projects. However, none of these have worked. I have seen some posts suggesting that other apps may be taking over the CTRL-space shortcut or otherwise interfering, however I have nothing aside from a fresh Eclipse running and the problem persists. My problem is very similar to the one covered in this post, albeit on a later version and on OSX 10.5.7. Does anyone have any suggestions for how this might be resolved? Thanks. UPDATE: To anyone interested, I've had the best results by using Eclipse 3.5 Classic (i.e. it doesn't include Mylyn). I've also used the settings specified in the bug reports linked to by VonC below. Interestingly, Classic doesn't come with some views, e.g. Snippets, but these are easy to drop in from another distro. UPDATE 2: This problem actually persisted even with the latest versions of Eclipse (3.6 M1). It is caused by a large JAR file generated by Altova MapForce to handle EDIFACT transformations in our application. It is reproducible by adding this JAR to the build path, and no changes to Content Assist settings help. The bug (and JAR) can be seen at https://bugs.eclipse.org/bugs/show_bug.cgi?id=289057

    Read the article

  • Emulating Visual Studio's Web Application "publish" at the command line

    - by cbp
    Hi, I am trying to automate my deployment process and am now thoroughly confused. I know that there are many questions on stackoverflow about this, but they all have different solutions and none of them work. I have a Web Application project which I usually publish by right-clicking and selecting "Publish". I get a dialog box where I use the following options: Build configuration: Release Publish method: File system Target location: C:\Deployments\MyWebsite Replace matching files with local copies I should mention that in the properties of the project I have "Items to deploy" set to "Only files needed to run this application". After running this, my entire solution is built, dependencies are resolved, build events are run, web.config transformations are applied and the website is copied to C:\Deployments\MyWebsite, although non-required files such as code-behind files are not copied. I have not been able to replicate this... in fact at this stage I'm not even sure which command line tool am I supposed to be using - msbuild, msdeploy or aspnet_compiler? This guy asks almost the same question but his solution doesn't work at all. For example, build events do not run correctly because the macros are not resolved. Whats more, the files do not get copied into the correct directory at all... I can't even begin to explain what happens!

    Read the article

  • XSL Reuse? YES! But: Element must not contain an xsl:import element! :-(

    - by Fedor Steeman
    I am using a heavy stylesheet with a lot of recurring transformations, so I thought it would be smart to reuse the same chunks of code, so I would not need to make the same changes in a bunch of different places. So I discovered xsl:import, but - alas - it won't allow me to do it. When trying to run it in Sonic Workbench I get the following error: An xsl:for-each element must not contain an xsl:import element. This is my stylesheet code: <xsl:template match="/"> <InboundFargoMessage> <EdiSender> <xsl:value-of select="TransportInformationMessage/SenderId"/> </EdiSender> <EdiReceiver> <xsl:value-of select="TransportInformationMessage/RecipientId"/> </EdiReceiver> <EdiSource>PORLOGIS</EdiSource> <EdiDestination>FARGO</EdiDestination> <Transportations> <xsl:for-each select="TransportInformationMessage/TransportUnits/TransportUnit"> <xsl:import href="TransportCDMtoFDM_V0.6.xsl"/> </xsl:for-each> <xsl:for-each select="TransportInformationMessage/Waybill/TransportUnits/TransportUnit"> <xsl:import href="TransportCDMtoFDM_V0.6.xsl"/> </xsl:for-each> </Transportations> </InboundFargoMessage> </xsl:template> </xsl:stylesheet> I will leave out the child XSL sheets for now, as the problem appears to be happening at the base. If I cannot use xsl:import, is there any other option for reuse?

    Read the article

  • Upload using python script takes very long on one laptop as compared to another

    - by Engr Am
    I have a python 2.7 code which uses STORBINARY function for uploading files to an ftp server and RETRBINARY for downloading from this server. However, the issue is the upload is taking a very long time on three laptops from different brands as compared to a Dell laptop. The strange part is when I manually upload any file, it takes the same time on all the systems. The manual upload rate and upload rate with the python script is the same on the Dell Laptop. However, on every other brand of laptop (I have tried with IBM, Toshiba, Fujitsu-Siemens) the python script has a very low upload rate than the manual attempt. Also, on all these other laptops, the upload rate using the python script is the same (1Mbit/s) while the manual upload rate is approx. 8 Mbit/s. I have tried to vary the filesize for the upload to no avail. TCP Optimizer improved the download rate on all the systems but had no effect on the upload rate. Download rate using this script on all the systems is fine and same as the manual download rate. I have checked the server and it has more than 90% free space. The network connection is the same for all the laptops, and I try uploading only with one laptop at a time. All the laptops have almost the same system configurations, same operating system and approximately the same free drive space. If anything the Dell laptop is a little less in terms of processing power and RAM than 2 of the others, but I suppose this has no effect as I have checked many times to see how much was the CPU usage and network usage during these uploads and downloads, and I am sure that no other virus or program has been eating up my bandwidth. Here is the code ('ftp' and 'file_path' are inputs to the function): path,filename=os.path.split(file_path) filesize=os.path.getsize(file_path) deffilesize=(filesize/1024)/1024 f = open(file_path, "rb") upstart = time.clock() print ftp.storbinary("STOR "+filename, f) upende = time.clock()-upstart outname="Upload " f.close() return upende, deffilesize, outname
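    Editor's note, one software-side thing worth ruling out before blaming the hardware: ftplib's storbinary sends the file in 8192-byte blocks by default, and its optional blocksize argument can change throughput noticeably on some TCP stacks, which may explain why a manual upload with a different client behaves differently. A hedged sketch that times the transfer with time.time() (wall clock; time.clock() measures CPU time on some platforms) and a bigger block size; host, credentials and path are placeholders:

        import os
        import time
        from ftplib import FTP

        def timed_upload(ftp, file_path, blocksize=65536):
            """Upload file_path and return (seconds, size_in_MB).

            blocksize is handed straight to storbinary; ftplib's default is 8192.
            """
            filename = os.path.basename(file_path)
            size_mb = os.path.getsize(file_path) / (1024.0 * 1024.0)
            start = time.time()                     # wall-clock time
            f = open(file_path, "rb")
            try:
                ftp.storbinary("STOR " + filename, f, blocksize)
            finally:
                f.close()
            return time.time() - start, size_mb

        # Placeholder host/credentials.
        ftp = FTP("ftp.example.com")
        ftp.login("user", "secret")
        secs, mb = timed_upload(ftp, "C:/data/payload.bin")
        print("%.1f MB in %.1f s (%.2f Mbit/s)" % (mb, secs, mb * 8 / secs))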

    Read the article

  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written as part of a running project extending an existing huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is not an exception. Most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it) or transform one complicated data structure into another (complicated in another way). What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff and necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a test data set seems to be impossible since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members are getting tired just figuring out how to even start, since our experience with unit testing does not cover data-intensive PL/SQL legacy systems (only those "from-the-book" greenfield Java projects).

    Read the article

  • 3D Animation Rotating and Translating simultaneously in WPF

    - by sklitzz
    Hi, I have ModelVisual3D of a cube. I want to translate and rotate it at the same time. I wish the center of rotation to be in the middle of the cube(the cube rotates around its own axis). But when I try to do this applying both transformations the effect is not what you would expect. Since the object is translating the center of rotation is different thus making it move and rotate in a strange way. How do I get the desired effect? Transform3DGroup transGroup = new Transform3DGroup(); DoubleAnimation cardAnimation = new DoubleAnimation(); cardAnimation.From = 0; cardAnimation.To = 3; cardAnimation.Duration = new Duration(TimeSpan.FromSeconds(2)); Transform3D transform = new TranslateTransform3D(0,0,0); transGroup.Children.Add(transform); RotateTransform3D rotateTransform = new RotateTransform3D(); AxisAngleRotation3D rotateAxis = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 180); Rotation3DAnimation rotateAnimation = new Rotation3DAnimation(rotateAxis, TimeSpan.FromSeconds(2)); rotateAnimation.DecelerationRatio = 0.8; transGroup.Children.Add(rotateTransform); Model.Transform = transGroup; transform.BeginAnimation(TranslateTransform3D.OffsetXProperty, cardAnimation); rotateTransform.BeginAnimation(RotateTransform3D.RotationProperty, rotateAnimation);

    Read the article

  • How can I update a record using a correlated subquery?

    - by froadie
    I have a function that accepts one parameter and returns a table/resultset. I want to set a field in a table to the first result of that recordset, passing in one of the table's other fields as the parameter. If that's too complicated in words, the query looks something like this: UPDATE myTable SET myField = (SELECT TOP 1 myFunctionField FROM fn_doSomething(myOtherField) WHERE someCondition = 'something') WHERE someOtherCondition = 'somethingElse' In this example, myField and myOtherField are fields in myTable, and myFunctionField is a field return by fn_doSomething. This seems logical to me, but I'm getting the following strange error: 'myOtherField' is not a recognized OPTIMIZER LOCK HINTS option. Any idea what I'm doing wrong, and how I can accomplish this? *UPDATE: * Based on Anil Soman's answer, I realized that the function is expecting a string parameter and the field being passed is an integer. I'm not sure if this should be a problem as an explicit call to the function using an integer value works - e.g. fn_doSomething(12345) seems to automatically cast the number to an string. However, I tried to do an explicit cast: UPDATE myTable SET myField = (SELECT TOP 1 myFunctionField FROM fn_doSomething(CAST(myOtherField AS varchar(1000))) WHERE someCondition = 'something') WHERE someOtherCondition = 'somethingElse' Now I'm getting the following error: Line 5: Incorrect syntax near '('.

    Read the article

  • How to display many SVGs in Java with high performance

    - by Oak
    What I want My goal is to be able to display a large number of SVG images on a single drawing area in Java, each with its own translation/rotation/scale values. I'm looking for the simplest solution allowing this, optionally even using OpenGL to speed things up. What I've Tried My initial naive approach was to use SVGSalamander to draw directly on a JPanel, but the performance was pathetic. I poked around around and learned that I should do something like manually convert each SVG into a BufferedImage created with createCompatibleImage, then do the transformations I want, then draw it using double buffering. I ran into some troubles here, and before I continued I tried looking for frameworks to simplify things. What I've Looked At I've been a bit overwhelmed by the available options, which is why I'm turning to SO for help. I've looked at: Cairo (with Glitz maybe?) Libart - not sure if this actually supports SVGs FengGUI Slick - looks promising but a bit of an overkill But couldn't decide what is best for me to start working with, and I hope someone here as experience with any of these doing similar things.

    Read the article

  • context.Scale() with non-aspect-ratio-preserving parameters screws effective lineWidth

    - by rrenaud
    I am trying to apply some natural transformations whereby the x axis is remapped to some very small domain, like from 0 to 1, whereas y is remapped to some small, but substantially larger domain, like 0 to 30. This way, drawing code can be nice and clean and only care about the model space. However, if I apply a scale, then lines are also scaled, which means that horizontal lines become extremely fat relative to vertical ones. Here is some sample code. When natural_width is much less than natural_height, the picture doesn't look as intended. I want the picture to look like this, which is what happens with a scale that preserves aspect ratio: rftgstats.com/canvas_good.png However, with a non-aspect-ratio-preserving scale, the results look like this: rftgstats.com/canvas_bad.png <html><head><title>Busted example</title></head> <body> <canvas id=example height=300 width=300> <script> var canvas = document.getElementById('example'); var ctx = canvas.getContext('2d'); var natural_width = 10; var natural_height = 50; ctx.scale(canvas.width / natural_width, canvas.height / natural_height); var numLines = 20; ctx.beginPath(); for (var i = 0; i < numLines; ++i) { ctx.moveTo(natural_width / 2, natural_height / 2); var angle = 2 * Math.PI * i / numLines; // yay for screen size independent draw calls. ctx.lineTo(natural_width / 2 + natural_width * Math.cos(angle), natural_height / 2 + natural_height * Math.sin(angle)); } ctx.stroke(); ctx.closePath(); </script> </body> </html>

    Read the article

  • PHP -- automatic SQL injection protection?

    - by ashgromnies
    I took over maintenance of a PHP app recently and I'm not super familiar with PHP but some of the things I've been seeing on the site are making me nervous that it could be vulnerable to a SQL injection attack. For example, see how this code for logging into the administrative section works: $password = md5(HASH_SALT . $_POST['loginPass']); $query = "SELECT * FROM `administrators` WHERE `active`='1' AND `email`='{$_POST['loginEmail']}' AND `password`='{$password}'"; $userInfo = db_fetch_array(db_query($query)); if($userInfo['id']) { $_SESSION['adminLoggedIn'] = true; // user is logged in, other junk happens here, not important The creators of the site made a special db_query method and db_fetch_array method, shown here: function db_query($qstring,$print=0) { return @mysql(DB_NAME,$qstring); } function db_fetch_array($qhandle) { return @mysql_fetch_array($qhandle); } Now, this makes me think I should be able to do some sort of SQL injection attack with an email address like: ' OR 'x'='x' LIMIT 1; and some random password. When I use that on the command line, I get an administrative user back, but when I try it in the application, I get an invalid username/password error, like I should. Could there be some sort of global PHP configuration they have enabled to block these attacks? Where would that be configured? Here is the PHP --version information: # php --version PHP 5.2.12 (cli) (built: Feb 28 2010 15:59:21) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies with the ionCube PHP Loader v3.3.14, Copyright (c) 2002-2010, by ionCube Ltd., and with Zend Optimizer v3.3.9, Copyright (c) 1998-2009, by Zend Technologies

    Read the article

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Or so I thought, but my team member wrote something I find very hard to get into. His pseudo code: int k =0; a[][] <- create mapModuleNearbyDotList -array //CPU O(n) for(j = 1 to n) // O(nlog(m)) for(i =1 to n) for(k = 1 to n) if(dot is nearby) adj[i][j]=min(adj[i][j], adj[i][k] + adj[k][j]); His ideas: transformations of lists to tables; his worst-case time complexity is O(n^3), where n is the number of elements in his so-called table; exception to the last point with a finite structure: O(mlog(n)), where n is the number of vertices and m is the amount of neighbour vertices. My questions about his ideas: (1) why waste resources transforming constantly-modified lists to a table -- because it is fast? (the only point where I to some extent agree, but cannot understand); (2) why the same upper limit n for each of the for-loops -- perhaps he supposed it to be circular; (3) why does the code take O(mlog(n)) to proceed in time as a finite structure? The term finite may be wrong -- explicit?
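    Editor's note, for what it is worth: the relaxation adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]) over three nested loops is essentially Floyd-Warshall all-pairs shortest paths, which is where the O(n^3) comes from (n = number of vertices), and it explains why all three loops run over the same n; note that in the textbook formulation the intermediate vertex k is the outermost loop. A minimal sketch, using float('inf') for missing edges:

        INF = float("inf")

        def floyd_warshall(adj):
            """All-pairs shortest paths over a dense adjacency matrix (O(n^3))."""
            n = len(adj)
            dist = [row[:] for row in adj]          # work on a copy
            for k in range(n):                      # intermediate vertex, outermost
                for i in range(n):
                    for j in range(n):
                        if dist[i][k] + dist[k][j] < dist[i][j]:
                            dist[i][j] = dist[i][k] + dist[k][j]
            return dist

        adj = [
            [0,   3,   INF],
            [3,   0,   1],
            [INF, 1,   0],
        ]
        print(floyd_warshall(adj))    # dist[0][2] becomes 4, routed via vertex 1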

    Read the article

  • Is there a term for this concept, and does it exist in a static-typed language?

    - by Strilanc
    Recently I started noticing a repetition in some of my code. Of course, once you notice a repetition, it becomes grating. Which is why I'm asking this question. The idea is this: sometimes you write different versions of the same class: a raw version, a locked version, a read-only facade version, etc. These are common things to do to a class, but the translations are highly mechanical. Surround all the methods with lock acquires/releases, etc. In a dynamic language, you could write a function which did this to an instance of a class (eg. iterate over all the functions, replacing them with a version which acquires/releases a lock.). I think a good term for what I mean is 'reflected class'. You create a transformation which takes a class, and returns a modified-in-a-desired-way class. Synchronization is the easiest case, but there are others: make a class immutable [wrap methods so they clone, mutate the clone, and include it in the result], make a class readonly [assuming you can identify mutating methods], make a class appear to work with type A instead of type B, etc. The important part is that, in theory, these transformations make sense at compile-time. Even though an ActorModel<T> has methods which change depending on T, they depend on T in a specific way knowable at compile-time (ActorModel<T> methods would return a future of the original result type). I'm just wondering if this has been implemented in a language, and what it's called.
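    Editor's note, to make the dynamic-language version concrete: here is a hedged Python sketch of one such "reflected class" transformation, a function that takes a class and returns a synchronized variant by wrapping every public method in a lock. In a static language this is roughly the territory of generated proxies or compile-time code generation; none of the names below come from any particular library:

        import functools
        import threading

        def synchronized(cls):
            """Return a subclass of cls whose public methods all run under one lock."""
            class Synchronized(cls):
                def __init__(self, *args, **kwargs):
                    self._lock = threading.RLock()
                    super(Synchronized, self).__init__(*args, **kwargs)

            def wrap(method):
                @functools.wraps(method)
                def locked(self, *args, **kwargs):
                    with self._lock:
                        return method(self, *args, **kwargs)
                return locked

            for name in dir(cls):
                attr = getattr(cls, name)
                if callable(attr) and not name.startswith("_"):
                    setattr(Synchronized, name, wrap(attr))
            return Synchronized

        class Counter(object):
            def __init__(self):
                self.value = 0
            def increment(self):
                self.value += 1

        SafeCounter = synchronized(Counter)     # the "locked version" of Counter
        c = SafeCounter()
        c.increment()                           # acquires and releases the lock
        print(c.value)                          # -> 1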

    Read the article

  • Algorithm for generating an array of non-equal costs for a transport problem optimization

    - by Carlos
    I have an optimizer that solves a transportation problem, using a cost matrix of all the possible paths. The optimiser works fine, but if two of the costs are equal, the solution contains one more path than the minimum number of paths. (Think of it as load-balancing routers; if two routes are the same cost, you'll use them both.) I would like the minimum number of routes, and to do that I need a cost matrix that doesn't have two costs that are equal within a certain tolerance. At the moment, I'm passing the cost matrix through a baking function which tests every entry for equality to each of the other entries, and moves it a fixed percentage if it matches. However, this approach seems to require N^2 comparisons, and if the starting values are all the same, the last cost will be r^N bigger (r is the arbitrary fixed percentage). Also, there is the problem that by multiplying by the percentage, you can end up on top of another value. So the problem seems to have an element of recursion, or at least repeated checking, which bloats the code. The current implementation is basically not very good (I won't paste my GOTO-using code here for you all to mock), and I'd like to improve it. Is there a name for what I'm after, and is there a standard implementation? Example: {1,1,2,3,4,5} (tol = 0.05) becomes {1,1.05,2,3,4,5}
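    Editor's note: this is sometimes described as tie-breaking or perturbation, and one O(n log n) way to avoid both the pairwise comparisons and the land-on-another-value problem is to sort the entries, sweep once, and push any value that sits within the tolerance of its predecessor up to predecessor + tolerance. A hedged sketch, treating the tolerance as an absolute gap (a relative version is a one-line change):

        def break_ties(costs, tol=0.05):
            """Return a copy of the cost matrix with no two entries closer than tol.

            Entries are only ever nudged upward, and only by as much as needed.
            Sorting dominates, so this is O(n log n) in the number of entries.
            """
            flat = [(value, (i, j))
                    for i, row in enumerate(costs)
                    for j, value in enumerate(row)]
            flat.sort()
            adjusted, previous = {}, None
            for value, pos in flat:
                if previous is not None and value - previous < tol:
                    value = previous + tol          # push just past the previous entry
                adjusted[pos] = value
                previous = value
            return [[adjusted[(i, j)] for j in range(len(costs[0]))]
                    for i in range(len(costs))]

        print(break_ties([[1, 1, 2], [3, 4, 5]]))
        # -> [[1, 1.05, 2], [3, 4, 5]], matching the {1,1,2,3,4,5} example above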

    Read the article

  • Incremental deploy from a shell script

    - by WishCow
    I have a project, where I'm forced to use ftp as a means of deploying the files to the live server. I'm developing on linux, so I hacked together a bash script that makes a backup of the ftp server's contents, deletes all the files on the ftp, and uploads all the fresh files from the mercurial repository. (and taking care of user uploaded files and folders, and making post-deploy changes, etc) It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed, and only deploy the modified files. (the backup is fine atm as it is) I'm using mercurial as a VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files, and upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2, and from the output, I can carve out the changed files with grep/sed/etc. Two problems: I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies to here, if I try to parse the output of hg log, the variables will undergo word-splitting, and all kinds of transformations. hg log doesn't tell me a file is modified/added/deleted. Differentiating between modified and deleted files would be the least. So, what would be the correct way to do this? I'm using yafc as an ftp client, in case it's needed, but willing to switch.
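    Editor's note: rather than scraping hg log, hg status can answer this directly: hg status --rev REV1 --rev REV2 prints one file per line prefixed with M (modified), A (added) or R (removed). A hedged Python sketch of driving it and splitting the result into upload and delete lists; the yafc calls are left out, and the revision arguments are placeholders for whatever the deploy script already tracks:

        import subprocess

        def changed_files(rev1, rev2, repo="."):
            """Return (to_upload, to_delete) between two revisions via `hg status`."""
            out = subprocess.check_output(
                ["hg", "status", "--rev", rev1, "--rev", rev2], cwd=repo)
            to_upload, to_delete = [], []
            for line in out.decode("utf-8").splitlines():
                code, path = line[0], line[2:]      # e.g. "M templates/index.php"
                if code in ("M", "A"):
                    to_upload.append(path)          # new or changed: upload
                elif code == "R":
                    to_delete.append(path)          # removed: delete on the server
            return to_upload, to_delete

        # Usage sketch: deploy everything changed since the last deployed revision.
        upload, delete = changed_files("last-deploy", "tip")
        print(upload, delete)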

    Read the article

  • Managing project configurations in VS 2010

    - by Toby
    I'm working on a solution with multiple projects (class libraries, interop, web application, etc) in VS2010. For the web application, I would like to take advantage of the config transformations in VS2010, so at one point I added configurations for each of our environments: Development, Test, Production, and so on. Some time later, after having rearranged the project layout, I noticed that some projects show all of the configurations in the properties page dropdown. Some projects (added since I did that setup) show only the standard Debug & Release configurations. Once I realized that this was going to make build configurations worse, not better, I decided to remove all of the extra configurations I had added. I've removed all of the various configuration options from the solution, but the projects that had the alternate configuration options still have them, and I can't figure out how to get rid of them in individual projects. Also, now that I see that not all projects have to have the same configurations, I would like to create my environmental configurations at the solution level, and in the web application project (for the config transforms), but leave all of the class libraries with the basic Debug/Release configurations. I've been unable to find any tool in the UI, or any information on the 'Net, concerning how to set up such a thing. So, in short, what's the best/easiest way to manage configurations at the project level in VS2010?

    Read the article

  • Big-O for GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Or so I thought, but my team member wrote something I find very hard to get into. His pseudo code: int k =0; a[][] <- create mapModuleNearbyDotList -array //CPU O(n) for(j = 1 to n) // O(nlog(m)) for(i =1 to n) for(k = 1 to n) if(dot is nearby) adj[i][j]=min(adj[i][j], adj[i][k] + adj[k][j]); His ideas: transformations of lists to tables; his worst-case time complexity is O(n^3), where n is the number of elements in his so-called table; exception to the last point with a finite structure: O(mlog(n)), where n is the number of vertices and m is an arbitrary constant. My questions about his ideas: (1) why waste resources transforming constantly-modified lists to a table -- because it is fast? (the only point where I to some extent agree, but cannot understand); (2) why the same upper limit n for each of the for-loops -- perhaps he supposed it to be circular; (3) why does the code take O(mlog(n)) to proceed in time as a finite structure? The term finite may be wrong -- explicit?

    Read the article

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to mysql (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "explain $whatever" one can figure out what tables / columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements so I don't expect it to be possible to run any query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a mysql expert and there are surely features of mysql that I'm not even aware of. In trying to come up with a query plan, might the optimizer actual execute an expression in order to come up with the value that an indexed field is going to be compared against? explain select * from atable where class = somefunction(...) where atable.class is indexed and not unique and class='unused' would find no records but class='common' would find a million records. Might 'explain' evaluate somefunction(...)? And then could somefunction(...) be written such that it modifies data?

    Read the article

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is, how to make this query efficient on the large number of rows. For example, let's create a table that stores temperature recording in several places. CREATE TABLE #t ( placeId int, ts datetime, temp int, PRIMARY KEY (ts, placeId) ) -- insert some sample data SET NOCOUNT ON DECLARE @n int, @ts datetime SELECT @n = 1000, @ts = '2000-01-01' WHILE (@n>0) BEGIN INSERT INTO #t VALUES (@n % 10, @ts, @n % 37) IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts) SET @n = @n - 1 END Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty). SELECT * FROM ( SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 1 ORDER BY ts DESC ) t1 UNION ALL SELECT * FROM ( SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 2 ORDER BY ts DESC ) t2 UNION ALL SELECT * FROM ( SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 3 ORDER BY ts DESC ) t3 The following looks better but works much less efficiently (30% vs 70% according to the optimizer). SELECT placeId, ts, temp FROM ( SELECT placeId, ts, temp, ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum FROM #t WHERE placeId IN (1, 2, 3) ) t WHERE rownum = 1 The problem is, during the latter query execution plan a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows. For the former query three times one row is fetched. Is there a way to perform the query efficiently without lots of unions?

    Read the article

  • MSSQL 2008 - Bit Param Evaluation alters Execution Plan

    - by Nathanial Woolls
    I have been working on migrating some of our data from Microsoft SQL Server 2000 to 2008. Among the usual hiccups and whatnot, I’ve run across something strange. Linked below is a SQL query that returns very quickly under 2000, but takes 20 minutes under 2008. I have read quite a bit on upgrading SQL server and went down the usual paths of checking indexes, statistics, etc. before coming to the conclusion that the following statement, found in the WHERE clause, causes the execution plan for the steps that follow this statement to change dramatically: And ( @bOnlyUnmatched = 0 -- offending line Or Not Exists( The SQL statements and execution plans are linked below. A coworker was able to rewrite a portion of the WHERE clause using a CASE statement, which seems to “trick” the optimizer into using a better execution plan. The version with the CASE statement is also contained in the linked archive. I’d like to see if someone has an explanation as to why this is happening and if there may be a more elegant solution than using a CASE statement. While we can work around this specific issue, I’d like to have a broader understanding of what is happening to ensure the rest of the migration is as painless as possible. Zip file with SQL statements and XML execution plans Thanks in advance!

    Read the article
