Search Results

Search found 708 results on 29 pages for 'intermediate'.

Page 24 of 29

  • C++ Iterator Pipelining Designs

    - by Kirakun
    Suppose we want to apply a series of transformations, int f1(int), int f2(int), int f3(int), to a list of objects. A naive way would be SourceContainer source; TempContainer1 temp1; transform(source.begin(), source.end(), back_inserter(temp1), f1); TempContainer2 temp2; transform(temp1.begin(), temp1.end(), back_inserter(temp2), f2); TargetContainer target; transform(temp2.begin(), temp2.end(), back_inserter(target), f3); This first solution is not optimal because of the extra space requirement with temp1 and temp2. So, let's get smarter with this: int f123(int n) { return f3(f2(f1(n))); } ... SourceContainer source; TargetContainer target; transform(source.begin(), source.end(), back_inserter(target), f123); This second solution is much better because not only the code is simpler but more importantly there is less space requirement without the intermediate calculations. However, the composition f123 must be determined at compile time and thus is fixed at run time. How would I try to do this efficiently if the composition is to be determined at run time? For example, if this code was in a RPC service and the actual composition--which can be any permutation of f1, f2, and f3--is based on arguments from the RPC call.
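
    A minimal sketch of one way to defer the composition to run time, assuming every stage shares the signature int(int): store the stages chosen by the RPC call in a std::vector<std::function<int(int)>> and fold over that vector inside a single transform, so no intermediate containers are created. The f1/f2/f3 bodies below are placeholders.

    ```cpp
    #include <algorithm>
    #include <functional>
    #include <iostream>
    #include <iterator>
    #include <vector>

    int f1(int n) { return n + 1; }  // placeholder transformations; the real ones
    int f2(int n) { return n * 2; }  // would be selected based on the RPC arguments
    int f3(int n) { return n - 3; }

    int main() {
        // Any permutation can be assembled here at run time.
        std::vector<std::function<int(int)>> pipeline = {f3, f1, f2};

        std::vector<int> source = {1, 2, 3, 4};
        std::vector<int> target;

        std::transform(source.begin(), source.end(), std::back_inserter(target),
                       [&pipeline](int n) {
                           for (const auto& stage : pipeline) n = stage(n);  // apply stages in order
                           return n;
                       });

        for (int v : target) std::cout << v << ' ';  // no temp1/temp2 needed
        std::cout << '\n';
    }
    ```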

    Read the article

  • C++ reference variable again!!!

    - by kumar_m_kiran
    Hi All, I think most would be surprised about the topic again. However, I am referring to the book "C++ Common Knowledge: Essential Intermediate Programming" written by Stephen C. Dewhurst. In the book, he quotes a particular sentence (in the section under Item 5, References Are Aliases, Not Pointers), which is as below: A reference is an alias for an object that already exists prior to the initialization of the reference. Once a reference is initialized to refer to a particular object, it cannot later be made to refer to a different object; a reference is bound to its initializer for its whole lifetime. Can anyone please explain the context of "cannot later be made to refer to a different object"? The below code works for me: #include <iostream> using namespace std; int main(int argc, char *argv[]) { int i = 100; int& ref = i; cout<<ref<<endl; int k = 2000; ref = k; cout<<ref<<endl; return 0; } Here I am referring the variable ref to both the i and k variables, and the code works perfectly fine. Am I missing something? I have used SUSE10 64-bit Linux for testing my sample program. Thanks in advance for your input.
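
    A small check (not from the book) that illustrates the quoted rule: after int& ref = i, the statement ref = k does not rebind the reference; it copies k's value into i, and ref keeps aliasing i for its whole lifetime.

    ```cpp
    #include <iostream>

    int main() {
        int i = 100;
        int& ref = i;      // ref is bound to i and stays bound to i

        int k = 2000;
        ref = k;           // assigns k's value to i; it does not make ref refer to k

        std::cout << i << '\n';                  // 2000 -- i itself was modified
        std::cout << std::boolalpha
                  << (&ref == &i) << ' '         // true:  ref still refers to i
                  << (&ref == &k) << '\n';       // false: ref never referred to k
        return 0;
    }
    ```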

    Read the article

  • Why does Perl lose foreign characters on Windows; can this be fixed (if so, how)?

    - by Alex R
    Note below how ã changes to a. NOTE2: Before you blame this on CMD.EXE and Windows pipe weirdness, see Experiment 2 below which gets a similar problem using File::Find. The particular problem I'm trying to fix involves working with image files stored on a local drive, and manipulating the file names which may contain foreign characters. The two experiments shown below are intermediate debugging steps. The ã character is common in latin languages. e.g. http://pt.wikipedia.org/wiki/Cão Experiment 1 Experiment 2 To get around my particular problem, I tried using File::Find instead of piped input. The issue actually gets worse: Debugging update: I tried some of the tricks listed at http://perldoc.perl.org/perlunicode.html, e.g. use utf8, use feature 'unicode_strings', etc, to no avail. Environment and Version Info The OS is Windows 7, 64-bit. The Perl is: This is perl 5, version 12, subversion 2 (v5.12.2) built for MSWin32-x64-multi-thread (with 8 registered patches, see perl -V for more detail) Copyright 1987-2010, Larry Wall Binary build 1202 [293621] provided by ActiveState http://www.ActiveState.com Built Sep 6 2010 22:53:42

    Read the article

  • regular expression repeating subexpression

    - by Michael Z
    I have the following text <pattern name="pattern1"/> <success>success case 1</success> <failed> failure 1</failed> <failed> failure 2</failed> <unknown> unknown </unknown> <pattern name="pattern2"/> <success>success case 2</success> <otherTag>There are many other tags.</otherTag> <failed> failure 3</failed> And the regular expression <failed>[\w|\W]*?</failed> matches all the lines containing the failed tag. What do I need to do if I want to include the lines containing the pattern tag as well? Basically, I want the following output: <pattern name="pattern1"/> <failed> failure 1</failed> <failed> failure 2</failed> <pattern name="pattern2"/> <failed> failure 3</failed> I am doing this in JavaScript, and I do not mind doing some intermediate steps.
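
    The usual trick here is alternation: one pattern that matches either a whole <pattern .../> tag or a whole <failed>...</failed> element. A rough sketch of the idea in C++ (std::regex defaults to ECMAScript syntax, so the pattern itself should carry over to JavaScript largely unchanged):

    ```cpp
    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
        std::string text =
            "<pattern name=\"pattern1\"/>\n"
            "<success>success case 1</success>\n"
            "<failed> failure 1</failed>\n"
            "<failed> failure 2</failed>\n"
            "<pattern name=\"pattern2\"/>\n"
            "<otherTag>There are many other tags.</otherTag>\n"
            "<failed> failure 3</failed>\n";

        // Alternation: either a whole <pattern .../> tag or a whole <failed>...</failed> element.
        std::regex re(R"(<pattern[^>]*/>|<failed>[\s\S]*?</failed>)");

        for (std::sregex_iterator it(text.begin(), text.end(), re), end; it != end; ++it)
            std::cout << it->str() << '\n';   // prints the pattern and failed lines in order
    }
    ```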

    Read the article

  • Does using functional languages help against computing values repeatedly?

    - by sharptooth
    Consider a function f(x,y): f(x,0) = x*x; f(0,y) = y*(y + 1); f(x,y) = f(x,y-1) + f(x-1,y); If one tries to implement that recursively in some language like C++ he will encounter a problem. Suppose the function is first called with x = x0 and y = y0. Then for any pair (x,y) where 0 <= x < x0 and 0 <= y < y0 the intermediate values will be computed multiple times - recursive calls will form a huge tree in which multiple leaves will in fact contain the same pairs (x,y). For pairs (x,y) where x and y are both close to 0 values will be computed numerous times. For instance, I tested a similar function implemented in C++ - for x=20 and y=20 its computation takes about 4 hours (yes, four Earth hours!). Obviously the implementation can be rewritten in such way that repeated computation doesn't occur - either iteratively or with a cache table. The question is: will functional languages perform any better and avoid repeated computations when implementing a function like above recursively?
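
    For what it's worth, laziness alone does not make a functional language cache results across separate calls; the repeated work disappears once the results are memoized, which can be done explicitly in any language. A minimal sketch in C++ with a cache keyed on (x, y):

    ```cpp
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <utility>

    std::map<std::pair<int, int>, std::int64_t> cache;

    std::int64_t f(int x, int y) {
        if (y == 0) return static_cast<std::int64_t>(x) * x;        // f(x,0) = x*x
        if (x == 0) return static_cast<std::int64_t>(y) * (y + 1);  // f(0,y) = y*(y+1)

        auto key = std::make_pair(x, y);
        auto it = cache.find(key);
        if (it != cache.end()) return it->second;   // already computed once, reuse it

        std::int64_t result = f(x, y - 1) + f(x - 1, y);
        cache[key] = result;
        return result;
    }

    int main() {
        std::cout << f(20, 20) << '\n';  // finishes immediately instead of taking hours
    }
    ```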

    Read the article

  • "pseudo-atomic" operations in C++

    - by dan
    So I'm aware that nothing is atomic in C++. But I'm trying to figure out if there are any "pseudo-atomic" assumptions I can make. The reason is that I want to avoid using mutexes in some simple situations where I only need very weak guarantees. 1) Suppose I have globally defined volatile bool b, which initially I set true. Then I launch a thread which executes a loop while(b) doSomething(); Meanwhile, in another thread, I execute b=true. Can I assume that the first thread will continue to execute? In other words, if b starts out as true, and the first thread checks the value of b at the same time as the second thread assigns b=true, can I assume that the first thread will read the value of b as true? Or is it possible that at some intermediate point of the assignment b=true, the value of b might be read as false? 2) Now suppose that b is initially false. Then the first thread executes bool b1=b; bool b2=b; if(b1 && !b2) bad(); while the second thread executes b=true. Can I assume that bad() never gets called? 3) What about an int or other builtin types: suppose I have volatile int i, which is initially (say) 7, and then I assign i=7. Can I assume that, at any time during this operation, from any thread, the value of i will be equal to 7? 4) I have volatile int i=7, and then I execute i++ from some thread, and all other threads only read the value of i. Can I assume that i never has any value, in any thread, except for either 7 or 8? 5) I have volatile int i, from one thread I execute i=7, and from another I execute i=8. Afterwards, is i guaranteed to be either 7 or 8 (or whatever two values I have chosen to assign)?
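
    For reference, C++11 and later replace this guesswork with std::atomic, which provides exactly the "no torn reads or writes" guarantee these questions are circling around (volatile gives no such guarantee in standard C++). A minimal sketch:

    ```cpp
    #include <atomic>
    #include <iostream>
    #include <thread>

    std::atomic<bool> keep_running{true};
    std::atomic<int>  value{7};

    int main() {
        std::thread worker([] {
            // Loads of an atomic never observe a half-written ("torn") value.
            while (keep_running.load()) {
                // doSomething();
            }
        });

        value.store(8);            // other threads see either 7 or 8, never an intermediate bit pattern
        keep_running.store(false); // the worker's loop will observe this and exit

        worker.join();
        std::cout << value.load() << '\n';
    }
    ```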

    Read the article

  • Encode_JSON Errors in Lasso 8.6.2 After Period of Time

    - by ATP_JD
    We are in the process of converting apps from Lasso 8 to Lasso 9, and as an intermediate step, have upgraded from 8.5.5 to 8.6.2 (which runs alongside 9 on our new box, in different virtual hosts). I am finding that with 8.6.2 we are getting a slew of errors on pages that call encode_json. The weird thing with these errors is that they don't start happening until some period of time after the site starts. Then, some hours later, all encode_json calls begin to fail with error messages like this: An error occurred while processing your request. Error Information Error Message: No tag, type or constant was defined under the name "?????????????????" with arguments: array: (pair: (-find)=([\x{0020}-\x{21}\x{23}-\x{5b}\x{5d}-\x{10fff}])), (r) at: onCompare with params: 'r' at: JSON with params: 'reload', -Options=array: (-Internal) at: JSON with params: @map: (reload)=(false), (tcstring)=(LZU), (timestring)=(10:42 AM&nbsp;&nbsp;&nbsp;1442Z) at: [...].lasso with params: 'pageloadtime'='1383038310' on line: 31 at position: 1 Error Code: -9948 (Yes, those Chinese(?) characters are in the error message.) I have removed the 8.5.5 encode_json tag from LassoStartup, so we are using the correct built-in method. The encode_json method fails for any and all parameters I throw at it from simple strings to arrays of maps. Upon restarting the site, encode_json resumes working for an hour or two, seemingly depending on load. On 8.5.5, we don't have this problem. Does anyone have experience with this issue? Any advice regarding trying the 8.5.5 tag swap encode_json to see if I can override the built-in method? Maybe it will work better? Thanks in advance for your time and assistance. -Justin

    Read the article

  • Is there a Python module for handling Python object addresses?

    - by cool-RR
    (When I say "object address", I mean the string that you type in Python to access an object. For example 'life.State.step'. Most of the time, all the objects before the last dot will be packages/modules, but in some cases they can be classes or other objects.) In my Python project I often have the need to play around with object addresses. Some tasks that I have to do: Given an object, get its address. Given an address, get the object, importing any needed modules on the way. Shorten an object's address by getting rid of redundant intermediate modules. (For example, 'life.life.State.step' may be the official address of an object, but if 'life.State.step' points at the same object, I'd want to use it instead because it's shorter.) Shorten an object's address by "rooting" a specified module. (For example, 'garlicsim_lib.simpacks.prisoner.prisoner.State.step' may be the official address of an object, but I assume that the user knows where the prisoner package is, so I'd want to use 'prisoner.prisoner.State.step' as the address.) Is there a module/framework that handles things like that? I wrote a few utility modules to do these things, but if someone has already written a more mature module that does this, I'd prefer to use that. One note: Please, don't try to show me a quick implementation of these things. It's more complicated than it seems, there are plenty of gotchas, and any quick-n-dirty code will probably fail for many important cases. These kind of tasks call for battle-tested code. UPDATE: When I say "object", I mostly mean classes, modules, functions, methods, stuff like these. Sorry for not making this clear before.

    Read the article

  • Learning Java and logic using debugger. Did I cheat?

    - by centr0
    After a break from coding in general, my way of thinking logically faded (as if it was there to begin with...). I'm no master programmer. Intermediate at best. I decided to see if i can write an algorithm to print out the fibonacci sequence in Java. I got really frustrated because it was something so simple, and used the debugger to see what was going on with my variables. solved it in less than a minute with the help of the debugger. Is this cheating? When I read code either from a book or someone else's, I now find that it takes me a little more time to understand. If the alghorithm is complex (to me) i end up writing notes as to whats going on in the loop. A primitive debugger if you will. When you other programmers read code, do you also need to write things down as to whats the code doing? Or are you a genius and and just retain it?

    Read the article

  • How to calculate the y-pixels of someones weight on a graph? (math+programming question)

    - by RexOnRoids
    I'm not that smart like some of you geniuses. I need some help from a math whiz. My app draws a graph of the users weight over time. I need a surefire way to always get the right pixel position to draw the weight point at for a given weight. For example, say I want to plot the weight 80.0(kg) on the graph when the range of weights is 80.0 to 40.0kg. I want to be able to plug in the weight (given I know the highest and lowest weights in the range also) and get the pixel result 400(y) (for the top of the graph). The graph is 300 pixels high (starts at 100 and ends at 400). The highest weight 80kg would be plot at 400 while the lowest weight 40kg would be plot at 100. And the intermediate weights should be plotted appropriately. I tried this but it does not work: -(float)weightToPixel:(float)theWeight { float graphMaxY = 400; //The TOP of the graph float graphMinY = 100; //The BOTTOM of the graph float yOffset = 100; //Graph itself is offset 100 pixels in the Y direction float coordDiff = graphMaxY-graphMinY; //The size in pixels of the graph float weightDiff = self.highestWeight-self.lowestWeight; //The weight gap float pixelIncrement = coordDiff/weightDiff; float weightY = (theWeight*pixelIncrement)-(coordDiff-yOffset); //The return value return weightYpixel; }
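
    For what it's worth, this is plain linear interpolation: take the weight's fractional position inside the weight range and scale it onto the pixel range. A sketch (in C++ here, but the arithmetic is identical in Objective-C; the names mirror the post):

    ```cpp
    #include <iostream>

    // Map a weight onto the graph's vertical pixel range by linear interpolation.
    float weightToPixel(float theWeight,
                        float lowestWeight, float highestWeight,
                        float graphMinY,    float graphMaxY) {
        // Fraction of the way from the lowest to the highest weight, 0..1.
        float fraction = (theWeight - lowestWeight) / (highestWeight - lowestWeight);
        return graphMinY + fraction * (graphMaxY - graphMinY);
    }

    int main() {
        // 40 kg .. 80 kg mapped onto pixels 100 .. 400
        std::cout << weightToPixel(80.0f, 40.0f, 80.0f, 100.0f, 400.0f) << '\n'; // 400
        std::cout << weightToPixel(40.0f, 40.0f, 80.0f, 100.0f, 400.0f) << '\n'; // 100
        std::cout << weightToPixel(60.0f, 40.0f, 80.0f, 100.0f, 400.0f) << '\n'; // 250
    }
    ```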

    Read the article

  • Calculating with a variable outside of its bounds in C

    - by aquanar
    If I make a calculation with a variable where an intermediate part of the calculation goes higher then the bounds of that variable type, is there any hazard that some platforms may not like? This is an example of what I'm asking: int a, b; a=30000; b=(a*32000)/32767; I have compiled this, and it does give the correct answer of 29297 (well, within truncating error, anyway). But the part that worries me is that 30,000*32,000 = 960,000,000, which is a 30-bit number, and thus cannot be stored in a 16-bit int. The end result is well within the bounds of an int, but I was expecting that whatever working part of memory would have the same size allocated as the largest source variables did, so an overflow error would occur. This is just a small example to show my problem, I am trying to avoid using floating points by making the fraction be a fraction of the max amount able to be stored in that variable (in this case, a signed integer, so 32767 on the positive side), because the embedded system I'm using I believe does not have an FPU. So how do most processors handle calculations out of the bounds of the source and destination variables?
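
    If int really is 16 bits on the target, the intermediate product does overflow (undefined behaviour for signed integers); compilers for 32-bit desktops quietly do the multiply in a 32-bit int, which is why the example appears to work there. The portable fix is to widen the intermediate calculation explicitly, e.g. with the fixed-width types. A sketch, not tied to any particular embedded toolchain:

    ```cpp
    #include <cstdint>
    #include <iostream>

    int main() {
        std::int16_t a = 30000;

        // Done in 16 bits, 30000 * 32000 would overflow. Casting one operand first
        // forces the whole intermediate calculation into 32 bits.
        std::int16_t b = static_cast<std::int16_t>(
            (static_cast<std::int32_t>(a) * 32000) / 32767);

        std::cout << b << '\n';  // 29297, which fits back into 16 bits
    }
    ```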

    Read the article

  • How to insert rows in a many-to-many relationship

    - by GSound
    Hello, I am having an issue trying to save into an intermediate table. I am new on Rails and I have spent a couple of hours on this but can't make it work, maybe I am doing wrong the whole thing. Any help will be appreciated. =) The app is a simple book store, where a logged-in user picks books and then create an order. This error is displayed: NameError in OrderController#create uninitialized constant Order::Orderlist These are my models: class Book < ActiveRecord::Base has_many :orderlists has_many :orders, :through => :orderlists end class Order < ActiveRecord::Base belongs_to :user has_many :orderlists has_many :books, :through => :orderlists end class OrderList < ActiveRecord::Base belongs_to :book belongs_to :order end This is my Order controller: class OrderController < ApplicationController def add if session[:user] book = Book.find(:first, :conditions => ["id = #{params[:id]}"]) if book session[:list].push(book) end redirect_to :controller => "book" else redirect_to :controller => "user" end end def create if session[:user] @order = Order.new if @order.save session[:list].each do |b| @order.orderlists.create(:book => b) # <-- here is my prob I cant make it work end end end redirect_to :controller => "book" end end Thnx in advance! Manuel

    Read the article

  • USB software protection dongle for Java with an SDK which is cross-platform “for real”. Does it exist?

    - by Unai Vivi
    What I'd like to ask is if anybody knows about an hardware USB-dongle for software protection which offers a very complete out-of-the-box API support for cross-platform Java deployments. Its SDK should provide a jar (only one, not one different library per OS & bitness) ready to be added to one's project as a library. The jar should contain all the native stuff for the various OSes and bitnesses From the application's point of view, one should continue to write (api calls) once and run everywhere, without having to care where the end-user will run the software The provided jar should itself deal with loading the appropriate native library Does such a thing exist? With what I've tried so far, you have different APIs and compiled libraries for win32, linux32, win64, linux64, etc (or you even have to compile stuff yourself on the target machine), but hey, we're doing Java here, we don't know (and don't care) where the program will run! And we can't expect the end-user to be a software engineer, tweak (and break!) its linux server, link libraries, mess with gcc, litter the filesystem, etc... In general, Java support (in a transparent cross-platform fashion) is quite bad with the dongle SDKs I've evaluated so far (e.g. KeyLok and SecuTech's UniKey). I even purchased (no free evaluation kit available) SecureMetric SDKs&dongles (they should've been "soooo" straighforward to integrate -- according to marketing material :\ ) and they were the worst ever: SecureDongle X has no 64bit support and SecureDongle SD is not cross-platform at all. So, has anyone out there been through this and found the ultimate Java security usb dongle for cross-platform deployments? Note: software is low-volume, high-value; application is off-line (intranet with no internet access), so no online-activation alternatives and the like. -- EDIT Tried out HASP dongles (used to be called "Aladdin"), and added them to the no-no list: here, too, there is no out-of-the-box (out-of-the-jar) support: e.g. end-linux-user has to manually put the .so library (the specific file for the appropriate bitness) in the right place on his filesystem, and export an env. variable accordingly. -- EDIT 2 I really don't understand all the negativity and all the downvoting: is this a taboo topic? Is it so hard to understand that a freelance developer has to put food on the table everyday to feed its family and pay the bills at the end of the month? Please don't talk about "adding value" as a supplier, because that'd be off-topic. Furthermore I'm not in direct contact with end-customers, but there's an intermediate reselling entity: it's this entity I want to prevent selling copies of the software without sharing the revenue. -- EDIT 3 I'd like to emphasize the fact that the question is looking for a technical answer, not one about opinions concerning business models, philosophical lucubrations on the concept of value, resellers' reliability, etc. I cannot change resellers, because this isn't a "general purpose" kind of sw, but a very vertical one and (for some reasons it's not worth explaining here) I must go through them. I just need to prevent the "we sold 2 copies, here's your share [bwahaha we sold 10]" scenario.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 22 (sys.dm_db_index_physical_stats)

    - by Tamarick Hill
    The sys.dm_db_index_physical_stats Dynamic Management Function is used to return information about the fragmentation levels, page counts, depth, number of levels, record counts, etc. about the indexes on your database instance. One row is returned for each level in a given index, which we will discuss more later. The function takes a total of 5 input parameters which are (1) database_id, (2) object_id, (3) index_id, (4) partition_number, and (5) the mode of the scan level that you would like to run. Let’s use this function with our AdventureWorks2012 database to better illustrate the information it provides. SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, NULL) As you can see from the result set, there is a lot of beneficial information returned from this DMF. The first couple of columns in the result set (database_id, object_id, index_id, partition_number, index_type_desc, alloc_unit_type_desc) are either self-explanatory or have been explained in our previous blog sessions so I will not go into detail about these at this time. The next column in the result set is the index_depth which represents how deep the index goes. For example, if we have a large index that contains 1 root page, 3 intermediate levels, and 1 leaf level, our index depth would be 5. The next column is the index_level which refers to what level (of the depth) a particular row is referring to. Next is probably one of the most beneficial columns in this result set, which is the avg_fragmentation_in_percent. This column shows you how fragmented a particular level of an index may be. Many people use this column within their index maintenance jobs to dynamically determine whether they should do REORG’s or full REBUILD’s of a given index. The fragment_count column represents the number of fragments in a leaf level while the avg_fragment_size_in_pages represents the number of pages in a fragment. The page_count column tells you how many pages are in a particular index level. From my result set above, you see that the remaining columns all have NULL values. This is because I did not specify a ‘mode’ in my query and as a result it used the ‘LIMITED’ mode by default. The LIMITED mode is meant to be lightweight, so it does not collect information for every column in the result set. I will re-run my query using the ‘DETAILED’ mode and you will see we now have values for these columns. SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, 'DETAILED') From the remaining columns, you see we get even more detailed information such as how many records are in a particular index level (record_count). We have a column for ghost_record_count which represents the number of records that have been marked for deletion, but have not physically been removed by the background ghost cleanup process. We later see information on the MIN, MAX, and AVG record size in bytes. The forwarded_record_count column refers to records that have been updated and now no longer fit within the row on the page anymore and thus have to be moved. A forwarded record is left in the original location with a pointer to the new location. The last column in the result set is the compressed_page_count column which tells you how many pages in your index have been compressed. This is a very powerful DMF that returns good information about the current indexes in your system. However, based on the mode you select, it could be a very resource intensive function so be careful with how you use it.
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188917.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Am I right about the differences between Floyd-Warshall, Dijkstra's and Bellman-Ford algorithms?

    - by Programming Noob
    I've been studying the three and I'm stating my inferences from them below. Could someone tell me if I have understood them accurately enough or not? Thank you. Dijkstra's algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but fails in cases like this Floyd-Warshall's algorithm is used when any of all the nodes can be a source, so you want the shortest distance to reach any destination node from any source node. This only fails when there are negative cycles (this is the most important one. I mean, this is the one I'm least sure about:) 3.Bellman-Ford is used like Dijkstra's, when there is only one source. This can handle negative weights and its working is the same as Floyd-Warshall's except for one source, right? If you need to have a look, the corresponding algorithms are (courtesy Wikipedia): Bellman-Ford: procedure BellmanFord(list vertices, list edges, vertex source) // This implementation takes in a graph, represented as lists of vertices // and edges, and modifies the vertices so that their distance and // predecessor attributes store the shortest paths. // Step 1: initialize graph for each vertex v in vertices: if v is source then v.distance := 0 else v.distance := infinity v.predecessor := null // Step 2: relax edges repeatedly for i from 1 to size(vertices)-1: for each edge uv in edges: // uv is the edge from u to v u := uv.source v := uv.destination if u.distance + uv.weight < v.distance: v.distance := u.distance + uv.weight v.predecessor := u // Step 3: check for negative-weight cycles for each edge uv in edges: u := uv.source v := uv.destination if u.distance + uv.weight < v.distance: error "Graph contains a negative-weight cycle" Dijkstra: 1 function Dijkstra(Graph, source): 2 for each vertex v in Graph: // Initializations 3 dist[v] := infinity ; // Unknown distance function from 4 // source to v 5 previous[v] := undefined ; // Previous node in optimal path 6 // from source 7 8 dist[source] := 0 ; // Distance from source to source 9 Q := the set of all nodes in Graph ; // All nodes in the graph are 10 // unoptimized - thus are in Q 11 while Q is not empty: // The main loop 12 u := vertex in Q with smallest distance in dist[] ; // Start node in first case 13 if dist[u] = infinity: 14 break ; // all remaining vertices are 15 // inaccessible from source 16 17 remove u from Q ; 18 for each neighbor v of u: // where v has not yet been 19 removed from Q. 20 alt := dist[u] + dist_between(u, v) ; 21 if alt < dist[v]: // Relax (u,v,a) 22 dist[v] := alt ; 23 previous[v] := u ; 24 decrease-key v in Q; // Reorder v in the Queue 25 return dist; Floyd-Warshall: 1 /* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j 2 (infinity if there is none). 3 Also assume that n is the number of vertices and edgeCost(i,i) = 0 4 */ 5 6 int path[][]; 7 /* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path 8 from i to j using intermediate vertices (1..k-1). Each path[i][j] is initialized to 9 edgeCost(i,j). 10 */ 11 12 procedure FloydWarshall () 13 for k := 1 to n 14 for i := 1 to n 15 for j := 1 to n 16 path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );

    Read the article

  • How to convert from amateur web app developer to professional web apper?

    - by Nilesh
    This is more of a practical question on web app development and deployment process. Here is some background information. I use PHP for server side scripting, javascript for client side. I use Netbeans and notepad++. I user Firefox and firebug for debugging and testing. The process I use is very amateurish, I code something in netbeans, something in notepad++ and since there is nothing to compile, I just refresh the firefox browser and test it. This is convenient and faster compared to the Java development enviornment where you would have to atleast compile and deploy the jar files before you could run them. I have been thinking of putting a formal process in my development and find it hard putting it together. There are so many things to do before you can deploy your final web app. I keep hearing jslint, compression, unit testing (selenium), Ant, YUI compressor etc but I am now looking for some steps that I can take to make me more organized. For e.g I use netbeans but don't use any projects within it. I directly update the files. I don't use any source control but use my Iomega backup that saves each save into a different version and at the end of the day I backup the dev directory to my Amazon s3 account. For me development environment is just a DEV directory, TEST is my intermediate stage and PROD is the final directory that gets pushed out to the server. But all these directories are in the same apache home. I have few php scripts that just copies the needed files into the production directory. Thats about it for my development approach. I know I am missing the following - Regression testing (manual or automated ??) - automated testing (selenium ??) - automated deployment (ANT ??) - source control (svn ??) - quality control (jslint ??) Can someone explain what are the missing steps and how to go about filling those steps in order to have more professional approach. I am looking for tools with example tutorials in streamlining the whole development to deployment stage. For me just getting a hang of database, server side and client side development all in synchronization was itself a huge accomplishment. And now I feel there is lot missing before you can produce quality web application. For e.g I see lot of mention about using automated testing but how to put in use with respect to javascript and php. How to use ANT for the deployment etc. Is this all too much for a single or two person development team? Is there a way to automate all the above so that I just keep coding in netbeans and then run a batch file that is configured once and run it everytime to produce the code in the production directory? Lot of these information is scattered on the web and here, if someone can guide I would be happy to consolidate here. Thank you for your patience :)

    Read the article

  • Why SQL Developer Rocks for the Advanced User Too

    - by thatjeffsmith
    While SQL Developer may be ‘perfect for Oracle beginners,’ that doesn’t preclude advanced and intermediate users from getting their fair share of toys! I’ve been working with Oracle since the 7.3.4 days, and I think it’s pretty safe to say that the WAY an ‘old timer’ uses a tool like SQL Developer is radically different than the ‘beginner.’ If you’ve been reluctant to use SQL Developer because it’s a GUI, give me a few minutes to try to convince you it’s worth a second (or third) look. 1. Help when you want it, and only when you want it One of the biggest gripes any user has with a piece of software is when said software can’t get out of it’s own way. When you’re typing in a word processor, sometimes you can do without the grammar and spelling checks, the offer to auto-complete your words, and all of the additional mark-up. This drives folks to programs like Notepad++ and vi. You can disable the code insight feature so you can type unmolested by SQL Developer’s attempt to auto-complete your object names. Now, if you happen to come across a long or hard to spell object name, you can still invoke the feature on demand using Ctrl+Spacebar Code Editor – Completion Insight – Enable Completion Auto-Popup (Keyword being Auto) 2. Automatic File Tracking SQL*Minus is nice. Vi is cool. Notepad++ has a lot of features I like. But not too many editors offer automatic logging of changes to your files without having to setup a source control system. I was doing some work on my login.sql. I’m not doing anything crazy, but seeing what I had done in previous iterations was helpful. Now imagine how nice it would be to have this available for your l,000+ line scripts! Track your scripts as they change, no setup required! 3. Extend the Functionality Know SQL and XML? Wish SQL Developer did JUST a little bit more? Build your own extensions. You can have custom context menus and object pages in just a few minutes. This is an example of lazy developers writing code that write code. 4. Get Your Money’s Worth You’ve licensed Enterprise Edition. You got your Diagnostic and Tuning packs. Now start using them! Not everyone has access to Enterprise Manager, especially developers. But that doesn’t mean they don’t need help with troubleshooting and optimizing poorly performing SQL statements. ASH, AWR, Real-Time SQL Monitoring and the SQL Tuning Advisor are built into the Reports and Worksheet. Yes you could make the package calls, but that’s a whole lot of typing, and I’d rather just get to the results. 5. Profile, Debug, & Unit Testing PLSQL An Interactive Development Environment (IDE) built by the same folks that own the programming language (Hello – Oracle PLSQL!) should be complete. It should ‘hug’ the developer and empower them to churn out programs that work, run fast, and are easy to maintain. Write it, test it, debug it, and tune it. When you’re running your programs and you just want to see the data that’s returned, that shouldn’t require any special settings or workaround to make it happen either. Magic! And a whole lot more… I could go on and talk about the support for things like DataPump, RMAN, and DBMS_SCHEDULER, but you’re experts and you’re plenty busy. If you think SQL Developer is falling short somewhere, I want you to let us know about it.

    Read the article

  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review.aspx The Building a Data Mart with Pentaho Data Integration Video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration, it also implements and uses the principles of Data Warehousing (and I even heard the name of Ralph Kimball in the video). Indeed, a video watcher should be familiar with its concepts such as the Star Schema, Slowly Changing Dimension types, etc., so I suggest prior to watching this course to consider skimming through the Data Warehouse concepts (if unfamiliar) or, even better, read Ralph’s excellent The Data Warehouse Toolkit. By the way, the author expands beyond using Pentaho alone to MySQL and MonetDB which is a real icing on the cake! Indeed, I even suggest the name of the course should be ‘Building a Data Warehouse with Pentaho’. To successfully complete the course one needs to know some Linux (Ubuntu used in the course), the VI editor and the Bash command shell, but it seems that similar requirements would also apply to the Windows OS. Additionally, knowing some basic SQL would not hurt. As I had said, MonetDB is used in this course several times and seems to be no more complex than, say, MySQL, but based on what I read is very well suited for fast querying of big volumes of data thanks to having a columnstore (vertical data storage). I don’t see what else can be a barrier, the material is very digestible. On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help: Pentaho: the free Community Edition must be more than anyone needs to learn it. Or even go into a POC. MonetDB can be downloaded (exists for both Linux and Windows) from http://goo.gl/FYxMy0 (just see the appropriate link on the left). The author seems to be using Eclipse to run SQL code, one can get it from http://goo.gl/5CcuN. To create or edit database entities and/or schema otherwise one can use a universal tool called SQuirreL, get it from http://squirrel-sql.sourceforge.net. Next, I must confess Diethard is very knowledgeable in what he does and beyond. However, there will be some accent heard by the user of the course, especially if one’s mother tongue is English, but I got over it in a few chapters. I liked the rate at which the material is being presented, it makes me feel I paid for every second. Eventually, my impressions are: Pentaho is an awesome ETL offering, it is worth learning it very much (I am an ETL fan and a heavy user of SSIS) MonetDB is nice, it tickles my fancy to know it more Data Warehousing, despite all the BigData tool offerings (Hive, Sqoop, Pig on Hadoop), using the traditional tools still rocks Chapters 2 to 6 were the most fun to me with chapter 8 being the most difficult. In terms of closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited for any developer on a “supposed to be done yesterday” type of a project. It is for a beginner to intermediate level ETL/DW developer. But one would need to learn more on Data Warehousing and Pentaho, for such I recommend the 5 star Pentaho Data Integration 4 Cookbook. Enjoy it! Disclaimer: I received this video from the publisher for the purpose of a public review.

    Read the article


  • Simple task framework - building software from reusable pieces

    - by RuslanD
    I'm writing a web service with several APIs, and they will be sharing some of the implementation code. In order not to copy-paste, I would like to ideally implement each API call as a series of tasks, which are executed in a sequence determined by the business logic. One obvious question is whether that's the best strategy for code reuse, or whether I can look at it in a different way. But assuming I want to go with tasks, several issues arise: What's a good task interface to use? How do I pass data computed in one task to another task in the sequence that might need it? In the past, I've worked with task interfaces like: interface Task<T, U> { U execute(T input); } Then I also had sort of a "task context" object which had getters and setters for any kind of data my tasks needed to produce or consume, and it gets passed to all tasks. I'm aware that this suffers from a host of problems. So I wanted to figure out a better way to implement it this time around. My current idea is to have a TaskContext object which is a type-safe heterogeneous container (as described in Effective Java). Each task can ask for an item from this container (task input), or add an item to the container (task output). That way, tasks don't need to know about each other directly, and I don't have to write a class with dozens of methods for each data item. There are, however, several drawbacks: Each item in this TaskContext container should be a complex type that wraps around the actual item data. If task A uses a String for some purpose, and task B uses a String for something entirely different, then just storing a mapping between String.class and some object doesn't work for both tasks. The other reason is that I can't use that kind of container for generic collections directly, so they need to be wrapped in another object. This means that, based on how many tasks I define, I would need to also define a number of classes for the task items that may be consumed or produced, which may lead to code bloat and duplication. For instance, if a task takes some Long value as input and produces another Long value as output, I would have to have two classes that simply wrap around a Long, which IMO can spiral out of control pretty quickly as the codebase evolves. I briefly looked at workflow engine libraries, but they kind of seem like a heavy hammer for this particular nail. How would you go about writing a simple task framework with the following requirements: Tasks should be as self-contained as possible, so they can be composed in different ways to create different workflows. That being said, some tasks may perform expensive computations that are prerequisites for other tasks. We want to have a way of storing the results of intermediate computations done by tasks so that other tasks can use those results for free. The task framework should be light, i.e. growing the code doesn't involve introducing many new types just to plug into the framework.
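
    As an illustration of the "typed items in a shared context" idea (a sketch only, written in C++ rather than Java, with made-up task and key names): keying the container by a name as well as a type sidesteps the "two unrelated tasks both want a String" collision described above, at the cost of producer and consumer agreeing on key names.

    ```cpp
    #include <any>
    #include <stdexcept>
    #include <string>
    #include <unordered_map>
    #include <utility>

    // A small heterogeneous task context: items are stored by name and
    // retrieved with an explicit type, checked at the point of access.
    class TaskContext {
    public:
        template <typename T>
        void put(const std::string& key, T value) {
            items_[key] = std::move(value);
        }

        template <typename T>
        T get(const std::string& key) const {
            auto it = items_.find(key);
            if (it == items_.end()) throw std::runtime_error("missing item: " + key);
            return std::any_cast<T>(it->second);   // throws std::bad_any_cast on a type mismatch
        }

    private:
        std::unordered_map<std::string, std::any> items_;
    };

    struct ResizeTask {
        void execute(TaskContext& ctx) const {
            auto width = ctx.get<int>("image.width");              // consume another task's output
            ctx.put<int>("image.thumbnail_width", width / 4);      // publish this task's output
        }
    };

    int main() {
        TaskContext ctx;
        ctx.put<int>("image.width", 1024);
        ResizeTask{}.execute(ctx);
        return ctx.get<int>("image.thumbnail_width") == 256 ? 0 : 1;
    }
    ```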

    Read the article

  • JMSContext, @JMSDestinationDefintion, DefaultJMSConnectionFactory with simplified JMS API: TOTD #213

    - by arungupta
    "What's New in JMS 2.0" Part 1 and Part 2 provide comprehensive introduction to new messaging features introduced in JMS 2.0. The biggest improvement in JMS 2.0 is introduction of the "new simplified API". This was explained in the Java EE 7 Launch Technical Keynote. You can watch a complete replay here. Sending and Receiving a JMS message using JMS 1.1 requires lot of boilerplate code, primarily because the API was designed 10+ years ago. Here is a code that shows how to send a message using JMS 1.1 API: @Statelesspublic class ClassicMessageSender { @Resource(lookup = "java:comp/DefaultJMSConnectionFactory") ConnectionFactory connectionFactory; @Resource(mappedName = "java:global/jms/myQueue") Queue demoQueue; public void sendMessage(String payload) { Connection connection = null; try { connection = connectionFactory.createConnection(); connection.start(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); MessageProducer messageProducer = session.createProducer(demoQueue); TextMessage textMessage = session.createTextMessage(payload); messageProducer.send(textMessage); } catch (JMSException ex) { ex.printStackTrace(); } finally { if (connection != null) { try { connection.close(); } catch (JMSException ex) { ex.printStackTrace(); } } } }} There are several issues with this code: A JMS ConnectionFactory needs to be created in a application server-specific way before this application can run. Application-specific destination needs to be created in an application server-specific way before this application can run. Several intermediate objects need to be created to honor the JMS 1.1 API, e.g. ConnectionFactory -> Connection -> Session -> MessageProducer -> TextMessage. Everything is a checked exception and so try/catch block must be specified. Connection need to be explicitly started and closed, and that bloats even the finally block. The new JMS 2.0 simplified API code looks like: @Statelesspublic class SimplifiedMessageSender { @Inject JMSContext context; @Resource(mappedName="java:global/jms/myQueue") Queue myQueue; public void sendMessage(String message) { context.createProducer().send(myQueue, message); }} The code is significantly improved from the previous version in the following ways: The JMSContext interface combines in a single object the functionality of both the Connection and the Session in the earlier JMS APIs.  You can obtain a JMSContext object by simply injecting it with the @Inject annotation.  No need to explicitly specify a ConnectionFactory. A default ConnectionFactory under the JNDI name of java:comp/DefaultJMSConnectionFactory is used if no explicit ConnectionFactory is specified. The destination can be easily created using newly introduced @JMSDestinationDefinition as: @JMSDestinationDefinition(name = "java:global/jms/myQueue",        interfaceName = "javax.jms.Queue") It can be specified on any Java EE component and the destination is created during deployment. JMSContext, Session, Connection, JMSProducer and JMSConsumer objects are now AutoCloseable. This means that these resources are automatically closed when they go out of scope. This also obviates the need to explicitly start the connection JMSException is now a runtime exception. Method chaining on JMSProducers allows to use builder patterns. No need to create separate Message object, you can specify the message body as an argument to the send() method instead. Want to try this code ? Download source code! Download Java EE 7 SDK and install. 
Start GlassFish: bin/asadmin start-domain Build the WAR (in the unzipped source code directory): mvn package Deploy the WAR: bin/asadmin deploy <source-code>/jms/target/jms-1.0-SNAPSHOT.war And access the application at http://localhost:8080/jms-1.0-SNAPSHOT/index.jsp to send and receive a message using classic and simplified API. A replay of JMS 2.0 session from Java EE 7 Launch Webinar provides complete details on what's new in this specification: Enjoy!

    Read the article

  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option '–incremental-base' was introduced. Using this option a user can take in incremental backup without specifying the '–start-lsn' option. Description of this option can be found here. Instead of '–start-lsn' the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB would extract the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup. Because of popular demand, in MEB 3.7.1 the option '-incremental-base' has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '–incremental-base' option. With the new option MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup. Details A new prefix 'history:' has been introduced for the –incremental-base option and currently the only permissible value is the string "last_backup". So using the new option an incremental backup can be taken with the following command: $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows: If 'BD' is the backup destination of the last successful backup in mysql.backup_history table and 'BHT' is the mysql.backup_history table if can_read_files_at_BD:     if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:         continue_with_backup()     else         return_with_error() else     continue_with_backup() Advantages Apart from ease of usability an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '–with-timestamp' option along with this new option. For example, the following command $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup  can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo . Limitations The option '--incremental-base=history:last_backup' should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say different partial backups at multiple locations). should not be used after any temporary or experimental backups performed on the server (which where successful!). needs to be used with precaution since any intermediate successful backup without the –no-connection will be used as the base backup for the next incremental backup.  
will give an error in case a valid backup exists at the location of the last successful backup and whose end LSN is different from that of the last successful backup found in the backup_history table. Date: 2012-06-19 HTML generated by org-mode 6.33x in emacs 23

    Read the article

  • export web page data to excel using javascript [on hold]

    - by Sreevani sri
    I have created a web page using HTML. When I click the submit button it should export the data to Excel. Using JavaScript I want to export that data to Excel. My HTML code is: 1. Please give your Name:<input type="text" name="Name" /><br /> 2. Area where you reside:<input type="text" name="Res" /><br /> 3. Specify your age group<br /> (a)15-25<input type="text" name="age" /> (b)26-35<input type="text" name="age" /> (c)36-45<input type="text" name="age" /> (d) Above 46<input type="text" name="age" /><br /> 4. Specify your occupation<br /> (a) Student<input type="checkbox" name="occ" value="student" /> (b) Home maker<input type="checkbox" name="occ" value="home" /> (c) Employee<input type="checkbox" name="occ" value="emp" /> (d) Businesswoman <input type="checkbox" name="occ" value="buss" /> (e) Retired<input type="checkbox" name="occ" value="retired" /> (f) others (please specify)<input type="text" name="others" /><br /> 5. Specify the nature of your family<br /> (a) Joint family<input type="checkbox" name="family" value="jfamily" /> (b) Nuclear family<input type="checkbox" name="family" value="nfamily" /><br /> 6. Please give the Number of female members in your family and their average age approximately<br /> Members Age 1 2 3 4 5<br /> 8. Please give your highest level of education (a)SSC or below<input type="checkbox" name="edu" value="ssc" /> (b) Intermediate<input type="checkbox" name="edu" value="int" /> (c) Diploma <input type="checkbox" name="edu" value="dip" /> (d)UG degree <input type="checkbox" name="edu" value="deg" /> (e) PG <input type="checkbox" name="edu" value="pg" /> (g) Doctorial degree<input type="checkbox" name="edu" value="doc" /><br /> 9. Specify your monthly income approximately in RS <input type="text" name="income" /><br /> 10. Specify your time spent in making a purchase decision at the outlet<br /> (a)0-15 min <input type="checkbox" name="dis" value="0-15 min" /> (b)16-30 min <input type="checkbox" name="dis" value="16-30 min" /> (c) 30-45 min<input type="checkbox" name="dis" value="30-45 min" /> (d) 46-60 min<input type="checkbox" name="dis" value="46-60 min" /><br /> <input type="submit" onclick="exportToExcel()" value="Submit" /> </div> </form>

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services.  But as usual, you lack the time and resources that you need in order to develop the services properly.  So you googled or bing’ed, found this blog post, and began crying in gratitude.  Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about.  So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand.  The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API.  And that legacy API is built in the image of the silo applications.  Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping. The Straight and Narrow Path This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory.  Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide.  Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch.  According to Lafe, building their service inventory is a three-phase process: Model a business process.  This requires intense collaboration between the IT and business wings of the organization, of course.  The FBI uses IBM Websphere tools to model the process with BPMN. Identify candidate services to facilitate the business process. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process.  The FBI uses ActiveVOS for orchestration services. The 12 Step Program to End Your Legacy API Addiction Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint.  For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing.  Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services.  Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat.  (Astute readers will note that Erl’s diagram, restricted to analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.) 
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • Move a sphere along the swipe?

    - by gameOne
    I am trying to get a sphere curl based on the swipe. I know this has been asked many times, but still it's yearning to be answered. I have managed to add force on the direction of the swipe and it works near perfect. I also have all the swipe positions stored in a list. Now I would like to know how can the curl be achieved. I believe the the curve in the swipe can be calculated by the Vector dot product If theta is 0, then there is no need to add the swipe. If it is not, then add the curl. Maybe this condition is redundant if I managed to find how to curl the sphere along the swipe position The code that adds the force to sphere based on the swipe direction is as below: using UnityEngine; using System.Collections; using System.Collections.Generic; public class SwipeControl : MonoBehaviour { //First establish some variables private Vector3 fp; //First finger position private Vector3 lp; //Last finger position private Vector3 ip; //some intermediate finger position private float dragDistance; //Distance needed for a swipe to register public float power; private Vector3 footballPos; private bool canShoot = true; private float factor = 40f; private List<Vector3> touchPositions = new List<Vector3>(); void Start(){ dragDistance = Screen.height*20/100; Physics.gravity = new Vector3(0, -20, 0); footballPos = transform.position; } // Update is called once per frame void Update() { //Examine the touch inputs foreach (Touch touch in Input.touches) { /*if (touch.phase == TouchPhase.Began) { fp = touch.position; lp = touch.position; }*/ if (touch.phase == TouchPhase.Moved) { touchPositions.Add(touch.position); } if (touch.phase == TouchPhase.Ended) { fp = touchPositions[0]; lp = touchPositions[touchPositions.Count-1]; ip = touchPositions[touchPositions.Count/2]; //First check if it's actually a drag if (Mathf.Abs(lp.x - fp.x) > dragDistance || Mathf.Abs(lp.y - fp.y) > dragDistance) { //It's a drag //Now check what direction the drag was //First check which axis if (Mathf.Abs(lp.x - fp.x) > Mathf.Abs(lp.y - fp.y)) { //If the horizontal movement is greater than the vertical movement... if ((lp.x>fp.x) && canShoot) //If the movement was to the right) { //Right move float x = (lp.x - fp.x) / Screen.height * factor; rigidbody.AddForce((new Vector3(x,10,16))*power); Debug.Log("right "+(lp.x-fp.x));//MOVE RIGHT CODE HERE canShoot = false; //rigidbody.AddForce((new Vector3((lp.x-fp.x)/30,10,16))*power); StartCoroutine(ReturnBall()); } else { //Left move float x = (lp.x - fp.x) / Screen.height * factor; rigidbody.AddForce((new Vector3(x,10,16))*power); Debug.Log("left "+(lp.x-fp.x));//MOVE LEFT CODE HERE canShoot = false; //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power); StartCoroutine(ReturnBall()); } } else { //the vertical movement is greater than the horizontal movement if (lp.y>fp.y) //If the movement was up { //Up move float y = (lp.y-fp.y)/Screen.height*factor; float x = (lp.x - fp.x) / Screen.height * factor; rigidbody.AddForce((new Vector3(x,y,16))*power); Debug.Log("up "+(lp.x-fp.x));//MOVE UP CODE HERE canShoot = false; //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power); StartCoroutine(ReturnBall()); } else { //Down move Debug.Log("down "+lp+" "+fp);//MOVE DOWN CODE HERE } } } else { //It's a tap Debug.Log("none");//TAP CODE HERE } } } } IEnumerator ReturnBall() { yield return new WaitForSeconds(5.0f); rigidbody.velocity = Vector3.zero; rigidbody.angularVelocity = Vector3.zero; transform.position = footballPos; canShoot =true; isKicked = false; } }

    Read the article
