Search Results

Search found 28744 results on 1150 pages for 'higher order functions'.

Page 158/1150 | < Previous Page | 154 155 156 157 158 159 160 161 162 163 164 165  | Next Page >

  • R: disentangling scopes

    - by rescdsk
    Hi, Right now, in my R project, I have functions1.R with doFoo() and doBar(), functions2.R with other functions, and main.R with the main program in it, which first does source('functions1.R'); source('functions2.R'), and then calls the other functions. I've been starting the program from the R GUI in Mac OS X, with source('main.R'). This is fine the first time, but after that, the variables that were defined the first time through the program are still defined the second time functions*.R are sourced, and so the functions get a whole bunch of extra variables defined. I don't want that! I want an "undefined variable" error when my function uses a variable it shouldn't! Twice this has given me very late nights of debugging! So how do other people deal with this sort of problem? Is there something like source(), but that makes an independent namespace that doesn't fall through to the main one? Making a package seems like one solution, but it seems like a big pain in the butt compared to e.g. Python, where a source file is automatically a separate namespace. Any tips? Thank you!

    Read the article

  • Is there a good way of automatically generating javascript client code from server side python

    - by tat.wright
    I basically want to be able to: Write a few functions in python (with the minimum amount of extra meta data) Turn these functions into a web service (with the minimum of effort / boiler plate) Automatically generate some javascript functions / objects for rpc (this should prevent me from doing as many stupid things as possible like mistyping method names, forgetting the names of methods, passing the wrong number of arguments) Example python: def hello_world(): return "Hello world" javascript: ... <!-- This file is automatically generated (either dynamically or statically) --> <script src="http://myurl.com/webservice/client_side_javascript"> </script> ... <script> $('#button').click(function () { hello_world(function (data){ $('#label').text(data))) } </script> A bit of research has shown me some approaches that come close to this: Automatic generation of json-rpc services from functions with a little boiler plate code in python and then using jquery and json to do the calls (still easy to make mistakes with method names - still need to be aware of urls when calling, very irritating to write these calls yourself in the firebug shell) Using a library like soaplib to generate wsdl from python (by adding copious type information). And then somehow convert this into javascript (not sure if there is even a library to do this) But are there any approaches closer to what I want?

    Read the article

  • Problem with load testing Web Service - VSTS 2008

    - by Carlos
    Hello, I have a webtest which makes a simple call to a WebService and looks like this: MyWebService webService = new MyWebService(); webService.Timeout = 180000; webService.myMethod(); I am not using ThinkTimes, and the Run Duration is set to 5 minutes. When I ran this test simulating only 1 user, I checked the counters and found something like this: Tests Total: 4500 Network Interface\Bytes sent (agent machine): 35,500 Then I ran the same test, but this time simulating 2 users, and I got something like this: Tests Total: 2225 Network Interface\Bytes sent (agent machine): 30,500 So when I increased the number of users, the tests/sec was half what it was with only 1 user, and the bytes sent by the agent were also lower. I think this is strange, because it doesn't seem I have a bottleneck on my agent machine: CPU is never higher than 30%, I have over 1.5GB of RAM free, and my network utilization is around 0.5% of its capacity. In order to troubleshoot this I ran a test using a Step Pattern, with the simulated users going from 20 to 800. When I check the requests/sec, it is practically constant through the whole test, so it is clear there is something in my test or my environment which is preventing the number of requests from getting higher. It would be expected behavior if the "response time" were getting higher, because that would tell me the requests weren't being processed properly, but the strange thing is the response time is practically constant the whole time, and it is actually pretty low. I have no idea why my agent can't send more requests when I increase the number of users; any help/tip/guess would be really appreciated.

    Read the article

  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Windows will look in the current directory for any DLLs before checking the sysdir, so it loads mine. I have one DLL that redirects with this method: #pragma comment(linker, "/export:oldFunc=nwshader.newFunc") to send calls to a differently named function in my own DLL. I then do any processing and call the original function from the system DLL. I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to make is a shared library (.so file). If this is preloaded before the NWN executable (I found a shell script to handle this), my functions will be called. The only problem is that I need to call the original function (I would use various DLL dynamic loading methods for this, I think) and need to be able to do Detour-like hooking of internal functions. At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, but I don't know exactly what the *nix community refers to it as. I can code C++, but I'm more used to Windows. Being OpenGL, the only part that needs to be modified to be compatible with Linux is the hooking code and the calls. Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?
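
    The Linux counterpart of that DLL-export trick is usually an LD_PRELOAD shared object: define a function with the same name as the one you want to intercept, and fetch the real one lazily with dlsym(RTLD_NEXT, ...). A minimal sketch (glXSwapBuffers is just an example symbol to hook; the build line and library names are assumptions, not taken from the post):

```cpp
// hook.cpp -- e.g. g++ -m32 -fPIC -shared hook.cpp -o libnwhook.so -ldl
// run the game with: LD_PRELOAD=./libnwhook.so ./nwmain
#define _GNU_SOURCE 1          // for RTLD_NEXT
#include <dlfcn.h>
#include <GL/glx.h>

// Because the preloaded .so is searched before libGL, this definition wins.
extern "C" void glXSwapBuffers(Display *dpy, GLXDrawable drawable) {
    typedef void (*SwapFn)(Display *, GLXDrawable);
    static SwapFn real_swap =
        (SwapFn) dlsym(RTLD_NEXT, "glXSwapBuffers");   // the original libGL entry

    // ... run post-processing passes here, before the frame is presented ...

    real_swap(dpy, drawable);                          // forward to the real call
}
```

    This covers functions resolved through the dynamic linker; hooking internal, non-exported functions still needs Detours-style runtime patching, which LD_PRELOAD alone cannot do.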

    Read the article

  • Is it normal for C++ static initialization to appear twice in the same backtrace?

    - by Joseph Garvin
    I'm trying to debug a C++ program compiled with GCC that freezes at startup. GCC protects a function's static local variables with a mutex, and it appears that waiting to acquire such a lock is why it freezes. How this happens is rather confusing. First module A's static initialization occurs (there are __static_init functions GCC invokes that are visible in the backtrace), which calls a function Foo() that has a static local variable. The static local variable is an object whose constructor calls through several layers of functions, then suddenly the backtrace has a few ??'s, and then it is in the static initialization of a second module B (the __static functions occur all over again), which then calls Foo(); but since Foo() never returned the first time, the mutex on the local static variable is still held, and it deadlocks. How can one static init trigger another? My first theory was shared libraries -- that module A would be calling some function in module B that would cause module B to load, thus triggering B's static init, but that doesn't appear to be the case. Module A doesn't use module B at all. So I have a second (and horrifying) guess. Say that: Module A uses some templated function or a function in a templated class, e.g. foo<int>::bar(). Module B also uses foo<int>::bar(). Module A doesn't depend on module B at all. At link time, the linker has two instances of foo<int>::bar(), but this is OK because template functions are marked as weak symbols... At runtime, module A calls foo<int>::bar(), and the static init of module B is triggered, even though module A doesn't depend on module B! Why? Because the linker decided to go with module B's instance of foo<int>::bar() instead of module A's instance at link time. Is this particular scenario valid? Or should one module's static init never trigger static init in another module?

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module and should use data access functions (written by those responsible for other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables the customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate details (for example i18n, activation, product availability etc ...). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with conversation stuff in one shot (which forces me to learn about Product tables) OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many bad practices and things I normally consider faults in programming. The problem with the second approach seems to be the countless mini queries these access functions cause, and performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things? UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decided to change the way it does things, or for some reason modify the schema? It means some other modules would break or malfunction until the change is integrated into them. The usual ripple effect problem.

    Read the article

  • Using Tcl DSL in Python

    - by Sridhar Ratnakumar
    I have a bunch of Python functions. Let's call them foo, bar and baz. They accept a variable number of string arguments and do other sophisticated things (like accessing the network). I want the "user" (let's assume he is only familiar with Tcl) to write scripts in Tcl using those functions. Here's an example (taken from MacPorts) that a user can come up with: post-configure { if {[variant_isset universal]} { set conflags "" foreach arch ${configure.universal_archs} { if {${arch} == "i386"} {append conflags "x86 "} else { if {${arch} == "ppc64"} {append conflags "ppc_64 "} else { append conflags ${arch} " " } } } set profiles [exec find ${worksrcpath} -name "*.pro"] foreach profile ${profiles} { reinplace -E "s|^(CONFIG\[ \\t].*)|\\1 ${conflags}|" ${profile} # Cures an isolated case system "cd ${worksrcpath}/designer && \ ${qt_dir}/bin/qmake -spec ${qt_dir}/mkspecs/macx-g++ -macx \ -o Makefile python.pro" } } } Here, variant_isset, reinplace and so on (other than the Tcl builtins) are implemented as Python functions. if, foreach, set, etc. are normal Tcl constructs. post-configure is a Python function that accepts, well, a Tcl code block that can later be executed (which in turn would obviously end up calling the above mentioned Python "functions"). Is this possible to do in Python? If so, how? from Tkinter import *; root= Tk(); root.tk.eval('puts [array get tcl_platform]') is the only integration I know of, which is obviously very limited (not to mention the fact that it starts up an X11 server on the Mac).

    Read the article

  • atol(), atof(), atoi() function behaviours, is there a stable way to convert from/to string/integer

    - by Berkay
    These days I'm playing with the C functions atol(), atof() and atoi(). From a blog post I found a tutorial and applied it; here are my results: void main() { char a[10],b[10]; puts("Enter the value of a"); gets(a); puts("Enter the value of b"); gets(b); printf("%s+%s=%ld and %s-%s=%ld",a,b,(atol(a)+atol(b)),a,b,(atol(a)-atol(b))); getch(); } There is also atof(), which returns the float value of the string, and atoi(), which returns the integer value. Now, to see the difference between the 3, I checked this code: main() { char a[]={"2545.965"}; printf("atol=%ld\t atof=%f\t atoi=%d\t\n",atol(a),atof(a),atoi(a)); } The output will be atol=2545 atof=2545.965000 atoi=2545. Now take char a[]={"heyyou"}; when you run the program the following will be the output (why? is there any solution to convert pure strings to an integer?): atol=0 atof=0 atoi=0 — the string should contain a numeric value. Now modify this program as char a[]={"007hey"}; the output in this case (tested on Red Hat) will be atol=7 atof=7.000000 atoi=7, so the functions have taken only the 007, not the remaining part (why?). Now consider char a[]={"hey007"}; the output of the program will be atol=0 atof=0.000000 atoi=0. So I just want to convert my strings to numbers and then back to the same text; I played with these functions and, as you see, I'm getting really interesting results. Why is that? Are there any other functions to convert from/to string/integer and vice versa?
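
    For what it's worth, the more "stable" route in C is usually strtol()/strtod() rather than the ato*() family, because they report exactly where conversion stopped instead of silently returning 0. A small sketch (the test strings are the ones from the post above):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *inputs[] = { "2545.965", "heyyou", "007hey", "hey007" };
    for (int i = 0; i < 4; i++) {
        char *end;                                  /* first unconverted character */
        long value = strtol(inputs[i], &end, 10);
        if (end == inputs[i])
            printf("%-10s -> no number at the start\n", inputs[i]);
        else
            printf("%-10s -> %ld (leftover: \"%s\")\n", inputs[i], value, end);
    }
    return 0;
}
```

    Going the other way (number back to text) is sprintf()/snprintf(), so a full round trip is possible as long as the string really does start with digits.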

    Read the article

  • C++ VB6 interfacing problem

    - by Roshan
    Hi, I'm tearing my hair out trying to solve this one, any insights will be much appreciated: I have a C++ exe which acquires data from some hardware in the main thread and processes it in another thread (thread 2). I use a c++ dll to supply some data processing functions which are called from thread 2. I have a requirement to make another set of data processing functions in VB6. I have thus created a VB6 dll, using the add-in vbAdvance to create a standard dll. When I call functions from within this VB6 dll from the main thread, everything works exactly as expected. When I call functions from this VB6 dll in thread 2, I get an access violation. I've traced the error to the CopyMemory command, it would seem that if this is used within the call from the main thread, it's fine but in a call from the process thread, it causes an exception. Why should this be so? As far as I understand, threads share the same address space. Here is the code from my VB dll Public Sub UserFunInterface(ByVal in1ptr As Long, ByVal out1ptr As Long, ByRef nsamples As Long) Dim myarray1() As Single Dim myarray2() As Single Dim i As Integer ReDim myarray1(0 To nsamples - 1) As Single ReDim myarray2(0 To nsamples - 1) As Single With tsa1din(0) ' defined as safearray1d in a global definitions module .cDims = 1 .cbElements = 4 .cElements = nsamples .pvData = in1ptr End With With tsa1dout .cDims = 1 .cbElements = 4 .cElements = nsamples .pvData = out1ptr End With CopyMemory ByVal VarPtrArray(myarray1), VarPtr(tsa1din(0)), 4 CopyMemory ByVal VarPtrArray(myarray2), VarPtr(tsa1dout), 4 For i = 0 To nsamples - 1 myarray2(i) = myarray1(i) * 2 Next i ZeroMemory ByVal VarPtrArray(myarray1), 4 ZeroMemory ByVal VarPtrArray(myarray2), 4 End Sub

    Read the article

  • Classic ASP application-wide initializations and object caching

    - by slack3r
    In classic ASP (which I am forced to use), I have a few factory functions, that is, functions that return classes. I use JScript. In one include file I use these factory functions to create some classes that are used throughout the application. This include file is included with the #include directive in all pages. These factory functions do some "heavy lifting" and I don't want them to be executed on every page load. So, to make this clear I have something like this: // factory.inc function make_class(arg1, arg2) { function klass() { //... } // ... Some heavy stuff return klass; } // init.inc, included everywhere <!-- #include FILE="factory.inc" --> // ... MyClass1 = make_class(myarg01, myarg02); MyClass2 = make_class(myarg11, myarg12); //... How can I achieve the same effect without calling make_class on every page load? I know that I can't cache the classes in the Application object I can't use the Application_OnStart hook in Global.asa I could probably create a scripting component, but I really don't want to do that So, is there something else I can do? Maybe some way to achieve caching of these classes, which are really objects in JScript. PS: [further clarification] In the above code "heavy stuff" is not so heavy, but I just want to know if there's a way to avoid it being executed all the time. It reads database meta information, builds a table of the primary keys in the database and another table that resolves strings to classes, etc.

    Read the article

  • How do I cast a void pointer to a struct in C?

    - by Rowhawn
    In a project I'm writing code for, I have a void pointer, "implementation", which is a member of a "Hash_map" struct, and points to an "Array_hash_map" struct. The concepts behind this project are not very realistic, but bear with me. The specifications of the project ask that I cast the void pointer "implementation" to an "Array_hash_map" before I can use it in any functions. My question, specifically is, what do I do in the functions to cast the void pointers to the desired struct? Is there one statement at the top of each function that casts them or do I make the cast every time I use "implementation"? Here are the typedefs the structs of a Hash_map and Array_hash_map as well as a couple functions making use of them. typedef struct { Key_compare_fn key_compare_fn; Key_delete_fn key_delete_fn; Data_compare_fn data_compare_fn; Data_delete_fn data_delete_fn; void *implementation; } Hash_map; typedef struct Array_hash_map{ struct Unit *array; int size; int capacity; } Array_hash_map; typedef struct Unit{ Key key; Data data; } Unit; functions: /* Sets the value parameter to the value associated with the key parameter in the Hash_map. */ int get(Hash_map *map, Key key, Data *value){ int i; if (map == NULL || value == NULL) return 0; for (i = 0; i < map->implementation->size; i++){ if (map->key_compare_fn(map->implementation->array[i].key, key) == 0){ *value = map->implementation->array[i].data; return 1; } } return 0; } /* Returns the number of values that can be stored in the Hash_map, since it is represented by an array. */ int current_capacity(Hash_map map){ return map.implementation->capacity; }
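
    For what it's worth, the usual pattern is to cast once near the top of each function and then work through a typed pointer; here is a sketch using the typedefs from the post above (so it assumes Key, Data and the two structs are already declared):

```c
int current_capacity(Hash_map map) {
    Array_hash_map *impl = (Array_hash_map *) map.implementation;  /* one cast */
    return impl->capacity;
}

int get(Hash_map *map, Key key, Data *value) {
    Array_hash_map *impl;
    int i;

    if (map == NULL || value == NULL)
        return 0;

    impl = (Array_hash_map *) map->implementation;  /* cast once, reuse below */
    for (i = 0; i < impl->size; i++) {
        if (map->key_compare_fn(impl->array[i].key, key) == 0) {
            *value = impl->array[i].data;
            return 1;
        }
    }
    return 0;
}
```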

    Read the article

  • How would I go about sharing variables in a C++ class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua; I've been working on trying to implement Lua scripting for logic in a Game Engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the Game Engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues. Anyway, currently the Game Engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place -- Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring. What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using libBind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global lua object? What's the "current" best way to do this, as of Lua 5.1.4?
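
    The metatable (__index/__newindex) route is the usual answer for per-object fields, but the core mechanics are easier to see with plain accessor functions registered into the state. A stripped-down sketch (Lua 5.1; GameObject, g_player and the function names are invented for illustration, not taken from the post):

```cpp
extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

struct GameObject { double x; };
static GameObject g_player = { 0.0 };      // the engine-side object C++ iterates over

static int l_get_x(lua_State *L) {
    lua_pushnumber(L, g_player.x);         // hand the current value to the script
    return 1;                              // one result
}

static int l_set_x(lua_State *L) {
    g_player.x = luaL_checknumber(L, 1);   // validate the argument and store it
    return 0;                              // no results
}

int main() {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "get_x", l_get_x);     // expose the accessors to Lua
    lua_register(L, "set_x", l_set_x);
    luaL_dostring(L, "set_x(get_x() + 10)");   // game logic in Lua mutates the engine value
    lua_close(L);
    return 0;
}
```

    Scaling this up per object usually means pushing the GameObject pointer as userdata and letting a shared metatable's __index/__newindex resolve field reads and writes, which is essentially what the binding libraries the post mentions generate for you.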

    Read the article

  • Write a C++ program to encrypt and decrypt certain codes.

    - by Amber
    Step 1: Write a function int GetText(char[],int); which fills a character array from a requested file. That is, the function should prompt the user to input the filename, and then read up to the number of characters given as the second argument, terminating when the number has been reached or when the end of file is encountered. The file should then be closed. The number of characters placed in the array is then returned as the value of the function. Every character in the file should be transferred to the array. Whitespace should not be removed. When testing, assume that no more than 5000 characters will be read. The function should be placed in a file called coding.cpp while the main will be in ass5.cpp. To enable the prototypes to be accessible, the file coding.h contains the prototypes for all the functions that are to be written in coding.cpp for this assignment. (You may write other functions. If they are called from any of the functions in coding.h, they must appear in coding.cpp where their prototypes should also appear. Do not alter coding.h. Any other functions written for this assignment should be placed, along with their prototypes, with the main function.) Step 2: Write a function int SimplifyText(char[],int); which simplifies the text in the first argument, an array containing the number of characters as given in the second argument, by converting all alphabetic characters to lower case, removing all non-alpha characters, and replacing multiple whitespace by one blank. Any leading whitespace at the beginning of the array should be removed completely. The resulting number of characters should be returned as the value of the function. Note that another array cannot appear in the function (as the file does not contain one). For example, if the array contained the 29 characters "The 39 Steps" by John Buchan (with the " appearing in the array), the simplified text would be the steps by john buchan of length 24. The array should not contain a null character at the end. Step 3: Using the file test.txt, test your program so far. You will need to write a function void PrintText(const char[],int,int); that prints out the contents of the array, whose length is the second argument, breaking the lines to exactly the number of characters in the third argument. Be warned that, if the array contains newlines (as it would when read from a file), lines will be broken earlier than the specified length. Step 4: Write a function void Caesar(const char[],int,char[],int); which takes the first argument array, with length given by the second argument and codes it into the third argument array, using the shift given in the fourth argument. The shift must be performed cyclicly and must also be able to handle negative shifts. Shifts exceeding 26 can be reduced by modulo arithmetic. (Is C++'s modulo operations on negative numbers a problem here?) Demonstrate that the test file, as simplified, can be coded and decoded using a given shift by listing the original input text, the simplified text (indicating the new length), the coded text and finally the decoded text. Step 5: The permutation cypher does not limit the character substitution to just a shift. In fact, each of the 26 characters is coded to one of the others in an arbitrary way. So, for example, a might become f, b become q, c become d, but a letter never remains the same. How the letters are rearranged can be specified using a seed to the random number generator. 
The code can then be decoded, if the decoder has the same random number generator and knows the seed. Write the function void Permute(const char[],int,char[],unsigned long); with the same first three arguments as Caesar above, with the fourth argument being the seed. The function will have to make up a permutation table as follows: To find what a is coded as, generate a random number from 1 to 25. Add that to a to get the coded letter. Mark that letter as used. For b, generate 1 to 24, then step that many letters after b, ignoring the used letter if encountered. For c, generate 1 to 23, ignoring a or b's codes if encountered. Wrap around at z. Here's an example, for only the 6 letters a, b, c, d, e, f. For the letter a, generate, from 1-5, a 2. Then a - c. c is marked as used. For the letter b, generate, from 1-4, a 3. So count 3 from b, skipping c (since it is marked as used) yielding the coding of b - f. Mark f as used. For c, generate, from 1-3, a 3. So count 3 from c, skipping f, giving a. Note the wrap at the last letter back to the first. And so on, yielding a - c b - f c - a d - b (it got a 2) e - d f - e Thus, for a given seed, a translation table is required. To decode a piece of text, we need the table generated to be re-arranged so that the right hand column is in order. In fact you can just store the table in the reverse way (e.g., if a gets encoded to c, put a opposite c in the table). Write a function called void DePermute(const char[],int,char[], unsigned long); to reverse the permutation cypher. Again, test your functions using the test file. At this point, any main program used to test these functions will not be required as part of the assignment. The remainder of the assignment uses some of these functions, and needs its own main function. When submitted, all the above functions will be tested by the marker's own main function. Step 6: If the seed number is unknown, decoding is difficult. Write a main program which: (i) reads in a piece of text using GetText; (ii) simplifies the text using SimplifyText; (iii) prints the text using PrintText; (iv) requests two letters to swap. If we think 'a' in the text should be 'q' we would type aq as input. The text would be modified by swapping the a's and q's, and the text reprinted. Repeat this last step until the user considers the text is decoded, when the input of the same letter twice (requesting a letter to be swapped with itself) terminates the program. Step 7: If we have a large enough sample of coded text, we can use knowledge of English to aid in finding the permutation. The first clue is in the frequency of occurrence of each letter. Write a function void LetterFreq(const char[],int,freq[]); which takes the piece of text given as the first two arguments (same as above) and returns in the 26 long array of structs (the third argument), the table of the frequency of the 26 letters. This frequency table should be in decreasing order of popularity. A simple Selection Sort will suffice. (This will be described in lectures.) When printed, this summary would look something like v x r s z j p t n c l h u o i b w d g e a q y k f m 168 106 68 66 59 54 48 45 44 35 26 24 22 20 20 20 17 13 12 12 4 4 1 0 0 0 The formatting will require the use of input/output manipulators. See the header file for the definition of the struct called freq. Modify the program so that, before each swap is requested, the current frequency of the letters is printed. This does not require further calls to LetterFreq, however.
You may use the traditional order of regular letter frequencies (E T A I O N S H R D L U) as a guide when deciding what characters to exchange. Step 8: The decoding process can be made more difficult if blank is also coded. That is, consider the alphabet to be 27 letters. Rewrite LetterFreq and your main program to handle blank as another character to code. In the above frequency order, space usually comes first.
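
    For the Caesar step, the detail the handout hints at is C++'s modulo on negative numbers (the sign of a % b follows the dividend), so the shift has to be normalised before use. A minimal sketch of Step 4 only, under my reading of the spec (file handling and the permutation cypher are left out):

```cpp
#include <cstdio>

void Caesar(const char text[], int length, char coded[], int shift) {
    shift = ((shift % 26) + 26) % 26;            // map any shift, negative too, into 0..25
    for (int i = 0; i < length; ++i) {
        char c = text[i];
        if (c >= 'a' && c <= 'z')
            coded[i] = static_cast<char>('a' + (c - 'a' + shift) % 26);
        else
            coded[i] = c;                        // blanks pass through unchanged
    }
}

int main() {
    const char msg[] = "the steps by john buchan";  // the 24-character example from Step 2
    char out[sizeof msg] = {};
    Caesar(msg, sizeof msg - 1, out, -3);
    std::printf("%s\n", out);                    // decode again with a shift of +3
    return 0;
}
```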

    Read the article

  • Calculating a consecutive streak in data

    - by Jura25
    I’m trying to calculate the maximum winning and losing streak in a dataset (i.e. the highest number of consecutive positive or negative values). I’ve found a somewhat related question here on StackOverflow and even though that gave me some good suggestions, the angle of that question is different, and I’m not (yet) experienced enough to translate and apply that information to this problem. So I was hoping you could help me out, even an suggestion would be great. My data set look like this: > subRes Instrument TradeResult.Currency. 1 JPM -3 2 JPM 264 3 JPM 284 4 JPM 69 5 JPM 283 6 JPM -219 7 JPM -91 8 JPM 165 9 JPM -35 10 JPM -294 11 KFT -8 12 KFT -48 13 KFT 125 14 KFT -150 15 KFT -206 16 KFT 107 17 KFT 107 18 KFT 56 19 KFT -26 20 KFT 189 > split(subRes[,2],subRes[,1]) $JPM [1] -3 264 284 69 283 -219 -91 165 -35 -294 $KFT [1] -8 -48 125 -150 -206 107 107 56 -26 189 In this case, the maximum (winning) streak for JPM is four (namely the 264, 284, 69 and 283 consecutive positive results) and for KFT this value is 3 (107, 107, 56). My goal is to create a function which gives the maximum winning streaks per instrument (i.e. JPM: 4, KFT: 3). To achieve that: R needs to compare the current result with the previous result, and if it is higher then there is a streak of at least 2 consecutive positive results. Then R needs to look at the next value, and if this is also higher: add 1 to the already found value of 2. If this value isn’t higher, R needs to move on to the next value, while remembering 2 as the intermediate maximum. I’ve tried cumsum and cummax in accordance with conditional summing (like cumsum(c(TRUE, diff(subRes[,2]) > 0))), which didn’t work out. Also rle in accordance with lapply (like lapply(rle(subRes$TradeResult.Currency.), function(x) diff(x) > 0)) didn’t work. How can I make this work?

    Read the article

  • How to catch unintentional function interpositioning?

    - by SiegeX
    Reading through my book Expert C Programming, I came across the chapter on function interpositioning and how it can lead to some serious hard to find bugs if done unintentionally. The example given in the book is the following: my_source.c mktemp() { ... } main() { mktemp(); getwd(); } libc mktemp(){ ... } getwd(){ ...; mktemp(); ... } According to the book, what happens in main() is that mktemp() (a standard C library function) is interposed by the implementation in my_source.c. Although having main() call my implementation of mktemp() is intended behavior, having getwd() (another C library function) also call my implementation of mktemp() is not. Apparently, this example was a real life bug that existed in SunOS 4.0.3's version of lpr. The book goes on to explain the fix was to add the keyword static to the definition of mktemp() in my_source.c; although changing the name altogether should have fixed this problem as well. This chapter leaves me with some unresolved questions that I hope you guys could answer: Does GCC have a way to warn about function interposition? We certainly don't ever intend on this happening and I'd like to know about it if it does. Should our software group adopt the practice of putting the keyword static in front of all functions that we don't want to be exposed? Can interposition happen with functions introduced by static libraries? Thanks for the help. EDIT I should note that my question is not just aimed at interposing over standard C library functions, but also functions contained in other libraries, perhaps 3rd party, perhaps ones created in-house. Essentially, I want to catch any instance of interpositioning regardless of where the interposed function resides.
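
    The book's fix is worth seeing in code: giving the local definition internal linkage keeps it out of the global symbol table, so it can no longer interpose on the library's version. A tiny sketch (mktemp is the example name from the book; the same applies to any colliding name):

```c
#include <stdio.h>

/* internal linkage: only this translation unit sees it, so getwd() and friends
   inside libc keep resolving to libc's own mktemp */
static char *mktemp(char *tmpl) {
    printf("local mktemp called\n");
    return tmpl;
}

int main(void) {
    char name[] = "fooXXXXXX";
    mktemp(name);          /* still our local version here */
    return 0;
}
```

    Whether the toolchain can warn about this automatically is a separate question; the static/internal-linkage discipline above is the defensive measure the book itself recommends.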

    Read the article

  • How can I speed up a 1800-line PHP include? It's slowing my pageload down to 10sec/view

    - by somerandomguy
    I designed my code to put all important functions in a single PHP file that's now 1800 lines long. I call it in other PHP files--AJAX processors, for example--with a simple "require_once("codeBank.php")". I'm discovering that it takes about 10 seconds to load up all those functions, even though I have nothing more than a few global arrays and a bunch of other functions involved. The main AJAX processor code, for example, is taking 8 seconds just to do a simple syntax verification (whose operational function is stored in codeBank.php). When I comment out the require_once, my AJAX response time speeds up from 10sec to 40ms, so it's pretty clear that PHP's trying to do something with those 1800 lines of functions. That's even with APC installed, which is surprising. What should I do to get my code speed back to the sub-100ms level? Am I failing to get the cache's benefit somehow? Do I need to cut that single function bank file into different pieces? Are there other subtle things that I could be doing to screw up my response time? Or barring all that, what are some tools to dig further into which PHP operations are hitting speed bumps?

    Read the article

  • cuda 5.0 namespaces for contant memory variable usage

    - by Psypher
    In my program I want to use a structure containing constant variables and keep it on device all long as the program executes to completion. I have several header files containing the declaration of 'global' functions and their respective '.cu' files for their definitions. I kept this scheme because it helps me contain similar code in one place. e.g. all the 'device' functions required to complete 'KERNEL_1' are separated from those 'device' functions required to complete 'KERNEL_2' along with kernels definitions. I had no problems with this scheme during compilation and linking. Until I encountered constant variables. I want to use the same constant variable through all kernels and device functions but it doesn't seem to work. ########################################################################## CODE EXAMPLE ########################################################################### filename: 'common.h' -------------------------------------------------------------------------- typedef struct { double height; double weight; int age; } __CONSTANTS; __constant__ __CONSTANTS d_const; --------------------------------------------------------------------------- filename: main.cu --------------------------------------------------------------------------- #include "common.h" #include "gpukernels.h" int main(int argc, char **argv) { __CONSTANTS T; T.height = 1.79; T.weight = 73.2; T.age = 26; cudaMemcpyToSymbol(d_consts, &T, sizeof(__CONSTANTS)); test_kernel <<< 1, 16 >>>(); cudaDeviceSynchronize(); } --------------------------------------------------------------------------- filename: gpukernels.h --------------------------------------------------------------------------- __global__ void test_kernel(); --------------------------------------------------------------------------- filename: gpukernels.cu --------------------------------------------------------------------------- #include <stdio.h> #include "gpukernels.h" #include "common.h" __global__ void test_kernel() { printf("Id: %d, height: %f, weight: %f\n", threadIdx.x, d_const.height, d_const.weight); } When I execute this code, the kernel executes, displays the thread ids, but the constant values are displayed as zeros. How can I fix this?

    Read the article

  • How do I call C++/CLI (.NET) DLLs from standard, unmanaged non-.NET applications?

    - by tronjohnson
    In the unmanaged world, I was able to write a __declspec(dllexport) or, alternatively, use a .DEF file to expose a function that callers of the DLL could call. (Because of name mangling in C++ for __stdcall, I put aliases into the .DEF file so certain applications could re-use certain exported DLL functions.) Now, I am interested in being able to expose a single entry-point function from a .NET assembly, in an unmanaged fashion, but have it enter into .NET-style functions within the DLL. Is this possible, in a simple and straightforward fashion? What I have is a third-party program that I have extended through DLLs (plugins) that implement some complex mathematics. However, the third-party program has no means for me to visualize the calculations. I want to somehow take these pre-written math functions, compile them into a separate DLL (but using C++/CLI in .NET), but then add hooks to the functions so I can render what's going on under the hood in a .NET user control. I'm not sure how to blend the .NET stuff with the unmanaged stuff, or what to Google to accomplish this task. Specific suggestions with regard to the managed/unmanaged bridge, or alternative methods to accomplish the rendering in the manner I have described, would be helpful. Thank you.
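
    What makes this workable is that a C++/CLI project compiled with /clr can still emit ordinary native exports, and the body of such an export is free to call straight into managed code. A hedged sketch of that mixed-mode pattern (MathCore, Evaluate and EvaluateNative are invented names, not from the post):

```cpp
// mathbridge.cpp -- part of a C++/CLI DLL project built with /clr
using namespace System;

public ref class MathCore {                      // managed side: free to use .NET types,
public:                                          // raise events for a user control, etc.
    static double Evaluate(double x) {
        return Math::Sqrt(x) * x;                // placeholder for the real math plugin code
    }
};

// Native-looking export: the unmanaged host sees a plain C function in the DLL's
// export table; the compiler and linker generate the managed-transition thunk.
extern "C" __declspec(dllexport) double __stdcall EvaluateNative(double x) {
    return MathCore::Evaluate(x);
}
```

    The host application then calls EvaluateNative exactly as it would any export from a native DLL (import library or GetProcAddress), and the CLR is loaded on demand the first time the managed code inside it runs.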

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1 to 1 mapping between entity and table. You can do more complex mapping but it becomes very messy and won't perform. What other options do you have? The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of Biztalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice. You can write nice LINQ queries instead of the XMLReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with. No streaming. Your code would look like this. We create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement: XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");   foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer") from order in customer.Elements("Orders").Elements("Order") select new { Account = customer.Attribute("AccountNumber").Value, OrderDate = order.Attribute("OrderDate").Value })) {     Output0Buffer.AddRow();     Output0Buffer.AccountNumber = xdata.Account;     Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate); } As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty: http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx, which show you how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that causes the pointer in the XmlReader to progress past the start of the element. The SSIS version of the code looks like this: foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml","Customer") from order in customer.Elements("Orders").Elements("Order") select new { Account = customer.Attribute("AccountNumber").Value, OrderDate = order.Attribute("OrderDate").Value })) {     Output0Buffer.AddRow();     Output0Buffer.AccountNumber = xdata.Account;     Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate); } These look very similar, and they are; the key element is the method we are calling, StreamReader. This method is what gives us streaming: what it does is return an enumerable list of elements, and because of the way that LINQ works this results in the data being streamed in.
static IEnumerable<XElement> StreamReader(String filename, string elementName) {     using (XmlReader xr = XmlReader.Create(filename))     {         xr.MoveToContent();         while (xr.Read()) //Reads the first element         {             while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)             {                 XElement node = (XElement)XElement.ReadFrom(xr);                 yield return node;             }         }         xr.Close();     } } This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element, and then the inner while loop checks to see if the current element is the type we want. If not, we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially, once one element has been read we need to check if we are still on the same element type and name (the inner loop). This was Mike's mistake: if we called .Read again we would advance the XmlReader beyond the start of the Element and so the ReadFrom method wouldn't work. So with the code above you can use whatever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.

    Read the article

  • Configure clean URLs using Laravel using a rewrite rule to index.php

    - by yannis hristofakis
    Recently I've started learning Laravel , I have none experience with framework before. I'm encountering the following problem .I'm trying to configure the .htaccess file so I can have clean URLs but the only thing I get are 404 Not Found error pages. I have created a virtual host - you can see below the configuration file - and changed the .htaccesss file on the public directory. /etc/apache2/sites-available <VirtualHost *:80> ServerAdmin [email protected] ServerName laravel.lar DocumentRoot "/home/giannis/Desktop/laravel/public" <Directory "/home/giannis/Desktop/laravel/public"> Options Indexes FollowSymLinks MultiViews AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> .htaccesss file: laravel/public # Apache configuration file # http://httpd.apache.org/docs/current/mod/quickreference.html # Note: ".htaccess" files are an overhead for each request. This logic should # be placed in your Apache config whenever possible. # http://httpd.apache.org/docs/current/howto/htaccess.html # Turning on the rewrite engine is necessary for the following rules and # features. "+FollowSymLinks" must be enabled for this to work symbolically. <IfModule mod_rewrite.c> Options +FollowSymLinks RewriteEngine On </IfModule> # For all files not found in the file system, reroute the request to the # "index.php" front controller, keeping the query string intact <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] </IfModule> In order to test it, I have created a view named about and made the proper routing. If I link to http://laravel.lar/index.php/about/ I'm routing to the about page instead if I link to http://laravel.lar/about/ I get a 404 Not Found error. I'm using a Debian based system.

    Read the article

  • Windows 8 UX Guidelines in one PDF

    - by nmarun
    There are quite a few things you need to do differently in order to write a great Windows 8 app. Although MSDN has it documented completely on their site, the sheer volume of other related information might overwhelm you. In order to make it easy, they have a single PDF with all the relevant information. The file will also serve as a ‘quick ref’ document whether you are developing in the C#-XAML, HTML5-JS-CSS or C++-DirectX style. And yes, this has been updated for the RTM version....(read more)

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #050

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Executing Remote Stored Procedure – Calling Stored Procedure on Linked Server In this example we see two different methods of how to call Stored Procedures remotely.  Connection Property of SQL Server Management Studio SSMS A very simple example of the how to build connection properties for SQL Server with the help of SSMS. Sample Example of RANKING Functions – ROW_NUMBER, RANK, DENSE_RANK, NTILE SQL Server has a total of 4 ranking functions. Ranking functions return a ranking value for each row in a partition. All the ranking functions are non-deterministic. T-SQL Script to Add Clustered Primary Key Jr. DBA asked me three times in a day, how to create Clustered Primary Key. I gave him following sample example. That was the last time he asked “How to create Clustered Primary Key to table?” 2008 2008 – TRIM() Function – User Defined Function SQL Server does not have functions which can trim leading or trailing spaces of any string at the same time. SQL does have LTRIM() and RTRIM() which can trim leading and trailing spaces respectively. SQL Server 2008 also does not have TRIM() function. User can easily use LTRIM() and RTRIM() together and simulate TRIM() functionality. http://www.youtube.com/watch?v=1-hhApy6MHM 2009 Earlier I have written two different articles on the subject Remove Bookmark Lookup. This article is as part 3 of original article. Please read the first two articles here before continuing reading this article. Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2 Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3 Interesting Observation – Query Hint – FORCE ORDER SQL Server never stops to amaze me. As regular readers of this blog already know that besides conducting corporate training, I work on large-scale projects on query optimizations and server tuning projects. In one of the recent projects, I have noticed that a Junior Database Developer used the query hint Force Order; when I asked for details, I found out that the basic concept was not properly understood by him. Queries Waiting for Memory Allocation to Execute In one of the recent projects, I was asked to create a report of queries that are waiting for memory allocation. The reason was that we were doubtful regarding whether the memory was sufficient for the application. The following query can be useful in similar cases. Queries that do not have to wait on a memory grant will not appear in the result set of following query. 2010 Quickest Way to Identify Blocking Query and Resolution – Dirty Solution As the title suggests, this is quite a dirty solution; it’s not as elegant as you expect. However, it works totally fine. Simple Explanation of Data Type Precedence While I was working on creating a question for SQL SERVER – SQL Quiz – The View, The Table and The Clustered Index Confusion, I had actually created yet another question along with this question. However, I felt that the one which is posted on the SQL Quiz is much better than this one because what makes that more challenging question is that it has a multiple answer. 
Encrypted Stored Procedure and Activity Monitor I recently received a question: if a stored procedure is encrypted, can we see its definition in Activity Monitor? The answer is no. Let us do a quick test. Let us create the following Stored Procedure and then launch the Activity Monitor and check the text. Indexed View always Use Index on Table A single table can have a maximum of 249 non clustered indexes and 1 clustered index. In SQL Server 2008, a single table can have a maximum of 999 non clustered indexes and 1 clustered index. It is widely believed that a table can have only 1 clustered index, and this belief is true. I have some questions for all of you. Let us assume that I am creating a view from the table itself and then create a clustered index on it. In my view, I am selecting the complete table itself. 2011 Detecting Database Case Sensitive Property using fn_helpcollations() I received a question on how to determine the case sensitivity of the database. The quick answer to this is to identify the collation of the database and check the properties of the collation. I have previously written how one can identify database collation. Once you have figured out the collation of the database, you can put that in the WHERE condition of the following T-SQL and then check the case sensitivity from the description. Server Side Paging in SQL Server CE (Compact Edition) SQL Server Denali is coming up with new T-SQL of Paging. I have written about the same earlier: SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative, SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison, SQL SERVER – Server Side Paging in SQL Server Denali – Part2. What is very interesting is that SQL Server CE 4.0 has the same feature introduced. Here is a quick example of the same. To run the script in the example, you will have to install WebMatrix 4.0 and download the sample database. Once done you can run the following script. Why I am Going to Attend PASS Summit Unite 2011 The four-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and contacts. Every year, PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees. 2012 Manage Help Settings – CTRL + ALT + F1 This is a very interesting read, as my daughter once accidentally came across a screen in SQL Server Management Studio. It took me 2-3 minutes to figure out how she had created the same screen. Recover the Accidentally Renamed Table "I accidentally renamed a table in my SSMS. I was scrolling very fast and I made mistakes. It was either because I double clicked or clicked on F2 (shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix this." If you have renamed the table, I think you are pretty much out of luck. Here are a few things which you can do which can give you an idea about what your table name can be if you are lucky. Identify Numbers of Non Clustered Index on Tables for Entire Database Here is the script which will give you the numbers of non clustered indexes on any table in the entire database. Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video Here is the complete script which I have used in the SQL in Sixty Seconds Video. Thanks Harsh for the important Tip in the comment.
http://www.youtube.com/watch?v=3kDHC_Tjrns Advanced Data Quality Services with Melissa Data – Azure Data Market For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Blazing Keywords - The Google Blazing Keywords Review

    Many people who are currently attempting different methods of online marketing in order to promote and build their business have heard that keyword research is extremely vital to the success of your online marketing. Unfortunately most online marketing companies do not properly teach their members how to effectively do their keyword research in order to get good results and because of that many people are left to look for services that promise to do this for them.

    Read the article
