Search Results

Search found 6001 results on 241 pages for 'requires'.

Page 213/241 | < Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >

  • Keeping track of business rules within IT department?

    - by evaldas-alexander
    I am looking for the best way to keep track of business rules for both developers and everybody else (support staff / management) in a startup environment. The challenge is that our business model requires quite a lot of different business rules, which are created pretty much on the fly and evolve organically after that. After running this project for 3+ years, we have so many such rules that often the only way to be sure about what the application is supposed to do in a certain situation is to go find the module responsible for that process and analyze its code and comments. That is all fine as long as you have one single developer who created the entire application from scratch, but every new developer needs to go over pretty much the entire codebase in order to understand how the application works. An even bigger problem is that non-technical employees don't even have that option and are therefore forced to ask me pretty much every day how a certain case would be handled by the application.

    A quick example - we only start charging for our customer campaigns once they have been active for at least 72 hours, but at the same time we stop creating invoices for campaigns that belong to insolvent accounts and close such accounts within a month of the first failed charge. That does not apply to accounts that are set to "non-chargeable", which most commonly belong to us, since we use the service ourselves. Invoices are created on the 1st of each month and include charges from the previous month + any current balance that the account might have. However, some customers are charged only 4 days after their invoice has been generated due to issues with their billing department. In addition to that, invoices are also created when a customer deactivates his campaign, but that can only be done once the campaign is no longer under the mandatory 6-month contract, unless an account manager approves early deactivation.

    I know, that's quite a lot of rules that need to be taken into account when answering the question "when do we bill our customers", but actually I could still append an asterisk at the end of each sentence in order to disclose some rare exceptions. Of course, it would be easiest just to keep the business rules to a minimum, but we need to adapt to a changing marketplace - i.e. less than a year ago we had no contracts whatsoever.

    One idea that I have had so far is a simplistic wiki with categories corresponding to areas such as "Account activation", "Invoicing", "Collection procedures" and so on. Another idea would be a giant interactive flowchart showing the entire customer "life cycle" from prospecting to account deactivation. What are your experiences / suggestions?

    Read the article

  • Web Services Primer for a WinForms Developer?

    - by Unicorns
    I've been writing client/server applications with WinForms for about six years now, but I have yet to venture into the web space (neither ASP.NET nor web services). Given the direction that the job market has been heading for some time, and the fact that I have a basic curiosity, I'd like to get involved with writing web services, but I don't know where to start. I've read about various options (XML/SOAP vs. JSON, REST vs. ...well, actually I don't know what it's called, etc.), but I'm not sure what sort of criteria are in play when deciding to use one or the other. Obviously, I'd like to leverage the tools that I have (Visual Studio, the .NET Framework, etc.) without hamstringing myself into only targeting a particular audience (i.e. writing the service in such a way as to make it difficult to consume from a Windows Mobile/Android/iPhone client, for example). For the record, my plan - for now - is to use WCF for my web service development, but I'm open to using another .NET approach if that's advisable. I realize that this question is pretty open-ended so it may get closed, but here are some things I'm wondering:

    1. What are some things to consider when choosing the type of web service (REST, etc.) I intend to write? Is it possible (and, if so, feasible) to move from one approach to another?

    2. Can web services be written in an event-driven way? As I said, I'm a WinForms developer, so I'm used to objects raising events for me to react to. For instance, if I have two clients connected to my service, is there a way for me to "push" information to one of them as a result of an action by the other? If this is possible, is it advisable, or am I just not thinking about it correctly?

    3. What authentication mechanisms seem to work best for public-facing services? What if I plan to have different OSes and client types connecting to the service? Is there a generally accepted platform-agnostic approach?

    4. Along the same line, is authentication something that I should be doing myself (authenticating and managing sessions, etc.), or is it something that should be handled at the framework level, with me just defining exactly how it should work? If that's the case, how do I tell who the requester has authenticated themselves as? I started writing an authentication mechanism (simple username/password combinations stored in the database and a corresponding session table with a GUID key) within my service, requiring that key to be passed with every operation (other than logging in, of course), but I want to make sure that I'm not reinventing the wheel here. However, I also don't want to clutter up the server with a bunch of machine user accounts just to use Basic authentication. I'm also under the impression that Digest (and of course Windows) authentication requires a machine (or AD) user account.
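
    A minimal sketch of the session-key check described in the last point (the class and its in-memory storage are invented for illustration; a real service would hit the session table, handle expiry, and hash the passwords):

        using System;
        using System.Collections.Generic;

        public class SessionValidator
        {
            // Stand-in for the session table: session GUID -> authenticated username.
            private readonly Dictionary<Guid, string> _sessions = new Dictionary<Guid, string>();

            public Guid LogIn(string user, string password)
            {
                // ...check user/password against the database here...
                Guid key = Guid.NewGuid();
                _sessions[key] = user;
                return key;               // the client sends this key with every later call
            }

            // Every operation other than LogIn calls this first to find out who is calling.
            public string RequireUser(Guid sessionKey)
            {
                string user;
                if (!_sessions.TryGetValue(sessionKey, out user))
                    throw new UnauthorizedAccessException("Unknown or expired session key.");
                return user;
            }
        }

    Whether this lives in your own code or is replaced by the framework's membership/authentication support is exactly the trade-off the question is asking about; the sketch only shows how little state the home-grown version actually needs.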

    Read the article

  • Correct way to make datasources/resources a deploy-time setting

    - by Draemon
    I have a web-app that requires two settings: a JDBC datasource and a string token. I desperately want to be able to deploy one .war to various different containers (Jetty, Tomcat, GF3 minimum) and configure these settings at application level within the container. My code does this:

        InitialContext ctx = new InitialContext();
        Context envCtx = (javax.naming.Context) ctx.lookup("java:comp/env");
        token = (String) envCtx.lookup("token");
        ds = (DataSource) envCtx.lookup("jdbc/datasource");

    Let's assume I've used the glassfish management interface to create two JDBC resources, jdbc/test-datasource and jdbc/live-datasource, which connect to different copies of the same schema, on different servers, with different credentials etc. Say I want to deploy this to glassfish and point it at the test datasource; I might have this in my sun-web.xml:

        ...
        <resource-ref>
          <res-ref-name>jdbc/datasource</res-ref-name>
          <jndi-name>jdbc/test-datasource</jndi-name>
        </resource-ref>
        ...

    But sun-web.xml goes inside my war, right? Surely there must be a way to do this through the management interface. Am I even trying to do the right thing? Do other containers make this any easier? I'd be particularly interested in how Jetty 7 handles this, since I use it for development.

    EDIT: Tomcat has a reasonable way to do this. Create $TOMCAT_HOME/conf/Catalina/localhost/webapp.xml with:

        <?xml version="1.0" encoding="UTF-8"?>
        <Context antiResourceLocking="false" privileged="true">
          <!-- String resource -->
          <Environment name="token" value="value of token" type="java.lang.String" override="false" />
          <!-- Linking to a global resource -->
          <ResourceLink name="jdbc/datasource1" global="jdbc/test" type="javax.sql.DataSource" />
          <!-- Derby -->
          <Resource name="jdbc/datasource2" type="javax.sql.DataSource" auth="Container"
                    driverClassName="org.apache.derby.jdbc.EmbeddedDataSource"
                    url="jdbc:derby:test;create=true" />
          <!-- H2 -->
          <Resource name="jdbc/datasource3" type="javax.sql.DataSource" auth="Container"
                    driverClassName="org.h2.jdbcx.JdbcDataSource"
                    url="jdbc:h2:~/test" username="sa" password="" />
        </Context>

    Note that override="false" means the opposite of what it sounds like: it means that this setting cannot be overridden by web.xml. I like this approach because the file is part of the container configuration, not the war, yet it's not part of the global configuration either; it's webapp-specific. I guess I expect a bit more from glassfish since it is supposed to have a full web admin interface, but I would be happy enough with something equivalent to the above.
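
    On the Jetty 7 question specifically: Jetty's rough counterpart to the Tomcat context file is a WEB-INF/jetty-env.xml read by its JNDI ("plus") configuration. A sketch follows; the class names are the Jetty 7 (org.eclipse.jetty.plus.jndi) ones as best I recall, and the argument order should be checked against the Jetty JNDI docs. Note that it lives inside the war, so it shares the drawback of sun-web.xml rather than solving it:

        <?xml version="1.0"?>
        <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
        <Configure class="org.eclipse.jetty.webapp.WebAppContext">
          <!-- String token -->
          <New class="org.eclipse.jetty.plus.jndi.EnvEntry">
            <Arg>token</Arg>
            <Arg type="java.lang.String">value of token</Arg>
            <Arg type="boolean">true</Arg>
          </New>
          <!-- DataSource bound as jdbc/datasource (H2 used purely as an example) -->
          <New class="org.eclipse.jetty.plus.jndi.Resource">
            <Arg>jdbc/datasource</Arg>
            <Arg>
              <New class="org.h2.jdbcx.JdbcDataSource">
                <Set name="URL">jdbc:h2:~/test</Set>
                <Set name="User">sa</Set>
                <Set name="Password"></Set>
              </New>
            </Arg>
          </New>
        </Configure>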

    Read the article

  • Isn't the C++ standard library backward-compatible?

    - by Chris Metzler
    Hi. I'm working on a 64-bit Linux system, trying to build some code that depends on third-party libraries for which I have binaries. During linking, I get a stream of undefined reference errors for one of the libraries, indicating that the linker couldn't resolve references to standard C++ functions/classes, e.g.:

        librxio.a(EphReader.o): In function `gpstk::EphReader::read_fic_data(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
        EphReader.cpp:(.text+0x27c): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)'
        EphReader.cpp:(.text+0x4e8): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)'

    I'm not really a C++ programmer, but this looks to me like it can't find the standard library. Doing some more research, I got the following when I looked at librxio's dependency for the standard library:

        $ ldd librxio.so.16.0
        ./librxio.so.16.0: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./librxio.so.16.0)
        libm.so.6 => /lib64/libm.so.6 (0x00002aaaaad45000)
        libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002aaaaafc8000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002aaaab2c8000)
        libc.so.6 => /lib64/libc.so.6 (0x00002aaaab4d7000)
        /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)

    So I read that as saying that librxio (one of the third-party libraries) requires at least v3.4.9 of the standard library. But the version I have installed is 4.1.2:

        $ rpm -qa | grep libstdc
        compat-libstdc++-33-3.2.3-61.x86_64
        libstdc++-devel-4.1.2-14.el5.i386
        libstdc++-devel-4.1.2-14.el5.x86_64
        libstdc++-4.1.2-14.el5.x86_64
        libstdc++-4.1.2-14.el5.i386

    Shouldn't this work? The shared object major number is 6, same as for v3.4.9. At this level, shouldn't this be backward compatible? It seems like the third-party library is looking for an earlier version of the standard library than what I have installed; but isn't there backward compatibility between versions with the same major number for the shared library? Again, I'm not really a C++ programmer; but I don't see what the problem is. Any advice greatly appreciated. Thanks.
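
    A quick way to check whether this is a symbol-version problem rather than a package-version problem (hedged: GLIBCXX_3.4.9 is a versioned-symbol tag inside libstdc++.so.6, and as far as I recall it first appears in the libstdc++ shipped with GCC 4.2, so a 4.1.2 build can share the same soname major 6 and still lack it) is to list the version tags your installed library actually exports:

        $ strings -a /usr/lib64/libstdc++.so.6 | grep GLIBCXX

    If GLIBCXX_3.4.9 is absent from that output, the prebuilt librxio needs a newer libstdc++ (or a rebuild against yours), even though the major number matches.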

    Read the article

  • What is the best practice for adding persistence to an MVC model?

    - by etheros
    I'm in the process of implementing an ultra-light MVC framework in PHP. It seems to be a common opinion that the loading of data from a database, file etc. should be independent of the Model, and I agree. What I'm unsure of is the best way to link this "data layer" into MVC. Datastore interacts with Model //controller public function update() { $model = $this->loadModel('foo'); $data = $this->loadDataStore('foo', $model); $data->loadBar(9); //loads data and populates Model $model->setBar('bar'); $data->save(); //reads data from Model and saves } Controller mediates between Model and Datastore Seems a bit verbose and requires the model to know that a datastore exists. //controller public function update() { $model = $this->loadModel('foo'); $data = $this->loadDataStore('foo'); $model->setDataStore($data); $model->getDataStore->loadBar(9); //loads data and populates Model $model->setBar('bar'); $model->getDataStore->save(); //reads data from Model and saves } Datastore extends Model What happens if we want to save a Model extending a database datastore to a flatfile datastore? //controller public function update() { $model = $this->loadHybrid('foo'); //get_class == Datastore_Database $model->loadBar(9); //loads data and populates $model->setBar('bar'); $model->save(); //saves } Model extends datastore This allows for Model portability, but it seems wrong to extend like this. Further, the datastore cannot make use of any of the Model's methods. //controller extends model public function update() { $model = $this->loadHybrid('foo'); //get_class == Model $model->loadBar(9); //loads data and populates $model->setBar('bar'); $model->save(); //saves } EDIT: Model communicates with DAO //model public function __construct($dao) { $this->dao = $dao; } //model public function setBar($bar) { //a bunch of business logic goes here $this->dao->setBar($bar); } //controller public function update() { $model = $this->loadModel('foo'); $model->setBar('baz'); $model->save(); } Any input on the "best" option - or alternative - is most appreciated.

    Read the article

  • Consolidating coding styles: Funcs, private method, single method classes

    - by jdoig
    Hi all, We currently have 3 devs with, some, conflicting styles and I'm looking for a way to bring peace to the kingdom... The Coders: Foo 1: Likes to use Func's & Action's inside public methods. He uses actions to alias off lengthy method calls and Func's to perform simple tasks that can be expressed in 1 or 2 lines and will be used frequently through out the code Pros: The main body of his code is succinct and very readable, often with only one or 2 public methods per class and rarely any private methods. Cons: The start of methods contain blocks of lambda rich code that other developers don't enjoy reading; and, on occasion, can contain higher order functions that other dev's REALLY don't like reading. Foo 2: Likes to create a private method for (almost) everything the public method will have to do . Pros: Public methods remain small and readable (to all developers). Cons: Private methods are numerous. With private methods that call into other private methods, that call into... etc, etc. Making code hard to navigate. Foo 3: Likes to create a public class with a, single, public method for every, non-trivial, task that needs performing, then dependency inject them into other objects. Pros: Easily testable, easy to understand (one object, one responsibility). Cons: project gets littered by classes, opening multiple class files to understand what code does makes navigation awkward. It would be great to take the best of all these techniques... Foo-1 Has really nice, readable (almost dsl-like) code... for the most part, except for all the Action and Func lambda shenanigans bulked together at the start of a method. Foo-3 Has highly testable and extensible code that just feels a bit "belt-&-braces" for some solutions and has some code-navigation niggles (constantly hitting F12 in VS and opening 5 other .cs files to find out what a single method does). And Foo-2... Well I'm not sure I like anything about the one-huge .cs file with 2 public methods and 12 private ones, except for the fact it's easier for juniors to dig into. I admit I grossly over-simplified the explanations of those coding styles; but if any one knows of any patterns, practices or diplomatic-manoeuvres that can help unite our three developers (without just telling any of them to just "stop it!") that would be great. From a feasibility standpoint : Foo-1's style meets with the most resistance due to some developers finding lambda and/or Func's hard to read. Foo-2's style meets with a less resistance as it's just so easy to fall into. Foo-3's style requires the most forward thinking and is difficult to enforce when time is short. Any ideas on some coding styles or conventions that can make this work?

    Read the article

  • How do you use technology to memorize a set of terms?

    - by user49767
    There are always a few sets of items that need to be memorized in a short span of time. Here are my cases:

    1) My job requires some set of items to be memorized. 2) I am a developer who has to learn 150+ tags within the next 3 days. 3) A FIX developer or support person has to remember a minimum of 125+ tags (and their sets of possible values). 4) It is better if the team's SQL developer knows all the tables and columns in my database. 5) When people join a new department or job, memorizing a few related items definitely gives some benefit.

    In most cases, I suggest people understand the domain better, and there is nothing wrong with using Google (but remember the correct search word). But recently I came across a junior developer who put a lot of effort into memorizing a set of things (150+ table structures, FIX protocol tags, almost 300+ configuration items from a property file) and was very, very successful in his job and swift in responding to support queries. Needless to say he is a smart worker too (not a dumb guy). When I try to recall some of the successful employees I have met, they were very good at remembering an entire schema, and they did it in a short span of time. I don't argue that memorizing alone brings success, but it greatly helps when the situation demands it.

    Here is my question: I am not good at remembering things, but that shouldn't be a lame excuse. Hence I am evaluating ways to use technology to memorize a set of items. I am not very interested in memory techniques (mnemonics, photographic memory, etc.). I have even recorded 100+ items and listened to them whenever I found free time, and there were definitely some fruitful results. Now I need your suggestions about the ways technology can be exploited to memorize. There could be many reasons why people remember a subject (passion, necessity, being the author, creator, or the responsible person); I am not interested in dissecting why people remember. I am much more interested in ways and techniques (cheat sheets...) to remember a set of items.

    Note: I appreciate and encourage people who could rephrase my question better. Note: I have kept a couple of cheat sheets close to my monitor; honestly, they did not help me :).

    Read the article

  • What is a good platform for building a game framework targeting both web and native languages?

    - by fuzzyTew
    I would like to develop (or find, if one is already in development) a framework with support for accelerated graphics and sound built on a system flexible enough to compile to the following: native ppc/x86/x86_64/arm binaries or a language which compiles to them javascript actionscript bytecode or a language which compiles to it (actionscript 3, haxe) optionally java I imagine, for example, creating an API where I can open windows and make OpenGL-like calls and the framework maps this in a relatively efficient manner to either WebGL with a canvas object, 3d graphics in Flash, OpenGL ES 2 with EGL, or desktop OpenGL in an X11, Windows, or Cocoa window. I have so far looked into these avenues: Building the game library in haXe Pros: Targets exist for php, javascript, actionscript bytecode, c++ High level, object oriented language Cons: No support for finally{} blocks or destructors, making resource cleanup difficult C++ target does not allow room for producing highly optimized libraries -- the foreign function interface requires all primitive types be boxed in a wrapper object, as if writing bindings for a scripting language; these feel unideal for real-time graphics and audio, especially exporting low-level functions. Doesn't seem quite yet mature Using the C preprocessor to create a translator, writing programs entirely with macros Pros: CPP is widespread and simple to use Cons: This is an arduous task and probably the wrong tool for the job CPP implementations differ widely in support for features (e.g. xcode cpp has no variadic macros despite claiming C99 compliance) There is little-to-no room for optimization in this route Using llvm's support for multiple backends to target c/c++ to web languages Pros: Can code in c/c++ LLVM is a very mature highly optimizing compiler performing e.g. global inlining Targets exist for actionscript (alchemy) and javascript (emscripten) Cons: Actionscript target is closed source, unmaintained, and buggy. Javascript targets do not use features of HTML5 for appropriate optimization (e.g. linear memory with typed arrays) and are immature An LLVM target must convert from low-level bytecode, so high-level constructs are lost and bloated unreadable code is created from translating individual instructions, which may be more difficult for an unprepared JIT to optimize. "jump" instructions cause problems for languages with no "goto" statements. Using libclang to write a translator from C/C++ to web languages Pros: A beautiful parsing library providing easy access to the code structure Can code in C/C++ Has sponsored developer effort from Apple Cons: Incomplete; current feature set targets IDEs. Basic operators are unexposed and must be manually parsed from the returned AST element to be identified. Translating code prior to compilation may forgo optimizations assumed in c/c++ such as inlining. Creating new code generators for clang to translate into web languages Pros: Can code in C/C++ as libclang Cons: There is no API; code structure is unstable A much larger job than using libclang; the innards of clang are complex Building the game library in Common Lisp Pros: Flexible, ancient, well-developed language Extensive introspection should ease writing translators Translators exist for at least javascript Cons: Unfamiliar language No standardized library functions, widely varying implementations Which of these avenues should I pursue? Do you know of any others, or any systems that might be useful? Does a general project like this exist somewhere already? 
Thank you for any input.

    Read the article

  • Make a compiled binary run at native speed flawlessly without recompiling from source on another system?

    - by unknownthreat
    I know that many people, at first glance of the question, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate my question first.

    Normally, when we want our program to run at native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile from source. If you want to run a program from another system on your system, you need to use a virtual machine or an emulator. While these tools allow you to use the program you need on the non-native OS, they sometimes have performance problems and glitches. We also have a newer kind of compiler called a "JIT compiler", where the compiler parses the bytecode program into native machine language before execution. Performance may increase to a very good extent with a JIT compiler, but it is still not the same as running on a native system.

    Another program on Linux, WINE, is also a good tool for running Windows programs on a Linux system. I have tried running Team Fortress 2 on it and experimented with some settings. I got ~40 fps on Windows at its mid-high settings at 1280 x 1024. On Linux, I need to turn everything low at 1280 x 1024 to get ~40 fps. There are 2 notable things though:

    1. Polygon model settings do not seem to affect the framerate whether I set them low or high.
    2. When there are post-processing effects or some special effects that require manipulation of the drawn pixels of the current frame, the framerate will drop to 10-20 fps.

    From this point, I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that require the graphics card to do the job, it slows down to a crawl.

    Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run Steam and Team Fortress 2. Although there are flaws, they can run at lower settings. Or perhaps I should also ask, "is it possible to translate a whole program from one system to another system without recompiling from source and get native speed?" I see that we also have AOT compilers; is it possible to use one for something like this? Or are there so many constraints (such as DirectX calls or differences in software architecture) that make it impossible for a program that is not native to the system to run flawlessly at native speed?

    Read the article

  • Runge-Kutta Method with adaptive step

    - by infoholic_anonymous
    I am implementing Runge-Kutta method with adaptive step in matlab. I get different results as compared to matlab's own ode45 and my own implementation of Runge-Kutta method with fixed step. What am I doing wrong in my code? Is it possible? function [ result ] = rk4_modh( f, int, init, h, h_min ) % % f - function handle % int - interval - pair (x_min, x_max) % init - initial conditions - pair (y1(0),y2(0)) % h_min - lower limit for h (step length) % h - initial step length % x - independent variable ( for example time ) % y - dependent variable - vertical vector - in our case ( y1, y2 ) function [ k1, k2, k3, k4, ka, y ] = iteration( f, h, x, y ) % core functionality performed within loop k1 = h * f(x,y); k2 = h * f(x+h/2, y+k1/2); k3 = h * f(x+h/2, y+k2/2); k4 = h * f(x+h, y+k3); ka = (k1 + 2*k2 + 2*k3 + k4)/6; y = y + ka; end % constants % relative error eW = 1e-10; % absolute error eB = 1e-10; s = 0.9; b = 5; % initialization i = 1; x = int(1); y = init; while true hy = y; hx = x; %algorithm [ k1, k2, k3, k4, ka, y ] = iteration( f, h, x, y ); % error estimation for j=1:2 [ hk1, hk2, hk3, hk4, hka, hy ] = iteration( f, h/2, hx, hy ); hx = hx + h/2; end err(:,i) = abs(hy - y); % step adjustment e = abs( hy ) * eW + eB; a = min( e ./ err(:,i) )^(0.2); mul = a * s; if mul >= 1 % step length admitted keepH(i) = h; k(:,:,i) = [ k1, k2, k3, k4, ka ]; previous(i,:) = [ x+h, y' ]; %' i = i + 1; if floor( x + h + eB ) == int(2) break; else h = min( [mul*h, b*h, int(2)-x] ); x = x + keepH(i-1); end else % step length requires further adjustments h = mul * h; if ( h < h_min ) error('Computation with given precision impossible'); end end end result = struct( 'val', previous, 'k', k, 'err', err, 'h', keepH ); end The function in question is: function [ res ] = fun( x, y ) % res(1) = y(2) + y(1) * ( 0.9 - y(1)^2 - y(2)^2 ); res(2) = -y(1) + y(2) * ( 0.9 - y(1)^2 - y(2)^2 ); res = res'; %' end The call is: res = rk4( @fun, [0,20], [0.001; 0.001], 0.008 ); The resulting plot for x1 : The result of ode45( @fun, [0, 20], [0.001, 0.001] ) is:
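
    For reference, and as a sanity check on the error-control logic, this is the standard step-doubling controller that the code above appears to implement (my reading of it, not a verified derivation); y_h is the full-step result, y_{h/2} the result of the two half steps, eW and eB the relative and absolute tolerances, and s the safety factor:

        \[
            \mathrm{err} = \left| y_{h/2} - y_h \right|, \qquad
            e = \left| y_{h/2} \right| \varepsilon_W + \varepsilon_B, \qquad
            h_{\mathrm{new}} = s \, h \, \min_i \left( \frac{e_i}{\mathrm{err}_i} \right)^{1/5}
        \]

    The 1/5 exponent matches the ^(0.2) in the code, since the local error of classical RK4 is O(h^5). One thing worth double-checking against this scheme: on an accepted step, the value usually kept and propagated is the more accurate two-half-step result hy (optionally with the local extrapolation hy + (hy - y)/15), whereas the loop above stores the single full-step y.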

    Read the article

  • Is there a simple way to join two RouteValueDictionary values to pass parameters to Html.ActionLink?

    - by atagaew
    Hi. Look on my code that i created in a partial View: <% foreach (Customer customerInfo in Model.DataRows) {%> <tr> <td> <%=Html.ActionLink( customerInfo.FullName , ((string)ViewData["ActionNameForSelectedCustomer"]) , JoinParameters(customerInfo.id, (RouteValueDictionary) ViewData["AdditionalSelectionParameters"]) , null)%> </td> <td> <%=customerInfo.LegalGuardianName %> </td> <td> <%=customerInfo.HomePhone %> </td> <td> <%=customerInfo.CellPhone %> </td> </tr> <%}%> Here I'm building simple table that showing customer's details. As you may see, in each row, I'm trying to build a link that will redirect to another action. That action requires customerId and some additional parameters. Additional parameters are different for each page where this partial View is using. So, i decided to make Action methods to pass that additional parameters in the ViewData as RouteValueDictionary instance. Now, on the view i have a problem, i need to pass customerId and that RouteValueDictionary together into Html.ActionLink method. That makes me to figure out some way of how to combine all that params into one object (either object or new RouteValueDictionary instance) Because of the way the MVC does, i can't create create a method in the codebehind class (there is no codebihind in MVC) that will join that parameters. So, i used ugly way - inserted inline code: ...script runat="server"... private RouteValueDictionary JoinParameters(int customerId, RouteValueDictionary defaultValues) { RouteValueDictionary routeValueDictionary = new RouteValueDictionary(defaultValues); routeValueDictionary.Add("customerId", customerId); return routeValueDictionary; } ...script... This way is very ugly for me, because i hate to use inline code in the View part. My question is - is there any better way of how i can mix parameters passed from the action (in ViewData, TempData, other...) and the parameter from the view when building action links. May be i can build this link in other way ? Thanks!
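
    One way to keep that merge out of inline script is a small extension method; this is only a sketch (the name With is invented), and it relies on the same ActionLink overload the view already uses (linkText, actionName, RouteValueDictionary, htmlAttributes):

        using System.Web.Routing;

        public static class RouteValueDictionaryExtensions
        {
            // Returns a copy of the defaults with one extra route value added (or overwritten).
            public static RouteValueDictionary With(this RouteValueDictionary defaults, string key, object value)
            {
                var merged = new RouteValueDictionary(defaults ?? new RouteValueDictionary());
                merged[key] = value;
                return merged;
            }
        }

    The link in the view then becomes roughly:

        <%= Html.ActionLink(customerInfo.FullName,
                (string)ViewData["ActionNameForSelectedCustomer"],
                ((RouteValueDictionary)ViewData["AdditionalSelectionParameters"]).With("customerId", customerInfo.id),
                null) %>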

    Read the article

  • Android Couchdb - libcouch and IPC Aidl Services

    - by dirtySanchez
    I am working on a native CouchdDB app with android. Now just this week CouchOne released libcouch, described as "Library files needed to interact with CouchDB on Android": couchone_libcouch@Github It is a basic app that installs CouchDB if the CouchDB service (that comes with CouchDB if it was installed previously) can't be bound to. To be more precise, as I understand it: libcouch estimates CouchDb's presence on the device by trying to bind to a IPC Service from CouchDB and through that service wants communicate with CouchDB. Please see the method "attemptLaunch()" at CouchAppLauncher.class for reviewing this: public void attemptLaunch() { Log.i(TAG,"1.) called attemptLaunch"); Intent intent = new Intent(ICouchService.class.getName()); Log.i(TAG,"1.a) setup Intent"); Boolean canStart = bindService(intent, couchServiceConn, Context.BIND_AUTO_CREATE); Log.i(TAG,"1.b bound service. canStart: " + Boolean.toString(canStart)); if (!canStart) { setContentView(R.layout.install_couchdb); TextView label = (TextView) findViewById(R.id.install_couchdb_text); Button btn = (Button) this.findViewById(R.id.install_couchdb_btn); String text = getString(R.string.app_name) + " requires Apache CouchDB to be installed."; label.setText(text); // Launching the market will fail on emulators btn.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { launchMarket(); finish(); } }); } } The question(s) I have about this are: libcouch never is able to "find" a previously installed CouchDB. It always attempts to install CouchDB from the market. This is because it never actually is able to bind to the CouchDBService. As I understand the purpose auf AIDL generated service interfaces, the actual service that intends to offer it's IPC to other applications should make use of AIDL. In this case the AIDL has been moved to the application that is trying to bind to the remote service, which is libcouch in this case. Reviewing the commits the AIDL files have just been moved out of that repository to libcouch. For complete linkage, here's the link to the Android CouchDB sources: github.com/couchone/libcouch-android Now, I could be completely wrong in my findings, it could also be lincouch's Manifest that s missing something, but I am really looking forward to get some answers!

    Read the article

  • Android - Linkify Problem

    - by Ryan
    I seem to be having trouble with the linkify I am using in my Custom Adapter. For some reason I recieve the following stack track when I click on one of the links: 06-07 20:49:34.696: ERROR/AndroidRuntime(813): Uncaught handler: thread main exiting due to uncaught exception 06-07 20:49:34.745: ERROR/AndroidRuntime(813): android.util.AndroidRuntimeException: Calling startActivity() from outside of an Activity context requires the FLAG_ACTIVITY_NEW_TASK flag. Is this really what you want? 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.app.ApplicationContext.startActivity(ApplicationContext.java:550) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.content.ContextWrapper.startActivity(ContextWrapper.java:248) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.text.style.URLSpan.onClick(URLSpan.java:62) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.text.method.LinkMovementMethod.onTouchEvent(LinkMovementMethod.java:216) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.widget.TextView.onTouchEvent(TextView.java:6560) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.View.dispatchTouchEvent(View.java:3709) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1659) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1107) 06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.app.Activity.dispatchTouchEvent(Activity.java:2061) Here is the code that is calling it: TextView bot = new TextView( c ); bot.setText(li.getBottomText()); bot.setTextColor(Color.BLACK); bot.setTextSize(12); bot.setPadding(50, 35, 0, 10); Linkify.addLinks(bot, Linkify.ALL); rL.addView(bot,ViewGroup.LayoutParams.FILL_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT); I understand what the error is saying but I am not sure how to fix it. Does anyone have any ideas? Thanks in advance for your help!
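
    That particular exception usually means the Context the views were built with is the application context rather than the Activity, so the URLSpan's internal startActivity() has no task to attach to. Two hedged options (the variable names below stand for whatever is in scope in the adapter): prefer passing the Activity itself in as c when the adapter is constructed; failing that, launch the link yourself with the flag the error message asks for:

        // Fallback when only an application Context is available:
        Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(intent);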

    Read the article

  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways: 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance Missing operations such as truncating or extending a file (ala POSIX ftruncate) Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not Inconsistency between platform implementations; e.g. behavior of seeking pass the end of a file or usage of failbit/badbit on errors Don't need all the formatting facilities of stream or possibly even the buffering of streambuf streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation. Does anyone have any recommendations of a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example: class my_istream { public: virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0; virtual std::streamsize read(char* s, std::streamsize n) = 0; virtual void close() = 0; }; template <class T> class boost_istream : public my_istream { public: boost_istream(const T& device) : m_device(device) { } virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) { return boost::iostreams::seek(m_device, off, way); } virtual std::streamsize read(char* s, std::streamsize n) { return boost::iostreams::read(m_device, s, n); } virtual void close() { boost::iostreams::close(m_device); } private: T m_device; };

    Read the article

  • Coroutines in Java

    - by JUST MY correct OPINION
    I would like to do some stuff in Java that would be clearer if written using concurrent routines, but for which full-on threads are serious overkill. The answer, of course, is the use of coroutines, but there doesn't appear to be any coroutine support in the standard Java libraries and a quick Google on it brings up tantalising hints here or there, but nothing substantial. Here's what I've found so far: JSIM has a coroutine class, but it looks pretty heavyweight and conflates, seemingly, with threads at points. The point of this is to reduce the complexity of full-on threading, not to add to it. Further I'm not sure that the class can be extracted from the library and used independently. Xalan has a coroutine set class that does coroutine-like stuff, but again it's dubious if this can be meaningfully extracted from the overall library. It also looks like it's implemented as a tightly-controlled form of thread pool, not as actual coroutines. There's a Google Code project which looks like what I'm after, but if anything it looks more heavyweight than using threads would be. I'm basically nervous of something that requires software to dynamically change the JVM bytecode at runtime to do its work. This looks like overkill and like something that will cause more problems than coroutines would solve. Further it looks like it doesn't implement the whole coroutine concept. By my glance-over it gives a yield feature that just returns to the invoker. Proper coroutines allow yields to transfer control to any known coroutine directly. Basically this library, heavyweight and scary as it is, only gives you support for iterators, not fully-general coroutines. The promisingly-named Coroutine for Java fails because it's a platform-specific (obviously using JNI) solution. And that's about all I've found. I know about the native JVM support for coroutines in the Da Vinci Machine and I also know about the JNI continuations trick for doing this. These are not really good solutions for me, however, as I would not necessarily have control over which VM or platform my code would run on. (Indeed any bytecode manipulation system would suffer similar problems -- it would be best were this pure Java if possible. Runtime bytecode manipulation would restrict me from using this on Android, for example.) So does anybody have any pointers? Is this even possible? If not, will it be possible in Java 7?
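
    For contrast, the thread-backed stand-in that the question treats as overkill can be sketched in plain Java. This is not a coroutine - every generator still costs a real thread and a context switch per hand-off, which is precisely the overhead the libraries above try to avoid - but it shows the yield-style hand-off using nothing but the standard library (all names here are made up for the example):

        import java.util.concurrent.SynchronousQueue;

        public class HandOffGenerator {
            public static void main(String[] args) throws InterruptedException {
                final SynchronousQueue<Integer> channel = new SynchronousQueue<Integer>();

                Thread producer = new Thread(new Runnable() {
                    public void run() {
                        try {
                            for (int i = 0; i < 5; i++) {
                                channel.put(i);   // blocks until the consumer takes it: the "yield"
                            }
                        } catch (InterruptedException ignored) { }
                    }
                });
                producer.start();

                for (int i = 0; i < 5; i++) {
                    System.out.println("got " + channel.take());   // resumes the producer
                }
                producer.join();
            }
        }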

    Read the article

  • HTML: Include, or exclude, optional closing tags?

    - by Ian Boyd
    Some HTML¹ closing tags are optional, i.e.:

        </HTML> </HEAD> </BODY> </P> </DT> </DD> </LI> </OPTION> </THEAD> </TH> </TBODY> </TR> </TD> </TFOOT> </COLGROUP>

    Note: not to be confused with closing tags that are forbidden to be included, i.e.:

        </IMG> </INPUT> </BR> </HR> </FRAME> </AREA> </BASE> </BASEFONT> </COL> </ISINDEX> </LINK> </META> </PARAM>

    Note: XHTML is different from HTML. XHTML is a form of XML, which requires every element to have a closing tag. A closing tag can be forbidden in HTML, yet mandatory in XHTML.

    Are the optional closing tags ideally included, but we'll accept it if you forgot them, or ideally not included, but we'll accept them if you put them in? In other words, should I include them, or should I not include them?

    The HTML 4.01 spec talks about closing element tags being optional, but doesn't say whether it's preferable to include them or preferable not to include them. On the other hand, a random article on DevGuru says: "The ending tag is optional. However, it is recommended that it be included." The reason I ask is because you just know it's optional for compatibility reasons; they would have made them (mandatory | forbidden) if they could have. Put another way: what did HTML 1, 2, and 3 do with regard to these now-optional closing tags? What does HTML 5 do? And what should I do?

    Footnote: ¹ HTML 4.01
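
    As a concrete illustration of what "optional" means in HTML 4.01, both of these lists are valid markup; the first simply leans on the implied end tags:

        <ul>
          <li>first item
          <li>second item
        </ul>

        <ul>
          <li>first item</li>
          <li>second item</li>
        </ul>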

    Read the article

  • node.js callback getting unexpected value for variable

    - by defrex
    I have a for loop, and inside it a variable is assigned with var. Also inside the loop a method is called which requires a callback. Inside the callback function I'm using the variable from the loop. I would expect that it's value, inside the callback function, would be the same as it was outside the callback during that iteration of the loop. However, it always seems to be the value from the last iteration of the loop. Am I misunderstanding scope in JavaScript, or is there something else wrong? The program in question here is a node.js app that will monitor a working directory for changes and restart the server when it finds one. I'll include all of the code for the curious, but the important bit is the parse_file_list function. var posix = require('posix'); var sys = require('sys'); var server; var child_js_file = process.ARGV[2]; var current_dir = __filename.split('/'); current_dir = current_dir.slice(0, current_dir.length-1).join('/'); var start_server = function(){ server = process.createChildProcess('node', [child_js_file]); server.addListener("output", function(data){sys.puts(data);}); }; var restart_server = function(){ sys.puts('change discovered, restarting server'); server.close(); start_server(); }; var parse_file_list = function(dir, files){ for (var i=0;i<files.length;i++){ var file = dir+'/'+files[i]; sys.puts('file assigned: '+file); posix.stat(file).addCallback(function(stats){ sys.puts('stats returned: '+file); if (stats.isDirectory()) posix.readdir(file).addCallback(function(files){ parse_file_list(file, files); }); else if (stats.isFile()) process.watchFile(file, restart_server); }); } }; posix.readdir(current_dir).addCallback(function(files){ parse_file_list(current_dir, files); }); start_server(); The output from this is: file assigned: /home/defrex/code/node/ejs.js file assigned: /home/defrex/code/node/templates file assigned: /home/defrex/code/node/web file assigned: /home/defrex/code/node/server.js file assigned: /home/defrex/code/node/settings.js file assigned: /home/defrex/code/node/apps file assigned: /home/defrex/code/node/dev_server.js file assigned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js For those from the future: node.devserver.js
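
    What the output shows is the usual var-in-a-loop closure behaviour rather than anything node-specific: var is function-scoped, so every callback closes over the same file variable, which has already reached its final value by the time the callbacks fire. A minimal illustration and the usual wrapper-function fix (in the code above, wrapping the loop body in a function that takes file as a parameter achieves the same thing):

        // prints 3, 3, 3
        for (var i = 0; i < 3; i++) {
            setTimeout(function () { console.log(i); }, 0);
        }

        // prints 0, 1, 2 - the extra function call captures the current value
        for (var i = 0; i < 3; i++) {
            (function (captured) {
                setTimeout(function () { console.log(captured); }, 0);
            })(i);
        }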

    Read the article

  • How can I have a Makefile automatically rebuild source files that include a modified header file?

    - by Nicholas Flynt
    I have the following makefile that I use to build a program (a kernel, actually) that I'm working on. Its from scratch and I'm learning about the process, so its not perfect, but I think its powerful enough at this point for my level of experience writing makefiles. AS = nasm CC = gcc LD = ld TARGET = core BUILD = build SOURCES = source INCLUDE = include ASM = assembly VPATH = $(SOURCES) CFLAGS = -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions \ -nostdinc -fno-builtin -I $(INCLUDE) ASFLAGS = -f elf #CFILES = core.c consoleio.c system.c CFILES = $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.c))) SFILES = assembly/start.asm SOBJS = $(SFILES:.asm=.o) COBJS = $(CFILES:.c=.o) OBJS = $(SOBJS) $(COBJS) build : $(TARGET).img $(TARGET).img : $(TARGET).elf c:/python26/python.exe concat.py stage1 stage2 pad.bin core.elf floppy.img $(TARGET).elf : $(OBJS) $(LD) -T link.ld -o $@ $^ $(SOBJS) : $(SFILES) $(AS) $(ASFLAGS) $< -o $@ %.o: %.c @echo Compiling $<... $(CC) $(CFLAGS) -c -o $@ $< #Clean Script - Should clear out all .o files everywhere and all that. clean: -del *.img -del *.o -del assembly\*.o -del core.elf My main issue with this makefile is that when I modify a header file that one or more C files include, the C files aren't rebuilt. I can fix this quite easily by having all of my header files be dependencies for all of my C files, but that would effectively cause a complete rebuild of the project any time I changed/added a header file, which would not be very graceful. What I want is for only the C files that include the header file I change to be rebuilt, and for the entire project to be linked again. I can do the linking by causing all header files to be dependencies of the target, but I cannot figure out how to make the C files be invalidated when their included header files are newer. I've heard that GCC has some commands to make this possible (so the makefile can somehow figure out which files need to be rebuilt) but I can't for the life of me find an actual implementation example to look at. Can someone post a solution that will enable this behavior in a makefile? EDIT: I should clarify, I'm familiar with the concept of putting the individual targets in and having each target.o require the header files. That requires me to be editing the makefile every time I include a header file somewhere, which is a bit of a pain. I'm looking for a solution that can derive the header file dependencies on its own, which I'm fairly certain I've seen in other projects.

    Read the article

  • How to define an extern "C" struct-returning function in C++ using MSVC?

    - by DK
    The following source file will not compile with the MSVC compiler (v15.00.30729.01):

        /* stest.c */
        #ifdef __cplusplus
        extern "C" {
        #endif

        struct Test;
        extern struct Test make_Test(int x);

        struct Test {
            int x;
        };

        extern struct Test make_Test(int x)
        {
            struct Test r;
            r.x = x;
            return r;
        }

        #ifdef __cplusplus
        }
        #endif

    Compiling with cl /c /Tpstest.c produces the following error:

        stest.c(8) : error C2526: 'make_Test' : C linkage function cannot return C++ class 'Test'
        stest.c(6) : see declaration of 'Test'

    Compiling without /Tp (which tells cl to treat the file as C++) works fine. The file also compiles fine in DigitalMars C and GCC (from MinGW) in both C and C++ modes. I also used -ansi -pedantic -Wall with GCC and it had no complaints. For reasons I will go into below, we need to compile this file as C++ for MSVC (not for the others), but with functions being compiled as C. In essence, we want a normal C compiler... except for about six lines. Is there a switch or attribute or something I can add that will allow this to work?

    The code in question (though not the above; that's just a reduced example) is being produced by a code generator. As part of this, we need to be able to generate floating point NaNs and infinities as constants (long story), meaning we have to compile with MSVC in C++ mode in order to actually do this. We only found one solution that works, and it only works in C++ mode. We're wrapping the code in extern "C" {...} because we want to control the mangling and calling convention so that we can interface with existing C code. ...also because I trust C++ compilers about as far as I could throw a smallish department store. I also tried wrapping just the reinterpret_cast line in extern "C++" {...}, but of course that doesn't work. Pity.

    There is a potential solution I found which requires reordering the declarations such that the full struct definition comes before the function forward declaration, but this is very inconvenient due to the way the codegen is performed, so I'd really like to avoid having to go down that road if I can.
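
    For completeness, a sketch of the reordering workaround mentioned in the last paragraph: the same translation unit with the complete struct definition moved ahead of the C-linkage declaration, which (per the question) is the shape MSVC accepts under /Tp:

        /* stest.c - reordered so the struct is complete before the declaration */
        #ifdef __cplusplus
        extern "C" {
        #endif

        struct Test {
            int x;
        };

        extern struct Test make_Test(int x);

        extern struct Test make_Test(int x)
        {
            struct Test r;
            r.x = x;
            return r;
        }

        #ifdef __cplusplus
        }
        #endif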

    Read the article

  • Cepstral Analysis for pitch detection

    - by Ohmu
    Hi! I'm looking to extract pitches from a sound signal. Someone on IRC just explained to me how taking a double FFT achieves this. Specifically:

    1. take the FFT
    2. take the log of the square of the absolute value (can be done with a lookup table)
    3. take another FFT
    4. take the absolute value

    I am attempting this using vDSP. I can't understand how I didn't come across this technique earlier. I did a lot of hunting and asking questions; several weeks' worth. More to the point, I can't understand why I didn't think of it.

    I am attempting to achieve this with the vDSP library. It looks as though it has functions to handle all of these tasks. However, I'm wondering about the accuracy of the final result. I have previously used a technique which scours the frequency bins of a single FFT for local maxima. When it encounters one, it uses a cunning technique (the change in phase since the last FFT) to more accurately place the actual peak within the bin. I am worried that this precision will be lost with the technique I'm presenting here. I guess the technique could be used after the second FFT to get the fundamental accurately. But it kind of looks like the information is lost in step 2.

    As this is a potentially tricky process, could someone with some experience just look over what I'm doing and check it for sanity? Also, I've heard there is an alternative technique involving fitting a quadratic over neighbouring bins. Is this of comparable accuracy? If so, I would favour it, as it doesn't involve remembering bin phases.

    So, questions: does this approach make sense? Can it be improved? I'm a bit worried about the log square component; there seems to be a vDSP function to do exactly that: vDSP_vdbcon. However, there is no indication that it precalculates a log table - I assume it doesn't, as the FFT function requires an explicit pre-calculation function to be called and passed into it, and this function doesn't. Is there some danger of harmonics being picked up? Is there any cunning way of making vDSP pull out the maxima, biggest first? Can anyone point me towards some research or literature on this technique?

    The main question: is it accurate enough? Can the accuracy be improved? I have just been told by an expert that the accuracy IS INDEED not sufficient. Is this the end of the line?

    Pi

    PS I get SO annoyed (npi) when I want to create tags, but cannot. :| I have suggested to the maintainers that SO keep track of attempted tags, but I'm sure I was ignored. We need tags for vDSP, the Accelerate framework, and cepstral analysis.
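
    For reference, the recipe above is essentially the power cepstrum; in rough notation (my own, hedged):

        \[
            C(q) \;=\; \Big|\, \mathcal{F}\big\{ \log \lvert \mathcal{F}\{x(t)\} \rvert^{2} \big\} \Big|
        \]

    with the pitch appearing as a peak at the quefrency q0 ~ 1/f0. Since q is quantised to the sample spacing, the raw peak position is coarse (especially for high pitches), which is exactly where some refinement such as the quadratic fit over neighbouring quefrency bins would have to come back in if the plain double FFT turns out not to be accurate enough.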

    Read the article

  • SQL/Schema comparison and upgrade

    - by Workshop Alex
    I have a simple situation. A large organisation is using several different versions of some (desktop) application and each version has it's own database structure. There are about 200 offices and each office will have it's own version, which can be one of 7 different ones. The company wants to upgrade all applications to the latest versions, which will be version 8. The problem is that they don't have a separate database for each version. Nor do they have a separate database for each office. They have one single database which is handled by a dedicated server, thus keeping things like management and backups easier. Every office has it's own database schema and within the schema there's the whole database structure for their specific application version. As a result, I'm dealing with 200 different schema's which need to be upgraded, each with 7 possible versions. Fortunately, every schema knows the proper version so checking the version isn't difficult. But my problem is that I need to create upgrade scripts which can upgrade from version 1 to version 2 to version 3 to etc... Basically, all schema's need to be bumped up one version until they're all version 8. Writing the code that will do this is no problem. the challenge is how to create the upgrade script from one version to the other? Preferably with some automated tool. I've examined RedGate's SQL Compare and Altova's DatabaseSpy but they're not practical. Altova is way too slow. RedGate requires too much processing afterwards, since the generated SQL Script still has a few errors and it refers to the schema name. Furthermore, the code needs to become part of a stored procedure and the code generated by RedGate doesn't really fit inside a single procedure. (Plus, it's doing too much transaction-handling, while I need everything within a single transaction. I have been considering using another SQL Comparison tool but it seems to me that my case is just too different from what standard tools can deliver. So I'm going to write my own comparison tool. To do this, I'll be using ADOX with Delphi to read the catalogues for every schema version in the database, then use this to write the SQL Statements that will need to upgrade these schema's to their next version. (Comparing 1 with 2, 2 with 3, 3 with 4, etc.) I'm not unfamiliar with generating SQL-Script-Generators so I don't expect too many problems. And I'll only be upgrading the table structures, not any of the other database objects. So, does anyone have some good tips and tricks to apply when doing this kind of comparisons? Things to be aware of? Practical tips to increase speed?
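
    Since the comparison is between many schemas inside one database, one building block worth noting (SQL Server syntax assumed here, and the schema names are placeholders): the INFORMATION_SCHEMA views can be compared directly in SQL, which may save an ADOX round-trip for the simple cases. For example, columns present in the version-8 reference schema but missing from one office schema:

        SELECT c.TABLE_NAME, c.COLUMN_NAME, c.DATA_TYPE
        FROM   INFORMATION_SCHEMA.COLUMNS AS c
        WHERE  c.TABLE_SCHEMA = 'v8'
          AND NOT EXISTS (SELECT 1
                          FROM   INFORMATION_SCHEMA.COLUMNS AS o
                          WHERE  o.TABLE_SCHEMA = 'office42'
                            AND  o.TABLE_NAME   = c.TABLE_NAME
                            AND  o.COLUMN_NAME  = c.COLUMN_NAME);

    The same pattern against INFORMATION_SCHEMA.TABLES finds missing tables, and the ALTER/CREATE statements can then be emitted per version step, which fits the one-transaction-per-upgrade requirement.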

    Read the article

  • Passing Auth to API calls with Web Service References

    - by coffeeaddict
    I am new to web services. The last time I dealt with SOAP was when I created a bunch of wrapper classes that sent requests and received responses back via some response objects/classes I had created. So I had an object to send certain API requests, and likewise a set of objects to hold the response as an object so I could utilize that 3rd party API. Then someone came to me and said why not just use the WSDL and a web service. OK, so today I went and created a "Service Reference". I see that this is what's called a "proxy class": you just instantiate an instance of it and then, voilà, you have access to all the methods from the WSDL.

    But this leaves me with auth questions. Back when I created my own classes manually, I had a class which exposed properties that I would set and then access for things like signature, username, and password, which got sent along with the HTTP request and were required by whatever 3rd party API I was using to make API calls. But then, when using a Service Reference, how would I pass this information just like I had done in my custom classes? For instance, I'm going to be working with the PayPal API. It requires you to send a signature and a few other pieces of information like username and password.

        // Determines if the API call needs to use a session-based URI
        string requestURI = UseAuthURI == true ? _requestURIAuthBased + aSessionID : _requestURI;
        byte[] data = XmlUtil.DocumentToBytes(doc);
        // Create the actual Request instance
        HttpWebRequest request = CreateWebRequest(requestURI, data.Length);

    So how do I pass username, password, signature, etc. when using web service references for each method call? Is it as simple as specifying it as a param to the method, or do you use the .Credentials and .URL properties of your proxy class object? It seems to me Credentials means Windows credentials, but I could be wrong. Is it limited to that, or can you use it to specify those required header values that PayPal expects with each method call/API request?

    Read the article

  • Thin & Sinatra not taking port

    - by NekoNova
    I'm having problems settig up my application using Thin and Sinatra. I have created a development-config.ru file that contains the following settings: # This is a rack configuration file to fire up the Sinatra application. # This allows better control and configuration as we are using the modular # approach here for controlling our application. # # Extend the Ruby load path with the root of the API and the lib folder # so that we can automatically include all our own custom classes. This makes # the requiring of files a bit cleaner and easier to maintain. # This is basically what rails does as well. # We also store the root of the API in the ENV settings to ensure we have # always access to the root of the API when building paths. ENV['API_ROOT'] = File.dirname(__FILE__) $:.unshift ENV['API_ROOT'] $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'lib')) $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'db')) # Now we can require all the gems used for the entire API by simpling requiring # them here. We can also include the classes that we have defined inside the lib # folder. require 'rubygems' require 'bundler' # Run Bundler to setup our gems properly. This will install all the missing gems on # the system and ensure that the deployment environment is ready to run. Bundler.require # To make the loading easier for the application, we will now automatically load all # models that have been defined inside the lib folder. This ensures that we do not need # to load them anymore anywhere else in our application, as the models will be known to # ruby everywhere. Dir.glob(File.join(ENV['API_ROOT'], 'lib', '**', '*.rb')).each{|file| require file} # Now we will configure the Sinatra application so that we can fire up the entire API. # This requires some detailed settings like whether logging is allowed, the port to be # used and some folder locations. require 'sinatra' require 'app' set :logging, true set :dump_errors, true set :port, 3001 set :views, "#{ENV['API_ROOT']}/views" set :public_folder, "#{ENV['API_ROOT']}/public" set :environment, :test # Start up the Sinatra application with all the settings that we have defined. run App.new This is based upon the information I found on the Sinatra website. However, the problem is that I cannot get the application running on port 3001. If I use thin start -R development-config.ru it runs on port 3000. If I use rackup config-development.ru it runs on port 9696. However I never see Sinatra kick in or run over port 3000. My application looks like this: # Author : Arne De Herdt # Email : # This is the actuall application that will be running under Sinatra # to serve the requests for the billing middleware API. # We use the modular approach here to allow control when deploying # the application using Capistrano. require 'sinatra/base' require 'logger' require 'savon' require 'billcrux' class App < Sinatra::Base # This action responds to POST requests on the URI '/billcrux/register' # and is responsible for handeling registration requests with the # BillCrux payment system. # The post "/billcrux/register" do # do stuff end end Can someone tell me what I am doing wrong?
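
    A hedged guess at the cause before changing any code: set :port only takes effect when Sinatra boots its own built-in server; when the app is started through thin or rackup with run App.new, the outer server's own configuration wins, which would explain the 3000 and 9696 you are seeing. The usual fix is to pass the port on the command line instead (flag names per `thin --help` / `rackup --help`):

        thin start -R development-config.ru -p 3001
        # or
        rackup development-config.ru -p 3001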

    Read the article

  • Best way to run remote VBScript in ASP.net? WMI or PsExec?

    - by envinyater
    I am doing some research to find out the best and most efficient method for this. I will need to execute remote scripts on a number of Window Servers/Computers (while we are building them). I have a web application that is going to automate this task, I currently have my prototype working to use PsExec to execute remote scripts. This requires PsExec to be installed on the system. A colleague suggested I should use WMI for this. I did some research in WMI and I couldn't find what I'm looking for. I want to either upload the script to the server and execute it and read the results, or already have the script on the server and execute it and read the results. I would prefer the first option though! Which is more ideal, PsExec or WMI? For reference, this is my prototype PsExec code. This script is only executing a small script to get the Windows OS and Service Pack Info. Protected Sub windowsScript(ByVal COMPUTERNAME As String) ' Create an array to store VBScript results Dim winVariables(2) As String nameLabel.Text = Name.Text ' Use PsExec to execute remote scripts Dim Proc As New System.Diagnostics.Process ' Run PsExec locally Proc.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe") ' Pass in following arguments to PsExec Proc.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C c:\systemInfo.vbs" Proc.StartInfo.RedirectStandardInput = True Proc.StartInfo.RedirectStandardOutput = True Proc.StartInfo.UseShellExecute = False Proc.Start() ' Pause for script to run System.Threading.Thread.Sleep(1500) Proc.Close() System.Threading.Thread.Sleep(2500) 'Allows the system a chance to finish with the process. Dim filePath As String = COMPUTERNAME & "\TTO\somefile.txt" 'Download file created by script on Remote system to local system My.Computer.Network.DownloadFile(filePath, "C:\somefile.txt") System.Threading.Thread.Sleep(1000) ' Pause so file gets downloaded ''Import data from text file into variables textRead("C:\somefile.txt", winVariables) WindowsOSLbl.Text = winVariables(0).ToString() SvcPckLbl.Text = winVariables(1).ToString() System.Threading.Thread.Sleep(1000) ' ''Delete the file on server - we don't need it anymore Dim Proc2 As New System.Diagnostics.Process Proc2.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe") Proc2.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C del c:\somefile.txt" Proc2.StartInfo.RedirectStandardInput = True Proc2.StartInfo.RedirectStandardOutput = True Proc2.StartInfo.UseShellExecute = False Proc2.Start() System.Threading.Thread.Sleep(500) Proc2.Close() ' Delete file locally File.Delete("C:\somefile.txt") End Sub
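
    For comparison, a rough sketch of the WMI route from .NET using System.Management (add a reference to that assembly). It starts the script remotely via Win32_Process.Create; note that processes launched this way run non-interactively and cannot stream output back, so the script would still write its results to a file that gets collected afterwards, much like the PsExec flow above. Paths and the computer name are placeholders:

        Imports System.Management

        Public Module RemoteWmi
            Public Sub RunRemoteScript(ByVal computerName As String)
                Dim scope As New ManagementScope("\\" & computerName & "\root\cimv2")
                scope.Connect()

                Dim processClass As New ManagementClass(scope, New ManagementPath("Win32_Process"), Nothing)
                Dim inParams As ManagementBaseObject = processClass.GetMethodParameters("Create")
                inParams("CommandLine") = "cmd /C cscript //nologo c:\systemInfo.vbs > c:\somefile.txt"

                Dim outParams As ManagementBaseObject = processClass.InvokeMethod("Create", inParams, Nothing)
                Dim returnCode As UInteger = Convert.ToUInt32(outParams("ReturnValue")) ' 0 = process started
            End Sub
        End Module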

    Read the article

  • for loop with count from array, limit output? PHP

    - by Philip
    print '<div id="wrap">'; print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">"; for($i=0; $i<count($news_comments); $i++) { print ' <tr> <td width="30%"><strong>'.$news_comments[$i]['comment_by'].'</strong></td> <td width="70%">'.$news_comments[$i]['comment_date'].'</td> </tr> <tr> <td></td> <td>'.$news_comments[$i]['comment'].'</td> </tr> '; } print '</table></div>'; $news_comments is a 3 diemensional array from mysqli_fetch_assoc returned from a function elsewhere, for some reason my for loop returns the total of the array sets such as [0][2] etc until it reaches the max amount from the counted $news_comments var which is a return function of LIMIT 10. my problem is if I add any text/html/icons inside the for loop it prints it in this case 11 times even though only array sets 1 and 2 have data inside them. How do I get around this? My function query is as follows: function news_comments() { require_once '../data/queries.php'; // get newsID from the url $urlID = $_GET['news_id']; // run our query for newsID information $news_comments = selectQuery('*', 'news_comments', 'WHERE news_id='.$urlID.'', 'ORDER BY comment_date', 'DESC', '10'); // requires 6 params // check query for results if(!$news_comments) { // loop error session and initiate var foreach($_SESSION['errors'] as $error=>$err) { print htmlentities($err) . 'for News Comments, be the first to leave a comment!'; } } else { print '<div id="wrap">'; print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">"; for($i=0; $i<count($news_comments); $i++) { print ' <tr> <td width="30%"><strong>'.$news_comments[$i]['comment_by'].'</strong></td> <td width="70%">'.$news_comments[$i]['comment_date'].'</td> </tr> <tr> <td></td> <td>'.$news_comments[$i]['comment'].'</td> </tr> '; } print '</table></div>'; } }// End function Any help is greatly appreciated.

    Read the article
