Search Results

Search found 22569 results on 903 pages for 'win32 process'.

Page 761/903

  • Problem with RVM and gem that has an executable

    - by djhworld
    Hi there, I've recently made the plunge to use RVM on Ubuntu. Everything seems to have gone swimmingly... except for one thing. I'm in the process of developing a gem of mine that has a script placed within its own bin/ directory; the gemspec and related files were generated by Jeweler. The bin/mygem file contains the following code:

        #!/usr/bin/env ruby
        begin
          require 'mygem'
        rescue LoadError
          require 'rubygems'
          require 'mygem'
        end
        app = MyGem::Application.new
        app.run

    That was working fine on the system version of Ruby. Recently I've moved to RVM to manage my Ruby versions a bit better, except now my gem doesn't appear to be working. Firstly I do this:

        rvm 1.9.2

    Then I do this:

        rvm 1.9.2 gem install mygem

    which installs fine, except when I try to run the mygem command I just get the following exception:

        daniel@daniel-VirtualBox:~$ mygem
        <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- mygem (LoadError)
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from /home/daniel/.rvm/gems/ruby-1.9.2-p136/gems/mygem-0.1.4/bin/mygem:2:in `<top (required)>'
            from /home/daniel/.rvm/gems/ruby-1.9.2-p136/bin/mygem:19:in `load'
            from /home/daniel/.rvm/gems/ruby-1.9.2-p136/bin/mygem:19:in `<main>'

    NOTE: I have a similar RVM setup on Mac OS X and my gem works fine there, so I think this might be something to do with Ubuntu?


  • Use external datasource with NUnit's TestCaseAttribute

    - by Hamman359
    Is it possible to get the values for a TestCaseAttribute from an external data source such as an Excel spreadsheet, CSV file or database? I.e. have a .csv file with one row of data per test case and pass that data to NUnit one row at a time.

    Here's the specific situation I'd like to use this for. I'm currently merging some features from one system into another. This is pretty much just a copy-and-paste process from the old system into the new one. Unfortunately, the code being moved not only has no tests, but is not written in a testable manner (i.e. it is tightly coupled with the database and other code). Taking the time to make the code testable isn't really possible since it's a big mess, I'm on a tight schedule, and the entire feature is scheduled to be rewritten from the ground up in the next 6-9 months. However, since I don't like the idea of not having any tests around the code, I'm going to create some simple Selenium tests using WebDriver to test the page through the UI. While this is not ideal, it's better than nothing.

    The page in question has about 10 input values and about 20 values that I need to assert against after the calculations are completed, with about 30 valid combinations of values that I'd like to test. I already have the data in a spreadsheet, so it'd be nice to simply be able to pull that out rather than having to re-type it all in Visual Studio.
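
    One possible approach (a sketch of my own, not from the post): NUnit's TestCaseSource attribute can point at a method that reads the spreadsheet exported as CSV and yields one TestCaseData per row, so the data stays outside Visual Studio. The file name, column layout and assertion below are hypothetical placeholders.

        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class CalculationPageTests
        {
            // Each row of the (hypothetical) calculation-cases.csv becomes one test case.
            public static IEnumerable<TestCaseData> CsvCases()
            {
                foreach (var line in File.ReadAllLines("calculation-cases.csv").Skip(1)) // skip header row
                {
                    var cells = line.Split(',');
                    yield return new TestCaseData((object)cells).SetName("Row: " + line);
                }
            }

            [Test, TestCaseSource("CsvCases")]
            public void Calculation_produces_expected_values(string[] cells)
            {
                // Drive the page with Selenium WebDriver here, then assert the ~20 output
                // values against the expected columns of the row.
                Assert.That(cells.Length, Is.GreaterThan(10));
            }
        }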


  • Compiling and using NTL c++ library for Windows

    - by Martin Lauridsen
    Hi there, I have compiled the NTL infinite-precision integer arithmetic library for C++, using Microsoft Visual Studio 2008. I did as explained on this site, using the Visual Studio interface rather than the command prompt. Actually I would rather do it from the command prompt, but I was not sure how to. Anyhow, I got the library compiled, and I now want to compile a program using the library, from the command prompt. The program I am trying to compile has been tested on a Linux system, where I compile it with the following:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include mpqs.cpp main.cpp -o main -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm

    Never mind the gmp stuff, I don't have that installed on Windows; it is purely an optional thing that makes NTL run faster. Anyhow, this works fine on Linux. Now on Windows I write the following:

        cl /EHsc /I D:\Downloads\WinNTL-5_5_2\include mpqs.cpp main.cpp /link /LIBPATH:"D:\Documents\Visual Studio 2008\Projects\ntl\Debug"

    But this results in the following errors:

        mpqs.cpp
        mpqs.cpp(38) : error C2039: 'find_smooth_vals' : is not a member of 'QS'
                d:\desktop\qs\mpqs.h(12) : see declaration of 'QS'
        mpqs.cpp(41) : error C2065: 'M' : undeclared identifier
        mpqs.cpp(41) : error C2065: 'n' : undeclared identifier
        mpqs.cpp(42) : error C2065: 'sieve_table' : undeclared identifier
        mpqs.cpp(42) : error C2228: left of '.size' must have class/struct/union type is ''unknown-type''
        mpqs.cpp(43) : error C2065: 'sieve_table' : undeclared identifier
        mpqs.cpp(44) : error C2065: 'qx_table' : undeclared identifier
        mpqs.cpp(44) : error C3861: 'test_smoothness': identifier not found
        mpqs.cpp(45) : error C2065: 'smooth_indices' : undeclared identifier
        mpqs.cpp(45) : error C2228: left of '.push_back' must have class/struct/union type is ''unknown-type''
        main.cpp
        Generating Code...

    It is as if my mpqs.h file is not included in the compilation process? Also I don't understand why it complains about .push_back() for a vector type. Help is much appreciated!


  • "filter" json through ajax (jquery)

    - by Hyung Suh
    I'm trying to filter the JSON array through AJAX and not sure how to do so.

        { posts: [{"image":"images/bbtv.jpg", "group":"a"},
                  {"image":"images/grow.jpg", "group":"b"},
                  {"image":"images/tabs.jpg", "group":"a"},
                  {"image":"images/bia.jpg", "group":"b"}]}

    I want it so that I can show only items in group A or group B. How would I have to change my AJAX call to filter the content?

        $.ajax({
            type: "GET",
            url: "category/all.js",
            dataType: "json",
            cache: false,
            contentType: "application/json",
            success: function(data) {
                $('#folio').html("<ul/>");
                $.each(data.posts, function(i, post){
                    $('#folio ul').append('<li><div class="boxgrid captionfull"><img src="' + post.image + '" /></div></li>');
                });
                initBinding();
            },
            error: function(xhr, status, error) {
                alert(xhr.status);
            }
        });

    Also, how can I make each link trigger the filter?

        <a href="category/all.js">Group A</a>
        <a href="category/all.js">Group B</a>

    Sorry for all these questions, I can't seem to find a solution. Any help in the right direction would be appreciated. Thanks!
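
    One way this could be wired up (a sketch, not from the original post): read the requested group from the clicked link and skip non-matching posts in the success handler. The data-group attribute and the loadGroup wrapper are illustrative assumptions.

        <a href="category/all.js" data-group="a">Group A</a>
        <a href="category/all.js" data-group="b">Group B</a>

        $('a[data-group]').click(function (e) {
            e.preventDefault();
            loadGroup($(this).attr('data-group'));
        });

        function loadGroup(group) {
            $.ajax({
                type: "GET",
                url: "category/all.js",
                dataType: "json",
                cache: false,
                success: function (data) {
                    $('#folio').html("<ul/>");
                    $.each(data.posts, function (i, post) {
                        if (post.group !== group) { return; } // keep only the requested group
                        $('#folio ul').append('<li><div class="boxgrid captionfull"><img src="' + post.image + '" /></div></li>');
                    });
                    initBinding();
                },
                error: function (xhr, status, error) {
                    alert(xhr.status);
                }
            });
        }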


  • Monitor file selection in explorer (like clipboard monitoring) in C#

    - by Christian
    Hi, I am trying to create a little helper application; one scenario is a "file duplication finder". What I want to do is this:

    1. I start my C# .NET app; it gives me an empty list.
    2. Start the normal Windows Explorer and select a file in some folder.
    3. The C# app tells me stuff about this file (e.g. duplicates).

    How can I monitor the currently selected file in the "normal" Windows Explorer instance? Do I have to start the instance using .NET to have a handle of the process? Do I need a handle, or is there some "global hook" I can monitor inside C#? It's a little bit like monitoring the clipboard, but not exactly the same... Any help is appreciated (if you don't have code, just point me to the right interops, dlls or help pages :-) Thanks, Chris

    EDIT 1 (current source, thanks to Mattias):

        using SHDocVw;
        using Shell32;

        public static void ListExplorerWindows()
        {
            foreach (InternetExplorer ie in new ShellWindowsClass())
                DebugExplorerInstance(ie);
        }

        public static void DebugExplorerInstance(InternetExplorer instance)
        {
            Debug.WriteLine("DebugExplorerInstance ".PadRight(30, '='));
            Debug.WriteLine("FullName " + instance.FullName);
            Debug.WriteLine("AdressBar " + instance.AddressBar);
            var doc = instance.Document as IShellFolderViewDual;
            if (doc != null)
            {
                Debug.WriteLine(doc.Folder.Title);
                foreach (FolderItem item in doc.SelectedItems())
                {
                    Debug.WriteLine(item.Path);
                }
            }
        }


  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The MSDN page links to the article "Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability", which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user machine. That implies that each time any changes are made to the Front End they need to be deployed to every user machine.

    My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving this on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption etc.? Thank you

    EDIT Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming that the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is part of a follow-up to my previous question From SQL Server to MS Access 2007.


  • What is the 'page lifecycle' of an ASP.NET MVC page, compared to ASP.NET WebForms?

    - by Simon
    What is the 'page lifecycle' of an ASP.NET MVC page, compared to ASP.NET WebForms? I'm trying to better understand this 'simple' question in order to determine whether or not existing pages I have in a (very) simple site can be easily converted from ASP.NET WebForms. Either a 'conversion' of the process below, or an alternative lifecycle, would be what I'm looking for.

    What I'm currently doing (yes, I know that anyone capable of answering my question already knows all this -- I'm just trying to get a comparison of the 'lifecycle', so I thought I'd start by filling in what we already all know):

    Rendering the page:
    - I have a master page which contains my basic template.
    - I have content pages that give me named regions from the master page into which I put content.
    - In an event handler for each content page I load data from the database (mostly read-only).
    - I bind this data to ASP.NET controls representing grids, dropdowns or repeaters. This data all 'lives' inside the HTML generated. Some of it gets into ViewState (but I won't go into that too much!).
    - I set properties or bind data to certain items like Image or TextBox controls on the page.
    - The page gets sent to the client rendered as non-reusable HTML. I try to avoid using ViewState other than what the page needs as a minimum.

    Client side (not using ASP.NET AJAX):
    - I may use jQuery and some nasty tricks to find controls on the page and perform operations on them.
    - If the user selects from a dropdown, a postback is generated which triggers a C# event in my codebehind. This event may go to the database, but whatever it does, a completely newly generated HTML page ends up getting sent back to the client.
    - I may use Page.Session to store key-value pairs I need to reuse later.

    So with MVC, how does this 'lifecycle' change?


  • How can a program be detected as running?

    - by ryeguy
    I have written a program that is sort of an unofficial, standalone plugin for an application. It allows customers to get a service that is a lower-priced alternative to the vendor-owned one. My program is not illegal, not against any kind of TOS, and is certainly not a virus, adware, or anything like that. That being said, the vendor of course is not happy about me taking his business, and is trying to block my application from running.

    He has already tried some tactics to stop people from running my app alongside his. He makes it so that if my app is detected, his app throws a fake error. First, he checked to see if my program was running by looking for an open window with the right title. I countered this by randomizing the program title at startup. Next, he looked for the running process name. I countered this by making the app copy itself when it is started as [random string].exe and then running that.

    Anyway, my question is this: what else can he do to detect if my program is running? I know that you can read window text (i.e. status bars, labels). I'm prepared to counter this by replacing the labels with images (ugh, any other way?). But what else is there? Can you detect what .dlls a program has loaded? If so, could this be solved by randomizing the dll names before loading them? I know that it's possible to get a program's signature in memory and track it that way (like a virus scanner), but the chances of him doing that probably aren't good because that sounds pretty advanced. Even though this is kind of crappy of him to be doing, it's kind of fun. It's like a nerdy fist fight.

    EDIT: When I said it's a plugin, that is just the (incorrect) term I used. It's a standalone EXE. The "API" between my program and the other one is simply entering data into the controls (like textboxes, etc).


  • Run AppleScript with Elevated Privileges from Objective C

    - by cygnl7
    I'm attempting to execute an uninstaller (written in AppleScript) through AuthorizationExecuteWithPrivileges. I'm setting up my rights after creating an empty auth ref like so:

        char *tool = "/usr/bin/osascript";
        AuthorizationItem items = {kAuthorizationRightExecute, strlen(tool), tool, 0};
        AuthorizationRights rights = {sizeof(items)/sizeof(AuthorizationItem), &items};
        AuthorizationFlags flags = kAuthorizationFlagDefaults |
                                   kAuthorizationFlagExtendRights |
                                   kAuthorizationFlagPreAuthorize |
                                   kAuthorizationFlagInteractionAllowed;
        status = AuthorizationCopyRights(authorizationRef, &rights, NULL, flags, NULL);

    Later I call:

        status = AuthorizationExecuteWithPrivileges(authorizationRef, tool, kAuthorizationFlagDefaults, (char *const *)args, NULL);

    On Snow Leopard this works fine, but on Leopard I get the following in syslog.log:

        Apr 19 15:30:09 hostname /usr/bin/osascript[39226]: OpenScripting.framework - 'gdut' event blocked in process with mixed credentials (issetugid=0 uid=501 euid=0 gid=20 egid=20)
        Apr 19 15:30:12: --- last message repeated 1 time ---
        ...
        Apr 19 15:30:12 hostname [0x0-0x2e92e9].com.example.uninstaller[39219]: /var/folders/vm/vmkIi0nYG8mHMrllaXaTgk+++TI/-Tmp-/TestApp_tmpfiles/Uninstall.scpt:
        Apr 19 15:30:12 hostname [0x0-0x2e92e9].com.example.uninstaller[39219]: execution error: «constant afdmasup» doesn’t understand the «event earsffdr» message. (-1708)

    After researching this for a few hours, my first guess is that Leopard somehow doesn't want to do what I'm doing because it knows it's in a setuid situation and blocks calls that ask about user-specific things in the AppleScript. Am I going about this all wrong? I just want to run the equivalent of "sudo /usr/bin/osascript ...".

    Edit: FWIW, the first line that causes the "execution error" is:

        set userAppSupportPath to (POSIX path of (path to application support folder from user domain))

    However, even with an empty script (on run argv, end run and that's it) I still get the 'gdut' message.


  • Converting an int64 value to a Number object in JavaScript

    - by Matt
    I have a COM object which has a method that returns an unsigned int64 (VT_UI8) value. We have an HTML page containing some JavaScript which can load the COM object and call that method to retrieve the value, as such:

        var foo = MyCOMObject.GetInt64Value();

    This value can easily be displayed to the user in a message dialog using:

        alert(foo);

    or displayed on the page by:

        document.getElementById('displayToUser').innerHTML = foo;

    However, we cannot use this value as a Number (e.g. if we try to multiply it by 2) without the page throwing "Number expected" errors. If we check typeof(foo) it returns "unknown". I've found a workaround by doing the following:

        document.getElementById('displayToUser').innerHTML = foo;
        var bar = parseInt(document.getElementById('displayToUser').innerHTML);
        alert(bar*2);

    What I need to know is how to make that process more efficient. Specifically, is there a way to cast foo to a String explicitly, rather than having to set some document element's innerHTML to foo and then retrieve it from that? I wouldn't mind calling something like:

        alert(parseInt((string)foo) * 2);

    Even better would be if there is a way to directly convert the int64 to a Number, without going through the String conversion, but I hold out less hope for that.
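
    For what it's worth, a sketch of the shorter conversion being asked about (an assumption on my part that the host coerces the COM value to a string the same way the innerHTML assignment does, so it is worth testing against the real control):

        var foo = MyCOMObject.GetInt64Value();

        // Explicit string conversion without the DOM round-trip.
        var asString = String(foo);          // or: var asString = "" + foo;
        var asNumber = parseInt(asString, 10);

        alert(asNumber * 2);

        // Caveat: JavaScript Numbers are IEEE-754 doubles, so values above 2^53 lose precision.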


  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (usually a server reboot). These "prime" SQL Server's page cache with the common core working set of the data, so that the app is not unusually slow the first time a user logs in afterwards.

    One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4 GB in the machine, the DB is under 1.5 GB currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects & sysindexes and running

        SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields>

    for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are out of normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so a process that (until complete) slows users down more than an unprimed working set would is not an issue.
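
    A rough sketch of the "hard way" described above, generating one index-ordered scan per index (my illustration; it uses the sysobjects/sysindexes metadata mentioned in the post, the generated statements still have to be executed afterwards, and auto-created statistics rows such as those named _WA_Sys... would need filtering out):

        -- Build one "SELECT ... WITH (INDEX(...))" statement per user-table index,
        -- so running them touches (and caches) every used page at least once.
        SELECT 'SELECT * FROM [' + o.name + '] WITH (INDEX([' + i.name + ']));'
        FROM sysobjects AS o
        JOIN sysindexes AS i ON i.id = o.id
        WHERE o.xtype = 'U'               -- user tables only
          AND i.indid BETWEEN 1 AND 254   -- skip heaps and TEXT/IMAGE entries
          AND i.name IS NOT NULL;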


  • Automatic web form testing/filling

    - by Polatrite
    I recently became lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls, that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long with 30-80 different controls for data entry.

    I am looking for a software solution (that is free) that would allow me to automate the process of filling in these forms by designing a script, or using a UI. The other requirement is that I can't use any browser but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automatic Windows form testing, since AutoHotkey's API allows you to directly reference controls on the Windows form. However, AutoHotkey does not have similar support for web forms (everything is just one big "InternetExplorer" control). While I would prefer to script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test.

    If you've ever seen Spawner: http://forge.mysql.com/projects/project.php?id=214 -- it's almost exactly the sort of thing I'm looking for (Spawner generates dummy SQL data, as opposed to dummy web form data), but I won't be picky. I've got a really short deadline to meet and had this thrust in my lap just today. ;)

    Edit1: One of the challenges of just using AutoHotkey to simulate keyboard input (tabbing through controls) is that some controls don't currently have a tab index (bug), and some controls cause a page reload after modification, resulting in inconsistent control focus (tabbing screwed up). Our application makes heavy use of page reloads to populate fields (select a location, and it auto-populates a city, for example).


  • Autocomplete Error Question - Ruby on Rails

    - by bgadoci
    I have built a very simple blog application using Ruby on Rails. I'm new to both Ruby and Rails, so excuse the stupid questions. I currently have two tables that relate to this question: a Post table and a Tag table. Basically I set it up such that Post has_many :tags and Tag belongs_to :post. I am using AJAX to process and display the tags in the show view of the post. I installed the auto_complete plugin and I am getting an error when I enter letters in the text_field_with_auto_complete for tag creation. My suspicion is that this is because the form is a remote_form_for, or something I am doing wrong in routes.rb. Here is the error and code:

        Processing PostsController#show (for 127.0.0.1 at 2010-04-13 23:25:46) [GET]
          Parameters: {"tag"=>{"tag_name"=>"f"}, "id"=>"auto_complete_for_tag_tag_name"}
        Post Load (0.1ms)  SELECT * FROM "posts" WHERE ("posts"."id" = 0)

        ActiveRecord::RecordNotFound (Couldn't find Post with ID=auto_complete_for_tag_tag_name):
          app/controllers/posts_controller.rb:22:in `show'

        Rendered rescues/_trace (26.0ms)
        Rendered rescues/_request_and_response (0.2ms)
        Rendering rescues/layout (not_found)

    remote_form_for located in /views/posts/show.html.erb:

        <% remote_form_for [@post, Tag.new] do |f| %>
          <p>
            <%= f.label :tag_name, "Tag" %><br/>
            <%= text_field_with_auto_complete :tag, :tag_name, {}, {:method => :get} %>
          </p>
          <p><%= f.submit "Add Comment" %></p>
        <% end %>

    tags_controller.rb (I'll spare you all the actions, but added the following here):

        auto_complete_for :tag, :tag_name

    routes.rb:

        map.resources :posts, :has_many => :comments
        map.resources :posts, :has_many => :tags
        map.resources :tags, :collection => {:auto_complete_for_tag_tag_name => :get }


  • modifying ajax returned data before showing

    - by Nick
    I'm processing a subscription form with jQuery/AJAX and need to display results with the success function (they are different depending on whether the email exists in the database). But the trick is I do not need the h2 and the first "p" tag. How can I show only div#alertmsg and the second "p" tag? I've tried removing unnecessary elements with the method described here, but it didn't work. Thanks in advance. Here is my code:

        var dataString = $('#subscribe').serialize();
        $.ajax({
            type: "POST",
            url: "/templates/willrock/pommo/user/process.php",
            data: dataString,
            success: function(data){
                var result = $(data);
                $('#success').append(result);
            }
        });

    Here is the data returned:

        <h2>Subscription Review</h2>
        <p><a href="url" onClick="history.back(); return false;"><img src="/templates/willrock/pommo/themes/shared/images/icons/back.png" alt="back icon" class="navimage" /> Back to Subscription Form</a></p>
        <div id="alertmsg" class="error">
          <ul>
            <li>Email address already exists. Duplicates are not allowed.</li>
          </ul>
        </div>
        <p><a href="login.php">Update your records</a></p>
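
    One possible way to keep only the wanted pieces (a sketch; the selectors assume the returned fragment looks exactly like the sample above): filter the parsed nodes before appending them.

        success: function(data){
            var nodes = $(data);                       // parse the returned markup into a jQuery set
            var alertBox = nodes.filter('#alertmsg');  // keep the alert div
            var updateLink = nodes.filter('p').eq(1);  // keep the second <p> ("Update your records")
            $('#success').append(alertBox, updateLink);
        }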


  • Dragging an UIView inside UIScrollView

    - by Sergey Mikhanov
    Hello community! I am trying to solve a basic problem with drag and drop on iPhone. Here's my setup:

    - I have a UIScrollView which has one large content subview (I'm able to scroll and zoom it).
    - The content subview has several small tiles as subviews that should be dragged around inside it.

    My UIScrollView subclass has this method:

        - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
            UIView *tile = [contentView pointInsideTiles:[self convertPoint:point toView:contentView] withEvent:event];
            if (tile) {
                return tile;
            } else {
                return [super hitTest:point withEvent:event];
            }
        }

    The content subview has this method:

        - (UIView *)pointInsideTiles:(CGPoint)point withEvent:(UIEvent *)event {
            for (TileView *tile in tiles) {
                if ([tile pointInside:[self convertPoint:point toView:tile] withEvent:event])
                    return tile;
            }
            return nil;
        }

    And the tile view has this method:

        - (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event {
            UITouch *touch = [touches anyObject];
            CGPoint location = [touch locationInView:self.superview];
            self.center = location;
        }

    This works, but not fully correctly: the tile sometimes "falls down" during the drag process. More precisely, it stops receiving touchesMoved: invocations, and the scroll view starts scrolling instead. I noticed that this depends on the drag speed: the faster I drag, the quicker the tile "falls". Any ideas on how to keep the tile glued to the dragging finger? Thanks in advance!
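
    One thing that may be worth trying (my guess, not something stated in the post): UIScrollView cancels touches it has already delivered to a subview once it decides the gesture is a scroll, which would match the "faster drag drops the tile sooner" symptom. Overriding touchesShouldCancelInContentView: in the UIScrollView subclass (and possibly setting canCancelContentTouches to NO) keeps delivering touchesMoved: to the tile:

        // In the UIScrollView subclass: don't let the scroll view cancel touches that started on a tile.
        - (BOOL)touchesShouldCancelInContentView:(UIView *)view {
            if ([view isKindOfClass:[TileView class]]) {
                return NO;   // keep the touch glued to the tile
            }
            return [super touchesShouldCancelInContentView:view];
        }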


  • Drupal: Template Files, Modules and Content Types for Advanced Theme

    - by theandym
    Intro: I am in the process of trying to convert my first HTML/CSS design into a theme for Drupal. I have used MODx for quite a few designs and appreciate the ability to create different page templates and custom variables to be assigned to those templates. However, I seem to be having some issues making the transition. The site I am working on theming in Drupal is for a real estate agent. Each page/section will have a different set of content associated with it and will need to display only that content. For example, there will be a page for current listings, each of which will be formatted by a custom content type. However, when I call the content on the home page (or on other pages) I do not want to see this listing data.

    Layout: The layout of the site and the regions associated with each page/section is as follows:

    - Home: Spotlight, Featured 1, Featured 2
    - About: Spotlight, Bios (profiles of each agent; each will be a node with name, contact info, pic, etc. listed on the page; multiple nodes listed), Sidebar
    - Listings: Spotlight, Listings (profiles of properties; each will be a node with locations, basic info, pic, etc. listed on the page; multiple nodes listed), Sidebar
    - Services: Spotlight, Content (general paragraph text area), Sidebar
    - News/Blog: News/Blog items (list of stories with summaries and links to full articles), Sidebar

    Each page/section will use the same header and footer.

    Issue: I have done some reading on Drupal, custom content types (and CCK), Views, and Pathauto. However, I have not been able to get a clear picture of how to put it all together to accomplish what I am attempting. What I really would like to know is which modules to use, how best to use them, which elements I need to use where, and what template files I should be using to theme the elements I need to use. Any help or reference to useful resources would be much appreciated.


  • global variables in php not working as expected

    - by Josh Smeaton
    I'm having trouble with global variables in PHP. I have a $screen var set in one file, which requires another file that calls an initSession() defined in yet another file. initSession() declares "global $screen" and then processes $screen further down using the value set in the very first script. How is this possible? To make things more confusing, if you try to set $screen again and then call initSession(), it uses the value first used once again. The following code describes the process. Could someone have a go at explaining this?

        $screen = "list1.inc";      // From model.php
        require "controller.php";   // From model.php
        initSession();              // From controller.php
        global $screen;             // From Include.Session.inc
        echo $screen;               // prints "list1.inc"

        // From anywhere
        $screen = "delete1.inc";    // From model2.php
        require "controller2.php";
        initSession();
        global $screen;
        echo $screen;               // prints "list1.inc"

    Update: If I declare $screen global again just before requiring the second model, $screen is updated properly for the initSession() method. Strange.
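
    For reference, a minimal sketch of how global behaves (my illustration, not the poster's code): global only has an effect inside a function body, where it binds the local name to the entry in $GLOBALS; written at file scope it is a no-op.

        <?php
        $screen = "list1.inc";      // file scope: this IS the global $screen

        function initSession() {
            global $screen;         // bind the local name to the global entry
            echo $screen, "\n";     // prints whatever the global holds right now
        }

        initSession();              // "list1.inc"

        $screen = "delete1.inc";    // reassigning the global...
        initSession();              // ...is visible on the next call: "delete1.inc"

        // Note: "global $screen;" outside any function has no effect, and a function
        // that assigns $screen WITHOUT the global keyword only changes a local copy.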


  • grep 5 seconds of input from the serial port inside a shell-script

    - by pica
    I've got a device that I'm operating next to my PC, and as it runs it's spitting log lines out its serial port. I have this wired to my PC and I can see the log lines fine if I'm using either minicom or something like:

        ttylog -b 115200 -d /dev/ttyS0

    I want to write 5 seconds of the device's serial output to a temp file (or assign it to a variable) and then later grep that file for keywords that will let me know how the device is operating. I've already tried redirecting the output to a file while running the command in the background, and then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:

        touch tempFile
        ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
        serialPID=$!
        sleep 5
        #kill ${serialPID} #does not work, gets wrong PID
        killall ttylog
        cat tempFile

    The file gets created but never filled with any data. I can also replace the ttylog line with:

        ttylog -b 115200 -d /dev/ttyS0 |tee -a tempFile &

    In neither case do I ever see any log lines logged to stdout or the log file, unless I have multiple versions of ttylog running by mistake (see commented-out line, D'oh). I have no idea what's going on here. It seems to be a failure of redirection within my script. Am I on the right track? Is there a better way to sample 5 seconds of the serial port?
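
    A variant worth trying (a sketch; it assumes GNU coreutils timeout and stdbuf are available): one likely culprit is that ttylog's output is block-buffered when it goes to a file instead of a terminal, so nothing is flushed within 5 seconds. Forcing line buffering and bounding the capture window avoids the sleep/kill dance entirely:

        # Force line-buffered output and capture exactly 5 seconds of the serial stream.
        timeout 5 stdbuf -oL ttylog -b 115200 -d /dev/ttyS0 > tempFile

        # Then grep the sample for the keywords of interest.
        grep -q "keyword" tempFile && echo "device looks OK"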


  • java class creation dynamically and make it accessible across the network different jvms i.e. serial

    - by inj.rav
    Hi. I have a requirement to create Java classes dynamically and make them accessible to different JVMs across the network. I tried to use reflection and the Javassist tool, but nothing worked. Let me explain the scenario: we are using the Coherence distributed cache. It has the ability to do aggregation/filtering in parallel across the cluster. For example, if a [dynamic] class has an amount variable and getAmount/setAmount methods, then when we execute Coherence queries, it will start processing in parallel across the cluster. I tried to create classes at run time by using Javassist and reflection. I am able to access the class from a single JVM, but when I tried to access the same class from another JVM [through the Coherence cluster], I got a class-not-found exception [as the remote JVM has no knowledge of this class]. I can overcome this by creating the same class dynamically on the remote JVM as well and accessing the methods, but Coherence's built-in methods/functions are not able to find the class. Could someone help me with this?


  • Practices for keeping JavaScript and CSS in sync?

    - by Rene Saarsoo
    I'm working on a large JavaScript-heavy app. Several pieces of JavaScript have some related CSS rules. Our current practice is for each JavaScript file to have an optional related CSS file, like so:

        MyComponent.js   // Adds CSS class "my-comp" to div
        MyComponent.css  // Defines .my-comp { color: green }

    This way I know that all CSS related to MyComponent.js will be in MyComponent.css. But the thing is, I all too often have very little CSS in those files. And all too often I feel that it's too much effort to create a whole file to contain just a few lines of CSS - it would be easier to just hardcode the styles inside the JavaScript. But that would be the path to the dark side...

    Lately I've been thinking of embedding the CSS directly inside the JavaScript - so it could still be extracted in the build process and merged into one large CSS file. This way I wouldn't have to create a new file for every little piece of CSS. Additionally, when I move/rename/delete the JavaScript file I don't have to also move/rename/delete the CSS file. But how to embed CSS inside JavaScript? In most other languages I would just use a string, but JavaScript has some issues with multiline strings. The following looks, IMHO, quite ugly:

        Page.addCSS("\
        .my-comp > p {\
            font-weight: bold;\
            color: green;\
        }\
        ");

    What other practices have you for keeping your JavaScript and CSS in sync?
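
    One small trick for the multiline-string pain (a sketch; Page.addCSS is the helper already used above): build the rule from an array and join with newlines, which avoids the trailing-backslash continuations entirely.

        Page.addCSS([
            '.my-comp > p {',
            '    font-weight: bold;',
            '    color: green;',
            '}'
        ].join('\n'));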


  • Parsing email with Python

    - by Manuel Ceron
    I'm writing a Python script to process emails returned from Procmail. As suggested in this question, I'm using the following Procmail config:

        :0:
        |$HOME/process_mail.py

    My process_mail.py script is receiving an email via stdin like this:

        From hostname Tue Jun 15 21:43:30 2010
        Received: (qmail 8580 invoked from network); 15 Jun 2010 21:43:22 -0400
        Received: from mail-fx0-f44.google.com (209.85.161.44) by ip-73-187-35-131.ip.secureserver.net with SMTP; 15 Jun 2010 21:43:22 -0400
        Received: by fxm19 with SMTP id 19so170709fxm.3 for <[email protected]>; Tue, 15 Jun 2010 18:47:33 -0700 (PDT)
        MIME-Version: 1.0
        Received: by 10.103.84.1 with SMTP id m1mr2774225mul.26.1276652853684; Tue, 15 Jun 2010 18:47:33 -0700 (PDT)
        Received: by 10.123.143.4 with HTTP; Tue, 15 Jun 2010 18:47:33 -0700 (PDT)
        Date: Tue, 15 Jun 2010 20:47:33 -0500
        Message-ID: <[email protected]>
        Subject: TEST 12
        From: Full Name <[email protected]>
        To: [email protected]
        Content-Type: text/plain; charset=ISO-8859-1

        ONE TWO THREE

    I'm trying to parse the message in this way:

        >>> import email
        >>> msg = email.message_from_string(full_message)

    I want to get message fields like 'From', 'To' and 'Subject'. However, the message object does not contain any of these fields. What am I doing wrong?
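
    For reference, a minimal sketch of reading the message from stdin (my example, not from the post): the leading "From hostname ..." line is an mbox separator rather than an RFC 2822 header, so if the whole stream is fed to the parser it may be worth stripping that first line before parsing.

        import sys
        import email

        raw = sys.stdin.read()

        # Drop a leading mbox "From " separator line if present; it is not a real header.
        if raw.startswith("From "):
            raw = raw.split("\n", 1)[1]

        msg = email.message_from_string(raw)

        print(msg["From"])     # e.g. "Full Name <[email protected]>"
        print(msg["To"])
        print(msg["Subject"])  # e.g. "TEST 12"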


  • Microsoft Detours - DetourUpdateThread?

    - by pault543
    Hi, I have a few quick questions about the Microsoft Detours library. I have used it before (successfully), but I just had a thought about this function:

        LONG DetourUpdateThread(HANDLE hThread);

    I read elsewhere that this function will actually suspend the thread until the transaction completes. This seems odd since most sample code calls:

        DetourUpdateThread(GetCurrentThread());

    Anyway, apparently this function "enlists" threads so that, when the transaction commits (and the detours are made), their instruction pointers are modified if they lie "within the rewritten code in either the target function or the trampoline function." My questions are:

    1. When the transaction commits, is the current thread's instruction pointer going to be within the DetourTransactionCommit function? If so, why should we bother enlisting it to be updated?
    2. If the enlisted threads are suspended, how can the current thread continue executing (given that most sample code calls DetourUpdateThread(GetCurrentThread());)?
    3. Finally, could you suspend all threads for the current process, avoiding race conditions (considering that threads could be getting created and destroyed at any time)? Perhaps this is done when the transaction begins? This would allow us to enumerate threads more safely (as it seems less likely that new threads could be created), although what about CreateRemoteThread()?

    Thanks, Paul

    For reference, here is an extract from the simple sample:

        // DllMain function attaches and detaches the TimedSleep detour to the
        // Sleep target function. The Sleep target function is referred to
        // through the TrueSleep target pointer.
        BOOL WINAPI DllMain(HINSTANCE hinst, DWORD dwReason, LPVOID reserved)
        {
            if (dwReason == DLL_PROCESS_ATTACH) {
                DetourTransactionBegin();
                DetourUpdateThread(GetCurrentThread());
                DetourAttach(&(PVOID&)TrueSleep, TimedSleep);
                DetourTransactionCommit();
            }
            else if (dwReason == DLL_PROCESS_DETACH) {
                DetourTransactionBegin();
                DetourUpdateThread(GetCurrentThread());
                DetourDetach(&(PVOID&)TrueSleep, TimedSleep);
                DetourTransactionCommit();
            }
            return TRUE;
        }


  • Using MSBuild 4 command line to publish ASP.NET web application

    - by meandmycode
    In previous MSBuild versions we used the target '_CopyWebApplication' in order to build and convert the source of a project into a published site. This worked OK, but wasn't ideal. In .NET 4, the publishing process is somewhat more sophisticated, and additionally seems a bit of a black box to understand. While packages look great, I cannot fully understand how they can be harnessed by a build server; the build server would not get any manifest information, and equally, something (MSBuild?) is CREATING this manifest information FROM the project file.

    On our build server, I ideally want to say: here is my csproj file, deploy it with package configuration 'x'. I'm trying to understand the workflow I need to make this happen. Right now when I use _CopyWebApplication, the result is different from doing a publish from Visual Studio 2010, primarily in that web.config transforms aren't processed, and obviously msdeploy isn't involved at all.

    Can somebody point me in the right direction? I believe I need to get MSBuild to do the equivalent of 'Build Deployment Package', and then use msdeploy to deploy this from our build server to our CI testing environments. I know this is a very vague post, but I hope somebody can give me some hints. I'll be continuing research also, so if I make any progress, I'll post my findings here. Thanks in advance, Stephen.
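
    For what it's worth, a sketch of the kind of command line that builds a deployment package from a VS2010 web application project and pushes it with msdeploy (paths, site name, server and credentials are placeholders; it assumes the project imports the VS2010 Web Publishing targets that provide the Package target and the PackageLocation property):

        msbuild MyWebApp.csproj /t:Package /p:Configuration=Release ^
            /p:PackageLocation=C:\drops\MyWebApp.zip

        msdeploy -verb:sync ^
            -source:package=C:\drops\MyWebApp.zip ^
            -dest:auto,computerName=https://ci-server:8172/msdeploy.axd,username=deploy,password=secret ^
            -setParam:name="IIS Web Application Name",value="Default Web Site/MyWebApp"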


  • How to access "Custom" or non-System TFS workitem fields using PowerShell?

    - by DaBozUK
    When using PowerShell to extract information from TFS, I find that I can get at the standard fields but not "Custom" fields. I'm not sure custom is the correct term, but for example if I look at the Process Editor in VS2008 and edit the work item type, there are fields such as those listed below, with Name, Type and RefName:

        Title       String   System.Title
        State       String   System.State
        Rev         Integer  System.Rev
        Changed By  String   System.ChangedBy

    I can access these with Get-TfsItemHistory:

        Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R | Select -exp WorkItems | Format-Table Title, State, Rev, ChangedBy -Auto

    So far so good. However, there are also some other fields in the work item type, which I'm calling "Custom" or non-System fields, e.g.:

        Activated By  String  Microsoft.VSTS.Common.ActivatedBy
        Resolved By   String  Microsoft.VSTS.Common.ResolvedBy

    And the following command does not retrieve the data, just spaces:

        Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R | Select -exp WorkItems | Format-Table ActivatedBy, ResolvedBy -Auto

    I've also tried the names in quotes and the fully qualified refname, but no luck. How do you access these "non-System" fields? Thanks, Boz

    UPDATE: From Keith's answer I can get the fields I need:

        Get-TfsItemHistory "$/Hermes/Main" -Version "D01/12/10~" -Recurse `
            | Select ChangeSetId, Comment -exp WorkItems `
            | Select ChangeSetId, Comment, @{n='WI-Id'; e={$_.Id}}, Title -exp Fields `
            | Where {$_.ReferenceName -eq 'Microsoft.VSTS.Common.ResolvedBy'} `
            | Format-Table ChangesetId, Comment, WI-Id, Title, @{n='Resolved By'; e={$_.Value}} -Auto


  • call lynx from jsp script

    - by Piero
    Hi, I have an execute(String cmd) method in a JSP script that calls the exec method from the Runtime class. It works when I call a local command, like a PHP script stored on the server, for example:

        /usr/bin/php /path/to/php/script arg1 arg2

    So I guess my execute command is OK, since it is working with that. Now when I try to call lynx, the text-based web browser, it does not work. If I call it in a terminal, it works fine:

        /usr/bin/lynx -dump -accept_all_cookies 'http://www.someurl.net/?arg1=1&arg2=2'

    But when I call this from my execute command, nothing happens... Any idea why? This is my execute method:

        public String execute(String cmd){
            Runtime r = Runtime.getRuntime();
            Process p = null;
            String res = "";
            try {
                p = r.exec(cmd);
                InputStreamReader isr = new InputStreamReader(p.getInputStream());
                BufferedReader br = new BufferedReader(isr);
                String line = null;
                //out.println(res);
                while ((line = br.readLine()) != null) {
                    res += line;
                }
                p.waitFor();
            } catch (Exception e) {
                res += e;
            }
            System.out.println(p.exitValue());
            return res;
        }
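
    One thing that may explain the difference (my guess, not stated in the post): Runtime.exec(String) splits the command on whitespace with no shell-style quote handling, so the single-quoted URL that works in a terminal is mangled here. Passing the arguments as an array, without quotes, avoids that; the sketch assumes an execute(String[]) overload alongside the existing method.

        // Build the command as separate arguments so no shell quoting is needed.
        String[] cmd = {
            "/usr/bin/lynx",
            "-dump",
            "-accept_all_cookies",
            "http://www.someurl.net/?arg1=1&arg2=2"   // passed as-is; & needs no escaping here
        };
        Process p = Runtime.getRuntime().exec(cmd);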

