Search Results

Search found 7722 results on 309 pages for 'pitfalls to avoid'.

Page 243 of 309

  • How do I make a portable isnan/isinf function?

    - by monkeyking
    I've been using the isinf and isnan functions on Linux platforms, where they worked perfectly. They didn't work on OS X, so I decided to use std::isinf and std::isnan, which work on both Linux and OS X. But the Intel compiler doesn't recognize them, and I guess it's a bug in the Intel compiler, according to http://software.intel.com/en-us/forums/showthread.php?t=64188 So now I just want to avoid the hassle and define my own isinf/isnan implementation. Does anyone know how this could be done? Thanks.

    Edit: I ended up doing this in my source code to make isinf/isnan work:

        #include <iostream>
        #include <cmath>
        #ifdef __INTEL_COMPILER
        #include <mathimf.h>
        #endif

        int isnan_local(double x) {
        #ifdef __INTEL_COMPILER
            return isnan(x);
        #else
            return std::isnan(x);
        #endif
        }

        int isinf_local(double x) {
        #ifdef __INTEL_COMPILER
            return isinf(x);
        #else
            return std::isinf(x);
        #endif
        }

        int myChk(double a) {
            std::cerr << "val is: " << a << "\t";
            if (isnan_local(a)) std::cerr << "program says isnan";
            if (isinf_local(a)) std::cerr << "program says isinf";
            std::cerr << "\n";
            return 0;
        }

        int main() {
            double a = 0;
            myChk(a);
            myChk(log(a));
            myChk(-log(a));
            myChk(0/log(a));
            myChk(log(a)/log(a));
            return 0;
        }
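
    One portable alternative (a minimal sketch, assuming IEEE 754 doubles and a build that does not use fast-math style optimizations, which can break self-comparison) is to test the values directly instead of relying on any particular header:

        #include <limits>

        // NaN is the only value that compares unequal to itself under IEEE 754.
        inline bool is_nan_portable(double x) { return x != x; }

        // Infinities compare equal to the positive or negative infinity constant.
        inline bool is_inf_portable(double x) {
            return x == std::numeric_limits<double>::infinity()
                || x == -std::numeric_limits<double>::infinity();
        }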

    Read the article

  • How to determine if a registry key is redirected by WOW64?

    - by Luke
    Is it possible to determine whether or not a given registry key is redirected? My problem is that I want to enumerate registry keys in both the 32-bit and 64-bit registry views in a generic manner from a 32-bit application. I could simply open each key twice, first with KEY_WOW64_64KEY and then with KEY_WOW64_32KEY. However, if the key is not redirected this gives you exactly the same key, and you end up enumerating the exact same content twice; this is what I am trying to avoid. I did find some documentation on it, but it looks like the only way is to examine the hive and do a bunch of string comparisons on the key. Another possibility I thought of is to try to open Wow6432Node on each subkey; if it exists then the key must be redirected. For example, if I am trying to open HKCU\Software\Microsoft\Windows I would try to open the following keys: HKCU\Wow6432Node, HKCU\Software\Wow6432Node, HKCU\Software\Microsoft\Wow6432Node, and HKCU\Software\Microsoft\Windows\Wow6432Node. Unfortunately, the documentation seems to imply that a child of a redirected key is not necessarily redirected, so that route also has issues. So, what are my options here?
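
    One approach worth noting (a sketch only, using the lightly documented ntdll NtQueryKey call; the KeyNameInformation value of 3 and the buffer layout are assumptions to verify against the WDK headers): open the key in both views and compare the native paths the two handles actually resolve to. If they differ, the key is redirected for a 32-bit caller.

        #include <windows.h>
        #include <string>

        typedef LONG (WINAPI *NtQueryKeyFn)(HANDLE, int, void*, ULONG, ULONG*);

        static std::wstring KeyPath(HKEY key) {
            static NtQueryKeyFn ntQueryKey = (NtQueryKeyFn)GetProcAddress(
                GetModuleHandleW(L"ntdll.dll"), "NtQueryKey");
            BYTE buf[2048] = {0};
            ULONG len = 0;
            // 3 == KeyNameInformation: a ULONG byte length followed by the WCHAR name.
            if (ntQueryKey && ntQueryKey(key, 3, buf, sizeof(buf), &len) == 0) {
                ULONG chars = *(ULONG*)buf / sizeof(WCHAR);
                return std::wstring((const WCHAR*)(buf + sizeof(ULONG)), chars);
            }
            return L"";
        }

        // Returns true if the two WOW64 views of root\subkey resolve to different
        // native paths. Open failures are treated as "not redirected" for brevity.
        bool IsRedirected(HKEY root, const wchar_t* subkey) {
            HKEY k64 = 0, k32 = 0;
            bool redirected = false;
            if (RegOpenKeyExW(root, subkey, 0, KEY_READ | KEY_WOW64_64KEY, &k64) == ERROR_SUCCESS &&
                RegOpenKeyExW(root, subkey, 0, KEY_READ | KEY_WOW64_32KEY, &k32) == ERROR_SUCCESS) {
                redirected = KeyPath(k64) != KeyPath(k32);
            }
            if (k64) RegCloseKey(k64);
            if (k32) RegCloseKey(k32);
            return redirected;
        }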

    Read the article

  • Altruistic network connection bandwidth estimation

    - by datenwolf
    Assume two peers, Alice and Bob, connected over an IP network. Alice and Bob are exchanging packets of lossy compressed data which are generated and consumed in real time (think of a VoIP or video chat application). The service is designed to cope with very little available bandwidth, but relies on low latencies. Alice and Bob would mark their connection with an appropriate QoS profile. Alice and Bob want to use variable bitrate compression and would like to consume all of the leftover bandwidth available on the connection between them, but would voluntarily reduce the consumed bitrate depending on the state of the network. However, they'd like to retain a stable link, i.e. avoid interruptions in their decoded data stream caused by congestion and the delay until the bandwidth gets adjusted. It is, however, perfectly acceptable for them to lose a few packets. TL;DR: Alice and Bob want to implement a VoIP protocol from scratch, and are curious about bandwidth and congestion control. What papers and resources do you suggest Alice and Bob read? Mainly in the area of bandwidth estimation and congestion control.
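
    As a purely illustrative sketch of the underlying idea (additive-increase/multiplicative-decrease, the principle most congestion-control schemes build on; every constant below is a hypothetical tuning value, not something from the question): raise the bitrate slowly while observed loss stays low, and cut it sharply when loss spikes.

        #include <iostream>

        // Illustrative AIMD-style bitrate controller; thresholds are hypothetical.
        class BitrateController {
        public:
            // Called once per feedback interval with the observed loss fraction.
            double update(double lossFraction) {
                if (lossFraction > 0.05)
                    bitrateKbps_ *= 0.7;      // heavy loss: back off multiplicatively
                else if (lossFraction < 0.01)
                    bitrateKbps_ += 8.0;      // clean link: probe additively for more
                if (bitrateKbps_ < minKbps_) bitrateKbps_ = minKbps_;
                if (bitrateKbps_ > maxKbps_) bitrateKbps_ = maxKbps_;
                return bitrateKbps_;
            }
        private:
            double bitrateKbps_ = 64.0;
            double minKbps_ = 16.0;
            double maxKbps_ = 512.0;
        };

        int main() {
            BitrateController ctl;
            // Simulated feedback: two clean intervals, then a lossy one.
            for (double loss : {0.0, 0.0, 0.08}) {
                std::cout << "loss " << loss << " -> " << ctl.update(loss) << " kbps\n";
            }
        }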

    Read the article

  • MVC2 Areas not Registering Correctly

    - by Geoffrey
    I believe I have my Areas set up correctly (they were working fine in MVC1). I followed the guide here: http://odetocode.com/Blogs/scott/archive/2009/10/13/asp-net-mvc2-preview-2-areas-and-routes.aspx I've also used Haacked's Route Debugger, which shows the correct Area/Controller registration when I run it. However, when I try to go to my (AreaName)/(Controller) I get this error: "The resource cannot be found." I believe this indicates there's a problem with the routing, but I'm having trouble debugging it. Where should I set my breakpoints to catch routing errors in MVC2? I'm also using SparkViewEngine compiled against MVC2 references. Could this possibly be causing the error? I've set breakpoints on the controller in the area and it never fires; I assumed the view engine doesn't kick in until after the controller has been initiated, but I could be wrong. The non-area landing page works fine, and I've stripped my project of all areas except one to avoid any sort of naming conflicts. Any ideas where I can try to look next?

    Read the article

  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:

        ssh -l targetuser -i path/to/key targethost

    But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with:

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
        Could not create directory '/var/www/.ssh'.
        Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
        Permission denied (publickey).

    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid having to require a manual server config since this code will be deployed on a number of different environments. Any ideas? Here are a few examples of what I've tried that don't work:

        $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost

    Eventually, I'll be using this solution to run rsync as the apache user.

    Read the article

  • EF Forced Concurrency Checks

    - by Imran
    Hi, I have an issue with EF 4.0 that I hope someone can help with. I currently have an entity that I want to update in a last-in-wins fashion (i.e. ignore concurrency checks and just overwrite what's in the db with what is submitted). It seems Entity Framework not only includes the primary key of the entity in the where clause of the generated SQL, but also any foreign key fields. This is annoying, as it means that I don't get true last-in-wins semantics and need to know what value the FK field had before the update, or I get a concurrency exception. I am aware that this can be short-circuited by including a foreign key field as well as the navigation property on the entity. I would like to avoid this if possible as it's not a very clean solution. I was just wondering if there was any other way to override this behaviour? It seems like more of a bug than a feature. I have no problem with EF doing concurrency checks if I instruct it to do so, but not being able to bypass concurrency completely is a bit of a hindrance, as there are many valid scenarios where this is not needed.

    Read the article

  • jQuery timeout function not working properly

    - by 3gwebtrain
    Hi, I am using the setTimeout function to display a block element and append it to an 'li' on mouseover, and I want to remove the block and set it back to display:none afterwards. The function itself works fine, but the problem is that even if my mouse just crosses the li, the block becomes visible. How do I avoid this? My code is:

        var thisLi;
        var storedTimeoutID;

        $("ul.redwood-user li,ul.user-list li").live("mouseover", function(){
            thisLi = $(this);
            var needShow = thisLi.children('a.copier-link');
            if($(needShow).is(':hidden')){
                storedTimeoutID = setTimeout(function(){
                    $(thisLi).children('a.copier-link').appendTo(thisLi).show();
                },3000);
            } else {
                storedTimeoutID = setTimeout(function(){
                    $(thisLi).siblings().children('a.copier-link').appendTo(thisLi).show();
                },3000);
            }
        });

        $("ul.redwood-user li,ul.user-list li").live("mouseleave", function(){
            clearTimeout(storedTimeoutID);
            //$('ul.redwood-user li').children('a.copier-link').hide();
            $('ul.user-list li').children('a.copier-link').hide();
        });

    Read the article

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4 TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to:

    1. Kill the Cassandra process on one server of the cluster.
    2. Start it again, wait for the commit log to be written to disk, and kill it again.
    3. Make the modifications in the storage.xml file.
    4. Rename or delete the files in the data directories according to the changes we made.
    5. Start Cassandra.
    6. Go to 1 with the next server on the list.

    My questions would be: Did I understand the process well? Is there any risk of data corruption? During the process there will be servers with different versions of the storage.xml file in the same cluster, same keyspace. Is that a problem? Same question as above if we not only add, rename and remove ColumnFamilies, but also change the CompareWith parameter or transform an existing column family into a super one. Or do we need to change the name? Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.

    Read the article

  • SQL Server - Cascade delete has multiple paths

    - by Anders Juul
    Hi all, I have two tables, Results and ComparedResults. ComparedResults has two columns which reference the primary key of the Results table. My problem is that if a record in Results is deleted, I wish to delete all records in ComparedResults which reference the deleted record, regardless of whether it's one column or the other (and the columns may reference the same Results row). A row in Results may be deleted directly or through a cascade delete caused by deleting in a third table. Googling this suggests that I need to disable cascade delete and rewrite all cascade deletes to use triggers instead. Is that REALLY necessary? I'd be prepared to do much restructuring of the database to avoid this, as my main area is OO programming, and databases should 'just work'. It is hard to see, however, how a restructuring could help, as I would just move the problem around... Or am I missing something? I am also a bit at a loss as to why my initial construct should even be a problem for SQL Server?! Any comments welcome and much appreciated! Anders, Denmark

    Read the article

  • Can I make any ASP.NET/HTML element into form-data that posts back to the server?

    - by Giffyguy
    I am using Javascript to alter the innerHTML attribute of a <td> and I need to get that info back in the form submittal. The <td> corresponds to an <asp:TableCell> on the server side, where the Text attribute is set to an initial value. The user cannot enter the value in this particular field. Instead, its value is set by me (via client-side script) based on actions that the user performs. But this field is useless to me if I can't see its value on the server side as well. I'd like to avoid using a read-only textbox, because those are difficult to resize dynamically. Can an <asp:Label> be used as form data? Is there any way to achieve this without letting the user manually enter the data? Or is there a simpler way to store a string as a variable somewhere and send it back as form data?

    Read the article

  • Resetting a PChar variable

    - by scott-thornton
    Hi, I don't know much about Delphi Win32 programming, but I hope someone can answer my question. I get duplicate l_sGetUniqueIdBuffer values saved into the database, which I want to avoid. The l_sGetUniqueIdBuffer is actually different between rows (the value of l_sAuthorisationContent is XML, and I can see a different value generated by the call to getUniqueId). This problem is intermittent (duplicates are rare...). There is only a few milliseconds' difference between the update dates of the rows. Given (unnecessary code cut out):

        l_sGetUniqueIdBuffer: PChar;
        FOutputBufferSize : integer;

        FOutputBufferSize := 1024;
        while( not dmAccomClaim.ADOQuClaimIdentification.Eof ) do
        begin
          // Get a unique id for the request
          l_sGetUniqueIdBuffer := AllocMem (FOutputBufferSize);
          l_returnCode := getUniqueId (m_APISessionId^, l_sGetUniqueIdBuffer, FOutputBufferSize);

          dmAccomClaim.ADOQuAddContent.Active := False;
          dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pContent').Value := (WideString(l_sAuthorisationContent));
          dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pClaimId').Value := dmAccomClaim.ADOQuClaimIdentification.FieldByName('SB_CLAIM_ID').AsString;
          dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pUniqueId').Value := string(l_sGetUniqueIdBuffer);
          dmAccomClaim.ADOQuAddContent.ExecSQL;

          FreeMem( l_sAuthorisationContent, l_iAuthoriseContentSize );
          FreeMem( l_sGetUniqueIdBuffer, FOutputBufferSize );
        end;

    I guess I need to know: is the value in l_sGetUniqueIdBuffer being reset for every row?

    Read the article

  • Using variables inside macros in SQL

    - by Tim
    Hello, I want to use variables inside my macro SQL on Teradata. I thought I could do something like the following:

        REPLACE MACRO DbName.MyMacro
        (
            MacroNm VARCHAR(50)
        )
        AS
        (
            /* Variable to store last time the macro was run */
            DECLARE V_LAST_RUN_DATE TIMESTAMP;

            /* Get last run date and store in V_LAST_RUN_DATE */
            SELECT LastDate
            INTO V_LAST_RUN_DATE
            FROM DbName.RunLog
            WHERE MacroNm = :MacroNm;

            /* Update the last run date to now and save the old date in history */
            EXECUTE MACRO DbName.RunLogUpdater
            (
                :MacroNm
                ,V_LAST_RUN_DATE
                ,CURRENT_TIMESTAMP
            );
        );

    However, that didn't work, so I thought of this instead:

        REPLACE MACRO DbName.MyMacro
        (
            MacroNm VARCHAR(50)
        )
        AS
        (
            /* Variable to store last time the macro was run */
            CREATE VOLATILE TABLE MacroVars AS
            (
                SELECT LastDate AS V_LAST_RUN_DATE
                FROM DbName.RunLog
                WHERE MacroNm = :MacroNm;
            ) WITH DATA ON COMMIT PRESERVE ROWS;

            /* Update the last run date to now and save the old date in history */
            EXECUTE MACRO DbName.RunLogUpdater
            (
                :MacroNm
                ,SELECT V_LAST_RUN_DATE FROM MacroVars
                ,CURRENT_TIMESTAMP
            );
        );

    I can do what I'm looking for with a stored procedure; however, I want to avoid that for performance reasons. Do you have any ideas about this? Is there anything else I can try? Cheers, Tim

    Read the article

  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
    In our web applications, we separate our Data Access Layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings. To solve this, on some of our recent projects we introduced a third project just for settings. We put the settings in a system of .Settings files... With a simple wrapper, the ability to have different settings for various environments (Dev, QA, Staging, Production, etc.) was easy to achieve. The only problem there is that the settings project (including the .Settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio. So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Some solutions, such as storing settings in a fixed directory on the server in, say, our own XML format, occurred to us. But again, I would rather avoid having to re-create encryption for sensitive values and so on. And I would rather keep the solution self-contained if possible. EDIT: The original question did not contain the really penetrating reason that we can't (I think) use web.config... That puts a few (very good) answers out of context, my bad.

    Read the article

  • Pass a data.frame column name to a function

    - by Kevin Middleton
    I'm trying to write a function to accept a data.frame (x) and a column from it. The function performs some calculations on x and later returns another data.frame. I'm stuck on the best-practices method to pass the column name to the function. The two minimal examples fun1 and fun2 below produce the desired result, being able to perform operations on x$column, using max() as an example. However, both rely on the seemingly (at least to me) inelegant (1) call to substitute() and possibly eval() and (2) the need to pass the column name as a character vector.

        fun1 <- function(x, column){
          do.call("max", list(substitute(x[a], list(a = column))))
        }

        fun2 <- function(x, column){
          max(eval((substitute(x[a], list(a = column)))))
        }

        df <- data.frame(A = 1:20, B = rnorm(10))
        fun1(df, "B")
        fun2(df, "B")

    I would like to be able to call the function as fun(df, B), for example. Other options I have considered but have not tried:

    1. Pass column as an integer giving the column number. I think this would avoid substitute(). Ideally, the function could accept either.
    2. with(x, get(column)), but, even if it works, I think this would still require substitute().
    3. Make use of formula() and match.call(), neither of which I have much experience with.

    Subquestion: Is do.call() preferred over eval()? Thanks, Kevin

    Read the article

  • How to restrict access to a class's data based on state?

    - by Marcus Swope
    In an ETL application I am working on, we have three basic processes:

    1. Validate and parse an XML file of customer information from a third party.
    2. Match values received in the file to values in our system.
    3. Load customer data into our system.

    The issue here is that we may need to display the customer information from any or all of the above states to an internal user, and there is data in our customer class that will never be populated before the values have been matched in our system (step 2). For this reason, I would like to have the values not even be available to be accessed when the customer is in this state, and I would like to avoid repeating logic like this everywhere:

        if (customer.IsMatched)
            DisplayTextOnWeb(customer.SomeMatchedValue);

    My first thought was to add a couple of interfaces on top of Customer that would only expose the properties and behaviors of the current state, and then only deal with those interfaces. The problem with this approach is that there seems to be no good way to move from an ICustomerWithNoMatchedValues to an ICustomerWithMatchedValues without doing direct casts, etc. (or at least I can't find one). I can't be the first to have come across this; how do you normally approach it? As a last caveat, I would like this solution to play nice with FluentNHibernate :) Thanks in advance...

    Read the article

  • Safe way to set computed environment variables

    - by sfink
    I have a bash script that I am modifying to accept key=value pairs from stdin. (It is spawned by xinetd.) How can I safely convert those key=value pairs into environment variables for subprocesses? I plan to only allow keys that begin with a predefined prefix "CMK_", to avoid IFS or any other "dangerous" variable getting set. But the simplistic approach

        function import () {
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*) eval "$key=$val";;
                esac
            done
        }

    is horribly insecure because $val could contain all sorts of nasty stuff. This seems like it would work:

        shopt -s extglob
        function import () {
            NORMAL_IFS="$IFS"
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*([a-zA-Z_]) )
                        IFS="$NORMAL_IFS"
                        eval $key='$val'
                        IFS="="
                        ;;
                esac
            done
        }

    but (1) it uses the funky extglob thing that I've never used before, and (2) it's complicated enough that I can't be comfortable that it's secure. My goal, to be specific, is to allow key=value settings to pass through the bash script into the environment of called processes. It is up to the subprocesses to deal with potentially hostile values getting set. I am modifying someone else's script, so I don't want to just convert it to Perl and be done with it. I would also rather not change it around to invoke the subprocesses differently, something like:

        #!/bin/sh
        ...start of script...
        perl -nle '($k,$v)=split(/=/,$_,2); $ENV{$k}=$v if $k =~ /^CMK_/; END { exec("subprocess") }'
        ...end of script...

    Read the article

  • Text size for drop down menu/input select not working in Safari

    - by Nick
    Hello all, first question... I'm having trouble getting ANY of the drop-down menu/input selects to appear with size 18 font in Safari. Works fine in FF. Code:

        <form class="form">
            <select name="make">
                <option value="0"> All</option>
            </select>
        </form>

    CSS:

        .form input{
            font-size:18px;
            margin-bottom:0px;
        }

    Any ideas? You can view it live at http://www.motolistr.com Best, Nick

    EDIT 1: Thanks for the quick reply. I added a style to the select itself to avoid confusion. I tried:

        <select name='make' style='font-size: 18pt;'>
        </select>

    and

        <select name='make' style='font-size: 18px;'>
        </select>

    and

        <select name='make' style='font-size: 1.3em;'>
        </select>

    Still not working in Safari... Again, FF works fine with all 3. Best, Nick

    Read the article

  • How to multi-thread this?

    - by WilliamKF
    I wish to have two threads. The first, thread1, occasionally calls the following pseudo function:

        void waitForThread2() {
            if (thread2 is not idle) {
                return;
            }
            notifyThread2IamReady();
            while (thread2IsExclusive) {
            }
        }

    The second, thread2, is forever in the following pseudo loop:

        for (;;) {
            Notify thread1 I am idle.
            while (!thread1IsReady()) {
            }
            Notify thread1 I am exclusive.
            Do some work while thread1 is blocked.
            Notify thread1 I am busy.
            Do some work in parallel with thread1.
        }

    What is the best way to write this such that both thread1 and thread2 are kept as busy as possible on a machine with multiple cores? I would like to avoid long delays between notification in one thread and detection by the other. I tried using pthread condition variables but found the delay between thread2 doing 'notify thread1 I am busy' and the loop in waitForThread2() on thread2IsExclusive() can be almost one second. I then tried using a volatile sig_atomic_t shared variable to control the same, but something is going wrong, so I must not be doing it correctly.
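
    For what it's worth, a minimal sketch of this kind of handshake using C++11 std::condition_variable instead of raw pthreads (it collapses the idle/ready notifications into a single state flag and uses placeholder work, so it illustrates the pattern rather than being a drop-in solution). Changing the flag under the mutex and notifying immediately avoids both busy-waiting and long notification delays:

        #include <condition_variable>
        #include <mutex>
        #include <thread>
        #include <iostream>

        std::mutex m;
        std::condition_variable cv;
        enum class State { Thread1Ready, Thread2Exclusive, Parallel };
        State state = State::Parallel;

        void waitForThread2() {
            std::unique_lock<std::mutex> lock(m);
            state = State::Thread1Ready;                  // "I am ready"
            cv.notify_one();
            // Block until thread2 finishes its exclusive section.
            cv.wait(lock, [] { return state == State::Parallel; });
        }

        void thread2Loop() {
            for (int i = 0; i < 3; ++i) {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [] { return state == State::Thread1Ready; });
                state = State::Thread2Exclusive;
                std::cout << "thread2: exclusive work\n"; // thread1 is blocked here
                state = State::Parallel;
                cv.notify_one();
                lock.unlock();
                std::cout << "thread2: parallel work\n";  // runs alongside thread1
            }
        }

        int main() {
            std::thread t2(thread2Loop);
            for (int i = 0; i < 3; ++i) waitForThread2();
            t2.join();
        }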

    Read the article

  • Throttling CPU/Memory usage of a Thread in Java?

    - by Nalandial
    I'm writing an application that will have multiple threads running, and I want to throttle the CPU/memory usage of those threads. There is a similar question for C++, but I want to try and avoid using C++ and JNI if possible. I realize this might not be possible using a higher-level language, but I'm curious to see if anyone has any ideas.

    EDIT: Added a bounty; I'd like some really good, well thought out ideas on this.

    EDIT 2: The situation I need this for is executing other people's code on my server. Basically it is completely arbitrary code, with the only guarantee being that there will be a main method on the class file. Currently, multiple completely disparate classes, which are loaded in at runtime, are executing concurrently as separate threads. I inherited this code (the original author is gone). The way it's written, it would be a pain to refactor to create separate processes for each class that gets executed. If that's the only good way to limit memory usage via the VM arguments, then so be it. But I'd like to know if there's a way to do it with threads. Even as a separate process, I'd like to be able to somehow limit its CPU usage, since, as I mentioned earlier, several of these will be executing at once. I don't want an infinite loop to hog all the resources.

    EDIT 3: An easy way to approximate object size is with Java's Instrumentation classes; specifically, the getObjectSize method. Note that there is some special setup needed to use this tool.

    Read the article

  • vector::erase with pointer member

    - by matt
    I am manipulating vectors of objects defined as follows:

        class Hyp {
        public:
            int x;
            int y;
            double wFactor;
            double hFactor;
            char shapeNum;
            double* visibleShape;
            int xmin, xmax, ymin, ymax;

            Hyp(int xx, int yy, double ww, double hh, char s):
                x(xx), y(yy), wFactor(ww), hFactor(hh), shapeNum(s)
                { visibleShape = 0; shapeNum = -1; };

            // Copy constructor necessary for support of vector::push_back() with visibleShape
            Hyp(const Hyp &other)
            {
                x = other.x;
                y = other.y;
                wFactor = other.wFactor;
                hFactor = other.hFactor;
                shapeNum = other.shapeNum;
                xmin = other.xmin;
                xmax = other.xmax;
                ymin = other.ymin;
                ymax = other.ymax;
                int visShapeSize = (xmax-xmin+1)*(ymax-ymin+1);
                visibleShape = new double[visShapeSize];
                for (int ind=0; ind<visShapeSize; ind++) {
                    visibleShape[ind] = other.visibleShape[ind];
                }
            };

            ~Hyp(){ delete[] visibleShape; };
        };

    When I create a Hyp object, allocate/write memory to visibleShape and add the object to a vector with vector::push_back, everything works as expected: the data pointed to by visibleShape is copied using the copy constructor. But when I use vector::erase to remove a Hyp from the vector, the other elements are moved correctly EXCEPT for the visibleShape pointer members, which now point to wrong addresses! How can I avoid this problem? Am I missing something?
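
    A sketch of the likely missing piece, assuming the class above stays pointer-based: vector::erase shifts the remaining elements with the copy assignment operator, and the compiler-generated one just copies the raw visibleShape pointer, so shifted elements end up sharing (and then deleting) the same buffer. Adding an assignment operator to complete the rule of three (or replacing the raw pointer with a std::vector<double>) addresses that:

        // Added inside class Hyp; reuses the existing copy constructor via copy-and-swap.
        // Requires <utility> (or <algorithm>) for std::swap.
        Hyp& operator=(const Hyp &other)
        {
            if (this != &other) {
                Hyp tmp(other);                            // deep copy of other's buffer
                std::swap(visibleShape, tmp.visibleShape); // tmp's destructor frees our old buffer
                x = tmp.x; y = tmp.y;
                wFactor = tmp.wFactor; hFactor = tmp.hFactor;
                shapeNum = tmp.shapeNum;
                xmin = tmp.xmin; xmax = tmp.xmax;
                ymin = tmp.ymin; ymax = tmp.ymax;
            }
            return *this;
        }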

    Read the article

  • using the ASP.NET Caching API via method annotations in C#

    - by craigmoliver
    In C#, is it possible to decorate a method with an annotation to populate the cache object with the return value of the method? Currently I'm using the following class to cache data objects:

        public class SiteCache
        {
            // 7 days + 6 hours (offset to avoid repeats peak time)
            private const int KeepForHours = 174;

            public static void Set(string cacheKey, Object o)
            {
                if (o != null)
                    HttpContext.Current.Cache.Insert(cacheKey, o, null, DateTime.Now.AddHours(KeepForHours), TimeSpan.Zero);
            }

            public static object Get(string cacheKey)
            {
                return HttpContext.Current.Cache[cacheKey];
            }

            public static void Clear(string sKey)
            {
                HttpContext.Current.Cache.Remove(sKey);
            }

            public static void Clear()
            {
                foreach (DictionaryEntry item in HttpContext.Current.Cache)
                {
                    Clear(item.Key.ToString());
                }
            }
        }

    In methods I want to cache I do this:

        [DataObjectMethod(DataObjectMethodType.Select)]
        public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name)
        {
            var ck = string.Format("SiteSettings_SelectOne_Name-Name_{0}-", Name.ToLower());
            var dt = (DataTable)SiteCache.Get(ck);
            if (dt == null)
            {
                dt = new DataTable();
                dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
                SiteCache.Set(ck, dt);
            }
            var info = new SiteSettingsInfo();
            foreach (DataRowView dr in dt.DefaultView)
                info = SiteSettingsInfo_Load(dr);
            return info;
        }

    Is it possible to separate those concerns like so (notice the new annotation):

        [CacheReturnValue]
        [DataObjectMethod(DataObjectMethodType.Select)]
        public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name)
        {
            var dt = new DataTable();
            dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
            var info = new SiteSettingsInfo();
            foreach (DataRowView dr in dt.DefaultView)
                info = SiteSettingsInfo_Load(dr);
            return info;
        }

    Read the article

  • How to override the default init.tcl

    - by Sean Murphy
    I'm working on a project where I want to make use of Tcl as the command interpreter. I have a working C library object which I can load from within the Tcl shell, but my problem is finding a way to automatically do this while starting a tclsh. Essentially, my ultimate goal is to be able to run a script and have it load my library and run some initial startup Tcl code before dropping me back to the tclsh command prompt in interactive mode. E.g.:

        tclsh -f myscript.tcl --then-switch-to-interactive

    or

        EXPORT TCLINIT=myscript.tcl
        tclsh

    The basic goal is to avoid having to distribute tclsh, but rather rely on local user installations of Tcl. All I would like to distribute is my library, a startup script, and a shell command to launch tclsh with the library preloaded. I've tried using the environment variables TCLINIT and TCL_LIBRARY, but they seem to have no effect. The only workable solutions I've found so far are to add "source myscript.tcl" to either the end of /usr/share/tcltk/tcl8.5.init.tcl or ~/.tclshrc. However, both of these "solutions" are imperfect, as they require modification of the default user's workspace. It strikes me that there must be a way to handle this in Tcl, but my research so far hasn't yielded anything. Does anyone have any suggestions?

    Read the article

  • Bioperl, equivalent of IO::ScalarArray for array of Seq objects?

    - by Ryan Thompson
    In Perl, we have IO::ScalarArray for treating the elements of an array like the lines of a file. In BioPerl, we have Bio::SeqIO, which can produce a filehandle that reads and writes Bio::Seq objects instead of strings representing lines of text. I would like to do a combination of the two: I would like to obtain a handle that reads successive Bio::Seq objects from an array of such objects. Is there any way to do this? Would it be trivial for me to implement a module that does this? My reason for wanting this is that I would like to be able to write a subroutine that accepts either a Bio::SeqIO handle or an array of Bio::Seq objects, and I'd like to avoid writing separate loops based on what kind of input I get. Perhaps the following would be better than writing my own IO module?

        sub process_sequences {
            my $input = $_[0]; # read either from array of Bio::Seq or from Bio::SeqIO
            my $nextseq;
            if (ref $input eq 'ARRAY') {
                my $pos = 0;
                $nextseq = sub { return $input->[$pos++] if $pos < @$input };
            }
            else {
                $nextseq = sub { $input->getline(); };
            }
            while (my $seq = $nextseq->()) {
                do_cool_stuff_with($seq);
            }
        }

    Read the article

  • Set fields with instrospection - Problem with String.valueOf(String)

    - by fabb
    Hey there! I'm setting public fields of the object 'this' via reflection. Both the field name and the value are given as a String. I use several different field types: Boolean, Integer, Float, Double, an own enum, and a String. It works with all of them except String. The exception that gets thrown is that no method with the signature String.valueOf(String) exists... Now I use a dirty instanceof workaround to detect if each field is a String, and in that case I just copy the value to the field.

        private void setField(String field, String value) throws Exception {
            Field wField = this.getClass().getField(field);
            if (wField.get(this) instanceof String) {
                // TODO dirrrrty hack
                // stupid workaround as java.lang.String.valueOf(java.lang.String) fails...
                wField.set(this, value);
            } else {
                Method parseMethod = wField.getType().getMethod("valueOf", new Class[]{String.class});
                wField.set(this, parseMethod.invoke(wField, value));
            }
        }

    Any ideas how to avoid that workaround? Do you think java.lang.String should support the method valueOf(String)? Thanks, fabb

    Read the article

  • Handling XMLHttpRequest to call external application

    - by Ian
    I need a simple way to use XMLHttpRequest as a way for a web client to access applications in an embedded device. I'm getting confused trying to figure out how to make something thin and light that handles the XMLHttpRequests coming to the web server and can translate those into application calls.

    The situation: The web client using Ajax (ExtJS specifically) needs to send and receive asynchronously to an existing embedded application. This isn't just to have a thick client/thin server; the client needs to run background checking on the application status. The application can expose a socket interface with a known set of commands, events, and configuration values. Configuration could probably be transmitted as XML since it comes from a SQLite database. In between the client and the app is a lighttpd web server running something that somehow handles the translation. This something is the problem.

    What I think I want: Lighttpd can use FastCGI to route all XMLHttpRequests to an external process. This process will understand HTML/XML and translate between that and the application's language. It will have custom logic to simulate pushing notifications to the client (receive the XMLHttpRequest, don't respond until the next notification is available). C/C++: I'd really like to avoid installing Java/PHP/Perl on an embedded device, so I'll need a more low-level understanding.

    How do I do this? Are there good C++ libraries for interpreting the CGI headers and HTML so that I don't have to do any syntax processing and can just deal with the request/response contents? Are there any good references to exactly what goes on, server side, when handling the XMLHttpRequest and CGI interfaces? Is there any package that does most of this job already, or will I have to build the non-HTTP/CGI stuff from scratch? Thanks for any help! I am really having trouble learning about the server side of these technologies.
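
    For the lighttpd-to-process leg, a minimal sketch of a FastCGI responder loop (assuming the libfcgi fcgiapp interface; send_to_app() is a hypothetical stub standing in for whatever talks to the embedded application's socket). Most of the CGI plumbing reduces to reading a few environment variables and writing headers plus a body to the output stream:

        #include <fcgiapp.h>
        #include <string>

        // Hypothetical helper: in a real build this would forward the request to the
        // embedded application's socket and return its reply; stubbed here as an echo.
        static std::string send_to_app(const std::string& command) {
            return "<response><echo>" + command + "</echo></response>";
        }

        int main() {
            FCGX_Init();
            FCGX_Request request;
            FCGX_InitRequest(&request, 0, 0);   // 0 = listen socket inherited from lighttpd

            while (FCGX_Accept_r(&request) == 0) {
                const char* uri = FCGX_GetParam("REQUEST_URI", request.envp);
                std::string reply = send_to_app(uri ? uri : "");

                // Answer the XMLHttpRequest; CGI headers end with a blank line.
                FCGX_FPrintF(request.out,
                             "Content-Type: text/xml\r\n\r\n%s",
                             reply.c_str());
                FCGX_Finish_r(&request);
            }
            return 0;
        }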

    Read the article
