Search Results

Search found 9932 results on 398 pages for 'pseudo element'.

Page 247/398 | < Previous Page | 243 244 245 246 247 248 249 250 251 252 253 254  | Next Page >

  • Groovy & Grails Concurrency (quartz, executor)

    - by Pietro
    What I'm trying to do is to run multiple threads at some starting time. Those threads must stay alive for 90 minutes after they start. During the 90 minutes they each execute something after a random sleep time (e.g., 5 to 15 minutes). Here is pseudo-code showing how I would implement it. The problem is that, done this way, the threads run in an unexpected way. How can I implement something like this correctly?

        class MyJob {
            static triggers = {
                cron name: 'first',  cronExpression: "0 30 21 * * FRI"
                cron name: 'second', cronExpression: "0 30 19 * * FRI"
                cron name: 'third',  cronExpression: "0 30 17 * * FRI"
            }

            def myService

            def execute() {
                switch ( /* trigger name */ ) {
                    case 'first':
                        model = Model.findByAttribute(...)
                        ...
                        myService.run(model, start_time)
                        break
                    ...
                }
            }
        }

        class MyService {
            def run(model, start_time) {
                def end_time = start_time.plusMinutes(90)
                model.fields.each { field ->
                    Thread.start {
                        executeSomeTasks(field, start_time, end_time)
                    }
                }
            }

            def executeSomeTasks(field, start_time, end_time) {
                while (start_time < end_time) {
                    // ...do something...
                    sleep(new Random().nextInt(1000))
                }
            }
        }
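
    One way to get predictable behavior is to hand the scheduling to an executor instead of raw Thread.start. Below is a minimal, runnable Java sketch of the pattern (not Grails-specific; the names, the 4-worker count, and the use of seconds instead of minutes for demonstration are all illustrative): each task re-schedules itself with a fresh random delay until the 90-minute window closes.

        import java.util.Random;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class WindowedWorkers {
            public static void main(String[] args) throws InterruptedException {
                final long windowMillis = TimeUnit.MINUTES.toMillis(90);
                final long deadline = System.currentTimeMillis() + windowMillis;
                final ScheduledExecutorService pool = Executors.newScheduledThreadPool(4);
                final Random rnd = new Random();

                for (int i = 0; i < 4; i++) {                  // one worker per field
                    final int field = i;
                    Runnable task = new Runnable() {
                        @Override public void run() {
                            if (System.currentTimeMillis() >= deadline) return;
                            System.out.println("working on field " + field);
                            // re-schedule with a fresh random delay
                            // (use TimeUnit.MINUTES in production)
                            pool.schedule(this, 5 + rnd.nextInt(11), TimeUnit.SECONDS);
                        }
                    };
                    pool.schedule(task, rnd.nextInt(5), TimeUnit.SECONDS);
                }

                Thread.sleep(windowMillis);  // keep the JVM alive for the window
                pool.shutdown();
            }
        }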

    Read the article

  • Auto scale and rotate images

    - by Dave Jarvis
    Given:

      - two images of the same subject matter;
      - the images have the same resolution, colour depth, and file format;
      - the images differ in size and rotation; and
      - two lists of (x, y) co-ordinates that correlate the images.

    I would like to know:

      1. How do you transform the larger image so that it visually aligns with the second image?
      2. (Optional.) What is the minimum number of points needed to get an accurate transformation?
      3. (Optional.) How far apart do the points need to be to get an accurate transformation?

    The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following:

      1. Input two images (e.g., TIFFs).
      2. Click several anchor points on the small image.
      3. Click the corresponding anchor points on the large image.
      4. Transform the large image so that it maps to the small image by aligning the anchor points.

    This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.) Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas, or links to related open-source software packages.
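
    If the mapping is affine (rotation, scale, shear, translation - no perspective), three non-collinear point pairs determine it exactly; more pairs would call for a least-squares fit. A sketch in plain Java of deriving the transform from three correspondences via Cramer's rule (no external libraries assumed):

        import java.awt.geom.AffineTransform;

        public class AlignImages {
            // Builds the affine transform mapping three source points (sx, sy)
            // onto three destination points (dx, dy) by solving the 3x3 linear
            // system with Cramer's rule.
            static AffineTransform fromThreePoints(double[] sx, double[] sy,
                                                   double[] dx, double[] dy) {
                double det = sx[0] * (sy[1] - sy[2])
                           - sy[0] * (sx[1] - sx[2])
                           + (sx[1] * sy[2] - sx[2] * sy[1]);
                double[] m = new double[6];
                for (int k = 0; k < 2; k++) {     // k = 0 solves the x' row, k = 1 the y' row
                    double[] r = (k == 0) ? dx : dy;
                    m[k]     = (r[0] * (sy[1] - sy[2]) - sy[0] * (r[1] - r[2])
                              + (r[1] * sy[2] - r[2] * sy[1])) / det;
                    m[k + 2] = (sx[0] * (r[1] - r[2]) - r[0] * (sx[1] - sx[2])
                              + (sx[1] * r[2] - sx[2] * r[1])) / det;
                    m[k + 4] = (sx[0] * (sy[1] * r[2] - sy[2] * r[1])
                              - sy[0] * (sx[1] * r[2] - sx[2] * r[1])
                              + r[0] * (sx[1] * sy[2] - sx[2] * sy[1])) / det;
                }
                // AffineTransform takes (m00, m10, m01, m11, m02, m12)
                return new AffineTransform(m[0], m[1], m[2], m[3], m[4], m[5]);
            }

            public static void main(String[] args) {
                // Sanity check: mapping a triangle onto itself yields the identity.
                double[] x = {0, 100, 0}, y = {0, 0, 100};
                System.out.println(fromThreePoints(x, y, x, y));
            }
        }

    The resulting transform can then be applied to the larger image with java.awt.image.AffineTransformOp or Graphics2D.drawImage. Widely separated, non-collinear anchor points keep the system well-conditioned.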

    Read the article

  • Is this a design pattern?

    - by Michel
    Hi all, I have to build a financial data report, and the calculation involves a lot of 'if-then' situations: if it's a large client, subtract 10%; if the postal code equals '10101', add 10%; if the day is a Saturday, perform a difficult calculation; etc. I once read about this kind of example, and what they did (I hope I remember it well) was create a class with some base info and make it possible to add all kinds of calculation objects to it. To put what I remember in pseudo-code:

        Basecalc bc = new Basecalc();
        // put the info in bc so other objects can do their 'if' checks
        bc.Add(new LargecustomerCalc());
        bc.Add(new PostalcodeCalc());
        bc.Add(new WeekdayCalc());

    Then bc would run the Calc() methods of all of the added Calc objects. As I type this, I think all the Calc objects must be able to see the Basecalc properties to correctly perform their calculation logic. So all the ifs are in the different Calc objects and not ALL in the Basecalc. Does this make sense? I was wondering: is this some kind of design pattern?
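
    What is described closely resembles the Strategy pattern applied as a list of pluggable rules (sometimes called a rules pipeline): each rule owns its own 'if' and reads a shared context. A minimal Java sketch with illustrative names:

        import java.util.ArrayList;
        import java.util.List;

        interface CalcRule {
            double apply(CalcContext ctx, double amount); // each rule owns its own 'if'
        }

        class CalcContext {                  // the shared "base info" the rules inspect
            boolean largeCustomer;
            String postalCode;
        }

        class LargeCustomerCalc implements CalcRule {
            public double apply(CalcContext ctx, double amount) {
                return ctx.largeCustomer ? amount * 0.90 : amount;        // subtract 10%
            }
        }

        class PostalCodeCalc implements CalcRule {
            public double apply(CalcContext ctx, double amount) {
                return "10101".equals(ctx.postalCode) ? amount * 1.10 : amount; // add 10%
            }
        }

        class BaseCalc {
            private final List<CalcRule> rules = new ArrayList<CalcRule>();
            private final CalcContext ctx;
            BaseCalc(CalcContext ctx) { this.ctx = ctx; }
            void add(CalcRule rule) { rules.add(rule); }
            double calc(double amount) {
                for (CalcRule r : rules) amount = r.apply(ctx, amount);
                return amount;
            }
        }

        public class RulesDemo {
            public static void main(String[] args) {
                CalcContext ctx = new CalcContext();
                ctx.largeCustomer = true;
                ctx.postalCode = "10101";
                BaseCalc bc = new BaseCalc(ctx);
                bc.add(new LargeCustomerCalc());
                bc.add(new PostalCodeCalc());
                System.out.println(bc.calc(100.0));  // 100 * 0.90 * 1.10, about 99.0
            }
        }

    Passing the context into apply() keeps the rules able to see the base info without every 'if' living in BaseCalc, which matches the design described above.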

    Read the article

  • Big-O for GPS data

    - by HH
    A non-critical GPS module uses lists because the data needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Or so I thought, but my team member wrote something I find very hard to follow. His pseudo-code:

        int k = 0;
        a[][] <- create mapModuleNearbyDotList array   // CPU O(n)
        for (j = 1 to n)                               // O(n log(m))
            for (i = 1 to n)
                for (k = 1 to n)
                    if (dot is nearby)
                        adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);

    His ideas:

      - transformation of the lists to tables;
      - his worst-case time complexity is O(n^3), where n is the number of elements in his so-called table;
      - exception to the last point with a finite structure: O(m log(n)), where n is the number of vertices and m is an arbitrary constant.

    My questions about his ideas:

      1. Why waste resources transforming constantly-modified lists into a table?
      2. Fast? The only point where I to some extent agree, but cannot understand.
      3. Why the same upper limit n for each for-loop - perhaps he supposed it to be circular?
      4. Why does the code take O(m log(n)) time as a finite structure? The term "finite" may be wrong - "explicit", perhaps?
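
    For what it's worth, the relaxation in the innermost line - min(adj[i][j], adj[i][k] + adj[k][j]) - is the Floyd-Warshall all-pairs shortest-path update, which is where the O(n^3) comes from and why the adjacency table is needed. A runnable Java sketch for comparison (note that the intermediate-vertex loop k must be outermost for the algorithm to be correct, unlike the ordering above):

        public class FloydWarshall {
            static final double INF = Double.POSITIVE_INFINITY;

            // adj[i][j] holds the direct distance between dots i and j, or INF.
            // After the triple loop it holds the shortest distance over any route.
            static void allPairsShortestPaths(double[][] adj) {
                int n = adj.length;
                for (int k = 0; k < n; k++)      // intermediate vertex: outermost
                    for (int i = 0; i < n; i++)
                        for (int j = 0; j < n; j++)
                            if (adj[i][k] + adj[k][j] < adj[i][j])
                                adj[i][j] = adj[i][k] + adj[k][j];
            }

            public static void main(String[] args) {
                double[][] adj = {
                    { 0,   3, INF },
                    { 3,   0, 1   },
                    { INF, 1, 0   },
                };
                allPairsShortestPaths(adj);
                System.out.println(adj[0][2]);   // prints 4.0 (route through dot 1)
            }
        }

    That also explains all three loops sharing the upper limit n: each one ranges over the same set of vertices, not over three different collections.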

    Read the article

  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and really want some confirmation on what I think the problem is.

    We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = Supported, i.e. it will 'inherit' transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table, with pseudo-code something like:

        if a record doesn't exist that matches these parameters, then write it

    Now, the issue is that there are three records being passed into this proc, all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match, and another record is created. My understanding is that the first 'record' passed to the proc in the data flow is uncommitted and therefore can't be 'read' by the second call. The upshot is that all three records create a row, when logically only the first should.

    In this scenario, am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway...

    Hope that makes sense, and any advice gratefully received. Workarounds confer god-like status on you.

    Read the article

  • How to combine Apache requests?

    - by Bruce
    To give you the situation in the abstract: I have an Ajax client that often needs to retrieve 3-10 static documents from the server. Those 3-10 documents are selected by the client out of about 100 documents in total. I have no way of knowing in advance which 3-10 documents the client will require. Additionally, those 100 documents are generated from database content, and so change over time.

    It seems messy to me to have to make 10 Ajax requests for 10 separate documents. My first thought was to write a JSP that could use the include action, i.e. in pseudo-code:

        for (param in params) {
            jsp:include page="[param]"
        }

    But it turns out that Tomcat doesn't just include the HTML resource; it recompiles it, generating a class file every time, which also seems wasteful. Does anyone know of a neat solution for combining requests to static files into one request, rather than several, but without the overhead of, for example, Tomcat generating extra class files for each static file and regenerating them each time the static file changes? Thanks! Hopefully my question is clear - it's a bit long-winded.
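
    One common approach is a plain servlet that streams the requested files back-to-back in a single response, which avoids JSP compilation entirely. A minimal sketch, assuming the documents live under a docs/ directory inside the web app (the path and parameter name are illustrative, and a real version would emit delimiters so the client can split the documents apart):

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Answers e.g. /combine?doc=a.html&doc=b.html&doc=c.html with the files
        // streamed back-to-back: one request, no compilation involved.
        public class CombineServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String[] docs = req.getParameterValues("doc");
                if (docs == null) return;
                resp.setContentType("text/html");
                OutputStream out = resp.getOutputStream();
                byte[] buf = new byte[8192];
                for (String doc : docs) {
                    if (doc.contains("..")) continue;             // crude traversal guard
                    InputStream in = getServletContext()
                            .getResourceAsStream("/docs/" + doc); // raw bytes, never compiled
                    if (in == null) continue;
                    try {
                        int n;
                        while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                    } finally {
                        in.close();
                    }
                }
            }
        }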

    Read the article

  • OpenMP: Get total number of running threads

    - by Konrad Rudolph
    I need to know the total number of threads that my application has spawned via OpenMP. Unfortunately, the omp_get_num_threads() function does not work here, since it only yields the number of threads in the current team. However, my code runs recursively (divide and conquer, basically) and I want to spawn new threads as long as there are still idle processors, but no more. Is there a way to get around the limitations of omp_get_num_threads and get the total number of running threads? If more detail is required, consider the following pseudo-code that models my workflow quite closely:

        function divide_and_conquer(Job job, int total_num_threads):
            if job.is_leaf():   # recursion base case
                job.process()
                return
            left, right = job.divide()
            current_num_threads = omp_get_num_threads()
            if current_num_threads < total_num_threads:   # (1)
                #pragma omp parallel num_threads(2)
                #pragma omp section
                divide_and_conquer(left, total_num_threads)
                #pragma omp section
                divide_and_conquer(right, total_num_threads)
            else:
                divide_and_conquer(left, total_num_threads)
                divide_and_conquer(right, total_num_threads)
            job = merge(left, right)

    If I call this code with a total_num_threads value of 4, the conditional annotated with (1) will always evaluate to true (because each thread team will contain at most two threads), and thus the code will always spawn two new threads, no matter how many threads are already running at a higher level. I am searching for a platform-independent way of determining the total number of threads that are currently running in my application.

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring, and result propagation happen in an Oracle database, which all (sub-)processes access at various times using an application-specific module which encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue.

    I now want to centralise database access so that there are only one or a few fixed database sessions which handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s).

    I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors so that bidirectional communication is possible (for collecting the query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing the request will "call back" to submit the result. But my mind fails me in generating an image clear enough that I can paint it into code. Threading instead of forking might have given me an easier start, but this would now require massive changes to a code base that I'm not prepared to make on a live system.

    The more I think of it, the more the base idea looks to me like a pre-forked web server, only that it doesn't serve web pages but database queries. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2 GB text file with 5 tab-delimited columns. A row is considered a duplicate only if 4 out of its 5 columns match. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating.

    The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such de-duping? This de-duping code will be thrown away afterwards, so I am looking for a quick-and-dirty solution to get the job done as soon as possible. Here is my pseudo-code (roughly):

        Iterate over the rows
            i = current_row_no
            Iterate over rows i+1 to last_row
                if (col1 matches      // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches) {
                    col5List.set(i, get col5);   // aggregate
                }

    Duplicate example: A and B are duplicates, given A = (1,1,1,1,1), B = (1,1,1,1,2), C = (2,1,1,1,1), and the output would be:

        A = (1,1,1,1,1+2)
        C = (2,1,1,1,1)

    [Notice that B has been kicked out.]
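
    The nested scan above is O(n^2) over millions of rows, which is where the 20 hours go. A sketch of the usual fix, assuming the keyed rows fit in memory: hash on the first four columns so the whole file becomes a single pass, aggregating column 5 on collision (the "1+2" aggregation is kept as string concatenation to match the example):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class Dedup {
            public static void main(String[] args) throws IOException {
                // Key = first four columns; value = whole row with col5 aggregated.
                Map<String, String[]> rows = new LinkedHashMap<>();  // keeps input order
                try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] cols = line.split("\t", -1);
                        String key = cols[0] + '\t' + cols[1] + '\t'
                                   + cols[2] + '\t' + cols[3];
                        String[] kept = rows.get(key);
                        if (kept == null) rows.put(key, cols);   // first occurrence wins
                        else kept[4] = kept[4] + "+" + cols[4];  // aggregate column 5
                    }
                }
                try (PrintWriter out = new PrintWriter(args[0] + ".deduped")) {
                    for (String[] cols : rows.values()) out.println(String.join("\t", cols));
                }
            }
        }

    With a 2 GB file the map may need a generous heap (-Xmx); if it does not fit, the same idea works after an external sort on the four key columns.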

    Read the article

  • Pre Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to. I have worked on many small-scale projects. After my initial paper brainstorm, I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that so should my "spec" or design document.

    The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code.

    When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so:

      - Implement binary file reader
      - Implement binary file writer
      - Create object to encapsulate data for expression to the caller

    Now, any programmer worth his salt is aware that between those three to-do items lies a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudo-code it is essentially code anyway, so the time investment is negated.

    So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class, or concept level? What works for you?

    Read the article

  • Array, change color, AS3

    - by pixelGreaser
    Hi, thanks for the help yesterday, but I have one more question. How can I change the colour of the text on certain words? My animation plays the text THIS SALE IS RED HOT!!! and I want RED HOT to be red. It seems the array can be indexed in such a way as to switch the colour from blue to red.

    My banner ad:

        var myArray:Array = ["THIS", "SALE", "IS", "RED HOT!!!"];
        var tm:Timer = new Timer(500);
        tm.addEventListener(TimerEvent.TIMER, countdown);
        function countdown(event:TimerEvent) {
            tx.text = myArray[(tm.currentCount - 1) % myArray.length];
        }
        tm.start();
        tx.textColor = 0x0000FF;

    Continued... pseudo-code:

        //var myArray:Array = ["This", "Sale", "is", "RED HOT!!!"];
        var spliceRedhot = myArray.splice(-1);
        //trace(myArray[2]);
        trace(spliceRedhot);

        function mySplice(e:Event):void {
            if (spliceRedhot = 4) {
                // make RED HOT!!! red
                tx.textColor = 0xFF0000;
            } else {
                // text is blue again
                tx.textColor = 0x0000FF;
            }
        }

    Read the article

  • Streaming XML pretty printer in C/C++ using expat or libxml2?

    - by Mark Zeren
    I have a library that outputs XML without whitespace, all on one line. In some cases I'd like to pretty-print that output. I'm looking for a BSD-ish licensed C/C++ library or sample code that will take a raw XML byte stream and pretty-print it. Here's some pseudo-code showing one way that I might use this functionality:

        void my_write(const char* buf, int len);

        PrettyPrinter pp(bind(&my_write));
        while (...) {
            // ... get some more xml ...
            const char* buf = xmlSource.get_buf();
            int len = xmlSource.get_buf_len();
            int written = pp.write(buf, len); // calls my_write with pretty printed xml
            // ... error handling, maybe call write again, etc. ...
        }

    I'd like to avoid instantiating a DOM representation. I already have dependencies on the expat and libxml2 shared libraries, and I'd rather not add any more shared-library dependencies.

    Read the article

  • Average over a timeframe with missing data

    - by BHare
    Assuming a table such as:

        UID | Name   | Datetime            | Users
        ----+--------+---------------------+------
          4 | Room 4 | 2012-08-03 14:00:00 |   3
          2 | Room 2 | 2012-08-03 14:00:00 |   3
          3 | Room 3 | 2012-08-03 14:00:00 |   1
          1 | Room 1 | 2012-08-03 14:00:00 |   2
          3 | Room 3 | 2012-08-03 14:15:00 |   1
          2 | Room 2 | 2012-08-03 14:15:00 |   4
          1 | Room 1 | 2012-08-03 14:15:00 |   3
          1 | Room 1 | 2012-08-03 14:30:00 |   6
          1 | Room 1 | 2012-08-03 14:45:00 |   3
          2 | Room 2 | 2012-08-03 14:45:00 |   7
          3 | Room 3 | 2012-08-03 14:45:00 |   8
          4 | Room 4 | 2012-08-03 14:45:00 |   4

    I want to get the average user count of each room (1, 2, 3, 4) from 2PM to 3PM. The problem is that sometimes a room may not "check in" at the 15-minute interval, so the assumption has to be made that the last known user count is still valid. For example, Room 4 never checked in at 2012-08-03 14:15:00, so it must be assumed that Room 4 had 3 users at 2012-08-03 14:15:00, because that is what it had at 2012-08-03 14:00:00.

    This carries on through, so that the average user counts I am looking for are as follows:

        Room 1: (2 + 3 + 6 + 3)   / 4 = 3.5
        Room 2: (3 + 4 + 4* + 7)  / 4 = 4.5
        Room 3: (1 + 1 + 1* + 8)  / 4 = 2.75
        Room 4: (3 + 3* + 3* + 4) / 4 = 3.25

    where * marks an assumed number based on the previous known check-in.

    I am wondering if it's possible to do this with SQL alone? If not, I am curious about an ingenious PHP solution that isn't just brute-force math, such as my quick, inaccurate pseudo-code:

        foreach ($rooms_id_array as $room_id) {
            $SQL = "SELECT * FROM `table`
                    WHERE `UID` = $room_id
                      AND `Datetime` >= '2012-08-03 14:00:00'
                      AND `Datetime` <= '2012-08-03 15:00:00'";
            $result = query($SQL);
            if (count($result) < 4) {
                // go through each date, find what is missing,
                // and then go to the previous date and use that instead
            } else {
                foreach ($result as $row) $sum += $row;
                $avg = $sum / 4;
            }
        }
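
    Whatever the storage layer, the carry-forward rule itself is mechanical once the check-ins are bucketed into the four 15-minute slots. A runnable Java sketch of that logic using this question's sample data (slots 0..3 stand for 14:00, 14:15, 14:30, 14:45; the bucketing from timestamps is assumed done elsewhere):

        import java.util.HashMap;
        import java.util.Map;

        public class RoomAverages {
            public static void main(String[] args) {
                int[][] checkIns = {              // {roomId, slot, users}
                    {4,0,3}, {2,0,3}, {3,0,1}, {1,0,2},
                    {3,1,1}, {2,1,4}, {1,1,3},
                    {1,2,6},
                    {1,3,3}, {2,3,7}, {3,3,8}, {4,3,4},
                };
                Map<Integer, int[]> byRoom = new HashMap<>();
                for (int[] row : checkIns) {
                    int[] slots = byRoom.computeIfAbsent(row[0],
                            k -> new int[]{-1, -1, -1, -1});  // -1 = no check-in
                    slots[row[1]] = row[2];
                }
                for (Map.Entry<Integer, int[]> e : byRoom.entrySet()) {
                    int sum = 0, last = 0;
                    for (int s : e.getValue()) {
                        if (s >= 0) last = s;     // real check-in
                        sum += last;              // else carry the previous count forward
                    }
                    System.out.println("Room " + e.getKey() + ": " + sum / 4.0);
                }
            }
        }

    Run against the sample data this prints 3.5, 4.5, 2.75, and 3.25, matching the hand calculation above.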

    Read the article

  • How to change Hibernate's auto-persistence strategy

    - by Kristofer Borgstrom
    I just noticed that my Hibernate entities are automatically persisted to the database (or at least to the cache) before I call any save() or update() method. To me this is a pretty strange default behaviour, but OK - as long as I can disable it, it's fine.

    The problem is that I want to update my entity's state (from 1 to 2) only if the entity in the database still has the state it had when I retrieved it (1). This is to eliminate concurrency issues when another server is updating the same object. For this reason I have created a custom NamedQuery that will only update the entity if its state is 1. So here is some pseudo-code:

        // Get the entity
        Entity item = dao.getEntity();
        item.getState(); // == 1

        // Update the entity
        item.setState(2);       // Here is the problem: this effectively changes the state of
                                // my entity, breaking my query that verifies state is still == 1.
        dao.customUpdate(item); // Returns 0 rows changed, since state != 1.

    So, how do I make sure the setters don't change the state in the cache/DB? Thanks, Kristofer
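
    What is being observed is Hibernate's automatic dirty checking: changes to an attached entity are flushed at commit time whether or not save() is called. One way to keep the compare-and-set honest is to run the transition as a bulk HQL update before touching the in-memory object, so the check happens inside the database; a minimal sketch, with the entity and property names assumed:

        import org.hibernate.Session;

        public class StateTransition {
            // Performs "state 1 -> 2" as a compare-and-set inside the database,
            // without dirtying the attached object first.
            static boolean tryAdvance(Session session, long id) {
                int updated = session.createQuery(
                        "update MyEntity set state = 2 where id = :id and state = 1")
                        .setParameter("id", id)
                        .executeUpdate();
                return updated == 1;   // false: another server won the race
            }
        }

    If the in-memory copy is still needed afterwards, session.refresh(item) re-reads the new state; a version column (optimistic locking) is the more general Hibernate mechanism for the same concurrency concern.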

    Read the article

  • Invoking different methods on threads

    - by Kraken
    I have a main process, main. It creates 10 threads (say), and then what I want to do is the following:

        while (required) {
            Thread t = new Thread(new ClassImplementingRunnable());
            t.start();
            counter++;
        }

    Now I have the list of these threads, and for each thread I want to perform a set of operations, the same for all, so I put that implementation in the run method of ClassImplementingRunnable. After the threads have finished their execution, I want to wait for all of them to stop and then invoke them again - but this time I want them to run serially, not in parallel. To wait for them to finish I join each thread, but after that I am not sure how to invoke them again and run that piece of code serially. Can I do something like this?

        for (each thread) {
            t.reinvoke(); // how can I do that?
            t.doThis();   // also, where does doThis() go, given that
                          // my ClassImplementingRunnable is an inner class?
        }

    Also, I want to reuse the same threads, i.e. I want them to continue from where they left off, but in a serial manner. I am not sure how to go about the last piece of pseudo-code. Kindly help. Working in Java.
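
    A java.lang.Thread cannot be started twice, but the Runnable behind it can be kept and simply run again; since a Runnable instance keeps its fields, re-running it continues with whatever state the parallel phase left behind. A runnable sketch of the two phases (names illustrative):

        import java.util.ArrayList;
        import java.util.List;

        public class TwoPhase {
            public static void main(String[] args) throws InterruptedException {
                List<Runnable> tasks = new ArrayList<>();
                List<Thread> threads = new ArrayList<>();

                for (int i = 0; i < 10; i++) {
                    Runnable task = new MyTask(i);
                    Thread t = new Thread(task);
                    tasks.add(task);               // keep the Runnable, not just the Thread
                    threads.add(t);
                    t.start();                     // phase 1: parallel
                }
                for (Thread t : threads) t.join(); // wait for all of them

                // Phase 2: the same Runnables, one after another on this thread.
                for (Runnable task : tasks) task.run();
            }

            static class MyTask implements Runnable {
                private final int id;
                private int phase = 0;             // instance state survives between runs
                MyTask(int id) { this.id = id; }
                @Override public void run() {
                    phase++;
                    System.out.println("task " + id + " phase " + phase
                            + " on " + Thread.currentThread().getName());
                }
            }
        }

    Extra per-task methods like doThis() belong on the task class (MyTask here), which is easiest if the list holds the concrete type rather than plain Runnable.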

    Read the article

  • How To perform a SQL Query to DataTable Operation That Can Be Cancelled

    - by David W
    I tried to make the title as specific as possible. Basically, what I have running inside a BackgroundWorker thread now is some code that looks like:

        SqlConnection conn = new SqlConnection(connstring);
        SqlCommand cmd = new SqlCommand(query, conn);
        conn.Open();
        SqlDataAdapter sda = new SqlDataAdapter(cmd);
        sda.Fill(Results);
        conn.Close();
        sda.Dispose();

    where query is a string representing a large, time-consuming query, and conn is the connection object. My problem now is that I need a stop button. I've come to realize that killing the BackgroundWorker would be worthless, because I still want to keep whatever results are left over after the query is cancelled. Plus, it wouldn't be able to check the cancelled state until after the query.

    What I've come up with so far: I've been trying to conceptualize how to handle this efficiently without taking too big a performance hit. My idea was to use a SqlDataReader to read the data from the query piece by piece, so that I had a "loop" in which to check a flag I could set from the GUI via a button. The problem is that, as far as I know, I can't use the Load() method of a DataTable and still be able to cancel the SqlCommand. If I'm wrong, please let me know, because that would make cancelling slightly easier.

    In light of what I discovered, I came to the realization that I may only be able to cancel the SqlCommand mid-query if I did something like the below (pseudo-code):

        while (reader.Read())
        {
            // check flag status
            // if it is set to 'kill', fire off the kill thread
            // otherwise populate the datatable with what was read
        }

    However, it would seem to me this would be highly inefficient and possibly costly. Is this the only way to kill a SqlCommand in progress that absolutely needs to go into a DataTable? Any help would be appreciated!

    Read the article

  • MySQL count and sum from two different tables

    - by Agent_x
    Hi all, I have a problem with some queries in PHP and MySQL. I have two different tables with one field in common.

    table1:

        id | hits | num_g | cats | usr_id | active
         1 |   10 |    11 |    1 |     53 |      1
         2 |   13 |    16 |    3 |     53 |      1
         1 |   10 |    22 |    1 |     22 |      1
         1 |   10 |    21 |    3 |     22 |      1
         1 |    2 |     6 |    2 |     11 |      1
         1 |   11 |     1 |    1 |     11 |      1

    table2:

        id | usr_id | points
         1 |     53 |    300

    Now I use this statement to sum the totals from table1, with every id counting +1 as well:

        SELECT usr_id, COUNT( id ) + SUM( num_g + hits ) AS tot_h
        FROM table1
        WHERE usr_id != '0'
        GROUP BY usr_id ASC
        LIMIT 0, 15

    and I get the total for each usr_id:

        usr_id | tot_h
            53 |    50
            22 |    63
            11 |    20

    Up to here all is OK. Now I have a second table with extra points (table2). I tried this:

        SELECT usr_id, COUNT( id ) + SUM( num_g + hits ) +
               (SELECT points FROM table2 WHERE usr_id != '0') AS tot_h
        FROM table1
        WHERE usr_id != '0'
        GROUP BY usr_id ASC
        LIMIT 0, 15

    but it seems to add the 300 extra points to all users:

        usr_id | tot_h
            53 |   350
            22 |   363
            11 |   320

    Now how can I get the total like the first try, but + the second table, in one statement? There is just one entry in the second table right now, but there can be more. Thanks for all the help.

    ===============================================================================

    Hi Thomas, thanks for your reply. I think it is in the right direction, but I'm getting weird results, like:

        usr_id | tot_h
            22 | NULL   <== I think the NULL is because that usr_id has no value in table2
            53 | 1033

    It's like the second user is getting all the values. Then I tried this one:

        SELECT table1.usr_id,
               COUNT( table1.id ) + SUM( table1.num_g + table1.hits + table2.points ) AS tot_h
        FROM table1
        LEFT JOIN table2 ON table2.usr_id = table1.usr_id
        WHERE table1.usr_id != '0' AND table2.usr_id = table1.usr_id
        GROUP BY table1.usr_id ASC

    Same result: I just get the sum of all values and not per user. I need something like this result:

        usr_id | tot_h
            53 | 53    <== plus 300 points from table2
            22 | 56    <== plus 100 points from table2

    // the result I need:

        usr_id | tot_h
            53 | 353
            22 | 156

    In pseudo-statements ;) - from table1, count all ids to get the number of records for each usr_id, then sum hits + num_g; and from table2, select the extra points where the usr_id is the same as in table1, to get the result:

        usr_id | tot_h
            53 | 353
            22 | 156
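
    One shape of query that addresses both symptoms is to LEFT JOIN a pre-aggregated copy of table2 (so a user with several point rows is not multiplied into every table1 row) and COALESCE the missing values to 0, so users absent from table2 get a number instead of NULL. A sketch through JDBC to keep the code in one language (the connection string and credentials are placeholders, and whether COUNT(id) matches the intended semantics of the sample totals is left to verify):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class Totals {
            public static void main(String[] args) throws Exception {
                // Pre-aggregate table2 in a derived table, then add its points
                // exactly once per group via MAX (it is constant within a group).
                String sql =
                    "SELECT t1.usr_id, " +
                    "       COUNT(t1.id) + SUM(t1.num_g + t1.hits) " +
                    "       + COALESCE(MAX(t2.points), 0) AS tot_h " +
                    "FROM table1 t1 " +
                    "LEFT JOIN (SELECT usr_id, SUM(points) AS points " +
                    "           FROM table2 GROUP BY usr_id) t2 " +
                    "       ON t2.usr_id = t1.usr_id " +
                    "WHERE t1.usr_id != '0' " +
                    "GROUP BY t1.usr_id";
                try (Connection c = DriverManager.getConnection(
                         "jdbc:mysql://localhost/db", "user", "pass");
                     Statement st = c.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("usr_id") + " | " + rs.getInt("tot_h"));
                    }
                }
            }
        }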

    Read the article

  • JavaCC: How can I specify which token(s) are expected in a certain context?

    - by java.is.for.desktop
    Hello, everyone! I need to make JavaCC aware of a context (the current parent token) and, depending on that context, expect different tokens to occur. Consider the following pseudo-code:

        TOKEN <abc>  { "abc*" }   // recognizes "abc", "abcd", "abcde", ...
        TOKEN <abcd> { "abcd*" }  // recognizes "abcd", "abcde", "abcdef", ...

        TOKEN <element1> { "element1" "[" expectOnly(<abc>)  "]" }
        TOKEN <element2> { "element2" "[" expectOnly(<abcd>) "]" }
        ...

    So when the generated parser is "inside" a token named "element1" and it encounters "abcdef", it recognizes it as <abc>, but when it's "inside" a token named "element2", it recognizes the same string as <abcd>:

        element1 [ abcdef ]  // aha! it can only be <abc>
        element2 [ abcdef ]  // aha! it can only be <abcd>

    If I'm not wrong, this would behave similarly to the more complex DTD definitions of an XML file. So, how can one specify in which "context" which token(s) are valid/expected?

    NOTE: It would not be enough for my real case to define a kind of "hierarchy" of tokens, so that "abcdef" is always matched first against <abcd> and then against <abc>. I really need context-aware tokens.

    Read the article

  • How to pass a Lambda Expression as a method parameter with EF

    - by Registered User
    How do I pass an EF expression as a method argument? To illustrate my question I have created a pseudo-code example. The first example is my method today; it uses EF and some fancy retry logic. What I need to do is encapsulate the fancy retry logic so that it becomes more generic and is not duplicated. The second example shows how I want it to be, with a helper method that accepts the EF expression as an argument. This would be a trivial thing to do with SQL, but I want to do it with EF so that I can benefit from the strongly typed objects.

    First example:

        public static User GetUser(String userEmail)
        {
            using (MyEntities dataModel = new MyEntities())
            {
                var query = FancyRetryLogic(() =>
                    dataModel.Users.FirstOrDefault<User>(x => x.UserEmail == userEmail));
                return query;
            }
        }

    Second example:

        T RetryHelper<T>(Expression<Func<T, TValue>> expression)
        {
            using (MyEntities dataModel = new MyEntities())
            {
                var query = FancyRetryLogic(() => { return dataModel.expression; });
            }
        }

        public User GetUser(String userEmail)
        {
            return RetryHelper<User>(<User>.FirstOrDefault<User>(x => x.UserEmail == userEmail));
        }

    Read the article

  • Does breaking chained Select()s in LINQ to objects hurt performance?

    - by Justin
    Take the following pseudo-C# code:

        using System;
        using System.Data;
        using System.Linq;
        using System.Collections.Generic;

        public IEnumerable<IDataRecord> GetRecords(string sql)
        {
            // DB logic goes here
        }

        public IEnumerable<IEmployer> Employers()
        {
            string sql = "select EmployerID from employer";
            var ids = GetRecords(sql).Select(record => (record["EmployerID"] as int?) ?? 0);
            return ids.Select(employerID => new Employer(employerID) as IEmployer);
        }

    Would it be faster if the two Select() calls were combined? Is there an extra iteration in the code above? Is the following code faster?

        public IEnumerable<IEmployer> Employers()
        {
            string sql = "select EmployerID from employer";
            return GetRecords(sql).Select(record =>
                new Employer((record["EmployerID"] as int?) ?? 0) as IEmployer);
        }

    I think the first example is more readable if there is no difference in performance.

    Read the article

  • What is the proper syntax for getting a Makefile to print the output directory of one of its output zip files?

    - by 9exceptionThrower9
    I'm trying to edit an Android Makefile in the hopes of getting it to print out the directory (path) of one of the ZIP files it creates. Ideally, since the build process is long and does many things, I would like it to print the path to the ZIP file into a text file in a different directory I can access later.

    Pseudo-code idea:

        # print the desired pathway to output file
        print(getDirectoryOf(variable-name.zip)) > ~/Desktop/location_of_file.txt

    The Makefile snippet where I would like to insert this new bit of code is shown below. I am interested in finding the directory of $(name).zip (that is the specific file I want to locate):

        # -----------------------------------------------------------------
        # A zip of the directories that map to the target filesystem.
        # This zip can be used to create an OTA package or filesystem image
        # as a post-build step.
        #
        name := $(TARGET_PRODUCT)
        ifeq ($(TARGET_BUILD_TYPE),debug)
          name := $(name)_debug
        endif
        name := $(name)-target_files-$(FILE_NAME_TAG)

        intermediates := $(call intermediates-dir-for,PACKAGING,target_files)
        BUILT_TARGET_FILES_PACKAGE := $(intermediates)/$(name).zip
        $(BUILT_TARGET_FILES_PACKAGE): intermediates := $(intermediates)
        $(BUILT_TARGET_FILES_PACKAGE): \
            zip_root := $(intermediates)/$(name)

        # $(1): Directory to copy
        # $(2): Location to copy it to
        # The "ls -A" is to prevent "acp s/* d" from failing if s is empty.
        define package_files-copy-root
          if [ -d "$(strip $(1))" -a "$$(ls -A $(1))" ]; then \
            mkdir -p $(2) && \
            $(ACP) -rd $(strip $(1))/* $(2); \
          fi
        endef

    Read the article

  • C++: inheritance problem

    - by Helltone
    It's quite hard to explain what I'm trying to do, but I'll try. Imagine a base class A which contains some variables, and a set of classes deriving from A which all implement some method bool test() that operates on the variables inherited from A:

        class A {
        protected:
            int somevar;
            // ...
        };

        class B : public A {
        public:
            bool test() { return (somevar == 42); }
        };

        class C : public A {
        public:
            bool test() { return (somevar > 23); }
        };

        // ... more classes deriving from A

    Now I have an instance of class A, and I have set the value of somevar:

        int main(int, char* []) {
            A a;
            a.somevar = 42;

    Now I need some kind of container that allows me to iterate over the elements i of this container, calling i::test() in the context of a, that is:

            std::vector<...> vec;
            // push B and C into vec; this is pseudo-code
            vec.push_back(&B);
            vec.push_back(&C);

            bool ret = true;
            for (i = vec.begin(); i != vec.end(); ++i) {
                // call B::test(), C::test(), setting *this to a
                ret &= ( a .* (&(*i)::test) )();
            }
            return ret;
        }

    How can I do this? I've tried three approaches:

      1. forcing a cast from B::* to A::*, adapting a pointer so as to call a method of one type on an object of a different type (works, but seems to be bad);
      2. using std::bind plus the solution above - an ugly hack;
      3. changing the signature of bool test() so that it takes an argument of type const A& instead of inheriting from A; I don't really like this solution because somevar must then be public.

    Read the article

  • Image Gurus: Optimize my Python PNG transparency function

    - by ozone
    I need to replace all the white(ish) pixels in a PNG image with alpha transparency. I'm using Python on App Engine and so do not have access to libraries like PIL, ImageMagick, etc. App Engine does have an image library, but it is pitched mainly at image resizing. I found the excellent little pyPNG module and managed to knock up a little function that does what I need: make_transparent.py

    Pseudo-code for the main loop would be something like:

        for each pixel:
            if pixel looks "quite white":
                set pixel values to transparent
            else:
                keep existing pixel values

    and (assuming 8-bit values) "quite white" would be: each r, g, b value is greater than 240, AND the r, g, b values are within 20 of each other.

    This is the first time I've worked with raw pixel data in this way, and although my function works, it performs extremely poorly. It seems like there must be a more efficient way of processing the data without iterating over each pixel in this manner (matrices?). I was hoping someone with more experience in dealing with these things might be able to point out some of the more obvious mistakes/improvements in my algorithm. Thanks!

    Read the article

  • How to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap?

    - by Klinger
    I am communicating with an instrument (remote-controlling it), and one of the things I need to do is to draw the instrument's screen. In order to get the screen I issue a command, and the instrument replies with an array of bytes that represents the screen. Below is what the instrument manual has to say about converting the response to the actual screen:

        The command retrieves the framebuffer data used for the display. It is 19200
        bytes in size, 2 bits per pixel, 4 pixels per byte, arranged as 320x240
        characters. The data is sent in RLE-encoded form. To convert this data into
        a BMP for use in Windows, it needs to be turned into 4BPP. Also note that
        BMP files are upside down relative to this data, i.e. the top display line
        is the last line in the BMP.

    I managed to unpack the data, but now I am stuck on how to actually go from the unpacked byte array to a bitmap. My background in this is pretty close to zero, and my searches have not revealed much either. I am looking for directions and/or articles I could use to help me understand how to get this done. Any code or even pseudo-code would also help. :-)

    So, just to summarize it all: how to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap arranged as 320x240 characters. Thanks in advance.
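
    The arithmetic works out: 19200 bytes x 4 pixels = 76800 = 320 x 240. A sketch of the unpacking in Java, assuming the RLE decoding is already done; the 4-colour palette and the pixel order within each byte are assumptions that may need adjusting against the instrument's documentation:

        import java.awt.image.BufferedImage;

        public class ScreenDecoder {
            // The four 2-bit values mapped to ARGB; the real palette depends on
            // the instrument, so these grey levels are placeholders.
            private static final int[] PALETTE =
                {0xFF000000, 0xFF555555, 0xFFAAAAAA, 0xFFFFFFFF};

            // fb = the 19200 framebuffer bytes, already RLE-decoded.
            static BufferedImage decode(byte[] fb) {
                final int w = 320, h = 240;
                BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
                for (int p = 0; p < w * h; p++) {
                    int b = fb[p / 4] & 0xFF;
                    int shift = 6 - (p % 4) * 2;   // assumption: first pixel in high bits
                    int value = (b >> shift) & 0x03;
                    // BufferedImage rows run top-down, matching the framebuffer; only
                    // a raw bottom-up BMP writer would need to reverse the rows.
                    img.setRGB(p % w, p / w, PALETTE[value]);
                }
                return img;
            }
        }

    javax.imageio.ImageIO.write(img, "png", file) will then save the result in a viewable form, or the BufferedImage can be drawn directly onto a component.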

    Read the article

  • When do Symfony's user attributes get written to session?

    - by Rob Wilkerson
    I have a Symfony app that populates the "widgets" of a portal application, and I'm noticing something that seems odd. The portal app has iframes that make calls to the Symfony app. On each of those calls, a random user key is passed on the query string. The Symfony app stores that key in its session using myUser->setAttribute(). If the incoming value is different from what it has in session, it overwrites the session value. In pseudo-code (written as if it were synchronous, for clarity):

        # Widget request arrives with ?foo=bar
        if the user attribute 'foo' does not equal 'bar'
            overwrite the user attribute 'foo' with 'bar'
        end

    What I'm noticing is that, on a portal page with multiple widgets (read: multiple requests coming in more or less simultaneously) where the value needs to be overwritten, each request is trying to overwrite. Is this a timing problem?

    When I look at the log prints, I'd expect the first request that arrives to overwrite, and subsequent requests to see that the user attribute they received matches what was just put into the cache by the initial request. In this scenario, it could be that subsequent requests begin (and are checked) even before the first one - the one that should overwrite the cached value - has completely finished. Are session values not really available to subsequent requests until one request has completed entirely, or could there be something else that I'm missing? Thanks.

    Read the article
