Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • How to get SimpleRpcClient.Call() to be a blocking call to achieve synchronous communication with RabbitMQ?

    - by Nick Josevski
    In the .NET version (2.4.1) of RabbitMQ, RabbitMQ.Client.MessagePatterns.SimpleRpcClient has a Call() method with these signatures:

        public virtual object[] Call(params object[] args);
        public virtual byte[] Call(byte[] body);
        public virtual byte[] Call(IBasicProperties requestProperties, byte[] body, out IBasicProperties replyProperties);

    The problem: with various attempts, the method still does not block where I expect it to, so it is never able to handle the response. The question: am I missing something obvious in the setup of the SimpleRpcClient, or earlier with the IModel, IConnection, or even PublicationAddress? More info: I've also tried various parameter configurations of the QueueDeclare() method, with no luck:

        string QueueDeclare(string queue, bool durable, bool exclusive, bool autoDelete, IDictionary arguments);

    Some reference code for my setup of these:

        IConnection conn = new ConnectionFactory { Address = "127.0.0.1" }.CreateConnection();
        using (IModel ch = conn.CreateModel())
        {
            var queueName = ch.QueueDeclare("t.qid", true, true, true, null);
            var client = new SimpleRpcClient(ch, queueName);
            ch.QueueBind(queueName, "exch", "", null);
            // HERE: does not block?
            var replyMessageBytes = client.Call(prop, msgToSend, out replyProp);
        }

    Looking elsewhere: or is it likely there's an issue in my "server side" code? With and without the use of BasicAck(), it appears the client has already continued execution.

  • "Special case" records for foreign key constraints

    - by keithjgrant
    Let's say I have a MySQL table called foo, with a foreign key option_id constrained to the option table. When I create a foo record, the user may or may not have selected an option, and 'no option' is a viable selection. What is the best way to differentiate between 'null' (i.e. the user hasn't made a selection yet) and 'no option' (i.e. the user selected 'no option')? Right now, my plan is to insert a special record into the option table. Let's say that winds up with an id of 227 (this table already has a number of records at this point, so '1' isn't available). I have no need to access this record at a database level, and it would act as nothing more than a placeholder that the foreign key in the foo table can reference. So do I just hard-code '227' in my codebase when I'm creating foo records where the user has selected 'no option'? The hard-coded id seems sloppy, and leaves room for error as the code is maintained down the road, but I'm not really sure of another approach.
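
    One common way around the magic number is to give the sentinel row a stable natural key and look it up by that, so no id ever appears in code. A sketch, where the code column and all names are assumptions rather than part of the original schema (backticks because OPTION is a MySQL reserved word):

        -- one-time setup: tag the sentinel row
        ALTER TABLE `option` ADD COLUMN code VARCHAR(32) NULL UNIQUE;
        UPDATE `option` SET code = 'NO_OPTION' WHERE option_id = 227;

        -- creating a foo record for a user who chose 'no option'
        INSERT INTO foo (option_id)
        SELECT option_id FROM `option` WHERE code = 'NO_OPTION';

    The application can also cache the looked-up id at startup; either way, the 227 lives only in data, not in the codebase.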

  • Global hotkey capture in VB.net

    - by ggonsalv
    I want my app, which is minimized, to capture data selected in another app's window when a hotkey is pressed. My app definitely doesn't have the focus. Additionally, when the hotkey is pressed, I want to present a fading popup (Outlook style) so my app never gets focus. At a minimum I want to capture the window name, process ID and the selected data. The app which has focus is not my application. I know one option is to sniff the clipboard, but are there any other solutions? This is to audit the rate of data entry into another system over which I have no control: a mainframe emulation client program (Attachmate). The plan is: complete data entry in application X; select a certain section of the screen in app X which is proof of data entry (transaction ID); press the magic hotkey, which then 'sends' the selection to my app. From System.Environment or System.Threading I can find the Windows logon. Similarly, I can also capture the time. All the data will be logged to SQL. Once complete, show an Outlook-style popup saying the data entry has been logged. Any thoughts?
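
    A minimal sketch of the hotkey half in VB.NET, using the Win32 RegisterHotKey API; the hotkey choice, class name and handler body are placeholders, not a full solution:

        Imports System.Runtime.InteropServices
        Imports System.Windows.Forms

        Public Class HotKeyForm
            Inherits Form

            <DllImport("user32.dll")> _
            Private Shared Function RegisterHotKey(ByVal hWnd As IntPtr, ByVal id As Integer, ByVal fsModifiers As UInteger, ByVal vk As UInteger) As Boolean
            End Function

            <DllImport("user32.dll")> _
            Private Shared Function UnregisterHotKey(ByVal hWnd As IntPtr, ByVal id As Integer) As Boolean
            End Function

            Private Const WM_HOTKEY As Integer = &H312
            Private Const MOD_CONTROL As UInteger = 2
            Private Const HOTKEY_ID As Integer = 1

            Protected Overrides Sub OnLoad(ByVal e As EventArgs)
                MyBase.OnLoad(e)
                ' Ctrl+F12 as the magic hotkey; fires even when this app lacks focus
                RegisterHotKey(Me.Handle, HOTKEY_ID, MOD_CONTROL, CUInt(Keys.F12))
            End Sub

            Protected Overrides Sub OnFormClosed(ByVal e As FormClosedEventArgs)
                UnregisterHotKey(Me.Handle, HOTKEY_ID)
                MyBase.OnFormClosed(e)
            End Sub

            Protected Overrides Sub WndProc(ByRef m As Message)
                If m.Msg = WM_HOTKEY AndAlso m.WParam.ToInt32() = HOTKEY_ID Then
                    ' Grab the foreground window, its process id, and the selection
                    ' (e.g. via the clipboard) here, then show the fading popup;
                    ' focus never moves to this app.
                End If
                MyBase.WndProc(m)
            End Sub
        End Class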

  • PHP file upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per uploaded file. To implement this I plan the following:

    1. In php.ini, set post_max_size = 10M.
    2. In the PHP script which receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10M.

    While testing this I tried uploading a file of 1 GB and analysed the HTTP traffic, and this is what I found:

    - The entire 1 GB of data is first uploaded to the server temporarily and discarded once the HTTP request completes. I couldn't find out exactly where the file was getting saved (it was not in the temporary directory on the server), but my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server.
    - The PHP script execution started only after completion of the HTTP request (i.e. after uploading the entire 1 GB).

    Now I have two concerns:

    a) People may exploit my server bandwidth by trying to upload large files, which I will have to discard anyway.
    b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB is first uploaded to the server temporarily, which means that for that period it will consume that much memory on my server.

    What's the common solution for this? Am I missing something here?
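
    A sketch of the script-side check, assuming a form field named attachment and an example target path. Note that PHP only runs after the request body has arrived; refusing the transfer mid-stream has to happen in the web server itself (for example Apache's LimitRequestBody or nginx's client_max_body_size):

        <?php
        $maxBytes = 10 * 1024 * 1024; // 10 MB

        // If post_max_size was exceeded, $_FILES and $_POST arrive empty and
        // the Content-Length request header is the only hint left.
        if (isset($_SERVER['CONTENT_LENGTH']) && (int)$_SERVER['CONTENT_LENGTH'] > $maxBytes) {
            header('HTTP/1.1 413 Request Entity Too Large');
            exit('Upload exceeds the 10 MB limit.');
        }

        if (isset($_FILES['attachment']) && $_FILES['attachment']['error'] === UPLOAD_ERR_OK) {
            move_uploaded_file($_FILES['attachment']['tmp_name'],
                               '/var/tickets/' . basename($_FILES['attachment']['name']));
        }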

  • Correct way to do timer function in Python

    - by bwawok
    Hi. I have a GUI application that needs to do something simple in the background (update a wxPython progress bar, but that doesn't really matter). I see that there is a threading.Timer class, but there seems to be no way to make it repeat. So if I use the timer, I end up having to make a new thread on every single execution, like this:

        import threading
        import time

        def DoTheDew():
            print "I did it"
            t = threading.Timer(1, function=DoTheDew)
            t.daemon = True
            t.start()

        if __name__ == '__main__':
            t = threading.Timer(1, function=DoTheDew)
            t.daemon = True
            t.start()
            time.sleep(10)

    This seems like I am making a bunch of threads that each do one silly thing and die, so why not write it as:

        import threading
        import time

        def DoTheDew():
            while True:
                print "I did it"
                time.sleep(1)

        if __name__ == '__main__':
            t = threading.Thread(target=DoTheDew)
            t.daemon = True
            t.start()
            time.sleep(10)

    Am I missing some way to make a timer keep doing something? Either of these options seems silly. I am looking for a timer more like a java.util.Timer that can schedule the task to happen every second. If there isn't a way in Python, which of my methods above is better, and why?
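
    One way to get java.util.Timer-style repetition without hand-rolling a thread per tick is to wrap the rescheduling in a small class; a sketch below. (For the wxPython case specifically, wx.Timer driven by the GUI event loop is usually the more idiomatic choice, since it avoids touching widgets from a worker thread.)

        import threading

        class RepeatingTimer(object):
            """Calls `function` every `interval` seconds until stop()."""

            def __init__(self, interval, function):
                self.interval = interval
                self.function = function
                self._timer = None
                self._stopped = False

            def _run(self):
                if not self._stopped:
                    self.function()
                    self._schedule()  # re-arm for the next tick

            def _schedule(self):
                self._timer = threading.Timer(self.interval, self._run)
                self._timer.daemon = True
                self._timer.start()

            def start(self):
                self._schedule()

            def stop(self):
                self._stopped = True
                if self._timer is not None:
                    self._timer.cancel()

        def do_the_dew():
            print "I did it"

        t = RepeatingTimer(1, do_the_dew)
        t.start()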

  • From a Sinatra::Base object, get the port of the application that includes the base object

    - by Poul
    I have a Sinatra::Base object that I would like to include in all of my web apps. In that base class I have the configure block, which is called on start-up. I would like that configure code to 'register' the service with a centralized database. The information that needs to be sent when registering is the information on how to contact this web service: things like host and port. I then plan on having a monitoring service that will spin over all registered services and occasionally ping them to make sure they are still up and running. In the configure block I am having trouble getting the port information; the self.settings.port variable doesn't seem to work there. a) Any ideas on how to get the port? I already have the host. b) Is there a Sinatra plug-in that already does something like this, so I don't have to write it myself? :-)

        # in my Sinatra::Base code, let's call it register_me.rb
        class RegisterMe < Sinatra::Base
          configure do
            # save host and port information to database
          end

          get '/check_status' do
            # return status
          end
        end

        # in my web service code
        require 'register_me'
        # at this point, sinatra will initialize the RegisterMe object and call configure

        post '/blah' do
          # example of a route for this particular web service
        end
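
    One workaround, sketched below: settings.port is only reliable once the server has actually bound a socket, so defer registration to the first request, where Rack exposes the real host and port. register_service is a hypothetical helper standing in for the database write:

        require 'sinatra/base'

        class RegisterMe < Sinatra::Base
          @@registered = false

          before do
            unless @@registered
              @@registered = true
              # request.host / request.port come from Rack and reflect the
              # address the service is actually answering on
              register_service(request.host, request.port)
            end
          end

          get '/check_status' do
            'ok'
          end

          helpers do
            def register_service(host, port)
              # hypothetical: write host and port to the central database
            end
          end
        end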

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to the staging server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.

    Pushing to the production server: Here's my newest source of confusion. In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live production server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
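
    A sketch of the bare-repo-plus-hook setup described above. All paths, the server name and the release branch name are examples; this variant checks the branch out into the web root rather than cloning, which is a common alternative to the clone-on-receive approach:

        # --- on the staging server ---
        git init --bare /srv/git/site.git

        # --- contents of /srv/git/site.git/hooks/post-receive (chmod +x) ---
        #!/bin/sh
        # check the pushed release branch out into the web root
        GIT_WORK_TREE=/var/www/site git checkout -f release

        # --- from the local repo ---
        git remote add staging user@staging:/srv/git/site.git
        git push staging release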

  • SSH with Perl using file handles, not Net::SSH

    - by jorge
    Before I ask the question: I cannot use the CPAN module Net::SSH. I want to, but cannot; no amount of begging will change this fact. I need to be able to open an SSH connection, keep it open, and read from its stdout and write to its stdin. My approach thus far has been to open it in a pipe, but I have not been able to advance past this; it dies straight away. I understand this causes a fork to occur, and I've written code accordingly (or so I think). Below is a skeleton of what I want; I just need the system to work:

        #!/usr/bin/perl
        use warnings;
        $| = 1;
        $pid = open (SSH,"| ssh user\@host");
        if(defined($pid)){
          if(!$pid){ #child
            while(<>){
              print;
            }
          }else{
            select SSH; $| = 1; select STDIN; #parent
            while(<>){
              print SSH $_;
              while(<SSH>){
                print;
              }
            }
            close(SSH);
          }
        }

    I know, from what it looks like, that I'm trying to recreate system('ssh user@host'). That is not my end goal, but knowing how to do that would bring me much closer to it. Basically, I need a file handle to an open ssh connection where I can read its output and write input to it (not necessarily straight from my program's STDIN; anything I want: variables, yada yada). This includes password input. I know about key pairs; part of the end goal involves making key pairs, but the connection needs to happen regardless of their existence, and if they do not exist, it's part of my plan to make them exist.
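
    A sketch of the bidirectional version using IPC::Open2, which ships with core Perl (the remote commands are examples). One caveat: ssh prompts for passwords on the controlling tty, not stdin, so password entry still needs keys or something Expect-like regardless of how the pipes are wired:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use IPC::Open2;

        # reader handle first, writer handle second
        my $pid = open2(my $ssh_out, my $ssh_in, 'ssh', 'user@host');

        print $ssh_in "uptime\n";
        print $ssh_in "exit\n";
        close $ssh_in;            # send EOF so the remote shell finishes

        while (my $line = <$ssh_out>) {
            print "remote: $line";
        }

        close $ssh_out;
        waitpid($pid, 0);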

  • Developing an online music store

    - by Simon
    We need to develop an application to sell music online. No need to specify that all of this will be done quite legally, and in so doing we have to plan an interface to pay the artists. However, we are confronted with a question: what is the best way to store the music on the server? Should we save it to the server's disk from an HTTP file upload? Should we save it via FTP, or would it be wiser to save it in the database? It goes without saying that we need this to be as safe as possible, so maybe HTTPS is required here. Which do you think is the best way? Maybe another idea? In the all-HTTP case, uploading songs (for administration) is quite long and boring, but the upload is easily linkable to a song that the admin creates in his web application. Compare that to an FTP application that uploads songs to the server, where we then have to list the directory in the admin section and link the correct uploaded file to the song information in the database. I know this is maybe not quite clear (I'm French); tell me, and I will try to explain the parts you don't understand.

  • Do database engines other than SQL Server behave this way?

    - by Yishai
    I have a stored procedure that goes something like this (pseudo code):

        storedprocedure param1, param2, param3, param4
        begin
          if (param4 = 'Y')
          begin
            select * from SOME_VIEW order by somecolumn
          end
          else if (param1 is null)
          begin
            select * from SOME_VIEW
            where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
              and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
            order by somecolumn
          end
          else
            select somethingcompletelydifferent
        end

    All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this:

        storedprocedure param1, param2, param3, param4
        begin
          if (param4 = 'Y')
          begin
            set param2 = null
            set param3 = null
          end
          if (param1 is null)
          begin
            select * from SOME_VIEW
            where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
              and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
            order by somecolumn
          end
          else
            select somethingcompletelydifferent
        end

    ...and it runs again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question is whether this particular "feature" is specific to SQL Server, or whether it is common among RDBMSs in general that:

    - queries that ran fine for two years just stop working as the data grows, and
    - the "new" execution plan destroys the ability of the database server to execute the query, even though a logically equivalent alternative runs just fine.

    This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know if others experience this kind of thing with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious whether others with large, complex databases have similar experiences in other products.
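
    For what it's worth, what's described here is the classic parameter-sniffing trap: the plan is compiled for the first parameter values seen and then reused for very different ones. A sketch of two standard T-SQL workarounds (column types are assumed; names are taken from the post):

        -- 1) Ask for a fresh plan that matches the actual arguments:
        SELECT *
        FROM SOME_VIEW
        WHERE (@param2 IS NULL OR @param2 = SOME_VIEW.Somecolumn2)
          AND (@param3 IS NULL OR @param3 = SOME_VIEW.SomeColumn3)
        ORDER BY somecolumn
        OPTION (RECOMPILE);

        -- 2) Copy the parameters into local variables, so the optimizer
        --    plans from density statistics instead of the sniffed values:
        DECLARE @p2 INT;
        DECLARE @p3 INT;
        SET @p2 = @param2;
        SET @p3 = @param3;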

  • Sybase - values from one table that aren't in another, on opposite ends of a 3-table join

    - by Lazy Bob
    Hypothetical situation: I work for a custom sign-making company, and some of our clients have submitted more sign designs than they're currently using. I want to know which signs have never been used. Three tables are involved.

    Table a - signs for a company:

        sign_pk (unique) | company_pk | sign_description
        1                | 1          | small
        2                | 1          | large
        3                | 2          | medium
        4                | 2          | jumbo
        5                | 3          | banner

    Table b - company locations:

        company_pk | company_location (unique)
        1          | 987
        1          | 876
        2          | 456
        2          | 123

    Table c - signs at locations (it's a bit of a stretch, but each row can have 2 signs, and it's a one-to-many relationship from company location to signs at locations):

        company_location | front_sign | back_sign
        987              | 1          | 2
        987              | 2          | 1
        876              | 2          | 1
        456              | 3          | 4
        123              | 4          | 3

    So, a.company_pk = b.company_pk and b.company_location = c.company_location. What I want is a query that reports that sign_pk 5 isn't at any location. Querying each sign_pk against all of the front_sign and back_sign values is a little impractical, since all the tables have millions of rows. Table a is indexed on sign_pk and company_pk, table b on both fields, and table c only on company_location. The way I'm trying to write it is along the lines of "each sign belongs to a company, so find the signs that are not the front or back sign at any of the locations that belong to the company tied to that sign." My original plan was:

        select a.sign_pk
        from a, b, c
        where a.company_pk = b.company_pk
          and b.company_location = c.company_location
          and a.sign_pk *= c.front_sign
        group by a.sign_pk
        having count(c.front_sign) = 0

    ...just to do the front sign, and then repeat for the back, but that won't run because c is an inner member of an outer join and also in an inner join. This whole thing is fairly convoluted, but if anyone can make sense of it, I'll be your best friend.
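
    A sketch of the "never used anywhere" query written as a NOT EXISTS, which sidesteps the outer-join restriction entirely and covers front and back signs in one pass (table and column names from the post; Sybase ASE accepts this form):

        SELECT a.sign_pk
        FROM a
        WHERE NOT EXISTS (
            SELECT 1
            FROM b, c
            WHERE b.company_pk = a.company_pk
              AND c.company_location = b.company_location
              AND (c.front_sign = a.sign_pk OR c.back_sign = a.sign_pk)
        )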

  • Is it a good idea to use MySQL and Neo4j together?

    - by Sanoj
    I will make an application with a lot of similar items (millions), and I would like to store them in a MySQL database, because I would like to do a lot of statistics and searches on specific values of specific columns. At the same time, I will store the relations between all the items, which form many connected binary-tree-like structures (transitive closure). Relational databases are not good at that kind of structure, so I would like to store all relations in Neo4j, which has good performance for this kind of data. My plan is to have all data except the relations in the MySQL database, and all relations, keyed by item_id, stored in the Neo4j database. When I want to look up a tree, I first search Neo4j for all the item_ids in the tree, then search the MySQL database for the specified items with a query that would look like:

        SELECT * FROM items WHERE item_id = 45 OR item_id = 345435
            OR item_id = 343 OR item_id = 78 OR item_id = 4522
            OR item_id = 676 OR item_id = 443 OR item_id = 4255
            OR item_id = 4345

    Is this a good idea, or am I very wrong? I haven't used graph databases before. Are there any better approaches to my problem? How would the MySQL query perform in this case?
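
    For what it's worth, the chained ORs can be written as an IN list, which MySQL treats as a series of key lookups; with item_id as the primary key this should stay fast even for a few hundred ids per tree. A sketch:

        SELECT * FROM items
        WHERE item_id IN (45, 345435, 343, 78, 4522, 676, 443, 4255, 4345);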

  • MySQL and PHP - Reading multiple insert queries from a file and executing at runtime

    - by SpikETidE
    Hi everyone. I am trying out a back-up and restore system. Basically, when creating the back-up, I make a file which looks like:

        DELETE FROM bank_account;
        INSERT INTO bank_account VALUES('1', 'IB6872', 'Indian Bank', 'Indian Bank', '2', '1', '0', '0000-00-00 00:00:00', '1', '2010-04-13 17:09:05');
        INSERT INTO bank_account VALUES('2', 'IB7391', 'Indian Bank', 'Indian Bank', '3', '1', '0', '0000-00-00 00:00:00', '1', '2010-04-13 17:09:32');

    ...and so on and so forth. When I restore the DB, I just read the queries from the file, save them to a string and then execute it over the DB using mysql_query(). The problem is that the execution stops after the DELETE query with the error: Error in syntax near '; INSERT INTO bank_account VALUES('1', 'IB6872', 'Indian Bank', 'Indian Bank', '2',' at line 1. But when I run the queries directly over the DB using phpMyAdmin, they execute without any errors. As far as I can see, there is no syntax error in the query. Can anyone point out where the glitch might be?

    Thanks and regards....
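
    The likely culprit: mysql_query() executes exactly one statement per call, so the whole semicolon-separated batch fails even though phpMyAdmin (which splits the file itself) is happy. A sketch of a one-statement-at-a-time restore; the naive split assumes no semicolons appear inside data values, and mysqli's multi_query() is another route entirely:

        <?php
        $sql = file_get_contents('backup.sql');

        foreach (explode(';', $sql) as $statement) {
            $statement = trim($statement);
            if ($statement === '') {
                continue; // skip the empty tail after the last semicolon
            }
            if (!mysql_query($statement)) {
                die('Restore failed: ' . mysql_error());
            }
        }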

  • How do I display java.lang.* object allocations in Eclipse profiler?

    - by Martin Wickman
    I am profiling an application using the Eclipse profiler. I am particularly interested in the number of allocated object instances of standard classes such as those from java.lang (for instance java.lang.String) or java.util (for instance java.util.HashMap). I also want to know things like the number of calls to String.equals(), etc. I use the "Object Allocations" tab, and it shows all classes in my application and a count. It also shows all int[], byte[], long[] etc., but there is no mention of any standard Java classes. For instance, this silly code:

        public static void main(String[] args) {
            Object obj[] = new Object[1000];
            for (int i = 0; i < 1000; i++) {
                obj[i] = new StringBuffer("foo" + i);
            }
            System.out.println(obj[30]);
        }

    ...shows up in the Object Allocations tab as 7 byte[]s, 4 char[]s and 2 int[]s. It doesn't matter if I use 1000 iterations or 1. It seems the profiler simply ignores everything that is in any of the java.* packages. The same applies to Execution Statistics as well. Any idea how to display instances of java.* in the Eclipse profiler?

  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_Update() / HMAC_Final()            // RIPEMD-160
        EVP_CipherUpdate() / EVP_CipherFinal()  // CBC Blowfish

    These functions take an unsigned char * pointing to the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian-neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian-neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming. My reasoning is based on the following two properties:

    1. std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture.
    2. Encryption algorithms are, either by design or by implementation, endian-neutral.

    I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, which will perhaps also assist other programmers faced with similar problems.
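
    A small sketch of why the byte-pointer reasoning holds: the HMAC API consumes the buffer strictly as a byte sequence, so the digest below comes out identical on big- and little-endian machines. The key, message and digest choice are arbitrary examples:

        #include <openssl/hmac.h>

        int main(void) {
            const unsigned char key[] = "secret";
            const unsigned char msg[] = "caf\xc3\xa9"; /* UTF-8 bytes: same order on any CPU */
            unsigned char md[EVP_MAX_MD_SIZE];
            unsigned int md_len = 0;

            /* one-shot HMAC-RIPEMD160 over the raw bytes */
            HMAC(EVP_ripemd160(), key, sizeof key - 1,
                 msg, sizeof msg - 1, md, &md_len);

            /* md now holds the same 20 bytes regardless of host endianness;
               any word-sized arithmetic inside the digest is defined on values
               loaded byte-by-byte, not on host memory layout */
            return 0;
        }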

  • Using LINQ to map dynamically (or construct projections)

    - by CodeGrue
    I know I can map between two object types with LINQ using a projection, like so:

        var destModel = from m in sourceModel
                        select new DestModelType { A = m.A, C = m.C, E = m.E };

    ...where:

        class SourceModelType
        {
            string A { get; set; }
            string B { get; set; }
            string C { get; set; }
            string D { get; set; }
            string E { get; set; }
        }

        class DestModelType
        {
            string A { get; set; }
            string C { get; set; }
            string E { get; set; }
        }

    But what if I want to make something like a generic method to do this, where I don't know specifically the two types I am dealing with? It would walk the "Dest" type and match its properties with the corresponding "Source" properties. Is this possible? Also, to achieve deferred execution, I would want it to return an IQueryable. For example:

        public IQueryable<TDest> ProjectionMap<TSource, TDest>(IQueryable<TSource> sourceModel)
        {
            // dynamically build the LINQ projection based on the properties in TDest
            // return the IQueryable containing the constructed projection
        }

    I know this is challenging, but I hope not impossible, because it will save me a bunch of explicit mapping work between models and viewmodels.
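
    This is doable with expression trees. A sketch, assuming property matching by name only and that every TDest property has a same-named, assignable counterpart on TSource with a public parameterless constructor on TDest:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class ProjectionMapper
        {
            public static IQueryable<TDest> ProjectionMap<TSource, TDest>(IQueryable<TSource> sourceModel)
            {
                // builds: s => new TDest { A = s.A, C = s.C, ... }
                var param = Expression.Parameter(typeof(TSource), "s");

                var bindings = typeof(TDest).GetProperties()
                    .Where(dest => typeof(TSource).GetProperty(dest.Name) != null)
                    .Select(dest => Expression.Bind(dest, Expression.Property(param, dest.Name)));

                var body = Expression.MemberInit(Expression.New(typeof(TDest)), bindings);
                var selector = Expression.Lambda<Func<TSource, TDest>>(body, param);

                // Select on IQueryable keeps execution deferred
                return sourceModel.Select(selector);
            }
        }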

  • How to catch YouTube embed code and turn into URL

    - by Jonathan Vanasco
    I need to strip YouTube embed codes down to their URL only. This is the exact opposite of all but one question on Stack Overflow; most people want to turn the URL into an embed code. That question addresses the usage pattern I want, but is tied to a specific embed code's regex (Strip YouTube Embed Code Down to URL Only). I'm not familiar with how YouTube has offered embeds over the years, or how the sizes differ. According to their current site, there are two possible embed templates and a variety of options. If that's it, I can handle a regex myself, but I was hoping someone had more knowledge they could share, so I could write a proper regex pattern that matches them all and not run into endless edge cases. The full use case scenario:

    - A user enters content in a web-based WYSIWYG editor.
    - The backend cleans out YouTube and other embed codes, and reformats approved embeds into an internal format as the text is all converted to Markdown.
    - On display, the appropriate current template/code is generated for YouTube or another approved third-party site.

    At a previous company, our tech team devised a plan where YouTube videos were embedded by listing the URL only. That worked great, but it was in a CMS where everyone was trained. I'm trying to create a similar storage scheme, but for user-generated content.
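
    A sketch of the extraction in Python, covering the two embed shapes in use at the time: the old object/embed flash markup (URLs like youtube.com/v/ID) and the iframe style (youtube.com/embed/ID). The 11-character id alphabet is an assumption that has held for YouTube ids, but these patterns make no claim to cover every historical variant:

        import re

        VIDEO_ID = re.compile(r'youtube\.com/(?:v/|embed/)([A-Za-z0-9_-]{11})')

        def embed_to_url(embed_html):
            """Return the canonical watch URL for the first video found, else None."""
            m = VIDEO_ID.search(embed_html)
            if m:
                return 'http://www.youtube.com/watch?v=' + m.group(1)
            return None

        # usage
        print embed_to_url('<iframe src="http://www.youtube.com/embed/dQw4w9WgXcQ"></iframe>')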

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out? What I'm doing so far is running the GUI functional automation on the app under a debugger, so that I can break out of deadlocks and catch crashes; I also plan to make a BoundsChecker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-lint in the hope of locating potential deadlocks, but haven't had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per document. The locking mechanism I'm using is based on an object that holds a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies instead. What other tests would you carry out?
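
    For reference, a sketch of the scoped-lock-with-timeout idea described above, written against MFC's CSingleLock; the timeout value and the TRACE warning are illustrative choices, not the post's actual code:

        #include <afxmt.h>  // CMutex, CSingleLock

        class ScopedLock
        {
        public:
            explicit ScopedLock(CMutex& mutex, DWORD timeoutMs = 5000)
                : m_lock(&mutex, FALSE)  // don't acquire yet
            {
                if (!m_lock.Lock(timeoutMs))
                {
                    // not acquired within the timeout: likely heavy contention
                    // or a deadlock; surface it instead of hanging silently
                    TRACE(_T("ScopedLock: timed out waiting for mutex\n"));
                }
            }

            ~ScopedLock()
            {
                if (m_lock.IsLocked())
                    m_lock.Unlock();
            }

        private:
            CSingleLock m_lock;
        };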

  • Memory leaks while using array of double

    - by Gacek
    I have a piece of code that operates on large arrays of double (containing about 6000 elements at least) and executes several hundred times (usually 800). When I use a standard loop, like this:

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
        }

    ...the memory usage rises by about 40 MB (from 40 MB at the beginning of the loop to 80 MB at the end). When I force the garbage collector to execute at every iteration, the memory usage stays at the level of 40 MB (the rise is insignificant):

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
            GC.Collect();
        }

    But the execution time is 3 times longer! (And that is crucial.) How can I get C# to use the same area of memory instead of allocating new ones? Note: I have access to the code of someObject's class, so if it were needed, I could change it.
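
    Since someObject can be changed, a sketch of the usual fix: have the producer fill a caller-owned buffer instead of allocating a fresh array per call. FillOutput is a hypothetical method standing in for a rewritten producesOutput:

        double[] singleRow = new double[6000];  // allocated once, reused 800 times
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            // hypothetical signature: void FillOutput(double[] destination)
            someObject.FillOutput(singleRow);
            // ... do something with singleRow ...
        }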

  • How do I make this nested for loop, testing sums of cubes, more efficient?

    - by Brian J. Fink
    I'm trying to iterate through all the combinations of pairs of positive long integers in Java and test the sum of their cubes to discover whether it's a Fibonacci number. I'm currently doing this by using the value of the outer loop variable as the inner loop's upper limit, with the effect that the outer loop runs a little slower each time. Initially it appeared to run very quickly: I was up to 10 digits within minutes. But now, after 2 full days of continuous execution, I'm only somewhere in the middle range of 15 digits. At this rate it may end up taking a whole year just to finish running this program. The code is below:

        import java.lang.*;
        import java.math.*;

        public class FindFib {
            public static void main(String args[]) {
                long uLimit = 9223372036854775807L; // long maximum value
                BigDecimal PHI = new BigDecimal((1D + Math.sqrt(5D)) / 2D); // Golden Ratio
                for (long a = 1; a <= uLimit; a++)      // outer loop, 1 to maximum
                    for (long b = 1; b <= a; b++) {     // inner loop, 1 to current outer
                        // cube the numbers and add
                        BigDecimal c = BigDecimal.valueOf(a).pow(3).add(BigDecimal.valueOf(b).pow(3));
                        System.out.print(c + " ");      // output result
                        // upper and lower limits of the interval for the Mobius test: [c*PHI-1/c, c*PHI+1/c]
                        BigDecimal d = c.multiply(PHI).subtract(BigDecimal.ONE.divide(c, BigDecimal.ROUND_HALF_UP)),
                                   e = c.multiply(PHI).add(BigDecimal.ONE.divide(c, BigDecimal.ROUND_HALF_UP));
                        // Mobius test: if there is an integer in the interval (floor values unequal), Fibonacci number!
                        if (d.toBigInteger().compareTo(e.toBigInteger()) != 0)
                            System.out.println();       // line feed
                        else
                            System.out.print("\r");     // carriage return instead
                    }
                // display final message
                System.out.println("\rDone. ");
            }
        }

    Now, the use of BigDecimal and BigInteger was deliberate; I need them to get the necessary precision. Is there anything other than my variable types that I could change to gain better efficiency?
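
    One change that helps regardless of loop structure is the membership test itself: n is a Fibonacci number exactly when 5n²+4 or 5n²-4 is a perfect square, which needs only exact integer arithmetic and no BigDecimal division. A sketch; the integer square root is a simple Newton iteration, since BigInteger.sqrt() only arrived in Java 9:

        import java.math.BigInteger;

        public class FibCheck {
            // integer square root (floor) via Newton's method
            static BigInteger isqrt(BigInteger n) {
                BigInteger x = BigInteger.ONE.shiftLeft(n.bitLength() / 2 + 1);
                while (true) {
                    BigInteger y = x.add(n.divide(x)).shiftRight(1);
                    if (y.compareTo(x) >= 0) return x;
                    x = y;
                }
            }

            static boolean isPerfectSquare(BigInteger n) {
                BigInteger r = isqrt(n);
                return r.multiply(r).equals(n);
            }

            // n is Fibonacci iff 5n^2+4 or 5n^2-4 is a perfect square
            static boolean isFibonacci(BigInteger n) {
                BigInteger fiveN2 = n.multiply(n).multiply(BigInteger.valueOf(5));
                return isPerfectSquare(fiveN2.add(BigInteger.valueOf(4)))
                    || isPerfectSquare(fiveN2.subtract(BigInteger.valueOf(4)));
            }

            public static void main(String[] args) {
                // 8 is both a cube (2^3) and a Fibonacci number
                System.out.println(isFibonacci(BigInteger.valueOf(8)));  // true
                System.out.println(isFibonacci(BigInteger.valueOf(9)));  // false
            }
        }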

  • Sourcing a shell script, while running with sudo

    - by WishCow
    I would like to write a shell script that sets up a Mercurial repository, and allow all users in the group "developers" to execute this script. The script is owned by the user "hg" and works fine when run directly. The problem comes when I try to run it as another user, using sudo: the execution halts with a "permission denied" error when it tries to source another file. The script in question, create_repo.sh:

        #!/bin/bash
        source colors.sh
        REPOROOT="/srv/repository/mercurial/"
        ... rest of the script ....

    Permissions of create_repo.sh and colors.sh:

        -rwxr--r-- 1 hg hg  551 2011-01-07 10:20 colors.sh
        -rwxr--r-- 1 hg hg 1137 2011-01-07 11:08 create_repo.sh

    Sudoers setup:

        %developer ALL = (hg) NOPASSWD: /home/hg/scripts/create_repo.sh

    What I'm trying to run:

        user@nebu:~$ id
        uid=1000(user) gid=1000(user) groups=4(adm),20(dialout),24(cdrom),46(plugdev),105(lpadmin),113(sambashare),116(admin),1000(user),1001(developer)
        user@nebu:~$ sudo -l
        Matching Defaults entries for user on this host:
            env_reset
        User user may run the following commands on this host:
            (ALL) ALL
            (hg) NOPASSWD: /home/hg/scripts/create_repo.sh
        user@nebu:~$ sudo -u hg /home/hg/scripts/create_repo.sh
        /home/hg/scripts/create_repo.sh: line 3: colors.sh: Permission denied

    So the script is executed, but halts when it tries to include the other script. I have also tried:

        user@nebu:~$ sudo -u hg /bin/bash /home/hg/scripts/create_repo.sh

    ...which gives the same result. What is the correct way to include another shell script, if the script may be run by a different user through sudo?
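
    A sketch of the usual fix, not a diagnosis of this exact box: source colors.sh resolves relative to the caller's working directory (and PATH), not the script's location, so under sudo it looks in the wrong place. Deriving the path from $0 makes the include independent of where the script is invoked from:

        #!/bin/bash
        # resolve the directory this script lives in, then source by absolute path
        SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
        source "$SCRIPT_DIR/colors.sh"
        REPOROOT="/srv/repository/mercurial/"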

  • Updating Android Tab Icons

    - by lnediger
    I have an activity which has a TabHost containing a set of TabSpecs, each with a ListView containing the items to be displayed by the tab. When each TabSpec is created, I set an icon to be displayed in the tab header. The TabSpecs are created this way within a setupTabs() method, which loops to create the appropriate number of tabs:

        TabSpec ts = mTabs.newTabSpec("tab");
        ts.setIndicator("TabTitle", iconResource);
        ts.setContent(new TabHost.TabContentFactory() {
            public View createTabContent(String tag) {
                ...
            }
        });
        mTabs.addTab(ts);

    There are a couple of instances where I want to be able to change the icon displayed in each tab during the execution of my program. Currently I am deleting all the tabs and calling the above code again to re-create them:

        mTabs.getTabWidget().removeAllViews();
        mTabs.clearAllTabs(true);
        setupTabs();

    Is there a way to replace the icon that is being displayed without deleting and re-creating all of the tabs?
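
    A sketch of an in-place swap: the stock tab indicator is a small layout whose icon lives under android.R.id.icon, so the ImageView can be fetched from the TabWidget and updated directly. This assumes the default indicator layout; a custom indicator view would need its own ids:

        // tabIndex is the position of the tab whose icon should change
        View tabView = mTabs.getTabWidget().getChildTabViewAt(tabIndex);
        ImageView icon = (ImageView) tabView.findViewById(android.R.id.icon);
        icon.setImageResource(newIconResource);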

  • Horizontal UIView's controls unresponsive... or how to foul up a view hierarchy

    - by Oldmicah
    Hello all, I'm working on an app that has two sections: a config section and a results section. My config section needs to be two separate views (horizontal and vertical, and yes, I can hear the intake of breath from here), with one rotatable view for the results. Because of layout constraints and a lot of pain around rotation, I'm not using a navigation controller. I've been experiencing the joys of rotation experimentation and have settled on keeping my views contained as subviews of my view controller, i.e. controller.view.subviews = configH, configV, and results. I then use controller.view's bringSubviewToFront to bring either the configH, configV, or results view to the front. Rotation works: cue (humor intended) the angelic choirs... almost. What's happening is that my configV buttons are responsive, but when the device (or simulator) is rotated, my configH controls are not. (configV is the second subview added, but the first one to be brought to the front, because the app comes up in portrait mode.) The controls on the results view also work. Plan B was to assign controller.view directly to configH, configV, or results. All of my controls now work, but rotation is now fouled up. Question 1: is there a better way to do this (a horizontal and a vertical config view plus a rotatable results view)? Question 2: does the above suggest a design issue, or is it more likely that my addled brain is just missing something in my own code? (Nothing from the peanut gallery, please.) Many thanks!
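
    One thing worth ruling out with a two-line experiment: a view that has been sent behind can still sit on top of the responder chain if frames overlap after rotation, and a transparent view on top swallows touches. A sketch of hiding the inactive config view instead of relying on z-order alone (property names are illustrative), which makes hit-testing unambiguous:

        // when rotating to landscape: show configH, take configV out of hit-testing
        self.configH.hidden = NO;
        self.configV.hidden = YES;
        [self.view bringSubviewToFront:self.configH];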

  • What does it mean for an OS to "execute within user processes"? Do any modern OSes use that approach?

    - by Chris Cooper
    I have recently become interested in operating systems, and a friend of mine lent me a book called Operating Systems: Internals and Design Principles (I have the third edition), published in 1998. It's been a very interesting book so far, but I have come to the part dealing with process control, and it uses UNIX System V as one of its examples of an operating system that executes within user processes. This concept has struck me as a little strange. First of all, does this mean that OS instructions and data are stored in each of the user processes? Probably not, because that would be an absurdly redundant scheme. But if not, then what does it mean to "execute within" a user process? Do any modern operating systems use this approach? It seems much more logical to have the operating system execute as its own process, or even independently of all processes if you're short on memory. All the inter-accessibility of process data required for this layout seems to greatly complicate things. (But maybe that's just because I don't quite get the concept. ;D) Here is what the book says: "Execution within User Processes: An alternative that is common with operating systems on smaller machines is to execute virtually all operating system software in the context of a user process. ..."

  • Using Dispose on a Singleton to Cleanup Resources

    - by ImperialLion
    The question I have might be more to do with semantics than with the actual use of IDisposable. I am working on implementing a singleton class that is in charge of managing a database instance created during the execution of the application. When the application closes, this database should be deleted. Right now this delete is handled by a Cleanup() method on the singleton, which the application calls when it is closing. As I was writing the documentation for Cleanup(), it struck me that I was describing what a Dispose() method should be used for, i.e. cleaning up resources. I had originally not implemented IDisposable because it seemed out of place on my singleton: I didn't want anything to dispose the singleton itself. There isn't currently a reason, but in the future there might be one, for Cleanup() to be called while the singleton still needs to exist. I think I can include GC.SuppressFinalize(this) in the Dispose method to make this feasible. My question therefore has multiple parts:

    1) Is implementing IDisposable on a singleton fundamentally a bad idea?
    2) Am I just mixing semantics here by having a Cleanup() instead of a Dispose()? Since I'm disposing resources, should I really use Dispose()?
    3) Will implementing Dispose() with GC.SuppressFinalize(this) ensure that my singleton is not actually destroyed, in the case where I want it to live on after a call to clean up the database?
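
    A sketch of one way to reconcile the two, with one correction to the premise: GC.SuppressFinalize only skips the finalizer; it neither destroys nor preserves the object, and the static reference alone is what keeps a singleton alive. The class and member names below are illustrative:

        using System;

        public sealed class DatabaseManager : IDisposable
        {
            private static readonly DatabaseManager instance = new DatabaseManager();
            public static DatabaseManager Instance { get { return instance; } }

            private DatabaseManager() { }

            // The domain operation: deletes the scratch database. Callable on
            // its own; the singleton keeps living afterwards.
            public void Cleanup()
            {
                // drop the temporary database here
            }

            // IDisposable is satisfied by delegating to Cleanup(), so callers
            // expecting the standard pattern (using blocks, containers) work.
            public void Dispose()
            {
                Cleanup();
                GC.SuppressFinalize(this); // no finalizer work remains
            }
        }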
