Search Results

Search found 7742 results on 310 pages for 'bin packing'.

Page 6/310 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Hugh Bin-Haad's SQL Rumours

    - by Paul Nielsen
    Insider rumours and gossip from the murky world of the Database Industry, and from the colourful characters that inhabit it: http://www.simple-talk.com/sql/editors-corner/insider-insights/ ...(read more)

    Read the article

  • Installed Ruby 1.9.2 but new gems won't create scripts into /usr/bin

    - by karatedog
    I previously had Ruby 1.8 on my Ubuntu 10.10, which I removed through Synaptic. Then I installed the ruby 1.9.1 package, also via Synaptic (which reports itself as version 1.9.2). Then I installed the ruby-debug19 and rspec gems with sudo gem install ruby-debug19 rspec. However I can't start rdebug or rspec, although I can invoke the debugger from inside my Ruby script, so the debugger itself is working. I inspected the starting scripts rdebug and rspec and realized that they are still the old scripts from the Ruby 1.8 install. In other words, the current 1.9 install of these gems hasn't created the starting scripts anywhere. What is the easiest solution for a lazy soul like me? It looks like removing and reinstalling Ruby 1.9.2 won't help, and installing these gems over and over again won't create the starting scripts.

    Read the article

  • Programmatically Determining Bin Path

    - by Andy
    I'm working on a web app called pj, and there is a bin directory and a src folder. The relative paths before I deploy the app will look something like pj/bin and pj/src/pj/script.py. However, after deployment, the relative paths will look like pj_dep/deployed/bin and pj_dep/deployed/lib/python2.6/site-packages/pj/script.py. Question: within script.py, I am trying to find the path of a file in the bin directory, which leads to two different behaviors in the dev and deployment environments. If I do os.path.join(os.path.dirname(__file__), 'bin') to get the path for the dev environment, I will have a different path in the deployment environment. Is there a more generalized way I can find the bin directory, so that I do not need to rely on an if statement to decide how many directories to go up based on the current environment? That doesn't seem flexible and might cause other issues later on when the code is moved.
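
    A minimal sketch of one general approach (my own illustration, not the poster's code; the helper name find_bin_dir and the walk-upward strategy are assumptions): start from the script's own directory and walk toward the filesystem root until a directory containing a bin subfolder appears, which covers both the src layout and the deployed site-packages layout without an environment-specific if statement.

      import os

      def find_bin_dir(start=None):
          # Walk upward from this file's directory until a 'bin' sibling is found.
          origin = os.path.abspath(start or os.path.dirname(__file__))
          path = origin
          while True:
              candidate = os.path.join(path, 'bin')
              if os.path.isdir(candidate):
                  return candidate
              parent = os.path.dirname(path)
              if parent == path:  # reached the filesystem root without finding 'bin'
                  raise IOError("no 'bin' directory found above %s" % origin)
              path = parent

    In the layouts described above this would resolve to pj/bin during development and to pj_dep/deployed/bin after deployment, since each is the first bin directory encountered on the way up.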

    Read the article

  • 403 forbidden for cgi-bin/ and cannot protect site with password

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 Forbidden error for cgi-bin/. I have created a new /var/www2/ and I can access it fine; PHP runs fine. The second problem is that I cannot password protect it. I first tried htpasswd: it asks for a login, but every time I log in it keeps asking again. It's getting frustrating; I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available. httpd.conf is empty, but I have apache2.conf:

      NameVirtualHost 12.12.12.12.
      <VirtualHost 12.12.12.12>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www2/
        <Directory />
          Options FollowSymLinks
          AllowOverride None
        </Directory>
        <Directory /var/www2/>
          Options Indexes FollowSymLinks MultiViews
          AllowOverride None
          Order allow,deny
          allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /var/www2/cgi-bin/
        <Directory "/var/www2/cgi-bin/">
          AllowOverride Options
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          AddHandler cgi-script cgi pl
          Order allow,deny
          Allow from all
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
        ServerSignature On
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
          Options Indexes MultiViews FollowSymLinks
          AllowOverride None
          Order deny,allow
          Deny from all
          Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
      </VirtualHost>

    Read the article

  • Packing values into a single int

    - by user303907
    Hello. Let's say I have a couple of variables such as apple, orange, and banana: I have 8 apples, 1 orange, and 4 bananas. Is it possible to somehow convert those values into a single integer, and also recover the original values from the computed integer? I found an example online:

      int age, gender, height;
      short packed_info;
      . . .
      // packing
      packed_info = (((age << 1) | gender) << 7) | height;
      . . .
      // unpacking
      height = packed_info & 0x7f;
      gender = (packed_info >>> 7) & 1;
      age = (packed_info >>> 8);

    But it doesn't seem to work as it should when I enter random numbers.
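
    A likely reason, shown as a minimal sketch rather than a fix to the poster's exact code: packing only round-trips correctly when every value fits in the bits reserved for it. The 4-bit fields below (values 0-15 each) are my own assumption, chosen because they comfortably hold counts like 8, 1 and 4; arbitrary "random numbers" overflow their field and corrupt the neighboring values, which is exactly the failure described above.

      def pack(apples, oranges, bananas):
          # 4 bits per field, so every count must be in the range 0..15
          assert 0 <= apples <= 15 and 0 <= oranges <= 15 and 0 <= bananas <= 15
          return (apples << 8) | (oranges << 4) | bananas

      def unpack(packed):
          # Reverse the shifts and mask each 4-bit field back out
          bananas = packed & 0xF
          oranges = (packed >> 4) & 0xF
          apples = (packed >> 8) & 0xF
          return apples, oranges, bananas

      print(unpack(pack(8, 1, 4)))   # -> (8, 1, 4)

    The same idea applies to the age/gender/height example: with only 7 bits reserved for height, any height above 127 breaks the round trip.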

    Read the article

  • How to assign output bin on HP4730?

    - by user38138
    We have an application that prints batches of invoices, and when two users print at the same time their jobs get interspersed, because the application actually generates a separate job for each invoice. The HP4730 has a bin for photocopies/fax and the bulk bin. A proposal was to create a separate printer definition for each user and somehow map their output to a different bin to keep their jobs "together". However, we can't see any setting to control this in the printer properties. Does anyone know if that's possible? To make it more interesting, this is within Citrix... Help!

    Read the article

  • configuring cgi-bin using .htaccess

    - by Alexandru
    I'm trying to configure a directory as cgi-bin using .htaccess, but when I try to access the executables, the files are downloaded. I'm using apache2.2. What is the problem? My .htaccess looks like:

      # cat www/cgi-bin/.htaccess
      Options +ExecCGI
      AddHandler cgi-script cgi pl

    File permissions are:

      # ls -1la www/cgi-bin/
      total 60
      drwxr-xr-x 2 root root  4096 iun 10 19:22 .
      drwxr-xr-x 5 root root  4096 iun 10 19:18 ..
      -rw-r--r-- 1 root root    46 iun 10 19:23 .htaccess
      -rwxr-xr-x 1 root root 15358 iun 10 19:23 paperload.cgi
      -rwxr-xr-x 1 root root 12728 iun 10 19:23 papers.cgi
      -rwxr-xr-x 1 root root 12593 iun 10 19:23 paperview.cgi

    Read the article

  • How to disable Empty Recycle Bin confirmation dialog?

    - by kjo
    In Windows 7 (at least) when one chooses Empty Recycle Bin from the RB menu, one gets prompted with a dialog like: Are you sure you want to permanently delete these 11 items? (modulo the actual number of items mentioned). Is there a way to disable this dialog (without disabling the use of the RB altogether)? NOTE 1: This dialog comes up even if one has unchecked the box labeled "Display delete confirmation dialog" in the RB Properties dialog (as long as the RB has not been disabled). NOTE 2: As alluded to above, any answer that entails disabling the use of the Recycle Bin altogether is explicitly ruled out. This includes any answer that involves selecting the button labeled "Don't move files to the Recycle Bin. Remove files immediately when deleted." in the RB Preferences window. NOTE 3: This question has nothing to do with Outlook Express.

    Read the article

  • How to get the PID of a process started by /bin/su -c

    - by crash3k
    I'm writing an init.d script for a Java app, but the Java app should be run by another user. (The OS I'm using is Debian Squeeze.) I already have this:

      /bin/su - $USER - c "cd $PATH;echo $PASSWORD | $JAVA -Xmx256m -jar $PATH/app.jar -d > /dev/null" &
      PID=$!
      /bin/su - $USER - c "echo $PID > $PIDFILE"

    But this will of course only save the PID of the /bin/su process instead of the PID of the created Java process.

    Read the article

  • Lost Linux root password - Recovery mode and init=/bin/bash fail

    - by Albeit
    I lost/forgot the root password to a server sitting beside me and am trying to reset it. I would rather not have to wipe and re-install or use a Live CD (server is running Ubuntu Server 12.04). What I've tried so far... 1) Boot into "Recovery mode" from Grub2 boot menu then drop into root shell prompt. I am prompted to "Give root password for maintenance". No-go. 2) Change the boot parameters for the main boot option to include "rw" and "init=/bin/bash". When I then boot with Ctrl-X, the screen goes black, and nothing happens (I've waited five minutes). init=/bin/sh and init=/bin/static-sh both do the same thing, while init=/sbin/init boots as normal. Is there anything else I can try to reset the root password? Thank you!

    Read the article

  • E: Sub-process /usr/bin/dpkg returned an error code (100)

    - by user67011
    Hello, I am running on Xen, Debian 5.0-i386-default. I hadn't touched my VPS in two months; then last night I ran the following command:

      myserver:/usr/bin# apt-get upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages have been kept back:
        makepasswd
      The following packages will be upgraded:
        libc6 libc6-dev libc6-xen libmysqlclient15off locales mysql-client mysql-client-5.0 mysql-common mysql-server mysql-server-5.0
      10 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
      Need to get 0B/50.1MB of archives.
      After this operation, 483kB of additional disk space will be used.
      Do you want to continue [Y/n]? y
      Preconfiguring packages ...
      E: Sub-process /usr/bin/dpkg returned an error code (100)

    I googled and it seems to be a permission thing for "dpkg". However, I cd into /usr/bin and there's no dpkg binary! Please help, thanks.

    Read the article

  • Installing Rails 3 - /usr/local/bin/rails: No such file or directory

    - by viatropos
    I just ran these two commands:

      sudo gem install rails --pre
      sudo gem install railties --pre

    Now when I run rails myapp, I get this:

      -bash: /usr/local/bin/rails: No such file or directory

    Here's some system info:

      $ ruby -v
      ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin9.7.0]
      $ sudo gem update --system
      Updating RubyGems
      Nothing to update

    I tried copy/pasting the bin/rails file into /usr/local/bin/rails, and changing permissions with sudo chmod 755 /usr/local/bin/rails, but that doesn't work. Any ideas how to get up and running?

    Read the article

  • Can't configure PAM + LDAP on Debian Lenny - Getting error=49 on server logs

    - by Jorge Suárez de Lis
    I've been migrating some servers and desktops using Ubuntu 10.04 from getting the users from an old OpenLDAP implementation to a newer Centos Active Directory. I haven't had any problems so far, until I reached a Debian Lenny server. I've set up the server as the others, setting /etc/ldap.conf and /etc/ldap/ldap.conf. However, when I issue "getent passwd", I get nothing from the LDAP server. Reading the pam_ldap manpage, I realized that /etc/ldap.conf was not an accepted file by pam_ldap -it worked with Ubuntu though-, so I renamed it to /etc/pam_ldap.conf. Same result. However, once I've changed the name of this file, when I login using SSH I get this on the LDAP server logs: [20/Jul/2012:11:19:40 +0200] conn=16501 fd=155 slot=155 connection from x.x.x.50 to 10.1.176.237 [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3 [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es" [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3 [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 RESULT err=49 tag=97 nentries=0 etime=0 The password isn't working. I don't know that could be wrong, anything else seems to be OK. That user/password is working from another clients: [20/Jul/2012:11:29:39 +0200] conn=16528 fd=188 slot=188 connection from x.x.x.224 to 10.1.176.237 [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3 [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es" [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3 [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=jorge.suarez,ou=people,ou=citius,dc=inv,dc=usc,dc=es" I'm using SSHA for storing passwords on the LDAP server. Maybe this is not supported by Debian Lenny? On pam_ldap.conf, I've set up this, as in all the other servers: # Do not hash the password at all; presume # the directory server will do it, if # necessary. This is the default. pam_password md5 Also tried clear, but it didn't work. Anyways, it's weird that issuing getent passwd still gets me no users. However, if I use pamtest from the package libpam-dotfile to test login, it works. # pamtest ssh jorge.suarez Trying to authenticate <jorge.suarez> for service <ssh>. Password: Authentication successful. # pamtest foo jorge.suarez Trying to authenticate <jorge.suarez> for service <foo>. Password: Authentication successful. But "su" won't work also: # su jorge.suarez Id. 
descoñecido: jorge.suarez Just the output from getent passwd : # getent passwd root:x:0:0:root:/root:/bin/bash daemon:x:1:1:daemon:/usr/sbin:/bin/sh bin:x:2:2:bin:/bin:/bin/sh sys:x:3:3:sys:/dev:/bin/sh sync:x:4:65534:sync:/bin:/bin/sync games:x:5:60:games:/usr/games:/bin/sh man:x:6:12:man:/var/cache/man:/bin/sh lp:x:7:7:lp:/var/spool/lpd:/bin/sh mail:x:8:8:mail:/var/mail:/bin/sh news:x:9:9:news:/var/spool/news:/bin/sh uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh proxy:x:13:13:proxy:/bin:/bin/sh www-data:x:33:33:www-data:/var/www:/bin/sh backup:x:34:34:backup:/var/backups:/bin/sh list:x:38:38:Mailing List Manager:/var/list:/bin/sh irc:x:39:39:ircd:/var/run/ircd:/bin/sh gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh nobody:x:65534:65534:nobody:/nonexistent:/bin/sh libuuid:x:100:101::/var/lib/libuuid:/bin/sh Debian-exim:x:101:103::/var/spool/exim4:/bin/false statd:x:102:65534::/var/lib/nfs:/bin/false sshd:x:104:65534::/var/run/sshd:/usr/sbin/nologin luser:x:1000:1000:Usuario local de Burdeos,,,:/home/luser:/bin/bash messagebus:x:105:107::/var/run/dbus:/bin/false sge-admin:x:1001:1001:Administrador do SGE,,,:/home/cluster/sge-admin:/bin/bash ntp:x:107:110::/home/ntp:/bin/false haldaemon:x:108:111:Hardware abstraction layer,,,:/var/run/hald:/bin/false vde2-net:x:109:114::/var/run/vde2:/bin/false uml-net:x:110:115::/home/uml-net:/bin/false polkituser:x:111:116:PolicyKit,,,:/var/run/PolicyKit:/bin/false Debian-pxe:x:113:65534:Dummy user for Debian pxe package,,,:/home/Debian-pxe:/bin/false Nscd was stopped from the beginning.

    Read the article

  • 500 internal server error running php file in cgi-bin

    - by vvvvvvv
    A 500 internal server error is shown when I access http://mysite.com/cgi-bin/test.php. test.php contains:

      <p> title here</p>
      <?php echo "hi"; ?>

    The error log shows:

      (8)Exec format error: exec of '/var/www/cgi-bin/test.php' failed
      Premature end of script headers: test.php

    Solved it by adding AddHandler application/x-httpd-php .php

    Read the article

  • lwp-rget changes file format automatically to .bin?

    - by Hector Tosado Jimenez
    I'm trying to recursively get an HTML page, to fetch everything that's there and save it to my directory. lwp-rget from Perl allows me to do this, but I'm having the problem that it's getting all the files and changing them from .rpm, .xml, .html, etc. to .bin. I've been trying to use --keepext=application/xml (or any type), but it continues to save the files as .bin. Is there any way I can stop that automatic renaming? Thanks.

    Read the article

  • What is /usr/bin/[ ?

    - by Josh
    I was just poking around in /usr/bin and I found an ELF binary file called [. /usr/bin/[. I have never heard of this file and my first thought was that it was a clever way of hiding a program, possibly a trojan. However it's present on all my CentOS servers and seems to have no manual entry. I can hazard a guess as to what it is but I was looking for a more authoritative answer...

    Read the article

  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  The last post discussed the ConcurrentDictionary<T> .  Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux. Recap As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace.  Of these, the final concurrent collection we will examine is the ConcurrentBag and a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentBag<T> – Thread-safe unordered collection. Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries.  Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order.  This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it’s possible new items could be consumed before older items given the right circumstances for a period of time. So why would you ever want a container that can be unfair?  Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit.  Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don’t care what item it is or how long its been waiting. The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread.  This means that adding or removing to a thread-local list requires very low synchronization.  The problem comes in where a thread goes to consume an item but it’s local list is empty.  In this case the bag performs “work-stealing” where it will rob an item from another thread that has items in its list.  This requires a higher level of synchronization which adds a bit of overhead to the take operation. So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less-than-idea in situations where some threads are dedicated producers and the other threads are dedicated consumers because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items. 
Like the other concurrent collections, there are some curiosities to keep in mind: IsEmpty(), Count, ToArray(), and GetEnumerator() lock collection Each of these needs to take a snapshot of whole bag to determine if empty, thus they tend to be more expensive and cause Add() and Take() operations to block. ToArray() and GetEnumerator() are static snapshots Because it is based on a snapshot, will not show subsequent updates after snapshot. Add() is lightweight Since adding to the thread-local list, there is very little overhead on Add. TryTake() is lightweight if items in thread-local list As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack() and ConcurrentQueue(), however if the local thread list is empty, it must steal work from another thread, which is more expensive. Remember, a bag is not ideal for all situations, it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete.  The main point is that the bag works best when each thread both takes and adds items. For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bare with me while I use this contrived example for simplicity. So let’s say we have a work function that will take a Tuple out of a bag, this Tuple will contain two ints.  The first int is the original number, and the second int is the last multiple of that number.  So we could load our bag with the initial values (let’s say we want to know the last multiple of each of 2, 3, 5, and 7 under 100. 1: var bag = new ConcurrentBag<Tuple<int, int>> 2: { 3: Tuple.Create(2, 1), 4: Tuple.Create(3, 1), 5: Tuple.Create(5, 1), 6: Tuple.Create(7, 1) 7: }; Then we can create a method that given the bag, will take out an item, apply the multiplier again, 1: public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold) 2: { 3: Tuple<int,int> pair; 4:  5: // while there are items to take, this will prefer local first, then steal if no local 6: while (bag.TryTake(out pair)) 7: { 8: // look at next power 9: var result = Math.Pow(pair.Item1, pair.Item2 + 1); 10:  11: if (result < threshold) 12: { 13: // if smaller than threshold bump power by 1 14: bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1)); 15: } 16: else 17: { 18: // otherwise, we're done 19: Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.", 20: pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold); 21: } 22: } 23: } Now that we have this, we can load up this method as an Action into our Tasks and run it: 1: // create array of tasks, start all, wait for all 2: var tasks = new[] 3: { 4: new Task(() => FindHighestPowerUnder(bag, 100)), 5: new Task(() => FindHighestPowerUnder(bag, 100)), 6: }; 7:  8: Array.ForEach(tasks, t => t.Start()); 9:  10: Task.WaitAll(tasks); Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag as the thread-local lists will allow for quick processing.  
However, if you need ordering or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test as your mileage will vary depending on your situation! BlockingCollection<T> – A producers & consumers pattern collection The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don’t specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don’t want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don’t remember the producers and consumers classical computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that “bin” where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of “bin”, for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let’s look at some of the methods of note in BlockingCollection: BoundedCapacity returns capacity of the “bin” If the bin is unbounded, the capacity is int.MaxValue. Count returns an internally-kept count of items This makes it O(1), but if you modify underlying collection directly (not recommended) it is unreliable. CompleteAdding() is used to cut off further adds. This sets IsAddingCompleted and begins to wind down consumers once empty. IsAddingCompleted is true when producers are “done”. Once you are done producing, should complete the add process to alert consumers. IsCompleted is true when producers are “done” and “bin” is empty. Once you mark the producers done, and all items removed, this will be true. Add() is a blocking add to collection. If bin is full, will wait till space frees up Take() is a blocking remove from collection. If bin is empty, will wait until item is produced or adding is completed. GetConsumingEnumerable() is used to iterate and consume items. Unlike the standard enumerator, this one consumes the items instead of iteration. TryAdd() attempts add but does not block completely If adding would block, returns false instead, can specify TimeSpan to wait before stopping. 
TryTake() attempts to take but does not block completely Like TryAdd(), if taking would block, returns false instead, can specify TimeSpan to wait. Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added.  This means that any attempts to TryAdd() or Add() after marked completed will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false. So let’s create a simple program to try this out.  Let’s say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded). 1: var bin = new BlockingCollection<int>(5); Now, we create a method to produce items: 1: public static void ProduceItems(BlockingCollection<int> bin, int numToProduce) 2: { 3: for (int i = 0; i < numToProduce; i++) 4: { 5: // try for 10 ms to add an item 6: while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10))) 7: { 8: Console.WriteLine("Bin is full, retrying..."); 9: } 10: } 11:  12: // once done producing, call CompleteAdding() 13: Console.WriteLine("Adding is completed."); 14: bin.CompleteAdding(); 15: } And one to consume them: 1: public static void ConsumeItems(BlockingCollection<int> bin) 2: { 3: // This will only be true if CompleteAdding() was called AND the bin is empty. 4: while (!bin.IsCompleted) 5: { 6: int item; 7:  8: if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10))) 9: { 10: Console.WriteLine("Bin is empty, retrying..."); 11: } 12: else 13: { 14: Console.WriteLine("Consuming item {0}.", item); 15: Thread.Sleep(TimeSpan.FromMilliseconds(20)); 16: } 17: } 18: } Then we can fire them off: 1: // create one producer and two consumers 2: var tasks = new[] 3: { 4: new Task(() => ProduceItems(bin, 20)), 5: new Task(() => ConsumeItems(bin)), 6: new Task(() => ConsumeItems(bin)), 7: }; 8:  9: Array.ForEach(tasks, t => t.Start()); 10:  11: Task.WaitAll(tasks); Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd(). 1: Consuming item 0. 2: Consuming item 1. 3: Bin is full, retrying... 4: Bin is full, retrying... 5: Consuming item 3. 6: Consuming item 2. 7: Bin is full, retrying... 8: Consuming item 4. 9: Consuming item 5. 10: Bin is full, retrying... 11: Consuming item 6. 12: Consuming item 7. 13: Bin is full, retrying... 14: Consuming item 8. 15: Consuming item 9. 16: Bin is full, retrying... 17: Consuming item 10. 18: Consuming item 11. 19: Bin is full, retrying... 20: Consuming item 12. 21: Consuming item 13. 22: Bin is full, retrying... 23: Bin is full, retrying... 24: Consuming item 14. 25: Adding is completed. 26: Consuming item 15. 27: Consuming item 16. 28: Consuming item 17. 29: Consuming item 19. 30: Consuming item 18. Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit. Summary The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and then steal if not.  
However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around all of the concurrent queue, stack, and bag that allows you to add producer and consumer semantics easily including waiting when the bin is full or empty. That’s the end of my dive into the concurrent collections. I’d also strongly recommend, once again, you read this excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain using these collections judiciously (here).

    Read the article

  • what \bin to add to system Path env var from a jdk

    - by raticulin
    If you install the latest Java 1.6 JDK without installing the public JRE option, you end up with two \bin dirs containing java.exe: %JAVA_HOME%\jre\bin and %JAVA_HOME%\bin. If you compare those dirs, there are a few files that are identical (java.exe etc.), and a bunch that exist only in one or the other. So far I have been adding %JAVA_HOME%\bin to my Path environment variable, but now I am wondering: does it make a difference? Is there any side effect to choosing one or the other? And wouldn't it be much cleaner if the installation had only one java.exe and one \bin folder?

    Read the article

  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

      0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

      forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which:
    - Doesn't spam the messages to email or the console
    - Doesn't create yet another crufty logfile which requires cleanup months or years later
    - Captures the log information somewhere, so it can be viewed later if desired
    - Works on most unixes
    - Fits into an existing log infrastructure
    - Uses common syslog conventions like 'facility'

    Some of these are third-party scripts, and don't always do logging internally. UPDATE 2010-04-30: In the process of writing this question, I think I have answered myself, so I'll answer myself "Jeopardy-style". Is there any problem with this method? The following will send any cron output to /usr/bin/logger, which will send it to syslog with a tag of 'nsca_check_disk'. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation.

      */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 |/usr/bin/logger -t nsca_check_disk

    /var/log/messages now has one additional message which says this:

      Apr 29, 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully.

    I like /usr/bin/logger because it works well with an existing syslog configuration and infrastructure, and is included with most Unix distros. Most *nix distributions already do log rotation, and do it well.

    Read the article

  • How do you configure recycle bins on roaming profiles?

    - by Zombian
    I copied the following from a post on the Spiceworks forum which remained unanswered: Is there any way to place the Recycle Bin back on the desktop of a Roaming Profile with the Desktop being redirected? I have used Google and can't find a straightforward answer. I am asking for people with experience in this. This is for a Windows XP machine. I saw mention of needing to use a program such as Undelete, but I'm hoping that is not the case. Further explanation: I use redirected folders, and whenever a user deletes something from their desktop or My Documents it doesn't show up in the Recycle Bin. It doesn't appear in the Recycle Bin on the server either. Where is this data? I doubt it is permanently deleted. Is there a way to change the Recycle Bin on the users' desktop to display those files? Thank you!

    Read the article

  • How to execute with /bin/false shell

    - by Amar
    I am trying to set up per-user FastCGI scripts that will each run on a different port and as a different user. Here is an example of my script:

      #!/bin/bash
      BIND=127.0.0.1:9001
      USER=user
      PHP_FCGI_CHILDREN=2
      PHP_FCGI_MAX_REQUESTS=10000
      etc...

    However, if I add the user with /bin/false (which I want, since this is going to be something like shared hosting and I don't want users to have shell access), the script is run under the 1001, 1002 'user', which, as my Google searches showed, might be a security hole. My question is: is it possible to allow user(s) to execute shell scripts but prevent them from logging in via SSH?

    Read the article

  • non-privileged normal user passing environment variables to /bin/login [closed]

    - by AAAAAAAA
    Suppose that in FreeBSD (or maybe Linux) there is a non-privileged normal user (non-superuser), and there is a standalone telnet server (I know that telnet is usually run under inetd) running under (owned by) this user. (Suppose there was no original, root-owned telnet running.) This telnet server is programmed so that it does not check the LD_* environment variables before passing them to /bin/login, which is owned by root and has the setuid bit set. The questions would be: 1. Will this telnet server work? 2. If it does work, will it even be able to pass environment variables to /bin/login?

    Read the article

  • greengeeks drupal install imagemagick 'path /usr/bin/convert' does not exist error

    - by letapjar
    I just signed up with GreenGeeks. I have a Drupal install (6.19) in my public_html directory. The ImageMagick toolkit can't find the binary - the error I get is "the path /usr/bin/convert does not exist", yet when I use a terminal and do 'which convert' it shows /usr/bin/convert. Also, I have a second Drupal install in an addon domain - its home directory is above the public_html directory (in a directory called '/home/myusername/addons/seconddomain'). The Drupal install in the addon domain finds the ImageMagick binary just fine. I am at a total loss as to why the original install cannot find the binary. The tech support guys at GreenGeeks have no clue either. Any ideas of things to try?

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >