
  • Cold Start

    - by antony.reynolds
    Well, we had snow drifts 3ft deep on Saturday, so it must be spring time. In preparation for spring we decided to move the lawn tractor. Of course, after sitting in the garage all winter it refused to start. I then came into the office and needed to start my 11g SOA Suite installation. I thought about this and decided my tractor might be cranky, but at least I could script the startup of my SOA Suite 11g installation. So with this in mind I created 6 scripts. I created them for Linux but they should translate to Windows without too many problems. This is left as an exercise to the reader; note that you will have to hardcode more than I did in the Linux scripts and create separate script files for the sqlplus and WLST sections.

    Order to start things

    I believe there should be order in all things, especially starting the SOA Suite. So here is my preferred order.

    1. Start Database. This is needed by EM and the rest of the SOA Suite, so it is best to start it before the Admin Server and managed servers.
    2. Start Node Manager on all machines. This is needed if you want the scripts to work across machines.
    3. Start Admin Server. Once this is done you can in theory manually start the managed servers using the WebLogic console, but then you have to wait for the console to be available. Scripting it all is a quicker and easier way of starting.
    4. Start Managed Servers & Clusters. It is best to start them one per physical machine at a time to avoid undue load on the machines. A non-clustered install will have just soa_server1 and bam_server1 by default. Clusters will have at least SOA and BAM clusters that can be started as a group or individually. I have provided scripts for standalone servers, but it is easy to change them to work with clusters.

    Starting Database

    I have provided a very primitive script (available here) to start the database, the listener and the DB console. The section highlighted in red needs to match your database name.

        #!/bin/sh
        echo "##############################"
        echo "# Setting Oracle Environment #"
        echo "##############################"
        . oraenv <<-EOF
        orcl
        EOF

        echo "#####################"
        echo "# Starting Database #"
        echo "#####################"
        sqlplus / as sysdba <<-EOF
        startup
        exit
        EOF

        echo "#####################"
        echo "# Starting Listener #"
        echo "#####################"
        lsnrctl start

        echo "######################"
        echo "# Starting dbConsole #"
        echo "######################"
        emctl start dbconsole

        read -p "Hit <enter> to continue"

    Starting SOA Suite

    My script for starting the SOA Suite (available here) breaks the task down into five sections.

    Setting the Environment

    First set up the environment variables. The variables highlighted in red probably need changing for your environment.

        #!/bin/sh
        echo "###########################"
        echo "# Setting SOA Environment #"
        echo "###########################"
        export MW_HOME=~oracle/Middleware11gPS1
        export WL_HOME=$MW_HOME/wlserver_10.3
        export ORACLE_HOME=$MW_HOME/Oracle_SOA
        export DOMAIN_NAME=soa_std_domain
        export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME

    Starting the Node Manager

    I start the node manager with a nohup to stop it exiting when the script terminates, and I redirect the standard output and standard error to a file in a logs directory.

        cd $DOMAIN_HOME
        echo "#########################"
        echo "# Starting Node Manager #"
        echo "#########################"
        nohup $WL_HOME/server/bin/startNodeManager.sh >logs/NodeManager.out 2>&1 &

    Starting the Admin Server

    I had problems starting the Admin Server from Node Manager, so I decided to start it using the command line script.
    I again use nohup and redirect output.

        echo "#########################"
        echo "# Starting Admin Server #"
        echo "#########################"
        nohup ./startWebLogic.sh >logs/AdminServer.out 2>&1 &

    Starting the Managed Servers

    I then used WLST (the WebLogic Scripting Tool) to start the managed servers. First I waited for the Admin Server to come up by putting a connect command in a loop. I could have put the WLST commands into a separate script file, but I wanted to reduce the number of files I was using and so used redirected input (heredoc syntax).

        $ORACLE_HOME/common/bin/wlst.sh <<-EOF
        import time
        sleep=time.sleep
        print "#####################################"
        print "# Waiting for Admin Server to Start #"
        print "#####################################"
        while True:
          try:
            connect(adminServerName="AdminServer")
            break
          except:
            sleep(10)

    I then start the SOA server and tell WLST to wait until it is started before returning. If starting a cluster then the start command would be modified accordingly to start the SOA cluster (a sketch follows at the end of this startup section).

        print "#######################"
        print "# Starting SOA Server #"
        print "#######################"
        start(name="soa_server1", block="true")

    I then start the BAM server in the same way as the SOA server.

        print "#######################"
        print "# Starting BAM Server #"
        print "#######################"
        start(name="bam_server1", block="true")
        EOF

    Finally I let people know the servers are up and wait for input in case I am running in a separate window, in which case the result would be lost without the read command.

        echo "#####################"
        echo "# SOA Suite Started #"
        echo "#####################"
        read -p "Hit <enter> to continue"
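    As a minimal sketch of the cluster variant mentioned above (the cluster name soa_cluster is my own assumption; use whatever your domain calls it), the WLST start command can target the cluster as a group instead of the individual server:

        print "########################"
        print "# Starting SOA Cluster #"
        print "########################"
        # Passing "Cluster" as the type starts all members of the cluster;
        # block="true" again waits until the start operation completes.
        start("soa_cluster", "Cluster", block="true")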
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME cd $DOMAIN_HOME $MW_HOME/Oracle_SOA/common/bin/wlst.sh <<-EOF try:   print("#############################")   print("# Connecting to AdminServer #")   print("#############################")   connect(username='weblogic',password='welcome1',url='t3://localhost:7001') except:   print "#########################################"   print "#   Unable to connect to Admin Server   #"   print "# Attempting to connect to Node Manager #"   print "#########################################"   nmConnect(domainName=os.getenv("DOMAIN_NAME")) print "#######################" print "# Stopping BAM Server #" print "#######################" shutdown('bam_server1') print "#######################" print "# Stopping SOA Server #" print "#######################" shutdown('soa_server1') print "#########################" print "# Stopping Admin Server #" print "#########################" shutdown('AdminServer') disconnect() nmDisconnect() EOF Stopping the Node Manager I stopped the node manager by searching for the java node manager process using the ps command and then killing that process. echo "#########################" echo "# Stopping Node Manager #" echo "#########################" kill -9 `ps -ef | grep java | grep NodeManager |  awk '{print $2;}'` echo "#####################" echo "# SOA Suite Stopped #" echo "#####################" read -p "Hit <enter> to continue" Stopping the Database Again my script for shutting down the database is the reverse of my start script.  It is available here.  The only change needed might be to the database name. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "######################" echo "# Stopping dbConsole #" echo "######################" emctl stop dbconsole echo "#####################" echo "# Stopping Listener #" echo "#####################" lsnrctl stop echo "#####################" echo "# Stopping Database #" echo "#####################" sqlplus / as sysdba <<-EOF shutdown immediate exit EOF read -p "Hit <enter> to continue" Cleaning Up Cleaning SOA Suite I often run tests and want to clean up all the log files.  The following script (available here) does this for the WebLogic servers in a given domain on a machine.  After setting the domain I just remove all files under the servers logs directories.  It also cleans up the log files I created with my startup scripts.  These scripts could be enhanced to copy off the log files if you needed them but in my test environments I don’t need them and would prefer to reclaim the disk space. 
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME echo "##########################" echo "# Cleaning SOA Log Files #" echo "##########################" cd $DOMAIN_HOME rm -Rf logs/* servers/*/logs/* read -p "Hit <enter> to continue" Cleaning Database I also created a script to clean up the dump files of an Oracle database instance and also the EM log files (available here).  This relies on the machine name being correct as the EM log files are stored in a directory that is based on the hostname and the Oracle SID. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "#############################" echo "# Cleaning Oracle Log Files #" echo "#############################" rm -Rf $ORACLE_BASE/admin/$ORACLE_SID/*dump/* rm -Rf $ORACLE_HOME/`hostname`_$ORACLE_SID/sysman/log/* read -p "Hit <enter> to continue" Summary Hope you find the above scripts useful.  They certainly stop me hanging around waiting for things to happen on my test machine and make it easy to run a test, change parameters, bounce the SOA Suite and clean the logs between runs so I can see exactly what is happening. Now I need to get that mower started…


  • Auto blocking attacking IP address

    - by dong
    This is to share my PowerShell code online. I originally asked this question on the MSDN forum (or TechNet?) here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77

    In short, this tries to find attacking IP addresses and then add them to a Firewall block rule. So I assume:

    1. You are running a Windows Server 2008 facing the Internet.
    2. You need to have some ports open for services, e.g. TCP 21 for FTP and TCP 3389 for Remote Desktop. You can see in my code that I'm only dealing with these two, since that's what I opened. You can add further port numbers if you like, but the way to process them might differ from these two.
    3. I strongly suggest you use a STRONG password and follow all security best practices. This ps1 code is NOT for adding security to your server, but for reducing the nuisance of brute force attacks and making the sysadmin's life easier: i.e. your FTP log won't hold megabytes of nonsense, and your Windows system log will not roll over so far that it can only tell you what happened last month.
    4. You are comfortable with setting up Windows Firewall rules. In my code, my rule has the name "MY BLACKLIST"; you need to set up a similar one and set it to BLOCK everything.
    5. My rule is dangerous because it carries the risk of blocking myself out as well. I do have a backup plan, i.e. the DELL DRAC5, so that if that happens I can still remote console to my server and reset the firewall.
    6. By no means is the code perfect - the coding style, the use of PowerShell skills, the hard-coded parts can all be improved - it's just that it's already good enough for me. It has been running on my server for more than 7 MONTHS.
    7. The current code still has a problem I haven't solved yet; more on this point after the code. :)

        #Dong Xie, March 2012
        #my simple code to monitor attack and deal with it
        #Windows Server 2008 Logon Type
        #8: NetworkCleartext, i.e. FTP
        #10: RemoteInteractive, i.e. RDP

        $tick = 0;
        "Start to run at: " + (get-date);

        $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";
        $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";

        while($True) {
          $blacklist = @();

          "Running... (tick:" + $tick + ")"; $tick+=1;

          #Port 3389
          $a = @()
          netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `
            $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }
          if ($a.count -gt 0) {
            $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `
              $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique
            foreach ($ip in $a) { if ($ips -contains $ip) {
              if (-not ($blacklist -contains $ip)) {
                $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;
                "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;
                if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}
              }
              }
            }
          }

          #FTP
          $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.
          #Get-EventLog has built-in switches for EventID, Message, Time, etc. but using any of these it will be VERY slow.
          $count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `
                      $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;
          if ($count -gt 50) #threshold
          {
             $ips = @();
             $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `
               | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
               | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
               | where {$_.Count -ge 10} | select -ExpandProperty Name;
             $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `
               | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
               | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
               | where {$_.Count -ge 10} | select -ExpandProperty Name;
             $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;
             foreach ($ip in $ips) {
               if (-not ($blacklist -contains $ip)) {
                "Found attacking IP on FTP: " + $ip;
                $blacklist = $blacklist + $ip;
               }
             }
          }

          #Firewall change
          <#
          $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255","");
          #inside $current there is no \r or \n need remove.
          foreach ($ip in $blacklist) {
            if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*"))
              {"Adding this IP into firewall blocklist: " + $ip;
               $c= 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current;
               Invoke-Expression $c; }
          }
          #>

          foreach ($ip in $blacklist) {
            $fw=New-object -comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx
            $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?
            if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))
              {"Adding this IP into firewall blocklist: " + $ip;
                 $myrule.RemoteAddresses+=(","+$ip);
              }
          }

          Wait-Event -Timeout 30 #pause 30 secs

        } # end of top while loop.

    Further points:

    1. I assume the server is listening on port 3389 on server IPs 192.168.100.101 and 192.168.100.102; you need to replace these with your real IPs.
    2. I assume you Remote Desktop to this server from a workstation with IP 10.0.0.1. Please replace that as well.
    3. The threshold for a 3389 attack is 20: you don't want to block yourself just because you typed your password wrong 3 times. You can change this threshold by your own reasoning.
    4. FTP checks the log for attacks only in the last 5 minutes; you can change that as well.
    5. I assume the server is serving FTP on both IP addresses and that their log paths are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly.
    6. The FTP checking code only asks for the last 200 lines of the log, and the threshold is 10; change as you wish.
    7. The code runs in a loop; you can set the loop interval at the last line.
    To run this code, copy and paste it into your editor, finish all the editing, get it onto your server, open a CMD window, and type powershell.exe -file your_powershell_file_name.ps1. It will start running; you can Ctrl-C to break it.

    [Screenshot: the script running]
    [Screenshot: an attack being detected and the firewall rule updated]

    Regarding the design of the code:

    1. There are many ways you can detect an attack, but adding an IP to a block rule is no small thing; you need to think hard before doing it. Reasons include: you don't want to block yourself, and you don't want to block your customer/user, i.e. the good guys.
    2. Thus for each service/port I double check. For 3389, it first needs to show in netstat.exe, then in the Event log; for FTP, first check the Event log, then the FTP log files.
    3. At three places I need to make sure I'm not adding myself to the block rule: -ne with a single IP, -like with the subnet.

    Now the final bit:

    1. The code will stop working after a while (depending on how heavily you are attacked, it could be weeks, months, or days?!). It will throw a red error message in CMD. Don't panic - it does no harm, but it is also no longer blocking new attacks. THE REASON is not confirmed with MS people: with the COM object used to manage the firewall, you can only give a rule a list of IP addresses up to a length of around 32KB, I think; once it reaches the limit, you get the error message.
    2. This is in fact my second solution, using the COM object; the first solution is still in the comment block for your reference, and uses netsh. That fails because, being run from CMD, you can only throw it a list of IPs up to 8KB.
    3. I haven't worked out the workaround yet. Some ideas include: wrap that RemoteAddresses setting line with error checking and, once it reaches the limit, use the newly detected IPs as the list instead of appending to it (a sketch of this idea appears after this post). This basically resets your block rule to ground zero and loses the previous bad IPs. This is not as harmful as it sounds, because if, after a certain period has passed, any of these bad IPs still has not repented and continues to attack you, it only gets 30 seconds or 20 guesses at your password before you block it again. And there is the benefit that the bad IP may return to good hands, and you are not blocking a potential customer or your CEO's home PC just because once upon a time it was a zombie. Thus the ZEN of blocking: never block any IP for too long.
    4. But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask them how to set an arbitrary length of IP addresses in a rule; at least from my experience on the Forum, they don't know and they don't care, because they think dynamic blocking should be done by some expensive hardware. Or, from a programming perspective, you can create a new rule once the old one is full, so you'll have MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, etc. Once in a while you can compile them together and start a business selling your blacklist on the market!

    Enjoy the code! p.s. (PowerShell is REALLY REALLY GREAT!)
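    A minimal sketch of the reset idea from point 3 above - my own illustration of the author's suggestion, not tested code from the post. It traps the failure when the rule's address list is full and starts the list over with only the newly detected IP:

        # Hypothetical variant of the firewall-update loop: if appending to
        # the rule fails (e.g. the address list has hit its size limit),
        # reset the list to just the current attacker instead of appending.
        foreach ($ip in $blacklist) {
          $fw = New-Object -comObject HNetCfg.FwPolicy2;
          $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1;
          if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*")) {
            try {
              $myrule.RemoteAddresses += ("," + $ip);
            } catch {
              "Rule full? Resetting blocklist and starting over with: " + $ip;
              $myrule.RemoteAddresses = $ip;  # ground zero: keep only the new IP
            }
          }
        }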


  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.

    The Generic Collections: System.Collections.Generic namespace

    The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons because, depending on how you are using the collections, it's completely possible that synchronization may not be required or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at the beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.

    The Associative Collection Classes

    Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order.

    The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals() appropriately, or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order.
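    For instance, a minimal sketch of supplying an external IEqualityComparer on construction (the order-id strings here are illustrative, not from the original post):

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                // Case-insensitive keys via an external IEqualityComparer,
                // so "ORD-1" and "ord-1" hash and compare as the same key.
                var orders = new Dictionary<string, decimal>(StringComparer.OrdinalIgnoreCase);
                orders["ORD-1"] = 19.99m;

                // O(1) lookup regardless of how large the dictionary grows.
                Console.WriteLine(orders["ord-1"]); // 19.99
            }
        }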
    The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted. The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order; thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key.

    The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare.

    The Non-Associative Containers

    The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored, or some other means (such as index), to manipulate the collection.

    The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However, inserting and removing in the beginning or middle of the List<T> is very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed.

    The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense.
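    A small sketch of that trade-off (the values are illustrative, not benchmarks from the post):

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                // List<T>: O(1) indexed access, O(n) insert at the front.
                var list = new List<int> { 1, 2, 3 };
                list.Insert(0, 0);            // shifts every element right
                Console.WriteLine(list[2]);   // fast direct indexing

                // LinkedList<T>: O(1) insert once you hold a node,
                // but no indexer - you must walk the links.
                var linked = new LinkedList<int>(new[] { 1, 2, 3 });
                LinkedListNode<int> first = linked.First;
                linked.AddAfter(first, 99);   // no shifting of elements
            }
        }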
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction. The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>. Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T> on the other hand is a first-in-first-out container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines. So that's the basic collections. Let's summarize what we've learned in a quick reference table.  Collection Ordered? Contiguous Storage? Direct Access? Lookup Efficiency Manipulate Efficiency Notes Dictionary No Yes Via Key Key: O(1) O(1) Best for high performance lookups. SortedDictionary Yes No Via Key Key: O(log n) O(log n) Compromise of Dictionary speed and ordering, uses binary search tree. SortedList Yes Yes Via Key Key: O(log n) O(n) Very similar to SortedDictionary, except tree is implemented in an array, so has faster lookup on preloaded data, but slower loads. List No Yes Via Index Index: O(1) Value: O(n) O(n) Best for smaller lists where direct access required and no ordering. LinkedList No No No Value: O(n) O(1) Best for lists where inserting/deleting in middle is common and no direct access required. HashSet No Yes Via Key Key: O(1) O(1) Unique unordered collection, like a Dictionary except key and value are same object. SortedSet Yes No Via Key Key: O(log n) O(log n) Unique ordered collection, like SortedDictionary except key and value are same object. Stack No Yes Only Top Top: O(1) O(1)* Essentially same as List<T> except only process as LIFO Queue No Yes Only Front Front: O(1) O(1) Essentially same as List<T> except only process as FIFO   The Original Collections: System.Collections namespace The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention the original collection and their generic equivalents: ArrayList A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead. Hashtable Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead. Queue First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead. 
    The Original Collections: System.Collections namespace

    The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention each original collection and its generic equivalent:

    - ArrayList: a dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
    - Hashtable: associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
    - Queue: first-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
    - SortedList: associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
    - Stack: last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

    In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries.

    The Concurrent Collections: System.Collections.Concurrent namespace

    The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each, with a link to a blog post I did on each of them.

    - ConcurrentQueue: thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
    - ConcurrentStack: thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
    - ConcurrentBag: thread-safe unordered collection of objects. Optimized for situations where a thread may be both a reader and a writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
    - ConcurrentDictionary: thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
    - BlockingCollection: wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read; writers can block until space is available to write (if bounded). For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection

    (A short producer/consumer sketch appears after the summary.)

    Summary

    The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower, or even harder to maintain, if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.
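    As a minimal sketch of the producer/consumer pattern mentioned above (the bound and item values are illustrative assumptions):

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                // Bounded to 10 items: the producer blocks when the buffer is full.
                using (var buffer = new BlockingCollection<int>(boundedCapacity: 10))
                {
                    var producer = Task.Run(() =>
                    {
                        for (int i = 0; i < 20; i++) buffer.Add(i); // may block
                        buffer.CompleteAdding();                    // signal end of input
                    });

                    // GetConsumingEnumerable blocks until items arrive and
                    // ends cleanly once CompleteAdding has been called.
                    foreach (int item in buffer.GetConsumingEnumerable())
                        Console.WriteLine(item);

                    producer.Wait();
                }
            }
        }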


  • Error when reloading supervisord: unix:///tmp/supervisor.sock no such file

    - by Yarin
    I'm running supervisord on my CentOS 6 box like so:

        /usr/bin/supervisord -c /etc/supervisord.conf

    When I launch supervisorctl, all process statuses are fine, but if I try to reload using supervisorctl I get:

        unix:///tmp/supervisor.sock no such file

    I'm using the same config file I've used successfully on other boxes, and I'm running everything as root. I can't understand what the problem is...

    Config file:

        ; Sample supervisor config file.

        [unix_http_server]
        file=/tmp/supervisor.sock   ; (the path to the socket file)
        ;chmod=0700                 ; socket file mode (default 0700)
        ;chown=nobody:nogroup       ; socket file uid:gid owner
        ;username=user              ; (default is no username (open server))
        ;password=123               ; (default is no password (open server))

        ;[inet_http_server]         ; inet (TCP) server disabled by default
        ;port=127.0.0.1:9001        ; (ip_address:port specifier, *:port for all iface)
        ;username=user              ; (default is no username (open server))
        ;password=123               ; (default is no password (open server))

        [supervisord]
        logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
        logfile_maxbytes=50MB       ; (max main logfile bytes b4 rotation;default 50MB)
        logfile_backups=10          ; (num of main logfile rotation backups;default 10)
        loglevel=info               ; (log level;default info; others: debug,warn,trace)
        pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
        nodaemon=false              ; (start in foreground if true;default false)
        minfds=1024                 ; (min. avail startup file descriptors;default 1024)
        minprocs=200                ; (min. avail process descriptors;default 200)
        ;umask=022                  ; (process file creation umask;default 022)
        ;user=chrism                ; (default is current user, required if root)
        ;identifier=supervisor      ; (supervisord identifier, default is 'supervisor')
        ;directory=/tmp             ; (default is not to cd during start)
        ;nocleanup=true             ; (don't clean up tempfiles at start;default false)
        ;childlogdir=/tmp           ; ('AUTO' child log dir, default $TEMP)
        ;environment=KEY=value      ; (key value pairs to add to environment)
        ;strip_ansi=false           ; (strip ansi escape codes in logs; def. false)

        ; the below section must remain in the config file for RPC
        ; (supervisorctl/web interface) to work, additional interfaces may be
        ; added by defining them in separate rpcinterface: sections
        [rpcinterface:supervisor]
        supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

        [supervisorctl]
        serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
        ;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket
        ;username=chris             ; should be same as http_username if set
        ;password=123               ; should be same as http_password if set
        ;prompt=mysupervisor        ; cmd line prompt (default "supervisor")
        ;history_file=~/.sc_history ; use readline history if available

        ; The below sample program section shows all possible program subsection values,
        ; create one or more 'real' program: sections to be able to control them under
        ; supervisor.
        ;[program:foo]
        ;command=/bin/cat

        [program:embed_scheduler]
        command=/opt/web-apps/mywebsite/custom_process.py
        process_name=%(program_name)s_%(process_num)d
        numprocs=3

        ;[program:theprogramname]
        ;command=/bin/cat              ; the program (relative uses PATH, can take args)
        ;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
        ;numprocs=1                    ; number of processes copies to start (def 1)
        ;directory=/tmp                ; directory to cwd to before exec (def no cwd)
        ;umask=022                     ; umask for process (default None)
        ;priority=999                  ; the relative start priority (default 999)
        ;autostart=true                ; start at supervisord start (default: true)
        ;autorestart=unexpected        ; whether/when to restart (default: unexpected)
        ;startsecs=1                   ; number of secs prog must stay running (def. 1)
        ;startretries=3                ; max # of serial start failures (default 3)
        ;exitcodes=0,2                 ; 'expected' exit codes for process (default 0,2)
        ;stopsignal=QUIT               ; signal used to kill process (default TERM)
        ;stopwaitsecs=10               ; max num secs to wait b4 SIGKILL (default 10)
        ;killasgroup=false             ; SIGKILL the UNIX process group (def false)
        ;user=chrism                   ; setuid to this UNIX account to run the program
        ;redirect_stderr=true          ; redirect proc stderr to stdout (default false)
        ;stdout_logfile=/a/path        ; stdout log path, NONE for none; default AUTO
        ;stdout_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)
        ;stdout_logfile_backups=10     ; # of stdout logfile backups (default 10)
        ;stdout_capture_maxbytes=1MB   ; number of bytes in 'capturemode' (default 0)
        ;stdout_events_enabled=false   ; emit events on stdout writes (default false)
        ;stderr_logfile=/a/path        ; stderr log path, NONE for none; default AUTO
        ;stderr_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)
        ;stderr_logfile_backups=10     ; # of stderr logfile backups (default 10)
        ;stderr_capture_maxbytes=1MB   ; number of bytes in 'capturemode' (default 0)
        ;stderr_events_enabled=false   ; emit events on stderr writes (default false)
        ;environment=A=1,B=2           ; process environment additions (def no adds)
        ;serverurl=AUTO                ; override serverurl computation (childutils)

        ; The below sample eventlistener section shows all possible
        ; eventlistener subsection values, create one or more 'real'
        ; eventlistener: sections to be able to handle event notifications
        ; sent by supervisor.
        ;[eventlistener:theeventlistenername]
        ;command=/bin/eventlistener    ; the program (relative uses PATH, can take args)
        ;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
        ;numprocs=1                    ; number of processes copies to start (def 1)
        ;events=EVENT                  ; event notif. types to subscribe to (req'd)
        ;buffer_size=10                ; event buffer queue size (default 10)
        ;directory=/tmp                ; directory to cwd to before exec (def no cwd)
        ;umask=022                     ; umask for process (default None)
        ;priority=-1                   ; the relative start priority (default -1)
        ;autostart=true                ; start at supervisord start (default: true)
        ;autorestart=unexpected        ; whether/when to restart (default: unexpected)
        ;startsecs=1                   ; number of secs prog must stay running (def. 1)
        ;startretries=3                ; max # of serial start failures (default 3)
        ;exitcodes=0,2                 ; 'expected' exit codes for process (default 0,2)
        ;stopsignal=QUIT               ; signal used to kill process (default TERM)
        ;stopwaitsecs=10               ; max num secs to wait b4 SIGKILL (default 10)
        ;killasgroup=false             ; SIGKILL the UNIX process group (def false)
        ;user=chrism                   ; setuid to this UNIX account to run the program
        ;redirect_stderr=true          ; redirect proc stderr to stdout (default false)
        ;stdout_logfile=/a/path        ; stdout log path, NONE for none; default AUTO
        ;stdout_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)
        ;stdout_logfile_backups=10     ; # of stdout logfile backups (default 10)
        ;stdout_events_enabled=false   ; emit events on stdout writes (default false)
        ;stderr_logfile=/a/path        ; stderr log path, NONE for none; default AUTO
        ;stderr_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)
        ;stderr_logfile_backups        ; # of stderr logfile backups (default 10)
        ;stderr_events_enabled=false   ; emit events on stderr writes (default false)
        ;environment=A=1,B=2           ; process environment additions
        ;serverurl=AUTO                ; override serverurl computation (childutils)

        ; The below sample group section shows all possible group values,
        ; create one or more 'real' group: sections to create "heterogeneous"
        ; process groups.
        ;[group:thegroupname]
        ;programs=progname1,progname2  ; each refers to 'x' in [program:x] definitions
        ;priority=999                  ; the relative start priority (default 999)

        ; The [include] section can just contain the "files" setting. This
        ; setting can list multiple files (separated by whitespace or
        ; newlines). It can also contain wildcards. The filenames are
        ; interpreted as relative to this file. Included files *cannot*
        ; include files themselves.
        ;[include]
        ;files = relative/directory/*.ini
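    One hedged check worth trying, assuming the config really lives at /etc/supervisord.conf as above: supervisorctl only learns the socket location from the config file it reads, so pointing it at the same file supervisord loaded rules out a config search-path mismatch (this snippet is my illustration, not part of the original question):

        # Run supervisorctl against the same config file supervisord uses;
        # if this works while plain "supervisorctl" fails, the default
        # config search path was the problem.
        supervisorctl -c /etc/supervisord.conf status

        # Also confirm the socket actually exists where the config says:
        ls -l /tmp/supervisor.sock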


  • Need help in setting lighttpd on Ubuntu 9.10

    - by hap497
    Hi, I am trying to run lighttpd on Ubuntu 9.10. I got the conf file from the doc directory of the lighttpd source.

        $ sudo ./lighttpd -f lighttpd.conf
        $ ps -ef | grep lighttpd
        root 2094 1 0 19:40 ? 00:00:00 ./lighttpd -f lighttpd.conf

    This is my lighttpd.conf:

        # lighttpd configuration file
        #
        # use it as a base for lighttpd 1.0.0 and above
        #
        # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $

        ############ Options you really have to take care of ####################

        ## modules to load
        # at least mod_access and mod_accesslog should be loaded
        # all other module should only be loaded if really neccesary
        # - saves some time
        # - saves memory
        server.modules = (
        #   "mod_rewrite",
        #   "mod_redirect",
        #   "mod_alias",
            "mod_access",
        #   "mod_trigger_b4_dl",
        #   "mod_auth",
        #   "mod_status",
        #   "mod_setenv",
        #   "mod_fastcgi",
        #   "mod_proxy",
        #   "mod_simple_vhost",
        #   "mod_evhost",
        #   "mod_userdir",
        #   "mod_cgi",
        #   "mod_compress",
        #   "mod_ssi",
        #   "mod_usertrack",
        #   "mod_expire",
        #   "mod_secdownload",
        #   "mod_rrdtool",
            "mod_accesslog" )

        ## A static document-root. For virtual hosting take a look at the
        ## mod_simple_vhost module.
        server.document-root = "/srv/www/htdocs/"

        ## where to send error-messages to
        server.errorlog = "/var/log/lighttpd/error.log"

        # files to check for if .../ is requested
        index-file.names = ( "index.php", "index.html",
                             "index.htm", "default.htm" )

        ## set the event-handler (read the performance section in the manual)
        # server.event-handler = "freebsd-kqueue" # needed on OS X

        # mimetype mapping
        mimetype.assign = (
          ".pdf"     => "application/pdf",
          ".sig"     => "application/pgp-signature",
          ".spl"     => "application/futuresplash",
          ".class"   => "application/octet-stream",
          ".ps"      => "application/postscript",
          ".torrent" => "application/x-bittorrent",
          ".dvi"     => "application/x-dvi",
          ".gz"      => "application/x-gzip",
          ".pac"     => "application/x-ns-proxy-autoconfig",
          ".swf"     => "application/x-shockwave-flash",
          ".tar.gz"  => "application/x-tgz",
          ".tgz"     => "application/x-tgz",
          ".tar"     => "application/x-tar",
          ".zip"     => "application/zip",
          ".mp3"     => "audio/mpeg",
          ".m3u"     => "audio/x-mpegurl",
          ".wma"     => "audio/x-ms-wma",
          ".wax"     => "audio/x-ms-wax",
          ".ogg"     => "application/ogg",
          ".wav"     => "audio/x-wav",
          ".gif"     => "image/gif",
          ".jar"     => "application/x-java-archive",
          ".jpg"     => "image/jpeg",
          ".jpeg"    => "image/jpeg",
          ".png"     => "image/png",
          ".xbm"     => "image/x-xbitmap",
          ".xpm"     => "image/x-xpixmap",
          ".xwd"     => "image/x-xwindowdump",
          ".css"     => "text/css",
          ".html"    => "text/html",
          ".htm"     => "text/html",
          ".js"      => "text/javascript",
          ".asc"     => "text/plain",
          ".c"       => "text/plain",
          ".cpp"     => "text/plain",
          ".log"     => "text/plain",
          ".conf"    => "text/plain",
          ".text"    => "text/plain",
          ".txt"     => "text/plain",
          ".dtd"     => "text/xml",
          ".xml"     => "text/xml",
          ".mpeg"    => "video/mpeg",
          ".mpg"     => "video/mpeg",
          ".mov"     => "video/quicktime",
          ".qt"      => "video/quicktime",
          ".avi"     => "video/x-msvideo",
          ".asf"     => "video/x-ms-asf",
          ".asx"     => "video/x-ms-asf",
          ".wmv"     => "video/x-ms-wmv",
          ".bz2"     => "application/x-bzip",
          ".tbz"     => "application/x-bzip-compressed-tar",
          ".tar.bz2" => "application/x-bzip-compressed-tar",
          # default mime type
          ""         => "application/octet-stream",
         )

        # Use the "Content-Type" extended attribute to obtain mime type if possible
        #mimetype.use-xattr = "enable"

        ## send a different Server: header
        ## be nice and keep it at lighttpd
        # server.tag = "lighttpd"

        #### accesslog module
        accesslog.filename = "/var/log/lighttpd/access.log"

        ## deny access the file-extensions
        #
        # ~    is for backupfiles from vi, emacs, joe, ...
        # .inc is often used for code includes which should in general not be part
        # of the document-root
        url.access-deny = ( "~", ".inc" )

        $HTTP["url"] =~ "\.pdf$" {
          server.range-requests = "disable"
        }

        ##
        # which extensions should not be handle via static-file transfer
        #
        # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

        ######### Options that are good to be but not neccesary to be changed #######

        ## bind to port (default: 80)
        #server.port = 81

        ## bind to localhost (default: all interfaces)
        #server.bind = "127.0.0.1"

        ## error-handler for status 404
        #server.error-handler-404 = "/error-handler.html"
        #server.error-handler-404 = "/error-handler.php"

        ## to help the rc.scripts
        #server.pid-file = "/var/run/lighttpd.pid"

        ###### virtual hosts
        ##
        ## If you want name-based virtual hosting add the next three settings and load
        ## mod_simple_vhost
        ##
        ## document-root =
        ##   virtual-server-root + virtual-server-default-host + virtual-server-docroot
        ## or
        ##   virtual-server-root + http-host + virtual-server-docroot
        ##
        #simple-vhost.server-root = "/srv/www/vhosts/"
        #simple-vhost.default-host = "www.example.org"
        #simple-vhost.document-root = "/htdocs/"

        ##
        ## Format: <errorfile-prefix><status-code>.html
        ## -> ..../status-404.html for 'File not found'
        #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-"
        #server.errorfile-prefix = "/srv/www/errors/status-"

        ## virtual directory listings
        #dir-listing.activate = "enable"
        ## select encoding for directory listings
        #dir-listing.encoding = "utf-8"

        ## enable debugging
        #debug.log-request-header = "enable"
        #debug.log-response-header = "enable"
        #debug.log-request-handling = "enable"
        #debug.log-file-not-found = "enable"

        ### only root can use these options
        #
        # chroot() to directory (default: no chroot() )
        #server.chroot = "/"
        ## change uid to <uid> (default: don't care)
        #server.username = "wwwrun"
        ## change gid to <gid> (default: don't care)
        #server.groupname = "wwwrun"

        #### compress module
        #compress.cache-dir = "/var/cache/lighttpd/compress/"
        #compress.filetype = ("text/plain", "text/html")

        #### proxy module
        ## read proxy.txt for more info
        #proxy.server = ( ".php" =>
        #                 ( "localhost" =>
        #                   (
        #                     "host" => "192.168.0.101",
        #                     "port" => 80
        #                   )
        #                 )
        #               )

        #### fastcgi module
        ## read fastcgi.txt for more info
        ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini
        #fastcgi.server = ( ".php" =>
        #                   ( "localhost" =>
        #                     (
        #                       "socket" => "/var/run/lighttpd/php-fastcgi.socket",
        #                       "bin-path" => "/usr/local/bin/php-cgi"
        #                     )
        #                   )
        #                 )

        #### CGI module
        #cgi.assign = ( ".pl"  => "/usr/bin/perl",
        #               ".cgi" => "/usr/bin/perl" )
        #
        #### SSL engine
        #ssl.engine = "enable"
        #ssl.pemfile = "/etc/ssl/private/lighttpd.pem"

        #### status module
        #status.status-url = "/server-status"
        #status.config-url = "/server-config"

        #### auth module
        ## read authentication.txt for more info
        #auth.backend = "plain"
        #auth.backend.plain.userfile = "lighttpd.user"
        #auth.backend.plain.groupfile = "lighttpd.group"
        #auth.backend.ldap.hostname = "localhost"
        #auth.backend.ldap.base-dn = "dc=my-domain,dc=com"
        #auth.backend.ldap.filter = "(uid=$)"
        #auth.require = ( "/server-status" =>
        #                 (
        #                   "method" => "digest",
        #                   "realm" => "download archiv",
        #                   "require" => "user=jan"
        #                 ),
        #                 "/server-config" =>
        #                 (
        #                   "method" => "digest",
        #                   "realm" => "download archiv",
        #                   "require" => "valid-user"
        #                 )
        #               )

        #### url handling modules (rewrite, redirect, access)
        #url.rewrite = ( "^/$" => "/server-status" )
        #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" )
        #### both rewrite/redirect support back reference to regex conditional using %n
        #$HTTP["host"] =~ "^www\.(.*)" {
        #  url.redirect = ( "^/(.*)" => "http://%1/$1" )
        #}
        #
        # define a pattern for the host url finding
        # %% => % sign
        # %0 => domain name + tld
        # %1 => tld
        # %2 => domain name without tld
        # %3 => subdomain 1 name
        # %4 => subdomain 2 name
        #
        #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/"

        #### expire module
        #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes")

        #### ssi
        #ssi.extension = ( ".shtml" )

        #### rrdtool
        #rrdtool.binary = "/usr/bin/rrdtool"
        #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd"

        #### setenv
        #setenv.add-request-header  = ( "TRAV_ENV" => "mysql://user@host/db" )
        #setenv.add-response-header = ( "X-Secret-Message" => "42" )

        ## for mod_trigger_b4_dl
        # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db"
        # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" )
        # trigger-before-download.trigger-url = "^/trigger/"
        # trigger-before-download.download-url = "^/download/"
        # trigger-before-download.deny-url = "http://127.0.0.1/index.html"
        # trigger-before-download.trigger-timeout = 10

        #### variable usage:
        ## variable name without "." is auto prefixed by "var." and becomes "var.bar"
        #bar = 1
        #var.mystring = "foo"

        ## integer add
        #bar += 1
        ## string concat, with integer cast as string, result: "www.foo1.com"
        #server.name = "www." + mystring + var.bar + ".com"
        ## array merge
        #index-file.names = (foo + ".php") + index-file.names
        #index-file.names += (foo + ".php")

        #### include
        #include /etc/lighttpd/lighttpd-inc.conf
        ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf"
        #include "lighttpd-inc.conf"

        #### include_shell
        #include_shell "echo var.a=1"
        ## the above is same as:
        #var.a=1

    When I go to the browser and hit http://127.0.0.1, I get "link not found". Any idea?
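    A hedged first check, assuming the config above is in use unchanged: lighttpd serves files from server.document-root (/srv/www/htdocs/ here), so a missing document root or index file will produce exactly this kind of not-found response. This snippet is my illustration, not part of the original question:

        # Does the document root exist and hold one of the configured index files?
        ls -l /srv/www/htdocs/

        # If not, create it with a trivial page:
        sudo mkdir -p /srv/www/htdocs
        echo '<html><body>it works</body></html>' | sudo tee /srv/www/htdocs/index.html

        # Then check the error log for the real reason behind any remaining failures:
        tail /var/log/lighttpd/error.log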


  • ServerRoot in my lighttpd.conf

    - by michael
    Hi, I have used the following example lighttpd.conf to launch my lighttpd. Can you please tell me where my 'ServerRoot' is?

        # lighttpd configuration file
        #
        # use it as a base for lighttpd 1.0.0 and above
        #
        # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $

        ############ Options you really have to take care of ####################

        ## modules to load
        # at least mod_access and mod_accesslog should be loaded
        # all other module should only be loaded if really neccesary
        # - saves some time
        # - saves memory
        server.modules = (
        #   "mod_rewrite",
        #   "mod_redirect",
        #   "mod_alias",
            "mod_access",
        #   "mod_trigger_b4_dl",
        #   "mod_auth",
        #   "mod_status",
        #   "mod_setenv",
            "mod_fastcgi",
        #   "mod_proxy",
        #   "mod_simple_vhost",
        #   "mod_evhost",
        #   "mod_userdir",
        #   "mod_cgi",
        #   "mod_compress",
        #   "mod_ssi",
        #   "mod_usertrack",
        #   "mod_expire",
        #   "mod_secdownload",
        #   "mod_rrdtool",
            "mod_accesslog" )

        ## A static document-root. For virtual hosting take a look at the
        ## mod_simple_vhost module.
        server.document-root = "/srv/www/htdocs/"

        ## where to send error-messages to
        server.errorlog = "/var/log/lighttpd/error.log"

        # files to check for if .../ is requested
        index-file.names = ( "index.php", "index.html",
                             "index.htm", "default.htm" )

        ## set the event-handler (read the performance section in the manual)
        # server.event-handler = "freebsd-kqueue" # needed on OS X

        # mimetype mapping
        mimetype.assign = (
          ".pdf"     => "application/pdf",
          ".sig"     => "application/pgp-signature",
          ".spl"     => "application/futuresplash",
          ".class"   => "application/octet-stream",
          ".ps"      => "application/postscript",
          ".torrent" => "application/x-bittorrent",
          ".dvi"     => "application/x-dvi",
          ".gz"      => "application/x-gzip",
          ".pac"     => "application/x-ns-proxy-autoconfig",
          ".swf"     => "application/x-shockwave-flash",
          ".tar.gz"  => "application/x-tgz",
          ".tgz"     => "application/x-tgz",
          ".tar"     => "application/x-tar",
          ".zip"     => "application/zip",
          ".mp3"     => "audio/mpeg",
          ".m3u"     => "audio/x-mpegurl",
          ".wma"     => "audio/x-ms-wma",
          ".wax"     => "audio/x-ms-wax",
          ".ogg"     => "application/ogg",
          ".wav"     => "audio/x-wav",
          ".gif"     => "image/gif",
          ".jar"     => "application/x-java-archive",
          ".jpg"     => "image/jpeg",
          ".jpeg"    => "image/jpeg",
          ".png"     => "image/png",
          ".xbm"     => "image/x-xbitmap",
          ".xpm"     => "image/x-xpixmap",
          ".xwd"     => "image/x-xwindowdump",
          ".css"     => "text/css",
          ".html"    => "text/html",
          ".htm"     => "text/html",
          ".js"      => "text/javascript",
          ".asc"     => "text/plain",
          ".c"       => "text/plain",
          ".cpp"     => "text/plain",
          ".log"     => "text/plain",
          ".conf"    => "text/plain",
          ".text"    => "text/plain",
          ".txt"     => "text/plain",
          ".dtd"     => "text/xml",
          ".xml"     => "text/xml",
          ".mpeg"    => "video/mpeg",
          ".mpg"     => "video/mpeg",
          ".mov"     => "video/quicktime",
          ".qt"      => "video/quicktime",
          ".avi"     => "video/x-msvideo",
          ".asf"     => "video/x-ms-asf",
          ".asx"     => "video/x-ms-asf",
          ".wmv"     => "video/x-ms-wmv",
          ".bz2"     => "application/x-bzip",
          ".tbz"     => "application/x-bzip-compressed-tar",
          ".tar.bz2" => "application/x-bzip-compressed-tar",
          # default mime type
          ""         => "application/octet-stream",
         )

        # Use the "Content-Type" extended attribute to obtain mime type if possible
        #mimetype.use-xattr = "enable"

        ## send a different Server: header
        ## be nice and keep it at lighttpd
        # server.tag = "lighttpd"

        #### accesslog module
        accesslog.filename = "/var/log/lighttpd/access.log"

        ## deny access the file-extensions
        #
        # ~    is for backupfiles from vi, emacs, joe, ...
        # .inc is often used for code includes which should in general not be part
        # of the document-root
        url.access-deny = ( "~", ".inc" )

        $HTTP["url"] =~ "\.pdf$" {
          server.range-requests = "disable"
        }

        ##
        # which extensions should not be handle via static-file transfer
        #
        # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

        ######### Options that are good to be but not neccesary to be changed #######

        ## bind to port (default: 80)
        server.port = 9090

        ## bind to localhost (default: all interfaces)
        server.bind = "127.0.0.1"

        ## error-handler for status 404
        #server.error-handler-404 = "/error-handler.html"
        #server.error-handler-404 = "/error-handler.php"

        ## to help the rc.scripts
        #server.pid-file = "/var/run/lighttpd.pid"

        ###### virtual hosts
        ##
        ## If you want name-based virtual hosting add the next three settings and load
        ## mod_simple_vhost
        ##
        ## document-root =
        ##   virtual-server-root + virtual-server-default-host + virtual-server-docroot
        ## or
        ##   virtual-server-root + http-host + virtual-server-docroot
        ##
        #simple-vhost.server-root = "/srv/www/vhosts/"
        #simple-vhost.default-host = "www.example.org"
        #simple-vhost.document-root = "/htdocs/"

        ##
        ## Format: <errorfile-prefix><status-code>.html
        ## -> ..../status-404.html for 'File not found'
        #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-"
        #server.errorfile-prefix = "/srv/www/errors/status-"

        ## virtual directory listings
        #dir-listing.activate = "enable"
        ## select encoding for directory listings
        #dir-listing.encoding = "utf-8"

        ## enable debugging
        #debug.log-request-header = "enable"
        #debug.log-response-header = "enable"
        #debug.log-request-handling = "enable"
        #debug.log-file-not-found = "enable"

        ### only root can use these options
        #
        # chroot() to directory (default: no chroot() )
        #server.chroot = "/"
        ## change uid to <uid> (default: don't care)
        #server.username = "wwwrun"
        ## change gid to <gid> (default: don't care)
        #server.groupname = "wwwrun"

        #### compress module
        #compress.cache-dir = "/var/cache/lighttpd/compress/"
        #compress.filetype = ("text/plain", "text/html")

        #### proxy module
        ## read proxy.txt for more info
        #proxy.server = ( ".php" =>
        #                 ( "localhost" =>
        #                   (
        #                     "host" => "192.168.0.101",
        #                     "port" => 80
        #                   )
        #                 )
        #               )

        #### fastcgi module
        fastcgi.server = ( "/fastcgi_scripts/" =>
                           ((
                             "host" => "127.0.0.1",
                             "port" => 1026,
                             "check-local" => "disable",
                             "bin-path" => "/usr/local/bin/cgi-fcgi",
                             #"docroot" => "/" # remote server may use
                                               # it's own docroot
                           ))
                         )
        ## read fastcgi.txt for more info
        ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini
        #fastcgi.server = ( ".php" =>
        #                   ( "localhost" =>
        #                     (
        #                       "socket" => "/var/run/lighttpd/php-fastcgi.socket",
        #                       "bin-path" => "/usr/local/bin/php-cgi"
        #                     )
        #                   )
        #                 )

        #### CGI module
        #cgi.assign = ( ".pl"  => "/usr/bin/perl",
        #               ".cgi" => "/usr/bin/perl" )
        #
        #### SSL engine
        #ssl.engine = "enable"
        #ssl.pemfile = "/etc/ssl/private/lighttpd.pem"

        #### status module
        #status.status-url = "/server-status"
        #status.config-url = "/server-config"

        #### auth module
        ## read authentication.txt for more info
        #auth.backend = "plain"
        #auth.backend.plain.userfile = "lighttpd.user"
        #auth.backend.plain.groupfile = "lighttpd.group"
        #auth.backend.ldap.hostname = "localhost"
        #auth.backend.ldap.base-dn = "dc=my-domain,dc=com"
        #auth.backend.ldap.filter = "(uid=$)"
        #auth.require = ( "/server-status" =>
        #                 (
        #                   "method" => "digest",
        #                   "realm" => "download archiv",
        #                   "require" => "user=jan"
        #                 ),
        #                 "/server-config" =>
        #                 (
        #                   "method" => "digest",
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you.

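For what it's worth (my note, not part of the original question): lighttpd has no Apache-style ServerRoot directive at all - every path in this file is absolute, and the closest equivalent to a document base is server.document-root above. If you want a single base path you can change in one place, the variable syntax documented at the bottom of the file gives you one, e.g.:

var.server_root = "/srv/www"
server.document-root = var.server_root + "/htdocs/"
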
    Read the article

  • MSMQ messages using HTTP just won't get delivered

    - by John Breakwell
I'm starting off the blog with a discussion of an unusual problem that has hit a couple of my customers this month. It's not a problem you'd expect to bump into, and the solution is potentially painful.

Scenario: You want to make use of the HTTP protocol to send MSMQ messages from one machine to another. You have installed HTTP support for MSMQ and have addressed your messages correctly, but they will not leave the outgoing queue. There is no configuration for HTTP support - setup has already done all that for you (although you may want to check the most recent "Installation of the MSMQ HTTP Support Subcomponent" section of MSMQINST.LOG to see if anything DID go wrong) - so you can't tweak anything. Restarting services and servers makes no difference - the messages just will not get delivered.

The problem is documented and resolved by Knowledgebase article 916699, "The message may not be delivered when you use the HTTP protocol to send a message to a server that is running Message Queuing 3.0". It is unlikely that you would be able to resolve the problem without the assistance of PSS, because there are no messages that can be seen to assist you and only access to the source code exposes the root cause. As this communication is over HTTP, the IIS logs would be a good place to start. POST entries are logged which show that connectivity is working and message delivery is being attempted:

#Software: Microsoft Internet Information Services 6.0
#Version: 1.0
#Date: 2006-09-12 12:11:29
#Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status
2006-09-12 12:12:12 W3SVC1 10.1.17.219 POST /msmq/private$/test - 80 - 10.2.200.3 - 200 0 0

If you capture the traffic with Network Monitor you can see the POST being sent to the server, but you also see a response being returned to the client:

HTTP: Response to Client; HTTP/1.1; Status Code = 500 - Internal Server Error

"Internal Server Error" means we can probably stop looking at IIS and instead focus on the Message Queuing ISAPI extension (Mqise.dll). MSMQ 3.0 (Windows XP and Windows Server 2003) comes with error logging enabled by default, but the log files are in binary format - MSMQ 2.0 generated logging in plain text. The symbolic information needed for formatting the files is not currently publicly available, so log files have to be sent in to Microsoft PSS. Although this does mean raising a support case, formatting the log files to text and returning them to the customer shouldn't take long. Obviously the engineer analyses them for you - I just want to point out that you can see the logging output in text format if you want it.

The important entries in the log for this problem are:

[7]b48.928 09/12/2006-13:20:44.552 [mqise GetNetBiosNameFromIPAddr] ERROR:Failed to get the NetBios name from the DNS name, error = 0xea
[7]b48.928 09/12/2006-13:20:44.552 [mqise RPCToServer] ERROR:RPC call R_ProcessHTTPRequest failed, error code = 1702(RPC_S_INVALID_BINDING)

These allow a Microsoft escalation engineer to check the MQISE source code to see what is going wrong. According to the article, this problem occurs when the extension tries to bind to the local MSMQ service after the extension receives a POST request that contains an MSMQ message.
MSMQ resolves the server name by using the DNS host name but the extension cannot bind to the service because the buffer that MSMQ uses to resolve the server name is too small - server names that are exactly 15 characters long will not fit. RPC exception 0x6a6 (RPC_S_INVALID_BINDING) occurs in the W3wp.exe process but the exception is handled and so you do not receive an error message. The workaround is to rename the MSMQ server to something less than 15 characters. If the problem has only just been noticed in a production environment - an application may have been modified to get through a newly-implemented firewall, for example - then renaming is going to be an issue. Other applications may need to be reinstalled or modified if server names are hard-coded or stored in the registry. The renaming may also break a company naming convention where the name is built up from something like location+department+number. If you want to learn more about MSMQ logging then check out Chapter 15 of the MSMQ FAQ. In fact, even if you DON'T want to learn anything about MSMQ logging you should read the FAQ anyway as there is a huge amount of useful information on known issues and the like.
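Since the trigger is nothing more exotic than the length of the machine name, a trivial check run on each sending and receiving box can tell you in advance whether you are exposed (a sketch of mine, not from the article):

// Sketch: flag machines whose NetBIOS-style name is exactly 15 characters,
// the length that trips the buffer described in KB 916699.
using System;

class MsmqNameCheck
{
    static void Main()
    {
        string name = Environment.MachineName;
        Console.WriteLine("{0} is {1} characters: {2}",
            name, name.Length,
            name.Length == 15 ? "at risk of the MSMQ HTTP binding failure"
                              : "not affected by this issue");
    }
}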

    Read the article

  • Add Social Elements to Your Gmail Contacts with Rapportive

    - by Matthew Guay
Would you like to discover more about your contacts?  Xobni is a great tool for this in Outlook, and thanks to a small plugin for Gmail, you can get similar functionality right from your favorite webmail app.

Setup Rapportive on Your Gmail
Browse to the Rapportive site (link below), and click install to add it to your browser.  Rapportive currently only supports Firefox and Google Chrome.  In this test, we installed it on Google Chrome.  Notice that Chrome warns Rapportive may access your private data from Gmail, though Rapportive says that they only use this data securely on your computer or their servers. Next time you log into Gmail, open a message to see the new Rapportive sidebar.  Click Log in to get started. Choose if you want to let Rapportive access your data. Finally, choose whether to stay logged into Rapportive or to log out when you log out of Gmail.

Using Rapportive
Now, when you open an email, you should see more information about your contact on the right side of the message, where you usually see Google AdSense ads. You may see an avatar, short bio, and links to their social networks.  You can also add notes about a contact, which lets you use Rapportive as a CRM. You may see more information on some contacts.  Here we see a contact that shows recent Tweets and links to several social networks.

Take Rapportive Further
You can add more features to Rapportive with Raplets, which are small extensions that add more information or CRM functionality.  To add these, click the Rapportive button on the top of Gmail, and select Add Raplets to Rapportive. Find a Raplet you want, and click Add This. A popup will open to give you more information about the Raplet; click the Add button at the bottom if you still want it. And, if you wish to close Rapportive without logging out of Gmail, click the Rapportive link in Gmail and select Log out.

Conclusion
Whether you want to find out more about your contacts or keep track of notes about them, Rapportive is a great way to do this from Gmail.  With tools like this, Gmail gets a bit more powerful and feels more like a desktop application. If you would like this type of functionality in Outlook, check out our article on how to power up Outlook's search and contacts with Xobni.

Add Rapportive to Gmail

    Read the article

  • BIP BIServer Query Debug

    - by Tim Dexter
With some help from Bryan, I have uncovered a way of being able to debug, or at least log, what BIServer is doing when BIP sends it a query request. This is not for those of you querying the database directly, but for those using the BIServer and its datamodel to fetch data for a BIP report. If you have written or used the query builder against BIServer, and when you run the report it chokes with a cryptic message that you have no clue about, read on.

When BIP runs a piece of BIServer logical SQL to fetch data, it does not appear to validate it; it just passes it through. So what is BIServer doing on its end? As you may know, you are not writing regular physical SQL, it's actually logical SQL, e.g.

select Jobs."Job Title" as "Job Title",
 Employees."Last Name" as "Last Name",
 Employees.Salary as Salary,
 Locations."Department Name" as "Department Name",
 Locations."Country Name" as "Country Name",
 Locations."Region Name" as "Region Name"
from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs

The tables might not even be physical tables; we don't care, that's what the BIServer and its model are for. You have put all the effort into building the model, just go get me the data from wherever it might be. The BIServer takes the logical SQL and uses its vast brain to work out what the physical SQL is, executes it and passes the result back to BIP.

select distinct T32556.JOB_TITLE as c1,
 T32543.LAST_NAME as c2,
 T32543.SALARY as c3,
 T32537.DEPARTMENT_NAME as c4,
 T32532.COUNTRY_NAME as c5,
 T32577.REGION_NAME as c6
from JOBS T32556, REGIONS T32577, COUNTRIES T32532, LOCATIONS T32569, DEPARTMENTS T32537, EMPLOYEES T32543
where ( T32532.COUNTRY_ID = T32569.COUNTRY_ID
 and T32532.REGION_ID = T32577.REGION_ID
 and T32537.DEPARTMENT_ID = T32543.DEPARTMENT_ID
 and T32537.LOCATION_ID = T32569.LOCATION_ID
 and T32543.JOB_ID = T32556.JOB_ID )

Not a very tough example I know, but you get the idea. How do I know what the BIServer is up to? How can I find out what the issue might be if BIServer chokes on my query? There are a couple of steps:

1. In the Administrator tool you need to set the logging level for the Administrator user to something greater than the default '0'. '7' is going to give you the max. Just remember to take it back down after you have finished the debug. I needed to bounce my BIServer service.
2. Now here's the secret sauce. Prefix the following to your BIP query to set the log level to the one you have in the admin tool:

set variable LOGLEVEL = 7;

Now run your BIP report. With the prefix in place, BIServer will write to the NQQuery.log file. This is located in the ./OracleBI/server/Log directory. In there you are going to find the complete process the BIServer has gone through to try and get the data back for you.

A quick note: if the BIServer can, it's going to hit that great BIEE cache to get your data and you may not see the full log. If this is the case, get into the Administration page (via the browser login) and clear out your BIP report cursor. Then re-run.

This will hopefully help out if you are trying to debug that annoying BIP report that will not run or is getting some strange data. Don't forget to turn that logging level back down once you are done. This will avoid the DBA screaming at you for sucking up all the disk space on the system.
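To make the prefix concrete, here is the example logical SQL from earlier with the logging switch in place, which is what you would send from the BIP data model:

set variable LOGLEVEL = 7;
select Jobs."Job Title" as "Job Title",
 Employees."Last Name" as "Last Name",
 Employees.Salary as Salary,
 Locations."Department Name" as "Department Name",
 Locations."Country Name" as "Country Name",
 Locations."Region Name" as "Region Name"
from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs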

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
This is an extension to Part 3 in the IPASBR series, see also:

Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service
Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer
Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

In Part 3 I said “there isn't actually a .NET requirement here”, and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC Website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5. The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; request gets relayed to the on-premise service.

In Node.js the authentication step looks like this:

var options = {
    host: acs.namespace() + '-sb.accesscontrol.windows.net',
    path: '/WRAPv0.9/',
    method: 'POST'
};
var values = {
    wrap_name: acs.issuerName(),
    wrap_password: acs.issuerSecret(),
    wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/'
};
var req = https.request(options, function (res) {
    console.log("statusCode: ", res.statusCode);
    console.log("headers: ", res.headers);
    res.on('data', function (d) {
        var token = qs.parse(d.toString('utf8'));
        callback(token.wrap_access_token);
    });
});
req.write(qs.stringify(values));
req.end();

Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call:

token = 'WRAP access_token=\"' + swt + '\"';
//...
var reqHeaders = {
    Authorization: token
};
var options = {
    host: acs.namespace() + '.servicebus.windows.net',
    path: '/rest/reverse?string=' + requestUrl.query.string,
    headers: reqHeaders
};
var req = https.request(options, function (res) {
    console.log("statusCode: ", res.statusCode);
    console.log("headers: ", res.headers);
    response.writeHead(res.statusCode, res.headers);
    res.on('data', function (d) {
        var reversed = d.toString('utf8')
        console.log('svc returned: ' + d.toString('utf8'));
        response.end(reversed);
    });
});
req.end();

Running the sample

Usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. Build and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay which it forwards:

    Read the article

  • 5.1 surround sound on Acer Aspire 5738ZG with Ubuntu 11.10

    - by kbargais_LV
I have a problem with sound. I have tried everything but got no results. :( I have 3 sound ports. My daemon.conf:

# This file is part of PulseAudio.
#
# PulseAudio is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# PulseAudio is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with PulseAudio; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA.

## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for
## more information. Default values are commented out. Use either ; or # for
## commenting.

; daemonize = no
; fail = yes
; allow-module-loading = yes
; allow-exit = yes
; use-pid-file = yes
; system-instance = no
; local-server-type = user
; enable-shm = yes
; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
; lock-memory = no
; cpu-limit = no

; high-priority = yes
; nice-level = -11

; realtime-scheduling = yes
; realtime-priority = 5

; exit-idle-time = 20
; scache-idle-time = 20

; dl-search-path = (depends on architecture)

; load-default-script-file = yes
; default-script-file = /etc/pulse/default.pa

; log-target = auto
; log-level = notice
; log-meta = no
; log-time = no
; log-backtrace = 0

resample-method = speex-float-1
; enable-remixing = yes
; enable-lfe-remixing = no

flat-volumes = no

; rlimit-fsize = -1
; rlimit-data = -1
; rlimit-stack = -1
; rlimit-core = -1
; rlimit-as = -1
; rlimit-rss = -1
; rlimit-nproc = -1
; rlimit-nofile = 256
; rlimit-memlock = -1
; rlimit-locks = -1
; rlimit-sigpending = -1
; rlimit-msgqueue = -1
; rlimit-nice = 31
; rlimit-rtprio = 9
; rlimit-rttime = 1000000

; default-sample-format = s16le
; default-sample-rate = 44100
; default-sample-channels = 6
; default-channel-map = front-left,front-right

default-fragments = 8
default-fragment-size-msec = 10

; enable-deferred-volume = yes
; deferred-volume-safety-margin-usec = 8000
; deferred-volume-extra-delay-usec = 0
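The 5.1-relevant settings in that file are still commented out with ';', so PulseAudio falls back to its stereo defaults. A first thing to try (my suggestion, not from the original post, and the exact map depends on your hardware) is to uncomment and set these two lines in /etc/pulse/daemon.conf, then restart PulseAudio with pulseaudio -k:

default-sample-channels = 6
default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe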

    Read the article

  • Move a sphere along the swipe?

    - by gameOne
I am trying to get a sphere to curl based on the swipe. I know this has been asked many times, but it's still yearning to be answered. I have managed to add force in the direction of the swipe, and it works near perfect. I also have all the swipe positions stored in a list. Now I would like to know how the curl can be achieved. I believe the curve in the swipe can be calculated by the vector dot product. If theta is 0, then there is no need to add the curl; if it is not, then add the curl. Maybe this condition is redundant once I find out how to curl the sphere along the swipe positions.

The code that adds the force to the sphere based on the swipe direction is as below:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class SwipeControl : MonoBehaviour
{
    // First establish some variables
    private Vector3 fp;   // First finger position
    private Vector3 lp;   // Last finger position
    private Vector3 ip;   // Some intermediate finger position
    private float dragDistance;   // Distance needed for a swipe to register
    public float power;
    private Vector3 footballPos;
    private bool canShoot = true;
    private bool isKicked = false;   // declaration was missing from the original listing
    private float factor = 40f;
    private List<Vector3> touchPositions = new List<Vector3>();

    void Start()
    {
        dragDistance = Screen.height * 20 / 100;
        Physics.gravity = new Vector3(0, -20, 0);
        footballPos = transform.position;
    }

    // Update is called once per frame
    void Update()
    {
        // Examine the touch inputs
        foreach (Touch touch in Input.touches)
        {
            /*if (touch.phase == TouchPhase.Began)
            {
                fp = touch.position;
                lp = touch.position;
            }*/

            if (touch.phase == TouchPhase.Moved)
            {
                touchPositions.Add(touch.position);
            }

            if (touch.phase == TouchPhase.Ended)
            {
                fp = touchPositions[0];
                lp = touchPositions[touchPositions.Count - 1];
                ip = touchPositions[touchPositions.Count / 2];

                // First check if it's actually a drag
                if (Mathf.Abs(lp.x - fp.x) > dragDistance || Mathf.Abs(lp.y - fp.y) > dragDistance)
                {
                    // It's a drag. Now check what direction the drag was; first check which axis
                    if (Mathf.Abs(lp.x - fp.x) > Mathf.Abs(lp.y - fp.y))
                    {
                        // If the horizontal movement is greater than the vertical movement...
                        if ((lp.x > fp.x) && canShoot)   // If the movement was to the right
                        {
                            // Right move
                            float x = (lp.x - fp.x) / Screen.height * factor;
                            rigidbody.AddForce((new Vector3(x, 10, 16)) * power);
                            Debug.Log("right " + (lp.x - fp.x));   // MOVE RIGHT CODE HERE
                            canShoot = false;
                            //rigidbody.AddForce((new Vector3((lp.x-fp.x)/30,10,16))*power);
                            StartCoroutine(ReturnBall());
                        }
                        else
                        {
                            // Left move
                            float x = (lp.x - fp.x) / Screen.height * factor;
                            rigidbody.AddForce((new Vector3(x, 10, 16)) * power);
                            Debug.Log("left " + (lp.x - fp.x));   // MOVE LEFT CODE HERE
                            canShoot = false;
                            //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power);
                            StartCoroutine(ReturnBall());
                        }
                    }
                    else
                    {
                        // The vertical movement is greater than the horizontal movement
                        if (lp.y > fp.y)   // If the movement was up
                        {
                            // Up move
                            float y = (lp.y - fp.y) / Screen.height * factor;
                            float x = (lp.x - fp.x) / Screen.height * factor;
                            rigidbody.AddForce((new Vector3(x, y, 16)) * power);
                            Debug.Log("up " + (lp.x - fp.x));   // MOVE UP CODE HERE
                            canShoot = false;
                            //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power);
                            StartCoroutine(ReturnBall());
                        }
                        else
                        {
                            // Down move
                            Debug.Log("down " + lp + " " + fp);   // MOVE DOWN CODE HERE
                        }
                    }
                }
                else
                {
                    // It's a tap
                    Debug.Log("none");   // TAP CODE HERE
                }
            }
        }
    }

    IEnumerator ReturnBall()
    {
        yield return new WaitForSeconds(5.0f);
        rigidbody.velocity = Vector3.zero;
        rigidbody.angularVelocity = Vector3.zero;
        transform.position = footballPos;
        canShoot = true;
        isKicked = false;
    }
}
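For the curl itself, one approach (a sketch of mine, not from the question - ComputeCurl, swipeCurl and curlStrength are made-up names) is to measure how far the stored midpoint ip sits off the straight line from fp to lp, treat that as a signed curvature, and then keep applying a small sideways force every physics step while the ball is in flight; the constant lateral push is what bends the trajectory, Magnus-style. Call ComputeCurl() in Update() right after ip is assigned, alongside the existing AddForce calls:

// Sketch only, meant to be merged into the SwipeControl class above.
private float swipeCurl;           // signed curvature of the last swipe
public float curlStrength = 0.5f;  // tuning knob, exposed in the inspector

void ComputeCurl()
{
    // 2D cross product of the chord fp->lp with fp->ip gives a signed value:
    // positive when the midpoint is left of the chord, negative when right.
    Vector2 chord = lp - fp;
    Vector2 toMid = ip - fp;
    float cross = chord.x * toMid.y - chord.y * toMid.x;
    // Normalise so the value is roughly resolution-independent.
    swipeCurl = cross / (chord.magnitude * Screen.height);
}

void FixedUpdate()
{
    // While the ball is in flight, push it sideways relative to its velocity;
    // a steady perpendicular force is what produces the visible curl.
    if (!canShoot && rigidbody.velocity.sqrMagnitude > 0.01f)
    {
        Vector3 side = Vector3.Cross(rigidbody.velocity, Vector3.up).normalized;
        rigidbody.AddForce(side * swipeCurl * curlStrength * power, ForceMode.Force);
    }
}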

    Read the article

  • vmware player xorg driver broke after ubuntu 12.04 LTS automatic system update on 06/07/2014

    - by user291828
I am running Ubuntu 12.04 LTS as a virtual machine on a Windows 7 host using vmware player 6.0.2. On Sat. 06/07/2014, Ubuntu performed an automatic system update as it does regularly; however, after reboot the display driver seems to be broken, which causes the screen to split into many identical panels. This pretty much rendered the VM unusable. Below is the log entry for that update from /var/log/apt/history.log. It appears that at least one of these updates has caused this problem. The exact same problem has been reported on the vmware forum as well, see https://communities.vmware.com/message/2388776#2388776 So far I have not been able to find a solution.

Content in /var/log/apt/history.log:

Start-Date: 2014-06-07 15:04:46
Commandline: aptdaemon role='role-commit-packages' sender=':1.65'
Install: linux-headers-3.2.0-64:amd64 (3.2.0-64.97), linux-image-3.2.0-64-generic:amd64 (3.2.0-64.97), linux-headers-3.2.0-64-generic:amd64 (3.2.0-64.97)
Upgrade: iproute:amd64 (20111117-1ubuntu2.1, 20111117-1ubuntu2.3), libknewstuff3-4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdeclarative5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libnepomukquery4a:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libgnutls-openssl27:amd64 (2.12.14-5ubuntu3.7, 2.12.14-5ubuntu3.8), libthreadweaver4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdecore5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libnepomukutils4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libktexteditor4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), linux-generic:amd64 (3.2.0.63.75, 3.2.0.64.76), libkmediaplayer4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkrosscore4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libgnutls26:amd64 (2.12.14-5ubuntu3.7, 2.12.14-5ubuntu3.8), libgnutls26:i386 (2.12.14-5ubuntu3.7, 2.12.14-5ubuntu3.8), libsolid4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libnepomuk4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdnssd4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkparts4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), kdoctools:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libssl-dev:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), libssl-doc:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), libkidletime4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), linux-headers-generic:amd64 (3.2.0.63.75, 3.2.0.64.76), linux-image-generic:amd64 (3.2.0.63.75, 3.2.0.64.76), libkcmutils4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkfile4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkpty4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkntlm4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libplasma3:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkemoticons4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), kdelibs-bin:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdewebkit5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkjsembed4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkio5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkjsapi4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), openssl:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), kdelibs5-data:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), linux-libc-dev:amd64 (3.2.0-63.95, 3.2.0-64.97), libkde3support4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libknotifyconfig4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), kdelibs5-plugins:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkhtml5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdeui5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdesu5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libssl1.0.0:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), libssl1.0.0:i386 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14)
End-Date: 2014-06-07 15:09:08
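Two things follow from that log (my suggestions, not from the post): the previous kernel is still installed alongside the new 3.2.0-64 one, so booting it isolates whether the kernel update broke the display driver; and if it did, rebuilding the VMware Tools kernel modules against the new kernel is the usual repair. A sketch, assuming the guest runs VMware Tools rather than open-vm-tools:

# 1. At the GRUB menu (hold Shift during boot), choose "Previous Linux versions"
#    and boot linux-image-3.2.0-63-generic to confirm the new kernel is at fault.
# 2. Back on the 3.2.0-64 kernel, rebuild the VMware Tools modules:
sudo vmware-config-tools.pl --default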

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
What are best-practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?

For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine.

My code might then look something like the following:

class Transformer {
    void Process(InputSource input) {
        try {
            var inRecords = _parser.Parse(input.Stream);
            var outRecords = _pipeline.Transform(inRecords);
        } catch (Exception ex) {
            var inner = new ProcessException(input, ex);
            _logger.Error("Unable to parse source " + input.Name, inner);
            throw inner;
        }
    }
}

class Pipeline {
    IEnumerable<Result> Transform(IEnumerable<Record> records) {
        // NOTE: no try/catch as I have no useful information to provide
        // at this point in the process
        var results = _algorithm.Process(records);
        // examine and do useful things with results
        return results;
    }
}

class Algorithm {
    IEnumerable<Result> Process(IEnumerable<Record> records) {
        var results = new List<Result>();
        foreach (var engine in Engines) {
            foreach (var record in records) {
                try {
                    results.Add(engine.Process(record)); // collect the engine output
                } catch (Exception ex) {
                    var inner = new EngineProcessingException(engine, record, ex);
                    _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                    throw inner;
                }
            }
        }
        return results; // was missing from the original listing
    }
}

class Engine {
    Result Process(Record record) {
        for (int i = 0; i < record.SubRecords.Count; ++i) {
            try {
                Validate(record.SubRecords[i]);
            } catch (Exception ex) {
                var inner = new RecordValidationException(record, i, ex);
                _logger.Error("Validation of subrecord {0} failed for record {1}", i, record);
                throw inner; // rethrow so the levels above can wrap it
            }
        }
        return new Result(record); // illustrative: return something once validation passes
    }
}

There are a few important things to notice:

A single error at the deepest level causes three log entries (ugly? DOS?)
Thrown exceptions contain all important and useful information
Logging only happens when failure to do so would cause loss of useful information at a lower level.

Thoughts and concerns:

I don't like having so many log entries for each error
I don't want to lose important, useful data; the exceptions contain all the important data, but the stacktrace is typically the only thing displayed besides the message.
I can log at different levels (e.g., warning, informational)
The higher level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced).
The information available at higher levels should not be passed to the lower levels.

So, to restate the main questions:

What are best-practices for logging deep within an application's source?
Is it bad practice to have multiple event log entries for a single error?
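One pattern that addresses the duplicate-entry concern (a sketch, not from the question): let each layer wrap and rethrow without logging, and log exactly once at the outermost boundary; the InnerException chain then carries the engine- and record-level detail into that single entry.

// Sketch: a single log call at the boundary; lower layers only wrap-and-rethrow.
try
{
    _transformer.Process(input);
}
catch (ProcessException ex)
{
    // ex.InnerException (and its own inner exceptions) still hold the
    // EngineProcessingException / RecordValidationException details, so one
    // entry captures the whole story without three separate log writes.
    _logger.Error("Processing failed for source " + input.Name, ex);
}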

    Read the article


  • Unable to install Wordpress on Amazon's EC2 instance due to missing php-mbstring

    - by alexus
I've created a new instance on Amazon's EC2 and I'm trying to install wordpress, but it's failing due to php-mbstring:

# yum install wordpress
Loaded plugins: amazon-id, rhui-lb
Resolving Dependencies
--> Running transaction check
---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
--> Processing Dependency: php-simplepie >= 1.3.1 for package: wordpress-3.9.1-1.el7.noarch
--> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
--> Processing Dependency: php-gd for package: wordpress-3.9.1-1.el7.noarch
--> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
--> Processing Dependency: php-PHPMailer for package: wordpress-3.9.1-1.el7.noarch
--> Running transaction check
---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
--> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
---> Package php-gd.x86_64 0:5.4.16-21.el7 will be installed
--> Processing Dependency: libpng15.so.15(PNG15_0)(64bit) for package: php-gd-5.4.16-21.el7.x86_64
--> Processing Dependency: libt1.so.5()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
--> Processing Dependency: libpng15.so.15()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
--> Processing Dependency: libXpm.so.4()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
--> Processing Dependency: libX11.so.6()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
--> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
--> Processing Dependency: php-IDNA_Convert for package: php-simplepie-1.3.1-4.el7.noarch
---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
--> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
--> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
--> Running transaction check
---> Package libX11.x86_64 0:1.6.0-2.1.el7 will be installed
--> Processing Dependency: libX11-common = 1.6.0-2.1.el7 for package: libX11-1.6.0-2.1.el7.x86_64
--> Processing Dependency: libxcb.so.1()(64bit) for package: libX11-1.6.0-2.1.el7.x86_64
---> Package libXpm.x86_64 0:3.5.10-5.1.el7 will be installed
---> Package libpng.x86_64 2:1.5.13-5.el7 will be installed
---> Package php-IDNA_Convert.noarch 0:0.8.0-2.el7 will be installed
--> Processing Dependency: php-mbstring for package: php-IDNA_Convert-0.8.0-2.el7.noarch
---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
--> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
--> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
---> Package t1lib.x86_64 0:5.1.2-14.el7 will be installed
---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
--> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
--> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
--> Running transaction check
---> Package libX11-common.noarch 0:1.6.0-2.1.el7 will be installed
---> Package libxcb.x86_64 0:1.9-5.el7 will be installed
--> Processing Dependency: libXau.so.6()(64bit) for package: libxcb-1.9-5.el7.x86_64
---> Package php-IDNA_Convert.noarch 0:0.8.0-2.el7 will be installed
--> Processing Dependency: php-mbstring for package: php-IDNA_Convert-0.8.0-2.el7.noarch
---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
--> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
--> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
--> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
--> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
--> Running transaction check
---> Package libXau.x86_64 0:1.0.8-2.1.el7 will be installed
---> Package php-IDNA_Convert.noarch 0:0.8.0-2.el7 will be installed
--> Processing Dependency: php-mbstring for package: php-IDNA_Convert-0.8.0-2.el7.noarch
---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
--> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
--> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
--> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
--> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
--> Finished Dependency Resolution
Error: Package: php-PHPMailer-5.2.6-1.el7.noarch (epel)
           Requires: php-mbstring >= 5.1.0
Error: Package: php-IDNA_Convert-0.8.0-2.el7.noarch (epel)
           Requires: php-mbstring
Error: Package: wordpress-3.9.1-1.el7.noarch (epel)
           Requires: php-mbstring
Error: Package: php-simplepie-1.3.1-4.el7.noarch (epel)
           Requires: php-mbstring
Error: Package: wordpress-3.9.1-1.el7.noarch (epel)
           Requires: php-enchant
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

I'm using RHEL7:

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)

# yum repolist
Loaded plugins: amazon-id, rhui-lb
repo id                                          repo name                                                        status
epel/x86_64                                      Extra Packages for Enterprise Linux 7 - x86_64                    4,325
rhui-REGION-client-config-server-7/x86_64        Red Hat Update Infrastructure 2.0 Client Configuration Server 7       1
rhui-REGION-rhel-server-releases/7Server/x86_64  Red Hat Enterprise Linux Server 7 (RPMs)                          4,447
repolist: 8,773

A while back, in another environment, I had to run the following command first in order to get access to php-mbstring:

rhn-channel --add --channel=rhel-x86_64-server-optional-6

How do you do that on Amazon EC2?
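For what it's worth, the RHUI equivalent of that rhn-channel command is to enable the optional repo with yum-config-manager (my suggestion, hedged: the repo id below is how the optional channel is commonly named on RHEL 7 RHUI setups, so confirm it against yum repolist all first):

sudo yum install -y yum-utils      # provides yum-config-manager
sudo yum-config-manager --enable rhui-REGION-rhel-server-optional
sudo yum install wordpress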

    Read the article

  • ERR_INCOMPLETE_CHUNKED_ENCODING apache 2.4

    - by Bujanca Mihai
    I upgraded my Ubuntu server to 14.04 and Apache 2.4.7. Now my images don't load and console yields net::ERR_INCOMPLETE_CHUNKED_ENCODING. Also, I can sometimes see some of the images load for a little while (1 sec max) and then they disappear. .htaccess RewriteEngine On # Serve the favicon file from img folder RewriteCond %{REQUEST_URI} ^/favicon.ico$ RewriteRule ^(.*)$ /img/$1 [NC,L] # Redirect HTTP traffic to WWW subdomain RewriteCond %{HTTPS} off [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] # Redirect HTTPS traffic to WWW subdomain RewriteCond %{HTTPS} on [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L] # Auto Versioning rules RewriteCond %{REQUEST_FILENAME} !-s RewriteRule ^(.*)\.[\d]+\.(css|js)$ $1.$2 [L] # Default Zend rewrite rules RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ index.php [NC,L] VHost <VirtualHost *:80> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD-access.log combined </VirtualHost> <IfModule mod_ssl.c> <VirtualHost *:443> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-ssl-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. 
#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. # o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. 
#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire #<FilesMatch "\.(cgi|shtml|phtml|php)$"> # SSLOptions +StdEnvVars #</FilesMatch> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. #BrowserMatch ".*MSIE.*" \ # nokeepalive ssl-unclean-shutdown \ # downgrade-1.0 force-response-1.0 </VirtualHost> </IfModule> logs Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.3 OpenSSL/1.0.1f (internal dummy connection) 127.0.0.1 - - [25/Aug/2014:13:09:53 +0300] "GET /img/header/top-nav-separator.png HTTP/1.1" 200 462 "https://localhost/art" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36"
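A quick way to take the browser out of the loop (my suggestion, not from the original post) is to fetch one of the failing images with curl and see where the body stops; the path comes from the access log above, and -k is needed because the vhost uses the snakeoil certificate:

curl -vk --compressed -o /tmp/test.png https://localhost/img/header/top-nav-separator.png
# Compare the bytes received against the Content-Length / chunk sizes in the
# verbose output; a mismatch confirms the truncation happens server-side.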

    Read the article

  • ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

    - by exxoid
    I have a Ubuntu box with a ProFTPD 1.3.4a Server, when I try to log in via my FTP Client I cannot do anything as it does not allow me to list directories; I have tried logging in as root and as a regular user and tried accessing different paths within the FTP Server. The error I get in my FTP Client is: Status: Retrieving directory listing... Command: CDUP Response: 250 CDUP command successful Command: PWD Response: 257 "/var" is the current directory Command: PASV Response: 227 Entering Passive Mode (172,16,4,22,237,205). Command: MLSD Response: 550 Access is denied. Error: Failed to retrieve directory listing Any idea? Here is the config of my proftpd: # # /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file. # To really apply changes, reload proftpd after modifications, if # it runs in daemon mode. It is not required in inetd/xinetd mode. # # Includes DSO modules Include /etc/proftpd/modules.conf # Set off to disable IPv6 support which is annoying on IPv4 only boxes. UseIPv6 off # If set on you can experience a longer connection delay in many cases. IdentLookups off ServerName "Drupal Intranet" ServerType standalone ServerIdent on "FTP Server ready" DeferWelcome on # Set the user and group that the server runs as User nobody Group nogroup MultilineRFC2228 on DefaultServer on ShowSymlinks on TimeoutNoTransfer 600 TimeoutStalled 600 TimeoutIdle 1200 DisplayLogin welcome.msg DisplayChdir .message true ListOptions "-l" DenyFilter \*.*/ # Use this to jail all users in their homes # DefaultRoot ~ # Users require a valid shell listed in /etc/shells to login. # Use this directive to release that constrain. # RequireValidShell off # Port 21 is the standard FTP port. Port 21 # In some cases you have to specify passive ports range to by-pass # firewall limitations. Ephemeral ports can be used for that, but # feel free to use a more narrow range. # PassivePorts 49152 65534 # If your host was NATted, this option is useful in order to # allow passive tranfers to work. You have to use your public # address and opening the passive ports used on your firewall as well. # MasqueradeAddress 1.2.3.4 # This is useful for masquerading address with dynamic IPs: # refresh any configured MasqueradeAddress directives every 8 hours <IfModule mod_dynmasq.c> # DynMasqRefresh 28800 </IfModule> # To prevent DoS attacks, set the maximum number of child processes # to 30. If you need to allow more than 30 concurrent connections # at once, simply increase this value. Note that this ONLY works # in standalone mode, in inetd mode you should use an inetd server # that allows you to limit maximum number of processes per service # (such as xinetd) MaxInstances 30 # Set the user and group that the server normally runs at. # Umask 022 is a good standard umask to prevent new files and dirs # (second parm) from being group and world writable. Umask 022 022 # Normally, we want files to be overwriteable. AllowOverwrite on # Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords: # PersistentPasswd off # This is required to use both PAM-based authentication and local passwords AuthPAMConfig proftpd AuthOrder mod_auth_pam.c* mod_auth_unix.c # Be warned: use of this directive impacts CPU average load! # Uncomment this if you like to see progress and transfer rate with ftpwho # in downloads. That is not needed for uploads rates. 
# UseSendFile off TransferLog /var/log/proftpd/xferlog SystemLog /var/log/proftpd/proftpd.log # Logging onto /var/log/lastlog is enabled but set to off by default #UseLastlog on # In order to keep log file dates consistent after chroot, use timezone info # from /etc/localtime. If this is not set, and proftpd is configured to # chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight # savings timezone regardless of whether DST is in effect. #SetEnv TZ :/etc/localtime <IfModule mod_quotatab.c> QuotaEngine off </IfModule> <IfModule mod_ratio.c> Ratios off </IfModule> # Delay engine reduces impact of the so-called Timing Attack described in # http://www.securityfocus.com/bid/11430/discuss # It is on by default. <IfModule mod_delay.c> DelayEngine on </IfModule> <IfModule mod_ctrls.c> ControlsEngine off ControlsMaxClients 2 ControlsLog /var/log/proftpd/controls.log ControlsInterval 5 ControlsSocket /var/run/proftpd/proftpd.sock </IfModule> <IfModule mod_ctrls_admin.c> AdminControlsEngine off </IfModule> # # Alternative authentication frameworks # #Include /etc/proftpd/ldap.conf #Include /etc/proftpd/sql.conf # # This is used for FTPS connections # #Include /etc/proftpd/tls.conf # # Useful to keep VirtualHost/VirtualRoot directives separated # #Include /etc/proftpd/virtuals.con # A basic anonymous configuration, no upload directories. # <Anonymous ~ftp> # User ftp # Group nogroup # # We want clients to be able to login with "anonymous" as well as "ftp" # UserAlias anonymous ftp # # Cosmetic changes, all files belongs to ftp user # DirFakeUser on ftp # DirFakeGroup on ftp # # RequireValidShell off # # # Limit the maximum number of anonymous logins # MaxClients 10 # # # We want 'welcome.msg' displayed at login, and '.message' displayed # # in each newly chdired directory. # DisplayLogin welcome.msg # DisplayChdir .message # # # Limit WRITE everywhere in the anonymous chroot # <Directory *> # <Limit WRITE> # DenyAll # </Limit> # </Directory> # # # Uncomment this if you're brave. # # <Directory incoming> # # # Umask 022 is a good standard umask to prevent new files and dirs # # # (second parm) from being group and world writable. # # Umask 022 022 # # <Limit READ WRITE> # # DenyAll # # </Limit> # # <Limit STOR> # # AllowAll # # </Limit> # # </Directory> # # </Anonymous> # Include other custom configuration files Include /etc/proftpd/conf.d/ UseReverseDNS off <Global> RootLogin on UseFtpUsers on ServerIdent on DefaultChdir /var/www DeleteAbortedStores on LoginPasswordPrompt on AccessGrantMsg "You have been authenticated successfully." </Global> Any idea what could be wrong? Thanks for your help!
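One way to see why MLSD draws a 550 here (my suggestion, not from the post) is to stop the service and run proftpd in the foreground with debugging turned all the way up, then repeat the login from the client and read the trace around the MLSD command:

sudo service proftpd stop
sudo proftpd --nodaemon --debug 10   # stay in the foreground and log at max debug level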

    Read the article

  • error initializing multiple configuration files

    - by lurscher
    Hi. During initialization on startup under Tomcat, the configuration is as follows:

    1) a webapp/WEB-INF/web.xml that imports yummy-servlet.xml in contextConfigLocation (although I'm aware this is not required: since the servlet name is yummy, it will try to load yummy-servlet.xml by default)
    2) a webapp/WEB-INF/yummy-servlet.xml that imports a spring/applicationContext-hibernate.xml file
    3) a webapp/WEB-INF/spring/applicationContext-hibernate.xml that imports an applicationContext-dataSource.xml file
    4) a webapp/WEB-INF/spring/applicationContext-dataSource.xml

    I'm getting errors about "Failed to import bean definitions from relative location", but the stack trace is not very explicit about what exactly the problem is. I've been looking at these files since yesterday and I really don't see any problem with them.

    My web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
  <display-name>hello-spring3-RC1</display-name>

  <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/yummy-servlet.xml</param-value>
  </context-param>

  <!--
  <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/spring/applicationContext-hibernate.xml</param-value>
  </context-param>
  -->

  <!-- Location of the Log4J config file, for initialization and refresh checks.
       Applied by Log4jConfigListener. -->
  <context-param>
    <param-name>log4jConfigLocation</param-name>
    <param-value>classpath:log4j.properties</param-value>
  </context-param>

  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>

  <servlet>
    <servlet-name>yummy</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>

  <servlet-mapping>
    <servlet-name>yummy</servlet-name>
    <url-pattern>*.html</url-pattern>
  </servlet-mapping>

  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
  </welcome-file-list>
</web-app>

    My yummy-servlet.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

  <import resource="spring/applicationContext-hibernate.xml"/>

  <context:component-scan base-package="com.mine.web.controllers"/>

  <bean id="jspViewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/>
    <property name="prefix" value="/WEB-INF/jsp/"/>
    <property name="suffix" value=".jsp"/>
  </bean>
</beans>

    My applicationContext-hibernate.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">

  <!-- import the dataSource definition -->
  <import resource="applicationContext-dataSource.xml"/>

  <!-- Configurer that replaces ${...} placeholders with values from a properties file -->
  <!-- (in this case, Hibernate-related settings for the sessionFactory definition below) -->
  <context:property-placeholder location="classpath:jdbc.properties"/>
  <context:property-placeholder location="classpath:hibernate.properties"/>

  <!-- Hibernate SessionFactory -->
  <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"
        p:dataSource-ref="dataSource" p:mappingResources="hello.hbm.xml">
    <property name="hibernateProperties">
      <props>
        <prop key="hibernate.dialect">${hibernate.dialect}</prop>
        <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
        <prop key="hibernate.generate_statistics">${hibernate.generate_statistics}</prop>
      </props>
    </property>
    <property name="eventListeners">
      <map>
        <entry key="merge">
          <bean class="org.springframework.orm.hibernate3.support.IdTransferringMergeEventListener"/>
        </entry>
      </map>
    </property>
  </bean>

  <bean id="hibernateTemplate" class="org.springframework.orm.hibernate3.HibernateTemplate">
    <property name="sessionFactory" ref="sessionFactory"/>
  </bean>

  <!-- Transaction manager for a single Hibernate SessionFactory (alternative to JTA) -->
  <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"
        p:sessionFactory-ref="sessionFactory"/>

  <!-- ========================= BUSINESS OBJECT DEFINITIONS ========================= -->

  <!-- Activates various annotations to be detected in bean classes: Spring's
       @Required and @Autowired, as well as JSR 250's @Resource. -->
  <context:annotation-config/>

  <!-- Instruct Spring to perform declarative transaction management
       automatically on annotated classes. -->
  <tx:annotation-driven transaction-manager="transactionManager"/>

  <bean id="EntityManager" class="com.mine.persistence.hibernate.HibernateHelloWorldDao"/>
</beans>

    And my applicationContext-dataSource.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:jdbc="http://www.springframework.org/schema/jdbc"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
                           http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd">

  <!-- Configurer that replaces ${...} placeholders with values from a properties file -->
  <!-- (in this case, JDBC-related settings for the dataSource definition below) -->
  <context:property-placeholder location="classpath:jdbc.properties"/>
  <context:property-placeholder location="classpath:hibernate.properties"/>

  <!-- data source using Apache Commons DBCP pool manager
  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
        p:driverClassName="${jdbc.driverClassName}" p:url="${jdbc.url}"
        p:username="${jdbc.username}" p:password="${jdbc.password}"/>
  -->

  <!-- c3p0 pool manager data source -->
  <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
    <property name="driverClass" value="${jdbc.driverClassName}"/>
    <property name="jdbcUrl" value="${jdbc.url}"/>
    <property name="user" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
    <property name="initialPoolSize" value="${hibernate.c3p0.min_size}"/>
    <property name="minPoolSize" value="${hibernate.c3p0.min_size}"/>
    <property name="maxPoolSize" value="${jdbc.maxconn}"/>
    <property name="idleConnectionTestPeriod" value="150"/>
    <property name="acquireIncrement" value="1"/>
    <property name="maxStatements" value="0"/>
    <property name="numHelperThreads" value="5"/>
  </bean>
</beans>

    And this is the stack trace:

2010-06-13 12:16:33,526 INFO [org.springframework.web.context.ContextLoader] - <Root WebApplicationContext: initialization started>
2010-06-13 12:16:33,707 INFO [org.springframework.web.context.support.XmlWebApplicationContext] - <Refreshing Root WebApplicationContext: startup date [Sun Jun 13 12:16:33 GMT-05:00 2010]; root of context hierarchy>
2010-06-13 12:16:34,086 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - <Loading XML bean definitions from ServletContext resource [/WEB-INF/yummy-servlet.xml]>
2010-06-13 12:16:34,378 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - <Loading XML bean definitions from URL [jndi:/localhost/protoweb/WEB-INF/spring/applicationContext-hibernate.xml]>
2010-06-13 12:16:34,473 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - <Loading XML bean definitions from URL [jndi:/localhost/protoweb/WEB-INF/spring/applicationContext-dataSource.xml]>
2010-06-13 12:16:35,098 ERROR [org.springframework.web.context.ContextLoader] - <Context initialization failed>
org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from relative location [spring/applicationContext-hibernate.xml]
Offending resource: ServletContext resource [/WEB-INF/yummy-servlet.xml]; nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from URL [jndi:/localhost/protoweb/WEB-INF/spring/applicationContext-hibernate.xml]; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.MutablePropertyValues.add(Ljava/lang/String;Ljava/lang/Object;)Lorg/springframework/beans/MutablePropertyValues;
    at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:68)
    at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85)
    at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:76)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.importBeanDefinitionResource(DefaultBeanDefinitionDocumentReader.java:197)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseDefaultElement(DefaultBeanDefinitionDocumentReader.java:146)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:131)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:91)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:475)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:372)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:316)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:284)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149)
    at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125)
    at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93)
    at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:127)
    at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:429)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:356)
    at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:270)
    at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
    at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3972)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:4467)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
    at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:905)
    at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:740)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:500)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
    at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
    at org.apache.catalina.core.StandardService.start(StandardService.java:519)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from URL [jndi:/localhost/protoweb/WEB-INF/spring/applicationContext-hibernate.xml]; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.MutablePropertyValues.add(Ljava/lang/String;Ljava/lang/Object;)Lorg/springframework/beans/MutablePropertyValues;
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:394)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:316)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:284)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.importBeanDefinitionResource(DefaultBeanDefinitionDocumentReader.java:187)
    ... 42 more
Caused by: java.lang.NoSuchMethodError: org.springframework.beans.MutablePropertyValues.add(Ljava/lang/String;Ljava/lang/Object;)Lorg/springframework/beans/MutablePropertyValues;
    at org.springframework.transaction.config.AnnotationDrivenBeanDefinitionParser.registerTransactionManager(AnnotationDrivenBeanDefinitionParser.java:95)
    at org.springframework.transaction.config.AnnotationDrivenBeanDefinitionParser.access$0(AnnotationDrivenBeanDefinitionParser.java:94)
    at org.springframework.transaction.config.AnnotationDrivenBeanDefinitionParser$AopAutoProxyConfigurer.configureAutoProxyCreator(AnnotationDrivenBeanDefinitionParser.java:121)
    at org.springframework.transaction.config.AnnotationDrivenBeanDefinitionParser.parse(AnnotationDrivenBeanDefinitionParser.java:79)
    at org.springframework.beans.factory.xml.NamespaceHandlerSupport.parse(NamespaceHandlerSupport.java:72)
    at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1327)
    at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1317)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:134)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:91)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:475)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:372)
    ... 47 more
06/13/2010 12:16:35 PM org.apache.catalina.core.StandardContext start
SEVERE: Error listenerStart
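    The imports themselves appear to be fine: the log shows all four files being found and parsed, and the failure only surfaces once the tx:annotation-driven element is processed. A java.lang.NoSuchMethodError on MutablePropertyValues.add(String, Object) returning MutablePropertyValues is the classic signature of mixed Spring versions on the classpath: that fluent overload was introduced in Spring 3.0, so an older spring-beans JAR is most likely shadowing the 3.0 one while the 3.0 transaction namespace parser calls the new method. A minimal runtime check (a sketch; the class name SpringVersionCheck is made up for illustration, while the Spring and JDK calls it uses are real APIs):

import org.springframework.beans.MutablePropertyValues;
import org.springframework.core.SpringVersion;

public class SpringVersionCheck {
    public static void main(String[] args) {
        // Anything other than 3.0.x here means an old Spring JAR wins on the classpath.
        System.out.println("Spring version: " + SpringVersion.getVersion());

        // Which physical JAR does the offending class actually come from?
        System.out.println(MutablePropertyValues.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}

    If two versions turn up, aligning every spring-* JAR in WEB-INF/lib (and in Tomcat's shared lib directories) to the same 3.0 release is usually the whole fix.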


  • How to compile a C++ source code written for Linux/Unix on Windows Vista (code given)

    - by HTMZ
    I have C++ source code that was written in a Linux/Unix environment by another author. It gives me errors when I compile it in a Windows Vista environment. I am using Bloodshed Dev-C++ v4.9. Please help.

#include <iostream.h>
#include <map>
#include <vector>
#include <string>
#include <string.h>
#include <strstream>
#include <unistd.h>
#include <stdlib.h>

using namespace std;

template <class T> class PrefixSpan {
private:
  vector < vector <T> > transaction;
  vector < pair <T, unsigned int> > pattern;
  unsigned int minsup;
  unsigned int minpat;
  unsigned int maxpat;
  bool all;
  bool where;
  string delimiter;
  bool verbose;
  ostream *os;

  void report (vector <pair <unsigned int, int> > &projected) {
    if (minpat > pattern.size()) return;

    // print where & pattern
    if (where) {
      *os << "<pattern>" << endl;

      // what:
      if (all) {
        *os << "<freq>" << pattern[pattern.size()-1].second << "</freq>" << endl;
        *os << "<what>";
        for (unsigned int i = 0; i < pattern.size(); i++)
          *os << (i ? " " : "") << pattern[i].first;
      } else {
        *os << "<what>";
        for (unsigned int i = 0; i < pattern.size(); i++)
          *os << (i ? " " : "") << pattern[i].first << delimiter << pattern[i].second;
      }
      *os << "</what>" << endl;

      // where
      *os << "<where>";
      for (unsigned int i = 0; i < projected.size(); i++)
        *os << (i ? " " : "") << projected[i].first;
      *os << "</where>" << endl;
      *os << "</pattern>" << endl;
    } else {
      // print found pattern only
      if (all) {
        *os << pattern[pattern.size()-1].second;
        for (unsigned int i = 0; i < pattern.size(); i++)
          *os << " " << pattern[i].first;
      } else {
        for (unsigned int i = 0; i < pattern.size(); i++)
          *os << (i ? " " : "") << pattern[i].first << delimiter << pattern[i].second;
      }
      *os << endl;
    }
  }

  void project (vector <pair <unsigned int, int> > &projected) {
    if (all) report(projected);

    map <T, vector <pair <unsigned int, int> > > counter;
    for (unsigned int i = 0; i < projected.size(); i++) {
      int pos = projected[i].second;
      unsigned int id = projected[i].first;
      unsigned int size = transaction[id].size();
      map <T, int> tmp;
      for (unsigned int j = pos + 1; j < size; j++) {
        T item = transaction[id][j];
        if (tmp.find (item) == tmp.end()) tmp[item] = j;
      }
      for (map <T, int>::iterator k = tmp.begin(); k != tmp.end(); ++k)
        counter[k->first].push_back (make_pair <unsigned int, int> (id, k->second));
    }

    for (map <T, vector <pair <unsigned int, int> > >::iterator l = counter.begin (); l != counter.end (); ) {
      if (l->second.size() < minsup) {
        map <T, vector <pair <unsigned int, int> > >::iterator tmp = l;
        tmp = l;
        ++tmp;
        counter.erase (l);
        l = tmp;
      } else {
        ++l;
      }
    }

    if (!all && counter.size () == 0) {
      report (projected);
      return;
    }

    for (map <T, vector <pair <unsigned int, int> > >::iterator l = counter.begin (); l != counter.end(); ++l) {
      if (pattern.size () < maxpat) {
        pattern.push_back (make_pair <T, unsigned int> (l->first, l->second.size()));
        project (l->second);
        pattern.erase (pattern.end());
      }
    }
  }

public:
  PrefixSpan (unsigned int _minsup = 1, unsigned int _minpat = 1, unsigned int _maxpat = 0xffffffff,
              bool _all = false, bool _where = false, string _delimiter = "/", bool _verbose = false):
    minsup(_minsup), minpat (_minpat), maxpat (_maxpat), all(_all), where(_where),
    delimiter (_delimiter), verbose (_verbose) {};
  ~PrefixSpan () {};

  istream& read (istream &is) {
    string line;
    vector <T> tmp;
    T item;
    while (getline (is, line)) {
      tmp.clear ();
      istrstream istrs ((char *)line.c_str());
      while (istrs >> item) tmp.push_back (item);
      transaction.push_back (tmp);
    }
    return is;
  }

  ostream& run (ostream &_os) {
    os = &_os;
    if (verbose) *os << transaction.size() << endl;
    vector <pair <unsigned int, int> > root;
    for (unsigned int i = 0; i < transaction.size(); i++)
      root.push_back (make_pair (i, -1));
    project (root);
    return *os;
  }

  void clear () {
    transaction.clear ();
    pattern.clear ();
  }
};

int main (int argc, char **argv) {
  extern char *optarg;
  unsigned int minsup = 1;
  unsigned int minpat = 1;
  unsigned int maxpat = 0xffffffff;
  bool all = false;
  bool where = false;
  string delimiter = "/";
  bool verbose = false;
  string type = "string";

  int opt;
  while ((opt = getopt(argc, argv, "awvt:M:m:L:d:")) != -1) {
    switch(opt) {
      case 'a': all = true; break;
      case 'w': where = true; break;
      case 'v': verbose = true; break;
      case 'm': minsup = atoi (optarg); break;
      case 'M': minpat = atoi (optarg); break;
      case 'L': maxpat = atoi (optarg); break;
      case 't': type = string (optarg); break;
      case 'd': delimiter = string (optarg); break;
      default:
        cout << "Usage: " << argv[0]
             << " [-m minsup] [-M minpat] [-L maxpat] [-a] [-w] [-v] [-t type] [-d delimiter] < data .." << endl;
        return -1;
    }
  }

  if (type == "int") {
    PrefixSpan<unsigned int> prefixspan (minsup, minpat, maxpat, all, where, delimiter, verbose);
    prefixspan.read (cin);
    prefixspan.run (cout);
  } else if (type == "short") {
    PrefixSpan<unsigned short> prefixspan (minsup, minpat, maxpat, all, where, delimiter, verbose);
    prefixspan.read (cin);
    prefixspan.run (cout);
  } else if (type == "char") {
    PrefixSpan<unsigned char> prefixspan (minsup, minpat, maxpat, all, where, delimiter, verbose);
    prefixspan.read (cin);
    prefixspan.run (cout);
  } else if (type == "string") {
    PrefixSpan<string> prefixspan (minsup, minpat, maxpat, all, where, delimiter, verbose);
    prefixspan.read (cin);
    prefixspan.run (cout);
  } else {
    cerr << "Unknown Item Type: " << type << " : choose from [string|int|short|char]" << endl;
    return -1;
  }

  return 0;
}
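    The compile errors are most likely portability issues rather than bugs in the algorithm: <iostream.h> is a pre-standard header (modern compilers only ship <iostream>), <strstream> is deprecated in favour of <sstream>, <unistd.h> and getopt() are POSIX rather than standard C++, and the dependent iterator types inside the template need the typename keyword on conforming compilers such as the GCC 3.4 that Dev-C++ 4.9 bundles. A sketch of the portable replacements, showing only the parts that change (parseArgs is a made-up name for a hand-rolled option parser):

#include <iostream>   // replaces the pre-standard <iostream.h>
#include <sstream>    // std::istringstream replaces the deprecated <strstream>
#include <cstdlib>    // std::atoi, replaces <stdlib.h>
#include <cstring>    // replaces <string.h>
#include <string>

// Inside the template, dependent types need 'typename':
//   for (typename std::map<T, int>::iterator k = tmp.begin(); k != tmp.end(); ++k)

// In read(), the istrstream line becomes:
//   std::istringstream istrs(line);

// <unistd.h>/getopt() are not reliably available on Windows (some MinGW
// distributions ship them, MSVC does not), so either build with a toolchain
// that has them or parse the flags by hand:
static bool parseArgs(int argc, char **argv, unsigned int &minsup, bool &all) {
    for (int i = 1; i < argc; ++i) {
        std::string arg = argv[i];
        if (arg == "-a") all = true;
        else if (arg == "-m" && i + 1 < argc) minsup = std::atoi(argv[++i]);
        else return false; // unknown or incomplete option
    }
    return true;
}

    A more recent MinGW (or MSYS2) toolchain will also accept the fixed code with far fewer surprises than the 2005-era compiler bundled with Bloodshed Dev-C++.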


  • Android - Querying the SMS ContentProvider?

    - by Donal Rafferty
    I currently register a content observer on the URI "content://sms/" to listen for incoming and outgoing messages being sent. This seems to work OK, and I have also tried deleting from the SMS database, but I can only delete an entire thread from the URI "content://sms/conversations/". Here is the code I use for that:

String url = "content://sms/";
Uri uri = Uri.parse(url);
getContentResolver().registerContentObserver(uri, true, new MyContentObserver(handler));
}

class MyContentObserver extends ContentObserver {

    public MyContentObserver(Handler handler) {
        super(handler);
    }

    @Override
    public boolean deliverSelfNotifications() {
        return false;
    }

    @Override
    public void onChange(boolean arg0) {
        super.onChange(arg0);
        Log.v("SMS", "Notification on SMS observer");

        Message msg = new Message();
        msg.obj = "xxxxxxxxxx";
        handler.sendMessage(msg);

        Uri uriSMSURI = Uri.parse("content://sms/");
        Cursor cur = getContentResolver().query(uriSMSURI, null, null, null, null);
        cur.moveToNext();
        String protocol = cur.getString(cur.getColumnIndex("protocol"));
        if (protocol == null) {
            Log.d("SMS", "SMS SEND");
            int threadId = cur.getInt(cur.getColumnIndex("thread_id"));
            Log.d("SMS", "SMS SEND ID = " + threadId);
            Cursor c = getContentResolver().query(Uri.parse("content://sms/outbox/" + threadId), null, null, null, null);
            c.moveToNext();
            int p = cur.getInt(cur.getColumnIndex("person"));
            Log.d("SMS", "SMS SEND person= " + p);
            //getContentResolver().delete(Uri.parse("content://sms/outbox/" + threadId), null, null);
        } else {
            Log.d("SMS", "SMS RECIEVE");
            int threadIdIn = cur.getInt(cur.getColumnIndex("thread_id"));
            getContentResolver().delete(Uri.parse("content://sms/outbox/" + threadIdIn), null, null);
        }
    }
}

    However, I want to be able to get the recipient and the message text from the SMS content provider. Can anyone tell me how to do this? And also how to delete one message instead of an entire thread?
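    For reference, the conventional approach, with the caveat that content://sms is not part of the public Android SDK: the column names below ("_id", "address", "body") are conventions observed on stock firmware rather than documented API, and may differ across devices and OS versions. The address column holds the other party's number, body holds the message text, and deleting by _id removes a single message instead of a whole thread. A sketch:

Uri smsUri = Uri.parse("content://sms/");
Cursor c = getContentResolver().query(
        smsUri,
        new String[] { "_id", "address", "body" }, // undocumented columns
        null, null,
        "date DESC");                              // newest message first

if (c != null) {
    try {
        if (c.moveToFirst()) {
            long id = c.getLong(c.getColumnIndex("_id"));
            String address = c.getString(c.getColumnIndex("address")); // other party's number
            String body = c.getString(c.getColumnIndex("body"));       // message text
            Log.d("SMS", "from/to " + address + ": " + body);

            // Delete just this one message rather than the whole thread:
            getContentResolver().delete(smsUri, "_id = ?",
                    new String[] { String.valueOf(id) });
        }
    } finally {
        c.close();
    }
}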


  • NHibernate, and odd "Session is Closed!" errors

    - by Sekhat
    Note: Now that I've typed this out, I have to apologize for the super long question; however, I think all the code and information presented here is in some way relevant.

    Okay, I'm getting odd "Session is Closed" errors at random points in my ASP.NET WebForms application. Today, however, it's finally happening in the same place over and over again. I am near certain that nothing is disposing or closing the session in my code, as the bits of code that use it are well contained away from all other code, as you'll see below. I'm also using Ninject as my IoC container, which may or may not be important.

    Okay, so: first, my SessionFactoryProvider and SessionProvider classes.

    SessionFactoryProvider:

public class SessionFactoryProvider : IDisposable
{
    ISessionFactory sessionFactory;

    public ISessionFactory GetSessionFactory()
    {
        if (sessionFactory == null)
            sessionFactory = Fluently.Configure()
                .Database(
                    MsSqlConfiguration.MsSql2005.ConnectionString(p => p.FromConnectionStringWithKey("QoiSqlConnection")))
                .Mappings(m => m.FluentMappings.AddFromAssemblyOf<JobMapping>())
                .BuildSessionFactory();

        return sessionFactory;
    }

    public void Dispose()
    {
        if (sessionFactory != null)
            sessionFactory.Dispose();
    }
}

    SessionProvider:

public class SessionProvider : IDisposable
{
    ISessionFactory sessionFactory;
    ISession session;

    public SessionProvider(SessionFactoryProvider sessionFactoryProvider)
    {
        this.sessionFactory = sessionFactoryProvider.GetSessionFactory();
    }

    public ISession GetCurrentSession()
    {
        if (session == null)
            session = sessionFactory.OpenSession();

        return session;
    }

    public void Dispose()
    {
        if (session != null)
        {
            session.Dispose();
        }
    }
}

    These two classes are wired up with Ninject like so:

    NHibernateModule:

public class NHibernateModule : StandardModule
{
    public override void Load()
    {
        Bind<SessionFactoryProvider>().ToSelf().Using<SingletonBehavior>();
        Bind<SessionProvider>().ToSelf().Using<OnePerRequestBehavior>();
    }
}

    and as far as I can tell they work as expected.

    Now my BaseDao<T> class:

public class BaseDao<T> : IDao<T> where T : EntityBase
{
    private SessionProvider sessionManager;

    protected ISession session
    {
        get { return sessionManager.GetCurrentSession(); }
    }

    public BaseDao(SessionProvider sessionManager)
    {
        this.sessionManager = sessionManager;
    }

    public T GetBy(int id)
    {
        return session.Get<T>(id);
    }

    public void Save(T item)
    {
        using (var transaction = session.BeginTransaction())
        {
            session.SaveOrUpdate(item);
            transaction.Commit();
        }
    }

    public void Delete(T item)
    {
        using (var transaction = session.BeginTransaction())
        {
            session.Delete(item);
            transaction.Commit();
        }
    }

    public IList<T> GetAll()
    {
        return session.CreateCriteria<T>().List<T>();
    }

    public IQueryable<T> Query()
    {
        return session.Linq<T>();
    }
}

    which is bound in Ninject like so:

    DaoModule:

public class DaoModule : StandardModule
{
    public override void Load()
    {
        Bind(typeof(IDao<>)).To(typeof(BaseDao<>))
            .Using<OnePerRequestBehavior>();
    }
}

    The web request that causes this is one where I'm saving an object. It didn't occur until I made some model changes today; however, those changes did not touch the data access code in any way, though they did change a few NHibernate mappings (I can post these too if anyone is interested).

    As far as I can tell, BaseDao<SomeClass>.Get is called, then BaseDao<SomeOtherClass>.Get is called, then BaseDao<TypeImTryingToSave>.Save is called. It's the third call, at the line in Save()

using (var transaction = session.BeginTransaction())

    that fails with "Session is Closed!", or rather the exception:

Session is closed!
Object name: 'ISession'.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.ObjectDisposedException: Session is closed! Object name: 'ISession'.

    And indeed, following through in the debugger shows that the third time the session is requested from the SessionProvider it is indeed closed and not connected. I have verified that Dispose on my SessionFactoryProvider and on my SessionProvider are called at the end of the request, and not before the Save call is made on my DAO.

    So now I'm a little stuck. A few things pop to mind. Am I doing anything obviously wrong? Does NHibernate ever close sessions without me asking it to? Any workarounds or ideas on what I might do? Thanks in advance.
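    One observation, offered as a hunch rather than a diagnosis: NHibernate does not close sessions behind your back, so something is disposing the provider, and the prime suspect is container scoping, because OnePerRequestBehavior decides when Ninject releases the SessionProvider, not when the DAOs are done with it. A defensive tweak to GetCurrentSession at least stops a dead session from being handed out while you hunt the real disposer (a sketch that treats the symptom, not the cause):

public ISession GetCurrentSession()
{
    // ISession.IsOpen goes false once the session is closed or disposed.
    // Reopening here masks the scoping problem but keeps the request alive;
    // if this path is ever taken mid-request, something disposed the
    // provider earlier than expected.
    if (session == null || !session.IsOpen)
        session = sessionFactory.OpenSession();

    return session;
}

    A breakpoint in SessionProvider.Dispose that logs a stack trace would also show definitively who is closing it, and whether all three DAO calls in the failing request really resolve the same SessionProvider instance.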


  • CodePlex Daily Summary for Monday, March 01, 2010

    CodePlex Daily Summary for Monday, March 01, 2010

    New Projects
    - ActiveWorlds World Server Admin PowerShell SnapIn: The purpose of this PowerShell SnapIn is to provide a set of tools to administer the world server from PowerShell. It leverages the ActiveWorlds S...
    - AWS SimpleDB Browser: A basic GUI browser tool for inspection and querying of a SimpleDB.
    - Desktop Dimmer: A simple application for dimming the desktop around windows, videos, or other media.
    - Disk Defuzzer: Compare chaos of files and folders with customizable SQL queries. This little application scans files in any two folders, generates data in an A...
    - Dynamic Configuration: Dynamic Configuration is a (very) small library to provide an API-compatible replacement for the System.Configuration.ConfigurationManager class so...
    - Expression Encoder 3 Visual Basic Samples: Visual Basic sample code that calls the Expression Encoder 3 object model.
    - Extended Character Keyboard: A lightweight onscreen keyboard that allows you to enter special characters like "á" and "û". Also supports adding of 7 custom buttons.
    - FileHasher: This project provides a simple tool for generating and verifying file hashes. I created this to help the QA team I work with. The project is all C#...
    - Fluent Assertions: Fluent interface for writing more natural specifying assertions with more clarity than the traditional assertion syntax such as offered by MSTest, ...
    - Foursquare BlogEngine Widget: A basic widget for BlogEngine which displays the last foursquare check-ins.
    - Graffiti CMS Events Plugin: Plugin for Graffiti CMS that allows creating Event posts and rendering an Event Calendar.
    - HeadCounter: HeadCounter is a raid attendance and loot tracking application for World of Warcraft.
    - HRM Core (QL Nhan Su): Human resource management software for Viet Nam; payroll and personnel management following Vietnamese business practices.
    - IronPython Silverlight Sharpdevelop Template: This IronPython Silverlight SharpDevelop template makes it easier for you to make Silverlight applications in IronPython with SharpDevelop.
    - kingbox: My test code for studying VS 2005.
    - link_attraente: Course completion project.
    - ORMSharp.Net: ORMSharp.Net https://code.google.com/p/ormsharp/ http://www.sqlite.org/ http://sqlite.phxsoftware.com/ http://sourceforge.net/projects/sqlite-dotnet2/
    - Orz framework: Orz framework is more like a helpful library; if you develop with .NET Framework 3.0, it will be very useful to you. Orz framework encapsul...
    - OTManager: OTManager
    - SharePoint URL Ping Tool: The URL Ping Tool is a farm feature in SharePoint that provides additional performance and tracing information that can be used to troubleshoot issu...
    - SunShine: SunShine project.
    - ToolSuite.ValidationExpression: Assembly with regular expressions for the RegularExpressionValidator control.
    - Twitual Studio: A Visual Studio 2010 based Twitter client. Now you have one less reason for pressing Alt+Tab. Plus you still look like you're working!
    - Velocity Hosting Tool: A program designed to aid a HT Velocity host in hosting and recording tournaments.
    - Watermarker: Adds watermarks to pictures to prevent copying. Icon taken from PICOL. Can work with packs of images.
    - Zack's Fiasco - ASP.NET Script Includer: Script includer to include scripts (JS or CSS) once and only once, and include the correct format by differentiating between release and build. Th...

    New Releases
    - All-In-One Code Framework: All-In-One Code Framework 2010-02-28: Improved and newly added examples; for an up-to-date list, please refer to the All-In-One Code Framework Sample Catalog. Samples for ASP.NET Name ...
    - All-In-One Code Framework (简体中文): All-In-One Code Framework 2010-02-28: Improved and newly added examples; for an up-to-date list, please refer to the All-In-One Code Framework Sample Catalog. Latest download link: http://c...
    - AWS SimpleDB Browser: SimpleDbBrowser.zip Initial Release: The initial release of the SimpleDbBrowser. Unzip the files in the archive and place them all in a folder, then run the .exe. No installer is used...
    - BattLineSvc: V1: First release of BattLineSvc.
    - CC.Votd Screen Saver: CC.Votd 1.0.10.301: More bug fixes and minor enhancements. Note: only download the (Screen Saver) version if you plan to manually install. For most users the (Install...
    - Dynamic Configuration: DynamicConfiguration Release 1: Dynamic Configuration DLL release.
    - eIDPT - Cartão de Cidadão .NET Wrapper: EIDPT VB6 Demo Program: Cartão de Cidadão Middleware Application installation (v1.21 or 1.22) is required for proper use of the eID Lib.
    - eIDPT - Cartão de Cidadão .NET Wrapper: eIDPT VB6 Demo Program Source: Cartão de Cidadão Middleware Application installation (v1.21 or 1.22) is required for proper use of the eID Lib.
    - ESPEHA: Espeha 10: 1. Help available on F1 and via context menu '?'. 2. Width of categories view is preserved through app starts. 3. Drag-and-drop for tasks view allows ...
    - Extended Character Keyboard: OnscreenSCK Beta V1.0: OnscreenSCK Beta Version 1.0.
    - Extended Character Keyboard: OnscreenSCK Beta V1.0 Source: OnscreenSCK Beta Version 1.0 source code.
    - FileHasher: Console Version v0.5: This release provides a very basic and minimal command-line utility for generating and validating file hashes. The supported command-line paramete...
    - Furcadia Framework for Third Party Programs: 0.2.3 Epic Wrench: Warning: untested on Linux. FurcadiaLib\Net\NetProxy.cs: fixed a bug I made before update. FurcadiaFramework_Example\Demo\IDemo.cs: ignore me. F...
    - Graffiti CMS Events Plugin: Version 1.0: Initial release of the Events Plugin.
    - HeadCounter: HeadCounter 1.2.3 'Razorgore': Added "Raider Post" feature for posting details of a particular raider. Added Default Period option to allow selection of Short, Long or Lifetime...
    - Home Access Plus+: v3.0.0.0: Version 3.0.0.0 release change log: reconfiguration of the web.config; ability to add additional links to the homepage via web.config; ability to add...
    - Home Access Plus+: v3.0.1.0: Version 3.0.1.0 release change log: fixed problem with moving. File changes: ~/bin/chs extranet.dll, ~/bin/chs extranet.pdb
    - Home Access Plus+: v3.0.2.0: Version 3.0.2.0 release change log: fixed problem with stylesheet. File changes: ~/chs.master
    - HRM Core (QL Nhan Su): HRMCore_src: Source of HRMCore.
    - IRC4N00bz: IRC4N00bz v1.0.0.2: There wasn't much updated this weekend. I updated 2 'raw' events. One is all raw messages and the other is events that aren't caught in the dll. ...
    - IronPython Silverlight Sharpdevelop Template: Version 1 Template: Just unzip it into the SharpDevelop Python templates folder, for example: C:\Program Files\SharpDevelop\3.0\AddIns\AddIns\BackendBindings\PythonBi...
    - MDownloader: MDownloader-0.15.4.56156: Fixed handling exceptions; previous handling could lead to freezing item state. Fixed validating uploading.com links.
    - OTManager: Activity Log: 2010.02.28 >> Thread reopened. 2010.02.28 >> Re-organized WBD Features/WMBD Features. 2010.02.28 >> Project status is active again.
    - Picasa Downloader: PicasaDownloader (41175): NOTE: the previous release was accidentally the same as the one before that (forgot to rebuild the installer). Changelog: fixed workitem 10296 (Sav...
    - PicNet Html Table Filter: Version 2.0: Testing with jQuery 1.3.2.
    - Program Scheduler: Program Scheduler 1.1.4: Release notes: bug fix: if the log window is docked and the user moves the log window, the main window will move too. Added menu to the log window to clear...
    - QueryToGrid Module for DotNetNuke®: QueryToGrid Module version 01.00.00: This is the initial release of this module. Remember... this is just a proof of concept to add AJAX functionality to your DotNetNuke modules.
    - Rainweaver Framework: February 2010 Release: Code drop including an alpha release of the Entity System. See more information on the Documentation page.
    - RapidWebDev - .NET Enterprise Software Development Infrastructure: ProductManagement Quick Sample 0.1: This is a sample product management application to demonstrate how to develop enterprise software in RapidWebDev. The glossary of the system are ro...
    - Team Foundation Server Revision Labeller for CruiseControl.NET: TFS Labeller for CruiseControl.NET - TFS 2008: First release of the Team Foundation Server Labeller for CruiseControl.NET. This specific version is bound to TFS 2008 DLLs.
    - ToolSuite.ValidationExpression: 01.00.01.000: First release of the time validation class; the assembly file is ready to use, the documentation is not complete.
    - VCC: Latest build, v2.1.30228.0: Automatic drop of latest build.
    - WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.7.00: What's new: FileBrowser: non-admin users will only see a user subfolder (..\Portals\0\userfiles\UserName). CKFinder: non-admin users will only see ...
    - Watermarker: Watermarker: First public version. Can build a watermark only in the top-left corner of one image at a time.
    - While You Were Away - WPF Screensaver: Initial Release: This is the code released when the article went live.

    Most Popular Projects: MetaSharp, Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), Microsoft SQL Server Community & Samples, ASP.NET, DotNetNuke® Community Edition

    Most Active Projects: Rawr, BlogEngine.NET, MapWindow GIS, Common Context Adapters, patterns & practices – Enterprise Library, SharpMap - Geospatial Application Framework for the CLR, SLARToolkit - Silverlight Augmented Reality Toolkit, DiffPlex - a .NET Diff Generator, Rapid Entity Framework. (ORM). CTP 2, jQuery Library for SharePoint Web Services


  • New HandleProcessCorruptedStateExceptions attribute in .Net 4

    - by Yaakov Davis
    I'm trying to crash my WPF application and capture the exception using the above new .NET 4 attribute. I managed to manually crash my application by calling Environment.FailFast("crash"); (I also managed to crash it using Hans's code from here: http://stackoverflow.com/questions/2950130/how-to-simulate-a-corrupt-state-exception-in-net-4). The app calls the above crashing code when a button is pressed. Here are my exception handlers:

protected override void OnStartup(StartupEventArgs e)
{
    base.OnStartup(e);

    AppDomain.CurrentDomain.FirstChanceException += CurrentDomain_FirstChanceException;
    AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException;
    DispatcherUnhandledException += app_DispatcherUnhandledException;
}

[HandleProcessCorruptedStateExceptions]
void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
    //log..
}

[HandleProcessCorruptedStateExceptions]
void CurrentDomain_FirstChanceException(object sender, System.Runtime.ExceptionServices.FirstChanceExceptionEventArgs e)
{
    //log..
}

[HandleProcessCorruptedStateExceptions]
void app_DispatcherUnhandledException(object sender, System.Windows.Threading.DispatcherUnhandledExceptionEventArgs e)
{
    //log..
}

    The //log.. comment shown above is just for illustration; there's real logging code there. When running in VS, an exception is thrown, but it doesn't bubble up to those exception handler blocks. When running standalone (without a debugger attached), I don't get any log, despite what I expect. Does anyone have an idea why this is, and how to make the handling code execute? Many thanks, Yaakov
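    One likely explanation for the silence: Environment.FailFast never raises a catchable exception in the first place; it writes to the Windows event log and tears the process down immediately, bypassing every handler, attributed or not, so that particular test can never reach these methods. To raise a genuine corrupted-state exception for testing, an access violation is the usual trick (a sketch; CrashButton_Click is a made-up handler name, and the project must have unsafe code enabled):

// AccessViolationException is a corrupted state exception in .NET 4:
// without [HandleProcessCorruptedStateExceptions] on the method whose catch
// block should observe it (or the legacy policy below), managed code never
// gets to handle it.
private unsafe void CrashButton_Click(object sender, RoutedEventArgs e)
{
    int* p = (int*)0x01;
    *p = 42; // writes through a wild pointer
}

    The process-wide alternative to the attribute is the .NET 4 config switch <legacyCorruptedStateExceptionsPolicy enabled="true"/> under <runtime> in app.config, which makes the runtime deliver corrupted-state exceptions to ordinary handlers again.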

