Search Results

Search found 10850 results on 434 pages for 'shihab returns'.

Page 105/434 | < Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >

  • Logkeys fragile?

    - by Ahmed Nematallah
    The program logkeys (which seems to be the only keylogger for Linux that I can actually run) has some problems: it stops logging after some time and never resumes, and I don't know how to trigger that bug. If the log file is edited while logging, it just stops; if the file exists before logging starts, it doesn't try to append to it or delete it or anything. The first issue is the most important, but the rest are quite annoying. Can anyone help me? I'm not a Linux programmer (I don't really know anything about the Linux API, though I am a beginner C++ programmer), so I won't be able to make my own keylogger. Thanks for the interest. BTW, I'm sure I got the right input device, because it starts logging and then stops, and I use the command "logkeys -s -u -d /dev/input/event3 -o '/home/ahmed/Documents/log.txt'"

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<T> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally next week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. Recap As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born called the concurrent collections in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T> which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections!For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentDictionary – the fully thread-safe dictionary The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList which are stored as an ordered tree and a ordered list respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering -- and still greatly outperform a linear search. 
Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading, you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety with no common lock to improve efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have.
ConcurrentDictionary and Dictionary differences
For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples of which are:
- Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary.
- TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists, and if not add it while still under an atomic lock.
- TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item under one atomic step.
- TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this checks for existence and removes under an atomic lock.
- AddOrUpdate() was added to attempt a thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update.
- GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes, you want to query for whether an item exists in the cache, and if it doesn’t, insert a starting value for it.  This allows you to get the value if it exists and insert it if not.
- Count, Keys, Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution.
- ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as an O(n) operation.
- GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads.
Dirty reads during iteration
The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration, the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.
Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example, let’s say you load up a concurrent dictionary like this: 1: // load up a dictionary. 2: var dictionary = new ConcurrentDictionary<string, int>(); 3:  4: dictionary["A"] = 1; 5: dictionary["B"] = 2; 6: dictionary["C"] = 3; 7: dictionary["D"] = 4; 8: dictionary["E"] = 5; 9: dictionary["F"] = 6; Then you have one task (using the wonderful TPL!) to iterate using dirty reads: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); And one task to attempt updates in a separate thread (probably): 1: // attempt updates in a separate thread 2: var updateTask = new Task(() => 3: { 4: // iterates, and updates the value by one 5: foreach (var pair in dictionary) 6: { 7: dictionary[pair.Key] = pair.Value + 1; 8: } 9: }); Now that we’ve done this, we can fire up both tasks and wait for them to complete: 1: // start both tasks 2: updateTask.Start(); 3: iterationTask.Start(); 4:  5: // wait for both to complete. 6: Task.WaitAll(updateTask, iterationTask); Now, if I you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6).  However, because the reads are dirty, we will quite possibly get a combination of some updated, some original.  My own run netted this result: 1: F:6 2: E:6 3: D:5 4: C:4 5: B:3 6: A:2 Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered.  Also note that both E and F show the value 6.  This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary.ToArray()) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); The atomic Try…() methods As you can imagine TryAdd() and TryRemove() have few surprises.  Both first check the existence of the item to determine if it can be added or removed based on whether or not the key currently exists in the dictionary: 1: // try add attempts an add and returns false if it already exists 2: if (dictionary.TryAdd("G", 7)) 3: Console.WriteLine("G did not exist, now inserted with 7"); 4: else 5: Console.WriteLine("G already existed, insert failed."); TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key: 1: // attempt to remove the value, if it exists it is removed and the original is returned 2: int removedValue; 3: if (dictionary.TryRemove("C", out removedValue)) 4: Console.WriteLine("Removed C and its value was " + removedValue); 5: else 6: Console.WriteLine("C did not exist, remove failed."); Now TryUpdate() is an interesting creature.  
You might think from its name that TryUpdate() first checks for an item’s existence, and then updates if the item exists, otherwise it returns false.  Well, not quite... It turns out when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update.  If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true.  If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned. 1: // attempt to update the value, if it exists and if it has the expected original value 2: if (dictionary.TryUpdate("G", 42, 7)) 3: Console.WriteLine("G existed and was 7, now it's 42."); 4: else 5: Console.WriteLine("G either didn't exist, or wasn't 7."); The composite Add methods The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does.  For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed: 1: public sealed class SubscriptionManager 2: { 3: private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public void AddSubscription(string tickerKey) 7: { 8: // add a new subscription with count of 1, or update existing count by 1 if exists 9: var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1); 10:  11: // now check the result to see if we just incremented the count, or inserted first count 12: if (resultCount == 1) 13: { 14: // subscribe to symbol... 15: } 16: } 17: } Notice the update value factory Func delegate.  If the key does not exist in the dictionary, the add value is used (in this case 1 representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate which computes the new value to be stored in the dictionary.  The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated). Likewise, the GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value.  This can be handy in cases where perhaps you wish to cache data, and thus you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time: 1: public sealed class PriceCache 2: { 3: private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>(); 4:  5: // gets the cached price for the symbol, or computes and caches it on first use. 6: public double QueryPrice(string tickerKey) 7: { 8: // check for the price in the cache, if it doesn't exist it will call the delegate to create value. 9: return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol)); 10: } 11:  12: private double GetCurrentPrice(string tickerKey) 13: { 14: // do code to calculate actual true price. 
15: } 16: } There are other variations of these two methods which vary whether a value is provided or a factory delegate, but otherwise they work much the same. Oddities with the composite Add methods The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic.  It is important to note that the methods that use delegates execute those delegates outside of the lock.  This was done intentionally so that a user delegate (of which the ConcurrentDictionary has no control of course) does not take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design.  The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned!  Consider this scenario: A calls GetOrAdd and sees that the key does not currently exist, so it calls the delegate.  Now thread B also calls GetOrAdd and also sees that the key does not currently exist, and for whatever reason in this race condition it’s delegate completes first and it adds its new value to the dictionary.  Now A is done and goes to get the lock, and now sees that the item now exists.  In this case even though it called the delegate to create the item, it will pitch it because an item arrived between the time it attempted to create one and it attempted to add it. Let’s illustrate, assume this totally contrived example program which has a dictionary of char to int.  And in this dictionary we want to store a char and it’s ordinal (that is, A = 1, B = 2, etc).  So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked): 1: public static class Program 2: { 3: private static int _nextNumber = 0; 4:  5: // the holder of the char to ordinal 6: private static ConcurrentDictionary<char, int> _dictionary 7: = new ConcurrentDictionary<char, int>(); 8:  9: // get the next id value 10: public static int NextId 11: { 12: get { return Interlocked.Increment(ref _nextNumber); } 13: } Then, we add a method that will perform our insert: 1: public static void Inserter() 2: { 3: for (int i = 0; i < 26; i++) 4: { 5: _dictionary.GetOrAdd((char)('A' + i), key => NextId); 6: } 7: } Finally, we run our test by starting two tasks to do this work and get the results… 1: public static void Main() 2: { 3: // 3 tasks attempting to get/insert 4: var tasks = new List<Task> 5: { 6: new Task(Inserter), 7: new Task(Inserter) 8: }; 9:  10: tasks.ForEach(t => t.Start()); 11: Task.WaitAll(tasks.ToArray()); 12:  13: foreach (var pair in _dictionary.OrderBy(p => p.Key)) 14: { 15: Console.WriteLine(pair.Key + ":" + pair.Value); 16: } 17: } If you run this with only one task, you get the expected A:1, B:2, ..., Z:26.  But running this in parallel you will get something a bit more complex.  My run netted these results: 1: A:1 2: B:3 3: C:4 4: D:5 5: E:6 6: F:7 7: G:8 8: H:9 9: I:10 10: J:11 11: K:12 12: L:13 13: M:14 14: N:15 15: O:16 16: P:17 17: Q:18 18: R:19 19: S:20 20: T:21 21: U:22 22: V:23 23: W:24 24: X:25 25: Y:26 26: Z:27 Notice that B is 3?  This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3.  However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won and the 3 was inserted and the 2 got discarded.  
This is why on these methods your factory delegates should be careful not to have any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time.  As such, it is probably a good idea to keep those generators as stateless as possible. Summary The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection.  It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently. Technorati Tags: C#, .NET, Concurrent Collections, Collections, Little Wonders, Black Rabbit Coder, James Michael Hare
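
    As a quick addendum: the post mentions that AddOrUpdate() and GetOrAdd() come in further overloads that take either a plain value or a factory delegate. The following is a minimal, self-contained C# sketch (not from the original article; the ticker keys and counts are made up) showing both flavours side by side:

        using System;
        using System.Collections.Concurrent;

        public static class OverloadDemo
        {
            public static void Main()
            {
                var counts = new ConcurrentDictionary<string, int>();

                // plain add value, delegate-based update
                counts.AddOrUpdate("MSFT", 1, (key, current) => current + 1);          // adds MSFT = 1

                // delegate-based add and delegate-based update
                counts.AddOrUpdate("MSFT", key => 1, (key, current) => current + 1);   // updates MSFT to 2

                // plain-value GetOrAdd: inserts 0 only because "GOOG" is missing
                int googCount = counts.GetOrAdd("GOOG", 0);

                // delegate-based GetOrAdd: the factory runs only when the key is absent
                int msftCount = counts.GetOrAdd("MSFT", key => 0);

                Console.WriteLine("MSFT=" + msftCount + ", GOOG=" + googCount);        // MSFT=2, GOOG=0
            }
        }

    As with the factory-based calls discussed in the post, the delegates here run outside the internal locks, so the same "your generated value may be pitched" caveat applies.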

    Read the article

  • Trying to install postgresql:i386 on 12.04 amd64

    - by tim jackson
    Due to some legacy 32-bit libraries being used in PostgreSQL functions, I need to get a 32-bit install of PostgreSQL on a 64-bit native system. But it seems like there is a problem with multiarch not seeing _all.deb packages as satisfying dependencies.
    uname -a: 3.8.0-29-generic #42-precise-Ubuntu SMP x86_64
    dpkg --print-architecture: amd64
    dpkg --print-foreign-architecture: i386
    apt-get install postgresql-9.1 returns:
    postgresql : Depends: postgresql-9.1 but it is not going to be installed
    postgresql-9.1:i386 : Depends: postgresql-common:i386 but it is not installable
    Depends: ssl-cert:i386 but it is not installable
    Depends: locales:i386 but it is not installable
    etc.
    But I have installed ssl-cert_1.0.28ubuntu0.1_all.deb and locales_..._all.deb, and postgresql-common is an _all.deb. Does anyone have experience installing 32-bit packages on 64-bit systems that depend on packages that are _all.debs? Or has anyone installed 32-bit Postgres on 64-bit? Any help appreciated.

    Read the article

  • How To Add MP3 Support to Audacity (to Save in MP3 Format)

    - by YatriTrivedi
    You may have noticed that the default installation of Audacity doesn’t have built-in support for MP3s due to licensing issues.  Here’s how to add it in yourself for free, really easily, in a few simple steps. Photo by bobcat rock

    Read the article

  • Write your Tests in RSpec with IronRuby

    - by kazimanzurrashid
    [Note: This is not a continuation of my previous post, treat it as an experiment out in the wild. ] Lets consider the following class, a fictitious Fund Transfer Service: public class FundTransferService : IFundTransferService { private readonly ICurrencyConvertionService currencyConvertionService; public FundTransferService(ICurrencyConvertionService currencyConvertionService) { this.currencyConvertionService = currencyConvertionService; } public void Transfer(Account fromAccount, Account toAccount, decimal amount) { decimal convertionRate = currencyConvertionService.GetConvertionRate(fromAccount.Currency, toAccount.Currency); decimal convertedAmount = convertionRate * amount; fromAccount.Withdraw(amount); toAccount.Deposit(convertedAmount); } } public class Account { public Account(string currency, decimal balance) { Currency = currency; Balance = balance; } public string Currency { get; private set; } public decimal Balance { get; private set; } public void Deposit(decimal amount) { Balance += amount; } public void Withdraw(decimal amount) { Balance -= amount; } } We can write the spec with MSpec + Moq like the following: public class When_fund_is_transferred { const decimal ConvertionRate = 1.029m; const decimal TransferAmount = 10.0m; const decimal InitialBalance = 100.0m; static Account fromAccount; static Account toAccount; static FundTransferService fundTransferService; Establish context = () => { fromAccount = new Account("USD", InitialBalance); toAccount = new Account("CAD", InitialBalance); var currencyConvertionService = new Moq.Mock<ICurrencyConvertionService>(); currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate); fundTransferService = new FundTransferService(currencyConvertionService.Object); }; Because of = () => { fundTransferService.Transfer(fromAccount, toAccount, TransferAmount); }; It should_decrease_from_account_balance = () => { fromAccount.Balance.ShouldBeLessThan(InitialBalance); }; It should_increase_to_account_balance = () => { toAccount.Balance.ShouldBeGreaterThan(InitialBalance); }; } and if you run the spec it will give you a nice little output like the following: When fund is transferred » should decrease from account balance » should increase to account balance 2 passed, 0 failed, 0 skipped, took 1.14 seconds (MSpec). Now, lets see how we can write exact spec in RSpec. 
require File.dirname(__FILE__) + "/../FundTransfer/bin/Debug/FundTransfer" require "spec" require "caricature" describe "When fund is transferred" do Convertion_Rate = 1.029 Transfer_Amount = 10.0 Initial_Balance = 100.0 before(:all) do @from_account = FundTransfer::Account.new("USD", Initial_Balance) @to_account = FundTransfer::Account.new("CAD", Initial_Balance) currency_convertion_service = Caricature::Isolation.for(FundTransfer::ICurrencyConvertionService) currency_convertion_service.when_receiving(:get_convertion_rate).with(:any, :any).return(Convertion_Rate) fund_transfer_service = FundTransfer::FundTransferService.new(currency_convertion_service) fund_transfer_service.transfer(@from_account, @to_account, Transfer_Amount) end it "should decrease from account balance" do @from_account.balance.should be < Initial_Balance end it "should increase to account balance" do @to_account.balance.should be > Initial_Balance end end I think the above code is self explanatory, treat the require(line 1- 4) statements as the add reference of our visual studio projects, we are adding all the required libraries with this statement. Next, the describe which is a RSpec keyword. The before does exactly the same as NUnit's Setup or MsTest’s TestInitialize attribute, but in the above we are using before(:all) which acts as ClassInitialize of MsTest, that means it will be executed only once before all the test methods. In the before(:all) we are first instantiating the from and to accounts, it is same as creating with the full name (including namespace)  like fromAccount = new FundTransfer.Account(.., ..), next, we are creating a mock object of ICurrencyConvertionService, check that for creating the mock we are not using the Moq like the MSpec version. This is somewhat an interesting issue of IronRuby or maybe the DLR, it seems that it is not possible to use the lambda expression that most of the mocking tools uses in arrange phase in Iron Ruby, like: currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate); But the good news is, there is already an excellent mocking tool called Caricature written completely in IronRuby which we can use to mock the .NET classes. May be all the mocking tool providers should give some thought to add the support for the DLR, so that we can use the tool that we are already familiar with. I think the rest of the code is too simple, so I am skipping the explanation. Now, the last thing, how we are going to run it with RSpec, lets first install the required gems. Open you command prompt and type the following: igem sources -a http://gems.github.com This will add the GitHub as gem source. Next type: igem install uuidtools caricature rspec and at last we have to create a batch file so that we can execute it in the Notepad++, create a batch like in the IronRuby bin directory like my previous post and put the following in that batch file: @echo off cls call spec %1 --format specdoc pause Next, add a run menu and shortcut in the Notepad++ like my previous post. Now when we run it it will show the following output: When fund is transferred - should decrease from account balance - should increase to account balance Finished in 0.332042 seconds 2 examples, 0 failures Press any key to continue . . . You will complete code of this post in the bottom. That's it for today. Download: RSpecIntegration.zip

    Read the article

  • Monitoring almost anything with BizTalk 360

    - by Michael Stephenson
    When you work in an integration environment it is common that you will find yourself in a situation where you integrate with some unusual applications or have some unusual dependencies. That is the nature of integration. When you work with BizTalk one of the common problems is that BizTalk often is the place where problems with applications you integrate with are highlighted and these external applications may have poor monitoring solutions. Fortunately, if you are working with a customer who uses BizTalk 360 then it contains a feature called the "Web Endpoint Manager". Typically the web endpoint manager is used to monitor web services that you integrate with and will ping them at appropriate times to make sure they return the expected HTTP status code. When you have an unusual situation where you want to monitor something which is key to the success of your solution, but you find yourself having to consider a significant custom solution to monitor the external dependency, then the Web Endpoint Manager could be your friend. The endpoint manager monitors a url and checks for a certain status code. This means that you can create your own aspx web page and then make BizTalk 360 monitor this web page. Behind the web page you could write any code you wished. An example of this architecture is shown in the below diagram.     In the custom web page you would implement some custom code to do whatever it is that you want to monitor. In the below code snippet you can see how the Page_Load default method is doing some kind of check and then, depending on the result of the check, returns a certain HTTP status code. protected void Page_Load(object sender, EventArgs e) { var result = CheckSomething();   if (result == "Success") Response.StatusCode = 202; else if (result == "DatabaseError") Response.StatusCode = 510; else if (result == "SystemError") Response.StatusCode = 512; else Response.StatusCode = 513;   }   In BizTalk 360 you would go into the Monitor and Notify tab and then to BizTalk Environment which gives you access to the Web Endpoint Manager. You need an alarm set up which configures how the endpoint will be checked. I'm not going to go through the details of creating the alarm as this is already documented in the BizTalk 360 documentation. One point to note is that in the example I am using I set up a threshold alarm, which means that the url is checked about every minute and if there is an error that persists for a period of time then the alarm will raise the alert notification. In my example I configured the alarm to fire if the error persisted for 3 minutes. The below picture shows accessing the endpoint manager.   In the web endpoint manager you would then configure your endpoint to monitor and the HTTP response code which indicates all is working fine. The below picture shows this. I now have my endpoint monitoring set up and BizTalk 360 should be checking my custom endpoint to see that it is available. If I wanted to manually sanity check that the endpoints I have registered are working fine then clicking the Refresh button will show if they are all good or not. If my custom ASP.net page which is checking my dependency gets a problem you will see in the endpoint manager that the status code does not match the expected return code and your endpoints will display in red and you can see the problem. The below picture shows this. If I use specific HTTP response codes for the errors the custom ASP.net page might encounter I can easily interpret these to know what the problem is. 
Using the alarms and notifications with BizTalk 360 it means when your endpoint goes into an error state you can easily configure email or SMS notifications from BizTalk 360 to tell you that your endpoint is having problems and you can use BizTalk 360 to help correlate what the problem is to allow you to investigate further. Below you can see the email which tells me my endpoint is not working.   When everything returns to normal you will see the status is now fixed and you will see a situation like below where you can see the WebEndpoints are now green and the return code matches what is expected.   Conclusion As you can see it is really easy to plug your own custom ASP.net page into the BizTalk 360 web endpoint monitoring feature. This extension then gives you the power to really extend the monitoring to almost anything you want as long as you can write some .net code to check that the dependency is available and working. It would be interesting to hear of any ideas people have around things they would monitor with this extension. More details on the end point monitor can be found on the following link: http://www.biztalk360.com/tour/monitoring_notifications
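
    As a follow-on to the Page_Load snippet above, here is one possible shape for the CheckSomething() helper, sketched in C#. The database connection string and the idea of testing a SQL dependency are assumptions for illustration only; the real check would be whatever dependency your solution needs to monitor.

        using System;
        using System.Data.SqlClient;

        public partial class DependencyCheckPage
        {
            // Returns a status string that Page_Load maps to an HTTP status code.
            private string CheckSomething()
            {
                try
                {
                    // Hypothetical dependency check: can we open a connection to the database?
                    using (var connection = new SqlConnection("Server=myServer;Database=myDb;Integrated Security=true"))
                    {
                        connection.Open();
                    }
                    return "Success";
                }
                catch (SqlException)
                {
                    return "DatabaseError";
                }
                catch (Exception)
                {
                    return "SystemError";
                }
            }
        }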

    Read the article

  • Java doesn't show up in firefox plugins

    - by user857990
    I've just installed the newest java, because firefox blocks the old version. I used the tutorial from http://www.backtrack-linux.org/wiki/index.php/Java_Install Because I had some trouble once, I knew that there are multiple library folders, so I linked into all mozilla plugin folders that there are. /root/.mozilla/plugins /usr/lib/firefox/plugins/ /usr/lib/firefox-addons /usr/lib/mozilla/plugins /usr/lib64/mozilla/plugins java -version returns java version "1.7.0_07" Java(TM) SE Runtime Environment (build 1.7.0_07-b10) Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode) But when I go to firefox plugins, it's not listed. What do I need to do, so that firefox recognizes java?

    Read the article

  • Display problem with fresh install of 12.04

    - by Dan
    Just installed Ubuntu 12.04 from CD and the install went with no problems. After rebooting, I get the initial purple screen and then a black screen with a mouse pointer and a few stray pixels at the bottom left of the screen. Occasionally during the boot process, the purple screen comes back momentarily, but then it goes back to the black screen with the mouse pointer. When I finally give up and press the power button, the purple screen returns with the Shut Down box visible as it is shutting down. Any ideas? I have tried adding nomodeset after quiet splash, but no change. Possibly I'm not doing it correctly, since I am somewhat of a newbie to Linux. Thanks! Dan

    Read the article

  • Simple Preferred time control using silverlight 3.

    - by mohanbrij
    Here I am going to show you a simple preferred time control, where you can select the day of the week and the time of the day. This can be used in lots of place where you may need to display the users preferred times. Sample screenshot is attached below. This control is developed using Silverlight 3 and VS2008, I am also attaching the source code with this post. This is a very basic example. You can download and customize if further for your requirement if you want. I am trying to explain in few words how this control works and what are the different ways in which you can customize it further. File: PreferredTimeControl.xaml, in this file I have just hardcoded the controls and their positions which you can see in the screenshot above. In this example, to change the start day of the week and time, you will have to go and change the design in XAML file, its not controlled by your properties or implementation classes. You can also customize it to change the start day of the week, Language, Display format, styles, etc, etc. File: PreferredTimeControl.xaml.cs, In this control using the code below, first I am taking all the checkbox from my form and store it in the Global Variable, which I can use across my page. List<CheckBox> checkBoxList; #region Constructor public PreferredTimeControl() { InitializeComponent(); GetCheckboxes();//Keep all the checkbox in List in the Load itself } #endregion #region Helper Methods private List<CheckBox> GetCheckboxes() { //Get all the CheckBoxes in the Form checkBoxList = new List<CheckBox>(); foreach (UIElement element in LayoutRoot.Children) { if (element.GetType().ToString() == "System.Windows.Controls.CheckBox") { checkBoxList.Add(element as CheckBox); } } return checkBoxList; } Then I am exposing the two methods which you can use in the container form to get and set the values in this controls. /// <summary> /// Set the Availability on the Form, with the Provided Timings /// </summary> /// <param name="selectedTimings">Provided timings comes from the DB in the form 11,12,13....37 /// Where 11 refers to Monday Morning, 12 Tuesday Morning, etc /// Here 1, 2, 3 is for Morning, Afternoon and Evening respectively, and for weekdays /// 1,2,3,4,5,6,7 where 1 is for Monday, Tuesday, Wednesday, Thrusday, Friday, Saturday and Sunday respectively /// So if we want Monday Morning, we can can denote it as 11, similarly for Saturday Evening we can write 36, etc /// </param> public void SetAvailibility(string selectedTimings) { foreach (CheckBox chk in checkBoxList) { chk.IsChecked = false; } if (!String.IsNullOrEmpty(selectedTimings)) { string[] selectedString = selectedTimings.Split(','); foreach (string selected in selectedString) { foreach (CheckBox chk in checkBoxList) { if (chk.Tag.ToString() == selected) { chk.IsChecked = true; } } } } } /// <summary> /// Gets the Availibility from the selected checkboxes /// </summary> /// <returns>String in the format of 11,12,13...41,42...31,32...37</returns> public string GetAvailibility() { string selectedText = string.Empty; foreach (CheckBox chk in GetCheckboxes()) { if (chk.IsChecked == true) { selectedText = chk.Tag.ToString() + "," + selectedText; } } return selectedText; }   In my example I am using the matrix format for Day and Time, for example Monday=1, Tuesday=2, Wednesday=3, Thursday = 4, Friday = 5, Saturday = 6, Sunday=7. And Morning = 1, Afternoon =2, Evening = 3. So if I want to represent Morning-Monday I will have to represent it as 11, Afternoon-Tuesday as 22, Morning-Wednesday as 13, etc. 
Going the other way, to set values in the control I pass them in the same format, as preferredTimeControl.SetAvailibility("11,12,13,16,23,22"); So this will set the checkbox values for Morning-Monday, Morning-Tuesday, Morning-Wednesday, Morning-Saturday, Afternoon-Tuesday and Afternoon-Wednesday. To implement this control, first I have to import the control's xmlns namespace as xmlns:controls="clr-namespace:PreferredTimeControlApp" and finally put it in the page wherever you want: <Grid x:Name="LayoutRoot" Style="{StaticResource LayoutRootGridStyle}"> <Border x:Name="ContentBorder" Style="{StaticResource ContentBorderStyle}"> <controls:PreferredTimeControl x:Name="preferredTimeControl"></controls:PreferredTimeControl> </Border> </Grid> And in the code-behind you can just include this code: private void InitializeControl() { preferredTimeControl.SetAvailibility("11,12,13,16,23,22"); } And you are ready to go. For more details you can refer to my code attached. I know there may be even simpler and better ways to do this; let me know if you have any other ideas. Sorry guys, I have still used Silverlight 3 and VS2008, as the system I am uploading from is not yet upgraded, but you can use the same code with Silverlight 4 and VS2010 without any changes; it may just ask you to upgrade your project, which will take care of the rest. Download Source Code.   Thanks ~Brij
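
    To make the "time digit + day digit" encoding easier to follow, here is a small hypothetical helper (not part of the attached control; the enum names are my own, but the numbering matches the Tag values described above):

        using System;

        public enum TimeSlot { Morning = 1, Afternoon = 2, Evening = 3 }
        public enum Day { Monday = 1, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday }

        public static class AvailabilityCode
        {
            // "Morning, Monday" -> "11", "Afternoon, Wednesday" -> "23", etc.
            public static string Encode(TimeSlot slot, Day day)
            {
                return ((int)slot).ToString() + ((int)day).ToString();
            }

            public static void Main()
            {
                // Produces "11,13,26" - Morning-Monday, Morning-Wednesday, Afternoon-Saturday
                string availability = string.Join(",", new[]
                {
                    Encode(TimeSlot.Morning, Day.Monday),
                    Encode(TimeSlot.Morning, Day.Wednesday),
                    Encode(TimeSlot.Afternoon, Day.Saturday)
                });
                Console.WriteLine(availability);
            }
        }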

    Read the article

  • google-chrome video application association

    - by Ben Lee
    Is there any way to tell google-chrome to launch video files of particular types in an external application (or even just to bring up the download box as if the type was un-handled), instead of showing the video inside the browser? Searching online, it seems that chrome is supposed to use xdg-mime for file associations, but apparently is ignoring this for video. For example, when I do: xdg-mime query default video/mpeg It returns dragonplayer.desktop. But when I click on a mpeg video link, chrome displays it internally instead of launching Dragon Player (if I double click on a mpeg file in my file manager, on the other hand, it does open Dragon Player). So is there a way to tell chrome to respect this setting, or another way to coax chrome into opening the file externally? If it matters, I'm running the latest version of google-chrome stable (not chromium) at the time of writing, v. 18.0.1025.151, on kubuntu 11.10.

    Read the article

  • Ray-plane intersection to find the Z of the intersecting point

    - by Jenko
    I have a rectangle in 3d space (p1, p2, p3, p4) and when the mouse rolls over it I need to calculate the exact Z of the point on the rect, given the mouse coordinates (x, y). Would a Ray-plane intersection find the Z of the intersecting point? Edit: Would this one-liner work? .. it returns the t value for the intersection, apparently the Z value. float rayPlane(vector3D planepoint, vector3D normal, vector3D origin, vector3D direction){ return -((origin-planepoint).dot(normal))/(direction.dot(normal)); }
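
    A note on the one-liner: it returns the ray parameter t rather than the Z itself; substituting t back into the ray (origin + t * direction) gives the intersection point, and its Z component is the value you are after. A hedged C# sketch using System.Numerics (the rectangle corners and the mouse ray below are made-up values for illustration):

        using System;
        using System.Numerics;

        public static class RayRectDemo
        {
            // Returns the ray parameter t for the ray/plane intersection
            // (same formula as the one-liner in the question).
            static float RayPlane(Vector3 planePoint, Vector3 normal, Vector3 origin, Vector3 direction)
            {
                return -Vector3.Dot(origin - planePoint, normal) / Vector3.Dot(direction, normal);
            }

            public static void Main()
            {
                // Hypothetical rectangle corners and mouse ray, for illustration only.
                Vector3 p1 = new Vector3(0, 0, 5), p2 = new Vector3(1, 0, 6), p3 = new Vector3(0, 1, 5);
                Vector3 normal = Vector3.Cross(p2 - p1, p3 - p1);    // plane normal from two edges
                Vector3 origin = new Vector3(0.5f, 0.5f, 0);         // e.g. unprojected mouse position
                Vector3 direction = new Vector3(0, 0, 1);            // ray direction into the scene

                float t = RayPlane(p1, normal, origin, direction);
                Vector3 hit = origin + direction * t;                // full intersection point
                Console.WriteLine("Z at the mouse position: " + hit.Z);
            }
        }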

    Read the article

  • Find odd and even rows using $.inArray() function when using jQuery Templates

    - by hajan
    In the past period I made series of blogs on ‘jQuery Templates in ASP.NET’ topic. In one of these blogs dealing with jQuery Templates supported tags, I’ve got a question how to create alternating row background. When rendering the template, there is no direct access to the item index. One way is if there is an incremental index in the JSON string, we can use it to solve this. If there is not, then one of the ways to do this is by using the jQuery’s $.inArray() function. - $.inArray(value, array) – similar to JavaScript indexOf() Here is an complete example how to use this in context of jQuery Templates: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server">     <style type="text/css">         #myList { cursor:pointer; }                  .speakerOdd { background-color:Gray; color:White;}         .speaker { background-color:#443344; color:White;}                  .speaker:hover { background-color:White; color:Black;}         .speakerOdd:hover { background-color:White; color:Black;}     </style>     <title>jQuery ASP.NET</title>     <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>     <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.min.js" type="text/javascript"></script>     <script language="javascript" type="text/javascript">         var speakers = [             { Name: "Hajan1" },             { Name: "Hajan2" },             { Name: "Hajan3" },             { Name: "Hajan4" },             { Name: "Hajan5" }         ];         $(function () {             $("#myTemplate").tmpl(speakers).appendTo("#myList");         });         function oddOrEven() {             return ($.inArray(this.data, speakers) % 2) ? "speaker" : "speakerOdd";         }     </script>     <script id="myTemplate" type="text/x-jquery-tmpl">         <tr class="${oddOrEven()}">             <td> ${Name}</td>         </tr>     </script> </head> <body>     <table id="myList"></table> </body> </html> So, I have defined stylesheet classes speakerOdd and speaker as well as corresponding :hover styles. Then, you have speakers JSON string containing five items. And what is most important in our case is the oddOrEven function where $.inArray(value, data) is implemented. function oddOrEven() {     return ($.inArray(this.data, speakers) % 2) ? "speaker" : "speakerOdd"; } Remark: The $.inArray() method is similar to JavaScript's native .indexOf() method in that it returns -1 when it doesn't find a match. If the first element within the array matches value, $.inArray() returns 0. From http://api.jquery.com/jQuery.inArray/ So, now we can call oddOrEven function from inside our jQuery Template in the following way: <script id="myTemplate" type="text/x-jquery-tmpl">     <tr class="${oddOrEven()}">         <td> ${Name}</td>     </tr> </script> And the result is I hope you like it. Regards, Hajan

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows Security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts to logon. When I set up my Windows 8 machine I originally set it up with a 'real', non-live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how changes things. Windows wants you to use Windows Live Windows 8 logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because - well it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I do get all my live logins to work from Live Account so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live User Account - this all feels like strong-arming you into moving into Microsofts walled garden… and that's probably what it's meant to do. Who am I? The real problem to me though is that these Windows Live and raw Windows User accounts are a bit unpredictable especially when it comes to developer information about the account and which credentials to use. So for example Windows reports folder security like this: Notice it's showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl) it'll just automatically show up as the live account. On the other hand though the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account and I have to login to Windows with my Live account, what do you suppose this returns?Console.WriteLine(Environment.UserName); It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in TaskManager (or Process Explorer for me) I see: Everything running on the desktop shell with my login running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup! So this is who you really are! The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication - I tried to log-in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings, my apps login settings and I just could not for the life of me get into the site with my Windows username. 
That is until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS if I look at a Trace page (or in my case my app's Status page) I see that the logged on account is - my Windows user account. What's really annoying about this is that in some places it uses the live account in other places it uses my Windows account. If I remote desktop into my Web server online - I have to use the local authentication dialog but I have to put in my real Windows credentials not the Live account. Oh yes, it's all so terribly intuitive and logical… So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app or even what mechanism is used there to get the user name info).  When logging on to local machine resource with user name and password you have to use your Live IDs even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows.
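
    If you want to check this on your own machine, the following hedged C# console sketch (standard .NET APIs only) prints the account the desktop process actually runs under; per the article, on a Live-enabled Windows 8 box this shows the underlying Windows user rather than the Live ID:

        using System;
        using System.Security.Principal;

        class WhoAmI
        {
            static void Main()
            {
                // Per the article, these report the underlying Windows account (e.g. "rstrahl"),
                // not the Windows Live ID typed at the logon screen.
                Console.WriteLine("Environment.UserName: " + Environment.UserName);
                Console.WriteLine("UserDomainName:       " + Environment.UserDomainName);
                Console.WriteLine("WindowsIdentity:      " + WindowsIdentity.GetCurrent().Name);
            }
        }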

    Read the article

  • How do I ask google not to index certain parts of my page?

    - by Gavin Mannion
    I was searching for an old review on my site today and I noticed that Google is indexing the headline text in my latest article list on every page that it appears, obviously I guess. The problem is if I search for my Dragon's Lair review specifically to my site like this http://www.google.co.za/search?sugexp=chrome,mod=9&sourceid=chrome&ie=UTF-8&q=site%3Alazygamer.net+dragons+lair+review Then it returns a ton of pages that aren't appropriate as they aren't related to the review at all. The reason why I care is that I have a second Dragon's Lair review that was posted years ago and now I can't find it. Is there a way to hint to google that certain text isn't relevant to the actual content on the page? is it a terrible idea?

    Read the article

  • Yet another blog about IValueConverter

    - by codingbloke
    After my previous blog on a Generic Boolean Value Converter I thought I might as well blog up another IValueConverter implementation that I use. The Generic Boolean Value Converter effectively converters an input which only has two possible values to one of two corresponding objects.  The next logical step would be to create a similar converter that can take an input which has multiple (but finite and discrete) values to one of multiple corresponding objects.  To put it more simply a Generic Enum Value Converter. Now we already have a tool that can help us in this area, the ResourceDictionary.  A simple IValueConverter implementation around it would create a StringToObjectConverter like so:- StringToObjectConverter using System; using System.Windows; using System.Windows.Data; using System.Linq; using System.Windows.Markup; namespace SilverlightApplication1 {     [ContentProperty("Items")]     public class StringToObjectConverter : IValueConverter     {         public ResourceDictionary Items { get; set; }         public string DefaultKey { get; set; }                  public StringToObjectConverter()         {             DefaultKey = "__default__";         }         public virtual object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             if (value != null && Items.Contains(value.ToString()))                 return Items[value.ToString()];             else                 return Items[DefaultKey];         }         public virtual object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             return Items.FirstOrDefault(kvp => value.Equals(kvp.Value)).Key;         }     } } There are some things to note here.  The bulk of managing the relationship between an object instance and the related string key is handled by the Items property being an ResourceDictionary.  Also there is a catch all “__default__” key value which allows for only a subset of the possible input values to mapped to an object with the rest falling through to the default. We can then set one of these up in Xaml:-             <local:StringToObjectConverter x:Key="StatusToBrush">                 <ResourceDictionary>                     <SolidColorBrush Color="Red" x:Key="Overdue" />                     <SolidColorBrush Color="Orange" x:Key="Urgent" />                     <SolidColorBrush Color="Silver" x:Key="__default__" />                 </ResourceDictionary>             </local:StringToObjectConverter> You could well imagine that in the model being bound these key names would actually be members of an enum.  This still works due to the use of ToString in the Convert method.  Hence the only requirement for the incoming object is that it has a ToString implementation which generates a sensible string instead of simply the type name. I can’t imagine right now a scenario where this converter would be used in a TwoWay binding but there is no reason why it can’t.  I prefer to avoid leaving the ConvertBack throwing an exception if that can be be avoided.  Hence it just enumerates the KeyValuePair entries to find a value that matches and returns the key its mapped to. Ah but now my sense of balance is assaulted again.  Whilst StringToObjectConverter is quite happy to accept an enum type via the Convert method it returns a string from the ConvertBack method not the original input enum type that arrived in the Convert.  
Now I could address this by complicating the ConvertBack method and examining the targetType parameter etc.  However I prefer a different approach, deriving a new EnumToObjectConverter class instead. EnumToObjectConverter using System; namespace SilverlightApplication1 {     public class EnumToObjectConverter : StringToObjectConverter     {         public override object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             string key = Enum.GetName(value.GetType(), value);             return base.Convert(key, targetType, parameter, culture);         }         public override object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             string key = (string)base.ConvertBack(value, typeof(String), parameter, culture);             return Enum.Parse(targetType, key, false);         }     } }   This is a more belts-and-braces solution with specific use of Enum.GetName and Enum.Parse.  Whilst it is more explicit, in that a developer has to choose to use it, it is only really necessary when using TwoWay binding; in OneWay binding the base StringToObjectConverter would serve just as well. The observant might note that there is actually no “Generic” aspect to this solution in the end.  The use of a ResourceDictionary eliminates the need for that.
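
    For completeness, here is a hedged sketch of driving the converter from code rather than XAML. The Status enum and the brush choices are hypothetical, and it assumes your framework version lets you populate a ResourceDictionary programmatically:

        using System.Globalization;
        using System.Windows;
        using System.Windows.Media;

        public enum Status { Overdue, Urgent, Done }   // hypothetical enum for the example

        public static class ConverterUsageSketch
        {
            public static void Demo()
            {
                var converter = new EnumToObjectConverter { Items = new ResourceDictionary() };

                // Mirrors the resource keys shown in the earlier XAML; "__default__" catches anything unmapped.
                converter.Items.Add("Overdue", new SolidColorBrush(Colors.Red));
                converter.Items.Add("Urgent", new SolidColorBrush(Colors.Orange));
                converter.Items.Add("__default__", new SolidColorBrush(Colors.Gray));

                // Convert: enum value in, brush out.
                var brush = (Brush)converter.Convert(Status.Urgent, typeof(Brush), null, CultureInfo.CurrentCulture);

                // ConvertBack: brush in, original enum value out (via Enum.Parse in the derived class).
                var status = (Status)converter.ConvertBack(brush, typeof(Status), null, CultureInfo.CurrentCulture);
            }
        }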

    Read the article

  • Disable default error pages/error messages in IIS

    - by Antoine
    I have this web application (ASP.NET MVC 3) that on certain conditions returns a custom JSON string with an HTTP status code for an error (403, 415, 500). It is deployed on a Win 2008 R2 server with IIS 7.5. Initially I was getting the standard HTML pages for the error instead of the JSON data. I removed the error pages for these errors in the app settings. But now my queries which should return some JSON data return a single error line. When the server gives me a 403, I get the message "You do not have permission to view this directory or page." (a single line, no HTML around it). What can I do to deactivate this and finally get what the app is returning and not what the server wants to return?
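
    One commonly used approach (a hedged sketch, not necessarily the only fix) is to stop IIS from replacing responses that already have a body: either set <httpErrors existingResponse="PassThrough" /> under system.webServer in web.config, or opt out per response from the MVC code via TrySkipIisCustomErrors, roughly like this:

        using System.Web.Mvc;

        public class ErrorsController : Controller
        {
            public ActionResult Forbidden()
            {
                // Ask IIS 7.5 not to swap our JSON body for its own 403 page.
                Response.TrySkipIisCustomErrors = true;
                Response.StatusCode = 403;
                return Json(new { error = "forbidden", detail = "custom JSON error payload" },
                            JsonRequestBehavior.AllowGet);
            }
        }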

    Read the article

  • Caching large amount of ajax returned objects

    - by ofcapl
    I'm building an application which fetches large amount of items with ajax requests via other application API. It returns me 6k - 30k js objects which are used multiple times across various application views (sorting, filtering etc.). I would like to avoid querying API every time for such big list so I decided to cache this data somehow. I was thinking about various solutions: saving it to localstorage, using some caching library (e.g. locachejs), storing in js var. I'm not an expert so I would like to hear Your suggestions about each (or one of these) solution, about its pros and cons. Every help will be very appreciated.

    Read the article

  • Returning a mock object from a mock object

    - by Songo
    I'm trying to return an object when mocking a parser class. This is the test code using PHPUnit 3.7 //set up the result object that I want to be returned from the call to parse method $parserResult= new ParserResult(); $parserResult->setSegment('some string'); //set up the stub Parser object $stubParser=$this->getMock('Parser'); $stubParser->expects($this->any()) ->method('parse') ->will($this->returnValue($parserResult)); //injecting the stub to my client class $fileHeaderParser= new FileWriter($stubParser); $output=$fileParser->writeStringToFile(); Inside my writeStringToFile() method I'm using $parserResult like this: writeStringToFile(){ //Some code... $parserResult=$parser->parse(); $segment=$parserResult->getSegment();//that's why I set the segment in the test. } Should I mock ParserResult in the first place, so that the mock returns a mock? Is it good design for mocks to return mocks? Is there a better approach to do this all?!

    Read the article

  • Computer SOMETIMES recognizes when headphones are plugged in.

    - by rcrobot
    Whenever I plug my headphones into my computer's front headphone jack, I get a weird situation. Sometimes, the computer will recognize the headphones and work properly. But other times, the computer will play sound through both the headphones and my monitor's speaker. When this happens, the sound section of the system settings does not list the headphones. I can fix the issue temporarily by wiggling the headphone port, but if it gets wiggled the wrong way again, then the issue returns. My PC's case is a Rosewill Challenger. I have tried multiple headphones and the same issue is there. I suspect that this might be a hardware related issue, but if there is any way to fix it with software, that would be helpful. This is what it looks like when everything is working properly: This happens when I wiggle the headphone port. I can quickly switch between these two by doing so:

    Read the article

  • MVC Pattern, ViewModels, Location of conversion.

    - by Pino
    I've been working with ASP.Net MVC for around a year now and have created my applications in the following way. X.Web - MVC Application Contains Controller and Views X.Lib - Contains Data Access, Repositories and Services. This allows us to drop the .Lib into any application that requires it. At the moment we are using Entity Framework, the conversion from EntityO to a more specific model is done in the controller. This set-up means if a service method returns an EntityO and then the Controller will do a conversion before the data is passed to a view. I'm interested to know if I should move the conversion to the Service so that the app doesn't have Entity Objects being passed around.
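
    For what it is worth, a hedged sketch of what moving the conversion into the service layer might look like (all type and property names here are invented for the example):

        // X.Lib - the service returns a view model, so EF entities never leave the library.
        public class Product            // stand-in for the EF entity
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class ProductViewModel
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public interface IProductRepository
        {
            Product GetById(int id);
        }

        public class ProductService
        {
            private readonly IProductRepository _repository;

            public ProductService(IProductRepository repository)
            {
                _repository = repository;
            }

            public ProductViewModel GetProduct(int id)
            {
                var entity = _repository.GetById(id);   // EF entity stays inside X.Lib
                return new ProductViewModel { Id = entity.Id, Name = entity.Name, Price = entity.Price };
            }
        }

    The X.Web controller would then just pass the ProductViewModel straight to the view, which keeps the mapping in one place and lets any application that drops in X.Lib reuse it.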

    Read the article

  • How to connect to Windows Server 2008 Remote Desktop with Network Level Authentication Required

    - by Lobo
    I have an Ubuntu 11.10 machine and I want to connect via remote desktop to a Windows Server 2008 R2 machine. In the Remote Desktop connection properties on the Windows Server 2008 side, security is set to "safer": specifically, the selected option is "Allow connections only from computers running Remote Desktop with Network Level Authentication." On my Ubuntu machine, I used Remmina to connect to Windows Server 2008. Remmina cannot connect to a Windows Server 2008 machine with the "Network Level Authentication" option shown in the previous paragraph. The error message Remmina returns is as follows: "Disable the connection to the server RPD: IPWINDOWSSERVER2008" How, or with what program, can I connect by remote desktop to a Windows Server 2008 machine that has the "Network Level Authentication" option selected? Thanks for the help, greetings! P.S.: Excuse my English.

    Read the article

  • Xubuntu dual monitor broke display

    - by Arman yaraee
    After I used Settings > Display to enable my second monitor, my first display is not working anymore either. So I enabled both, then clicked "save configuration" since it was working, and boom, it crashed. Now when I start up, I can see the screen where I type my password, but it goes black after that! When I boot into recovery mode it works fine, but in Settings > Display I don't see HDMI at all to disable it. Is there any way to restore all display settings to default? (I can Ctrl+Alt+F1 and type commands, but xrandr returns no results over there.)

    Read the article

  • ALT+SysRq+REISUB hangs at "resetting" (without actually resetting/restarting) in Maverick

    - by Andrei
    I'm trying ALT+SysRq+REISUB to see how it would be used to restart my system safely in case of emergency. However, I find that ALT+SysRq+REISUB hangs at "resetting" (without actually resetting/restarting) in Maverick. All other SysRq combinations appear to work correctly (i.e. ALT+SysRq+REISU). cat /proc/sys/kernel/sysrq returns 0. But I'm not sure it's relevant because ALT+SysRq certainly works. What can I do to have "B" actually restart the system? Thanks!

    Read the article

  • ubuntu 12.04.3 - Reverse DNS issue - slow ping interval but normal ping value

    - by McArthor Lee
    I'm running Ubuntu 12.04.3 x86 desktop in my corporate environment. I joined the corp domain with Likewise Open. But when I ping another PC, say its hostname is pc-test, "ping pc-test" or "ping pc-test.domain.name" returns a slow interval between replies (about 5 seconds), but the ping time itself is below 1 ms. When I use "ping -n pc-test", everything works well. So I conclude this is a reverse DNS issue. How do I fix this issue? Many thanks! Edit: In my understanding, a reverse DNS issue is related to the DNS server or WINS server, not only to Ubuntu; is this right? If I want to fix this issue as much as possible on Ubuntu, but not on the network servers, what should I do?

    Read the article

  • October 2013 Cumulative Update for SQL Server 2008 R2

    - by AaronBertrand
    Microsoft has released Cumulative Update #9 for SQL Server 2008 R2 Service Pack 2. KB Article: KB #2887606 17 fixes listed at time of publication Build number is 10.50.4295 Relevant for @@VERSION 10.50.4000 through 10.50.4294 My usual disclaimer: these updates are NOT for SQL Server 2008 (or SQL Server 2012). Only apply to systems where SELECT @@VERSION returns 10.50.xxxx, where xxxx is >= 2500. If xxxx < 2500, you need to start thinking about getting off the RTM branch. Note that no more cumulative...(read more)

    Read the article

< Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >