Search Results

Search found 10991 results on 440 pages for 'stable sort'.

Page 50/440 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • How can visiting a webpage infect your computer?

    - by Cybis
    My mother's computer recently became infected with some sort of rootkit. It began when she received an email from a close friend asking her to check out some sort of webpage. I never saw it, but my mother said it was just a blog of some sort, nothing interesting.

    A few days later, my mother signed in on the PayPal homepage. PayPal gave some sort of security notice which stated that, to prevent fraud, they needed some additional personal information. Among some of the more normal information (name, address, etc.), they asked for her SSN and bank PIN! She refused to submit that information and complained to PayPal that they shouldn't ask for it. PayPal said they would never ask for such information and that it wasn't their webpage. There was no such "security notice" when she logged in from a different computer, only from hers. It wasn't a phishing attempt or redirection of any sort; IE clearly showed an SSL connection to https://www.paypal.com/. She remembered that strange email and asked her friend about it; the friend never sent it! Obviously, something on her computer was intercepting the PayPal homepage, and that email was the only other strange thing to happen recently.

    She entrusted me to fix everything. I nuked the computer from orbit, since it was the only way to be sure (i.e., reformatted her hard drive and did a clean install). That seemed to work fine. But it got me wondering: my mother didn't download and run anything. There were no weird ActiveX controls running (she's not computer illiterate and knows not to install them), and she only uses webmail (i.e., no Outlook vulnerability). When I think webpages, I think content presentation: JavaScript, HTML, and maybe some Flash. How could that possibly install and execute arbitrary software on your computer? It seems kind of weird/stupid that such vulnerabilities exist.

    Read the article

  • Downloading a repository for local use

    - by EBV2010
    I'm trying to get Thunderbird working in such a way that it will properly work with Kolab groupware. For that I need a fixed setup of Thunderbird and add-ons (Lightning, SyncKolab) without automatic updates, and I need the present version of Thunderbird to remain available to the users. What I hope to achieve is that the repository for Thunderbird, as it is now on http://ppa.launchpad.net/mozillateam/thunderbird-stable/, will be available on my local server, so I always use that version even if Thunderbird moves to a new stable version. Concretely:

    - I copy the content of http://ppa.launchpad.net/mozillateam/thunderbird-stable/ to my server.
    - I make it available as a repository on my network.

    I don't know whether this is possible, or allowed under the license, etc.
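
    One common approach to this is the apt-mirror package; below is a minimal sketch, assuming a 12.04 ("precise") client base. The myserver name is a placeholder, and whether redistribution is permitted is best confirmed with the PPA owner:

        sudo apt-get install apt-mirror
        # add the PPA to /etc/apt/mirror.list:
        #   deb http://ppa.launchpad.net/mozillateam/thunderbird-stable/ubuntu precise main
        sudo apt-mirror        # downloads into /var/spool/apt-mirror by default
        # serve that directory over HTTP, then point clients at the copy:
        #   deb http://myserver/mirror/ppa.launchpad.net/mozillateam/thunderbird-stable/ubuntu precise main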

    Read the article

  • In UTF-8 collation, why is 11- less than 1-?

    - by ???
    I found that the sort result in ASCII is:

        1-
        11-

    and in UTF-8:

        11-
        1-

    I feel this is counter-intuitive, and it's not dictionary order. Isn't the character '-' (U+002D) always less than [0-9] (U+0030-U+0039)? What's the general rule in UTF-8 collation? And how can I bypass it on Linux, making '-' sort before [0-9] while keeping other characters unchanged? (So that it affects the result of ls --sort, sort, etc.)
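
    The ordering comes from the active locale's collation rules, not from UTF-8 encoding itself: typical glibc locales give punctuation such as '-' little or no weight on the first comparison pass. A quick demonstration, plus the usual workaround of forcing plain byte-order comparison with the C locale (both sort and ls honor LC_COLLATE):

        printf '1-\n11-\n' | LC_COLLATE=en_US.UTF-8 sort   # locale collation rules apply
        printf '1-\n11-\n' | LC_COLLATE=C sort             # byte order: '-' (0x2D) before '0'-'9'
        LC_COLLATE=C ls                                    # same idea for directory listings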

    Read the article

  • Web grid server pagination triggers multiple controller calls when changing page

    - by Thomas Scattolin
    When I server-filter on "au" my web grid and change page, multiple call to the controller are done : the first with 0 filtering, the second with "a" filtering, the third with "au" filtering. My table load huge data so the first call is longer than others. I see the grid displaying firstly the third call result, then the second, and finally the first call (this order correspond to the response time of my controller due to filter parameter) Why are all that controller call made ? Can't just my controller be called once with my total filter "au" ? What should I do ? Here is my grid : $("#" + gridId).kendoGrid({ selectable: "row", pageable: true, filterable:true, scrollable : true, //scrollable: { // virtual: true //false // Bug : Génère un affichage multiple... //}, navigatable: true, groupable: true, sortable: { mode: "multiple", // enables multi-column sorting allowUnsort: true }, dataSource: { type: "json", serverPaging: true, serverSorting: true, serverFiltering: true, serverGrouping:false, // Ne fonctionne pas... pageSize: '@ViewBag.Pagination', transport: { read: { url: Procvalue + "/LOV", type: "POST", dataType: "json", contentType: "application/json; charset=utf-8" }, parameterMap: function (options, type) { // Mise à jour du format d'envoi des paramètres // pour qu'ils puissent être correctement interprétés côté serveur. // Construction du paramètre sort : if (options.sort != null) { var sort = options.sort; var sort2 = ""; for (i = 0; i < sort.length; i++) { sort2 = sort2 + sort[i].field + '-' + sort[i].dir + '~'; } options.sort = sort2; } if (options.group != null) { var group = options.group; var group2 = ""; for (i = 0; i < group.length; i++) { group2 = group2 + group[i].field + '-' + group[i].dir + '~'; } options.group = group2; } if (options.filter != null) { var filter = options.filter.filters; var filter2 = ""; for (i = 0; i < filter.length; i++) { // Vérification si type colonne == string. // Parcours des colonnes pour trouver celle qui a le même nom de champ. var type = ""; for (j = 0 ; j < colonnes.length ; j++) { if (colonnes[j].champ == filter[i].field) { type = colonnes[j].type; break; } } if (filter2.length == 0) { if (type == "string") { // Avec '' autour de la valeur. filter2 = filter2 + filter[i].field + '~' + filter[i].operator + "~'" + filter[i].value + "'"; } else { // Sans '' autour de la valeur. filter2 = filter2 + filter[i].field + '~' + filter[i].operator + "~" + filter[i].value; } } else { if (type == "string") { // Avec '' autour de la valeur. filter2 = filter2 + '~' + options.filter.logic + '~' + filter[i].field + '~' + filter[i].operator + "~'" + filter[i].value + "'"; }else{ filter2 = filter2 + '~' + options.filter.logic + '~' + filter[i].field + '~' + filter[i].operator + "~" + filter[i].value; } } } options.filter = filter2; } var json = JSON.stringify(options); return json; } }, schema: { data: function (data) { return eval(data.data.Data); }, total: function (data) { return eval(data.data.Total); } }, filter: { logic: "or", filters:filtre(valeur) } }, columns: getColonnes(colonnes) }); Here is my controller : [HttpPost] public ActionResult LOV([DataSourceRequest] DataSourceRequest request) { return Json(CProduitsManager.GetProduits().ToDataSourceResult(request)); }

    Read the article

  • Sublinear Extra Space MergeSort

    - by hulkmeister
    I am reviewing basic algorithms from a book called Algorithms by Robert Sedgewick, and I came across a problem in MergeSort that I am, sad to say, having difficulty solving. The problem is below:

    Sublinear Extra Space. Develop a merge implementation that reduces the extra space requirement to max(M, N/M), based on the following idea: Divide the array into N/M blocks of size M (for simplicity in this description, assume that N is a multiple of M). Then, (i) considering the blocks as items with their first key as the sort key, sort them using selection sort; and (ii) run through the array merging the first block with the second, then the second block with the third, and so forth.

    My difficulty is that, following the idea Sedgewick recommends, the following set of arrays will not be sorted: {0, 10, 12}, {3, 9, 11}, {5, 8, 13}. The algorithm I use is the following:

    1. Divide the full array into subarrays of size M.
    2. Run selection sort on each of the subarrays.
    3. Merge each of the subarrays using the method Sedgewick recommends in (ii). (This is where I encounter the problem of where to store the results after the merge.)

    This leads to wanting to increase the auxiliary space so it can hold at least two subarrays at a time (for merging), but based on the specifications of the problem, that is not allowed. I have also considered using the original array as space for one subarray and the auxiliary space for the second subarray. However, I can't envision a solution that does not end up overwriting the entries of the first subarray. Any ideas on other ways this can be done?

    NOTE: If this is supposed to be on StackOverflow.com, please let me know how I can move it. I posted here because the question is academic.

    Read the article

  • Problems after bumblebee installation

    - by Samuel
    I tried to install Bumblebee on Ubuntu 12.04 LTS by following the steps on the Ubuntu wiki site. But when I ran this command:

        sudo add-apt-repository ppa:bumblebee/stable && sudo apt-get update

    this output came out:

        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.q0zzLiXVT3 --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 46C0364A882F14F899448FFCB22A95F88110A93A
        gpg: requesting key 8110A93A from hkp server keyserver.ubuntu.com
        gpg: key 8110A93A: "Launchpad PPA for Bumlebee Project" not changed
        gpg: Total number processed: 1
        gpg:              unchanged: 1
        E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list
        E: The list of sources could not be read.

    The same problem message appears when I try to run the update center:

        E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list
        E: The list of sources could not be read.
        E: The package lists or status file could not be parsed or opened.

    I don't know what to do, since I'm a newbie at Linux. Thanks in advance, Samuel.
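
    The apt error points at a malformed line in the PPA's sources file (a line beginning with 'ain' instead of 'deb', most likely a stray line wrap introduced when the file was written). Inspecting and repairing that one line is usually enough; a sketch:

        cat -n /etc/apt/sources.list.d/bumblebee-stable-precise.list   # look at line 3
        sudo nano /etc/apt/sources.list.d/bumblebee-stable-precise.list
        # every non-comment line must start with 'deb' or 'deb-src', e.g.:
        #   deb http://ppa.launchpad.net/bumblebee/stable/ubuntu precise main
        sudo apt-get update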

    Read the article

  • Why is Quicksort called "Quicksort"?

    - by Darrel Hoffman
    The point of this question is not to debate the merits of this algorithm over any other sorting algorithm; certainly there are many other questions that do that. This question is about the name. Why is Quicksort called "Quicksort"? Sure, it's "quick" most of the time, but not always. The possibility of degenerating to O(N^2) is well known. There are various modifications to Quicksort that mitigate this problem, but the ones which bring the worst case down to a guaranteed O(n log n) aren't generally called Quicksort anymore (e.g., Introsort). I just wonder why, of all the well-known sorting algorithms, this is the only one deserving of the name "quick", which describes not how the algorithm works but how fast it (usually) is. Mergesort is called that because it merges the data. Heapsort is called that because it uses a heap. Introsort gets its name from "introspective", since it monitors its own performance to decide when to switch from Quicksort to Heapsort. Similarly for all the slower ones: Bubblesort, Insertion sort, Selection sort, etc. They're all named for how they work. The only other exception I can think of is "Bogosort", which is really just a joke that nobody ever actually uses in practice. Why isn't Quicksort called something more descriptive, like "Partition sort" or "Pivot sort", which would describe what it actually does? It's not even a case of "got here first": Mergesort was developed 15 years before Quicksort (1945 and 1960, respectively, according to Wikipedia). I guess this is really more of a history question than a programming one. I'm just curious how it got the name; was it just good marketing?

    Read the article

  • Parameters in an SEO URL

    - by Marius
    This should be a very simple question for SEO experts. Let's say we have the following URLs:

        http://www.test.com/some-sort-of-page
        http://www.test.com/some-sort-of-page?pgid1189
        http://www.test.com/some-sort-of-page/page/1189
        http://www.test.com/page/1189/some-sort-of-page

    The first one is the ideal solution. What I need to do is somehow pass a resource identifier in the URL, so I know exactly what the URL is pointing to, since it can point to a lot of different things. In the second URL, "pgid" specifies that the resource is a "page". URLs 3 and 4 specify the same thing differently. I do not care whether the URL is friendly to people, because, let's face it: 99.9% of people will never, ever bother to remember such a URL, no matter how "friendly" it is. So the question is: which of the last three URLs would be the best solution for search engines? My guess is the 2nd, with the query string, but I might be wrong. Thanks for your thoughts.

    P.S. Please don't suggest using the first URL. There's no problem using it, but the question is not about that.

    Read the article

  • Install tmux on Mac OS X

    - by unixben
    This is a short rundown on how to get tmux running on your Mac OS X system. The same methodology applies when compiling this on Solaris.

    What is tmux? According to the developer's page, "tmux is a terminal multiplexer: it enables a number of terminals (or windows), each running a separate program, to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached".

    Why not just use screen? For me, the primary reason I switched to tmux from screen is the much easier configuration syntax that tmux offers. If you've ever struggled with formatting screen's caption or hardstatus line, then you will appreciate the ease with which you can achieve the same results in tmux.

    Preparing your environment: You will need a C compiler installed. I believe that OS X ships by default with GNU make, but if not, then you will need to obtain it or use Xcode.

    Download the sources. While I'm putting all this together, I like to keep everything neatly tucked away in a build directory.

        mkdir ~/build
        cd ~/build
        curl -OL http://downloads.sourceforge.net/tmux/tmux-1.5.tar.gz
        curl -OL http://downloads.sourceforge.net/project/levent/libevent/libevent-2.0/libevent-2.0.16-stable.tar.gz

    Unpack the sources:

        tar xzf tmux-1.5.tar.gz
        tar xzf libevent-2.0.16-stable.tar.gz

    Compiling libevent:

        cd libevent-2.0.16-stable
        ./configure --prefix=/opt
        make
        sudo make install

    Compiling tmux:

        cd ../tmux-1.5
        LDFLAGS="-L/opt/lib" CPPFLAGS="-I/opt/include" LIBS="-lresolv" ./configure --prefix=/opt
        make
        sudo make install

    That's all there is to it!
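
    A quick sanity check after the install; since everything went under /opt, which usually isn't on the default PATH, add it first (shown for the current shell only):

        export PATH=/opt/bin:$PATH
        tmux -V          # should report: tmux 1.5
        tmux             # start a session; detach with Ctrl-b d, reattach with: tmux attach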

    Read the article

  • Cannot connect to wireless network

    - by smr
    I am quite new to using Ubuntu 12.10. All I remember from installing Ubuntu 12.10 is that I didn't check any box specifically to enable wireless connectivity. I tried looking into the various questions posted here; most of them refer to certain commands (usually involving sudo), and I wasn't sure what those commands were trying to do. The network configuration settings show different options, like 'Wireless', 'Wired', 'IPv4', 'IPv6', etc., and I am not sure what sort of settings are to be made there (for example, what sort of 'WEP' settings). Although I wasn't able to connect to the wireless network, I was able to connect to the internet once I plugged in the ethernet cable. Can someone help me with:

    1) Understanding a way to see if Ubuntu is configured to connect to a wireless network.
    2) If so, getting it to connect to a wireless network (setting up a new one).
    3) Where in Ubuntu I should look to see what hardware devices are present (such as a Broadcom wireless adapter), the way we use Device Manager in Windows to look into adapter settings.
    4) Reference material to learn and understand the various Ubuntu commands (things like lshw, etc.).
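
    For point 3 in particular, the command-line equivalents of Device Manager are worth knowing; a few standard diagnostics, all stock on Ubuntu 12.10:

        sudo lshw -C network   # list network adapters and their drivers (incl. Broadcom)
        rfkill list            # check whether the wifi radio is soft- or hard-blocked
        nmcli dev status       # NetworkManager's view of each device
        nmcli dev wifi list    # scan for visible wireless networks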

    Read the article

  • SBS 2008 SP2 Backup - Volume Shadow Copy Operation Failed

    - by Robert Ortisi
    Server Setup:
        Exchange 2007 Version: 08.03.0192.001 (Rollup 4)
        Windows Small Business Server 2008 SP2 (Rollup 5)
        Exchange set up on D: drive (449 GB / 698 GB free)
        80 GB / 148 GB free on OS drive

    Issue: Backup failure (VSS related)
    Backup Software: Windows Server Backup (ver 1.0)

    Simplified Error:
        Creation of the shared protection point timed out. Unknown error (0x81000101)
        The flush and hold writes operation on volume C: timed out while waiting for a release writes command.
        Volume Shadow Copy Warning: VSS spent 43 seconds trying to flush and hold the volume \\?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}. This might cause problems when other volumes in the shadow-copy set timeout waiting for the release-writes phase, and it can cause the shadow-copy creation to fail. Trying again when disk activity is lower may solve this problem.

    What I've tried:
    - Server reboot.
    - Updated Server and Exchange.
    - Reconfigured SharePoint (this helped resolve the last VSS error I encountered).
    - Re-registered the VSS DLLs (backups will sometimes work afterwards, but the VSS writers fail again soon after).
    - Tried implementing hotfix: http://support.microsoft.com/kb/956136
    - Tried implementing hotfix: http://support.microsoft.com/kb/972135
    I left it for a few days and a few backups came through, but then it began to fail again.

    Detailed Information:

        Log Name: Application
        Source: VSS
        Date: 16/11/2011 8:02:11 PM
        Event ID: 12341
        Task Category: None
        Level: Warning
        Keywords: Classic
        User: N/A
        Computer: SERVER.DOMAIN.local
        Description: Volume Shadow Copy Warning: VSS spent 43 seconds trying to flush and hold the volume \\?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\. This might cause problems when other volumes in the shadow-copy set timeout waiting for the release-writes phase, and it can cause the shadow-copy creation to fail. Trying again when disk activity is lower may solve this problem.
        Operation: Executing Asynchronous Operation
        Context: Current State: flush-and-hold writes
        Volume Name: \\?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\
        (Event XML elided; tags lost in extraction)

        Log Name: System
        Source: volsnap
        Date: 16/11/2011 8:02:11 PM
        Event ID: 8
        Task Category: None
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: SERVER.DOMAIN.local
        Description: The flush and hold writes operation on volume C: timed out while waiting for a release writes command.
        (Event XML elided; tags lost in extraction)

        Log Name: Application
        Source: Microsoft-Windows-Backup
        Date: 16/11/2011 8:11:18 PM
        Event ID: 521
        Task Category: None
        Level: Error
        Keywords:
        User: SYSTEM
        Computer: SERVER.DOMAIN.local
        Description: Backup started at '16/11/2011 9:00:35 AM' failed as Volume Shadow copy operation failed for backup volumes with following error code '2155348001'. Please rerun backup once issue is resolved.
        (Event XML elided; tags lost in extraction)

    VSS writer status:

        Writer name: 'FRS Writer'
           Writer Id: {d76f5a28-3092-4589-ba48-2958fb88ce29}
           Writer Instance Id: {ba047fc6-9ce8-44ba-b59f-f2f8c07708aa}
           State: [5] Waiting for completion
           Last error: No error

        Writer name: 'ASR Writer'
           Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4}
           Writer Instance Id: {0aace3e2-c840-4572-bf49-7fcc3fbcf56d}
           State: [1] Stable
           Last error: No error

        Writer name: 'Shadow Copy Optimization Writer'
           Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
           Writer Instance Id: {054593e2-2086-4480-92e5-30386509ed1b}
           State: [1] Stable
           Last error: No error

        Writer name: 'Registry Writer'
           Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
           Writer Instance Id: {840e6f5f-f35a-4b65-bb20-060cf2ee892a}
           State: [1] Stable
           Last error: No error

        Writer name: 'COM+ REGDB Writer'
           Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
           Writer Instance Id: {9486bedc-f6e8-424b-b563-8b849d51b1e1}
           State: [1] Stable
           Last error: No error

        Writer name: 'BITS Writer'
           Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0}
           Writer Instance Id: {29368bb3-e04b-4404-8fc9-e62dae18da91}
           State: [1] Stable
           Last error: No error

        Writer name: 'Dhcp Jet Writer'
           Writer Id: {be9ac81e-3619-421f-920f-4c6fea9e93ad}
           Writer Instance Id: {cfb58c78-9609-4133-8fc8-f66b0d25e12d}
           State: [5] Waiting for completion
           Last error: No error
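
    When a writer-level timeout like this recurs, the state VSS itself reports between backup attempts is the fastest signal; these are standard commands on Server 2008, run from an elevated command prompt:

        rem Show each writer's current state and last error:
        vssadmin list writers
        rem Confirm which VSS provider is in use:
        vssadmin list providers
        rem List existing shadow copies on the volumes:
        vssadmin list shadows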

    Read the article

  • unattended-upgrades does not reboot

    - by Cheiron
    I am running Debian 7 stable with unattended-upgrades (every morning at 6 AM) to make sure I am always fully updated. I have the following config:

        $ cat /etc/apt/apt.conf.d/50unattended-upgrades
        // Automatically upgrade packages from these origin patterns
        Unattended-Upgrade::Origins-Pattern {
                // Archive or Suite based matching:
                // Note that this will silently match a different release after
                // migration to the specified archive (e.g. testing becomes the
                // new stable).
                "o=Debian,a=stable";
                "o=Debian,a=stable-updates";
                // "o=Debian,a=proposed-updates";
                "origin=Debian,archive=stable,label=Debian-Security";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //      "vim";
        //      "libc6";
        //      "libc6-dev";
        //      "libc6-i686";
        };

        // This option allows you to control if on a unclean dpkg exit
        // unattended-upgrades will automatically run
        //   dpkg --force-confold --configure -a
        // The default is true, to ensure updates keep getting installed
        //Unattended-Upgrade::AutoFixInterruptedDpkg "false";

        // Split the upgrade into the smallest possible chunks so that
        // they can be interrupted with SIGUSR1. This makes the upgrade
        // a bit slower but it has the benefit that shutdown while a upgrade
        // is running is possible (with a small delay)
        //Unattended-Upgrade::MinimalSteps "true";

        // Install all unattended-upgrades when the machine is shuting down
        // instead of doing it in the background while the machine is running
        // This will (obviously) make shutdown slower
        //Unattended-Upgrade::InstallOnShutdown "true";

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. A package that provides
        // 'mailx' must be installed. E.g. "[email protected]"
        Unattended-Upgrade::Mail "root";

        // Set this value to "true" to get emails only on errors. Default
        // is to always send a mail if Unattended-Upgrade::Mail is set
        Unattended-Upgrade::MailOnlyOnError "true";

        // Do automatic removal of new unused dependencies after the upgrade
        // (equivalent to apt-get autoremove)
        //Unattended-Upgrade::Remove-Unused-Dependencies "false";

        // Automatically reboot *WITHOUT CONFIRMATION* if a
        // the file /var/run/reboot-required is found after the upgrade
        Unattended-Upgrade::Automatic-Reboot "true";

        // Use apt bandwidth limit feature, this example limits the download
        // speed to 70kb/sec
        //Acquire::http::Dl-Limit "70";

    As you can see, Automatic-Reboot is true and thus the server should automatically reboot. Last time I checked, the server had been up for over 100 days, which means the update from Debian 7.1 to Debian 7.2 happened while the server was up (and indeed, all updates were installed); that involved kernel updates, which means the server should have rebooted. It did not. The server was running very slowly, so I rebooted manually, which fixed that. I did some research and found out that unattended-upgrades responds to the reboot-required file in /var/run/. I touched this file and waited one week; the file still exists and the server did not reboot. So I think unattended-upgrades ignores the auto-reboot part. Am I doing something wrong here? Why did the server not restart? The upgrade part works perfectly, by the way; it's just the reboot part that does not seem to work as it should.
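
    Two quick checks narrow this down: whether the flag file exists when the daily run finishes, and what the reboot logic decides on a manual run. A sketch (note the binary is unattended-upgrade, without the trailing 's'):

        ls -l /var/run/reboot-required              # flag file the Automatic-Reboot logic looks for
        sudo unattended-upgrade --dry-run --debug   # simulate a run and print its decisions
        grep -r Automatic-Reboot /etc/apt/apt.conf.d/   # confirm no later file overrides the setting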

    Read the article

  • How To Initialize An Object Which May Be Used In A Catch Clause?

    - by Onorio Catenacci
    I've seen this sort of pattern in code before:

        //pseudo C# code
        var exInfo = null;                                           //Line A
        try
        {
            var p = SomeProperty;                                    //Line B
            exInfo = new ExceptionMessage("The property was " + p);  //Line C
        }
        catch(Exception ex)
        {
            exInfo.SomeOtherProperty = SomeOtherValue;               //Line D
        }

    Usually the code is structured in this fashion because exInfo has to be visible outside of the try clause. The problem is that if an exception occurs on Line B, then exInfo will be null at Line D. The issue arises when something happens on Line B that must occur before exInfo is constructed. But if I set exInfo to a new object at Line A, then memory may get leaked at Line C (due to "new"-ing the object there). Is there a better pattern for handling this sort of code? Is there a name for this sort of initialization pattern? By the way, I know I could check for exInfo == null before Line D, but that seems a bit clumsy and I'm looking for a better approach.

    Read the article

  • Is there a language that allows this syntax: add(elements)at(index);

    - by c_maker
    Does a language exist with such a syntax? If not, what are some of the limitations/disadvantages of this syntax, in case I want to write a language that supports it? Some examples:

        sort(array, fromIndex, toIndex);

    vs

        sort(array)from(index1)to(index2);

    The method signature would look like this:

        sort(SomeType[] arr)from(int begin)to(int end){
            ...
        }

    Update: Because there might be some confusion, I'd like to clarify: I meant this question as a general idea (not specific to sorting, and possibly using keywords like from and to). In a Java-like language:

        void myfancymethod(int arg1, String arg2){ ... }
        myfancymethod(1, "foo");

    In the imaginary language:

        void my(int arg1)fancy(String arg2)method{ ... }
        my(1)fancy("foo")method;

    Read the article

  • Upgrading from 12.10 to 13.04 -> dpkg: error processing sudo (--configure)

    - by Korrigan Nagirrok
    Here's the deal and the reason I'm asking for your help. Last night I went to upgrade my Xubuntu 12.10 installation to 13.04, so at tty1 I ran sudo do-release-upgrade, and everything seemed to go well, except that after rebooting, when I run sudo apt-get update && sudo apt-get upgrade, I get this error:

        $ sudo apt-get update && sudo apt-get upgrade
        Hit http://pt.archive.ubuntu.com raring Release.gpg
        Hit http://pt.archive.ubuntu.com raring-updates Release.gpg
        Hit http://dl.google.com stable Release.gpg
        [... many similar Hit/Get/Ign lines for the raring, raring-updates, raring-backports, and raring-security indexes ...]
        Fetched 50.4 kB in 6s (7,454 B/s)
        Reading package lists... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/373 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        dpkg: error processing sudo (--configure):
         Package is in a very bad inconsistent state - you should
         reinstall it before attempting configuration.
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of ubuntu-minimal:
         ubuntu-minimal depends on sudo; however:
          Package sudo is not configured yet.
        dpkg: error processing ubuntu-minimal (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         sudo
         ubuntu-minimal
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've tried everything I thought logical, like:

        $ sudo dpkg --configure -a
        dpkg: error processing sudo (--configure):
         Package is in a very bad inconsistent state - you should
         reinstall it before attempting configuration.
        dpkg: dependency problems prevent configuration of ubuntu-minimal:
         ubuntu-minimal depends on sudo; however:
          Package sudo is not configured yet.
        dpkg: error processing ubuntu-minimal (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         sudo
         ubuntu-minimal

        $ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/373 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        dpkg: error processing sudo (--configure):
         Package is in a very bad inconsistent state - you should
         reinstall it before attempting configuration.
        dpkg: dependency problems prevent configuration of ubuntu-minimal:
         ubuntu-minimal depends on sudo; however:
          Package sudo is not configured yet.
        dpkg: error processing ubuntu-minimal (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         sudo
         ubuntu-minimal
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Can someone help me, please?

    Edit: Here's some more info that could be of help for anyone. The output of apt-cache policy linux-image-generic-pae linux-generic-pae is:

        linux-image-generic-pae:
          Installed: (none)
          Candidate: 3.8.0.19.35
          Version table:
             3.8.0.19.35 0
                500 http://pt.archive.ubuntu.com/ubuntu/ raring/main i386 Packages
        linux-generic-pae:
          Installed: (none)
          Candidate: 3.8.0.19.35
          Version table:
             3.8.0.19.35 0
                500 http://pt.archive.ubuntu.com/ubuntu/ raring/main i386 Packages
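
    The standard way out of "very bad inconsistent state" is to re-unpack the package so dpkg gets a clean status entry. A sketch of the usual sequence; note the root shell is opened first, because removing the sudo package takes /usr/bin/sudo with it:

        sudo -s                              # keep a root shell open before touching sudo
        apt-get clean                        # drop possibly corrupted cached .debs
        apt-get --reinstall install sudo     # re-download and re-unpack the package
        # if dpkg still refuses, clear the broken status entry first, then reinstall:
        dpkg --remove --force-remove-reinstreq sudo
        apt-get install sudo ubuntu-minimal
        dpkg --configure -a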

    Read the article

  • Installing or uninstalling Kerio VPN Client - error 2738

    - by LuckyNeo
    I had problems with the beta version of Kerio VPN Client (KVC) and decided to uninstall it and install the older stable version. When I tried to uninstall it, I got the message: "Error 2738. Could not access VBScript run time for custom action." When I tried to install the stable version of KVC without uninstalling the old one, I got the same message.
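
    Error 2738 is Windows Installer's generic "VBScript runtime not registered" failure rather than anything Kerio-specific, so re-registering the scripting engine usually clears it. A sketch, from an elevated command prompt:

        rem Re-register the VBScript engine used by MSI custom actions:
        regsvr32 vbscript.dll
        rem On 64-bit Windows, installers usually need the 32-bit copy:
        %systemroot%\SysWOW64\regsvr32 vbscript.dll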

    Read the article

  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot, and solve the following problem? A wifi ADSL modem router (D-Link 2640R) is installed in the living room at about 1.8 m height, working fine: synchronising and getting/serving a stable internet connection.

    First situation:
    - Laptop 01 at the other end of the house, in room01, southern to the living room, about 15 m away. Stable signal of good to very good quality. No disconnections.
    - Laptop 02 in room02, opposite room01 (5 m west), which puts it at almost the same distance and direction from the router, 15 m north. Stable signal of good to very good quality. No disconnections.

    Second situation:
    - Laptop 01 moved to room03, northern to the living room (actually just 3 m behind the wall where the router sits). Stable signal of excellent quality. No disconnections.
    - Laptop 02 still in room02, but now experiencing frequent disconnections. It is actually almost impossible to get on the Internet even though the signal level is still very good: either no Internet with the wifi icon appearing connected to the access point, or no connection established at all, which happens every 2 minutes. That means virtually no Internet, as I get a window of about a minute to load any website or even reach the router's web-based control panel.

    If Laptop 01 is completely shut down, its wifi adapter shut down, or even still working but its wifi MAC address forbidden, then Laptop 02 has no problem at all. If Laptop 02 is moved nearer to the router, into the living room for instance, then no connection problem occurs, even if Laptop 01 is also connected. And if we move Laptop 01 back to its original location (room01), then no problem either.

    I'm completely lost and don't know how to address this issue. I tried changing the wifi channel and even tried the auto channel scan, but that didn't solve it. I know the problem is probably coming from Laptop 01 being in its new location, or some sort of interference, as the problem occurs only under the described conditions, but I have no idea how to solve it! I also scanned the neighborhood for wifi congestion using inSSIDer; there are a few other access points, but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use?

    Read the article

  • An observation on .NET loops – foreach, for, while, do-while

    It's very common for .NET programmers to use the "foreach" loop for iterating through collections. The following is my observation from testing a simple scenario with loops: "for" is about 30% faster than "foreach", "while" is about 50% faster than "foreach", and "do-while" is slightly faster than "while". Someone may feel: what difference does it make if I'm iterating only 1000 times in a loop? This test case is only for simple iteration. According to data-structure concepts, best and worst cases depend entirely on the data we feed the algorithm, so we cannot conclude that "foreach" is a bad choice. All I want to say is that we need to be a little cautious even when choosing loops. For example, you might choose quicksort when you want to sort many numbers, while bubble sort may beat quicksort when you want to sort very few numbers. Take a simple scenario: a request in a simple web application fetches 10,000 (10K) rows and iterates over them for some business logic. Suppose this application is accessed by 1,000 (1K) people simultaneously. In this simple scenario you end up with 10,000,000 (10 million, or 1 crore) iterations. Below is the test scenario, a simple console application iterating over 100 million records:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var sw = new Stopwatch();
                    var numbers = GetSomeNumbers();

                    sw.Start();
                    foreach (var item in numbers) { }
                    sw.Stop();
                    Console.WriteLine(String.Format("\"foreach\" took {0} milliseconds", sw.ElapsedMilliseconds));

                    sw.Reset();
                    sw.Start();
                    for (int i = 0; i < numbers.Count; i++) { }
                    sw.Stop();
                    Console.WriteLine(String.Format("\"for\" loop took {0} milliseconds", sw.ElapsedMilliseconds));

                    sw.Reset();
                    sw.Start();
                    var it = 0;
                    while (it++ < numbers.Count) { }
                    sw.Stop();
                    Console.WriteLine(String.Format("\"while\" loop took {0} milliseconds", sw.ElapsedMilliseconds));

                    sw.Reset();
                    sw.Start();
                    var it2 = 0;
                    do { } while (it2++ < numbers.Count);
                    sw.Stop();
                    Console.WriteLine(String.Format("\"do-while\" loop took {0} milliseconds", sw.ElapsedMilliseconds));
                }

                #region Get me 10 crore (100 million) numbers
                private static List<int> GetSomeNumbers()
                {
                    var lstNumbers = new List<int>();
                    var count = 100000000;
                    for (var i = 1; i <= count; i++)
                    {
                        lstNumbers.Add(i);
                    }
                    return lstNumbers;
                }
                #endregion
            }
        }

    In the above example, I was just iterating through 100 million numbers. You can see the time each loop provided in .NET takes to execute:

        "foreach" took 1108 milliseconds
        "for" loop took 727 milliseconds
        "while" loop took 596 milliseconds
        "do-while" loop took 594 milliseconds

        Press any key to continue . . .

    So I feel we need to be careful when choosing a looping strategy. Please comment with your thoughts.

    Read the article

  • Where does Lucene.NET cache the search results?

    - by Lanceomagnifico
    Hi, I'm trying to figure out where Lucene stores cached query results, how that caching is configured, and how long it caches for. This is for an ASP.NET 3.5 solution. I'm getting this problem: if I run a search and sort the results by a particular product field, it seems to work the very first time each search-and-sort combination is used. If I then go in and change some product attributes, reindex, and run the same search and sort, I get the products returned in the same order as the very first result.

    Example:
        Product A is named: foo
        Product B is named: bar

    For the first search, sort by name desc. This results in:
        Product A
        Product B

    Now mix up the data a bit. Change the names to:
        Product A named: bar
        Product B named: foo

    Reindex, and verify that the index contains the changes for these two products. Search. Result:
        Product A
        Product B

    Since I changed the alphabetical order of the names, I expected:
        Product B
        Product A

    So I think Lucene is caching the search results (which, by the way, is a very good thing). I just need to know where/how to clear these results. I've tried deleting the index files and doing an iisreset to clear memory, but it seems to have no effect. So I'm thinking there is another set of Lucene files, outside of the indexes, that Lucene uses for caching.

    EDIT: I just found out that the field you wish to sort on must be indexed un-tokenized. I had the field tokenized, so sorting didn't work.

    Read the article

  • Guidelines for using Merge task in SSIS

    - by thursdaysgeek
    I have a table with three fields, one an identity field, and I need to add some new records from a source that has the other two fields. I'm using SSIS, and I think I should use the Merge task, because one of the sources is not in the local database. But I'm confused by the Merge tool and the proper process.

    I have my one source (an Oracle table), from which I get two fields, well_id and well_name, followed by a sort on well_id. I have the destination table (SQL Server), and I'm also using that as a source. It has three fields: well_key (identity field), well_id, and well_name, followed by a sort task, sorting on well_id. Both of those are inputs to my Merge task. I was going to output to a temporary table, and then somehow get the new records back into the SQL Server table.

        Oracle Well        SQL Well
            |                  |
            V                  V
        Sort Source        Sort Well
            |                  |
            +-----> Merge <----+
                      |
                      V
               Temp well table

    I suspect this isn't the best way to use this tool, however. What are the proper steps for a merge like this? One of my reasons for questioning this method is that my Merge task has an error, telling me that "Merge Input 2" must be sorted; but its source is a sort task, so it IS sorted.

    Example data:

        SQL Well (before merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d

        Oracle Well
        well_id  well_name
        123      well k
        292      well c
        311      well y
        344      well t
        439      well d
        532      well j

        SQL Well (after merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d
        6         311      well y
        7         532      well j

    Would it be better to load my Oracle Well to a temporary local file, and then just use a SQL insert statement on it?

    Read the article

  • Sorting a list in OCaml

    - by darkie15
    Hi all, here is the code for sorting any given list:

        let rec sort lst =
          match lst with
            [] -> []
          | head :: tail -> insert head (sort tail)
        and insert elt lst =
          match lst with
            [] -> [elt]
          | head :: tail ->
              if elt <= head then elt :: lst else head :: insert elt tail;;

    [Source: Code]

    However, I am getting an Unbound error (Unbound value tail), and then:

        # let rec sort lst = match lst with [] -> [] | head :: tail -> insert head (sort tail)
          and insert elt lst = match lst with [] -> [elt]
          | head :: tail -> if elt <= head then elt :: lst else head :: insert elt tail;;
        Characters 28-29:
          | head :: tail -> if elt <= head then elt :: lst else head :: insert elt tail;;
                         ^
        Error: Syntax error

    Can anyone please help me understand the issue here? I did not find head or tail predefined anywhere, nor in the code.

    Read the article

  • Getting Vars to bind properly across multiple files

    - by Alex Baranosky
    I am just learning Clojure and am having trouble moving my code into different files. I keep getting this error from apprunner.clj: Exception in thread "main" java.lang.Exception: Unable to resolve symbol: -run-application in this context. It seems to be finding the namespaces fine, but then not seeing the Vars as being bound... Any idea how to fix this? Here's my code:

    APPLICATION RUNNER:

        (ns src/apprunner
          (:use src/functions))

        (def input-files [(resource-path "a.txt") (resource-path "b.txt") (resource-path "c.txt")])
        (def output-file (resource-path "output.txt"))

        (defn run-application []
          (sort-files input-files output-file))

        (-run-application)

    APPLICATION FUNCTIONS:

        (ns src/functions
          (:use clojure.contrib.duck-streams))

        (defn flatten [x]
          (let [s? #(instance? clojure.lang.Sequential %)]
            (filter (complement s?) (tree-seq s? seq x))))

        (defn resource-path [file]
          (str "C:/Users/Alex and Paula/Documents/SoftwareProjects/MyClojureApp/resources/" file))

        (defn split2 [str delim]
          (seq (.split str delim)))

        (defstruct person :first-name :last-name)

        (defn read-file-content [file]
          (apply str (interpose "\n" (read-lines file))))

        (defn person-from-line [line]
          (let [sections (split2 line " ")]
            (struct person (first sections) (second sections))))

        (defn formatted-for-display [person]
          (str (:first-name person) (.toUpperCase " ") (:last-name person)))

        (defn sort-by-keys [struct-map keys]
          (sort-by #(vec (map % [keys])) struct-map))

        (defn formatted-output [persons output-number]
          (let [heading (str "Output #" output-number "\n")
                sorted-persons-for-output (apply str (interpose "\n" (map formatted-for-display (sort-by-keys persons (:first-name :last-name)))))]
            (str heading sorted-persons-for-output)))

        (defn read-persons-from [file]
          (let [lines (read-lines file)]
            (map person-from-line lines)))

        (defn write-persons-to [file persons]
          (dotimes [i 3]
            (append-spit file (formatted-output persons (+ 1 i)))))

        (defn sort-files [input-files output-file]
          (let [persons (flatten (map read-persons-from input-files))]
            (write-persons-to output-file persons)))

    Read the article

  • Question about the backend code generated by symfony

    - by user248959
    Hi, this is the index action and template generated at the backend for the model "coche":

        public function executeIndex(sfWebRequest $request)
        {
          // sorting
          if ($request->getParameter('sort') && $this->isValidSortColumn($request->getParameter('sort')))
          {
            $this->setSort(array($request->getParameter('sort'), $request->getParameter('sort_type')));
          }

          // pager
          if ($request->getParameter('page'))
          {
            $this->setPage($request->getParameter('page'));
          }

          $this->pager = $this->getPager();
          $this->sort = $this->getSort();
        }

    This is the index template:

        <?php use_helper('I18N', 'Date') ?>
        <?php include_partial('coche/assets') ?>
        <div id="sf_admin_container">
          <h1><?php echo __('Coche List', array(), 'messages') ?></h1>
          <?php include_partial('coche/flashes') ?>
          <div id="sf_admin_header">
            <?php include_partial('coche/list_header', array('pager' => $pager)) ?>
          </div>
          <div id="sf_admin_bar">
            <?php include_partial('coche/filters', array('form' => $filters, 'configuration' => $configuration)) ?>
          </div>
          <div id="sf_admin_content">
            <form action="<?php echo url_for('coche_coche_collection', array('action' => 'batch')) ?>" method="post">
              <?php include_partial('coche/list', array('pager' => $pager, 'sort' => $sort, 'helper' => $helper)) ?>
              <ul class="sf_admin_actions">
                <?php include_partial('coche/list_batch_actions', array('helper' => $helper)) ?>
                <?php include_partial('coche/list_actions', array('helper' => $helper)) ?>
              </ul>
            </form>
          </div>
          <div id="sf_admin_footer">
            <?php include_partial('coche/list_footer', array('pager' => $pager)) ?>
          </div>
        </div>

    In the template there is this line:

        include_partial('coche/filters', array('form' => $filters, 'configuration' => $configuration))

    but I cannot find the variables $this->filters and $this->configuration being set in the index action. How is that possible? Javi

    Read the article

  • Sorting an array of Objective-c objects

    - by davbryn
    So I have a custom class Foo that has a number of members:

        @interface Foo : NSObject {
            NSString *title;
            BOOL taken;
            NSDate *dateCreated;
        }

    And in another class I have an NSMutableArray containing a list of these objects. I would very much like to sort this array based on the dateCreated member. I understand I could write my own sorter for this (iterate the array and rearrange based on the date), but I was wondering if there is a proper Objective-C way of achieving this. Some sort of sorting mechanism where I can provide the member variable to sort by would be great. In C++ I used to overload the < and = operators, which allowed me to sort by object, but I have a funny feeling Objective-C might offer a nicer alternative? Many thanks

    Read the article

  • Multi-level clones with Git?

    - by Chad Johnson
    So, I'm thinking of having the following centralized setup with Git (each of these is a clone):

        stable
        development
        developer1
        developer2
        developer3

    So, I created my stable repository:

        git --bare init

    made the 'development' clone:

        git clone ssh://host.name//path/to/stable/project.git development

    and made a 'developer' clone:

        git clone ssh://host.name//path/to/development/project.git developer

    So now I make a change, commit, and then push from my developer account:

        git commit --all
        git push

    and the change goes to the development clone. But now, when I ssh to the server, go to the development clone directory, and run "git fetch" or "git pull", I don't see the changes. So what do I do? Am I totally misunderstanding things and doing things wrong? How can I see the changes in the 'development' clone that I pushed from my 'developer' clone? This worked fine in Mercurial.
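
    The likely catch here is that development is a non-bare repository: a push updates its branch refs but never its working tree (the checkout simply goes stale, and fetch/pull from inside it do nothing, because the commits are already there). The usual fix is to keep every repository that receives pushes bare; a sketch:

        # make the intermediate repository bare, so it only stores history:
        git clone --bare ssh://host.name//path/to/stable/project.git development.git

        # developers clone from it as before; after a developer pushes,
        # the commits are immediately visible in development.git:
        git --git-dir=development.git log --oneline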

    Read the article
